Fusobacterium necrophorum presenting as isolated lung nodules
Fusobacterium necrophorum causes Lemierre's syndrome, a dramatic and distinct condition beginning with pharyngitis before proceeding to internal jugular vein septic thrombophlebitis and respiratory tract infection in otherwise healthy individuals. It is rare, but by far the most common pathway to parenchymal lung disease with this organism. Here we describe a 34 year old healthy woman, nontoxic and without any antecedent illness, who presented with lung nodules due to Fusobacterium necrophorum as the sole manifestation of disease. The leading diagnostic consideration prior to culture data was pulmonary vasculitis. Identifying her disease process was a somewhat chance occurrence, and it began to resolve prior to antibiotic therapy. Though it would be difficult to recommend keen awareness of this organism given its rarity, it is important to consider that its scope may be broader than traditionally appreciated.
Introduction
Lemierre's syndrome is the most common condition associated with the gram negative anaerobe Fusobacterium necrophorum, though several other patterns of infection can occur. The infrequency with which those rarer presentations are encountered makes consideration of this organism in settings outside of pharyngitis a challenge. Pulmonary involvement after persistent pharyngitis and internal jugular septic thrombophlebitis is well described [1], but here we describe a case of lung nodules related to Fusobacterium necrophorum without any antecedent illness.
Case
A healthy 34 year old woman presented to medical attention after experiencing several days of moderately severe pleuritic chest pain without other respiratory or constitutional symptoms. She was born in Ecuador, where she lived until the age of 5, but had not returned recently; she worked at a local supermarket (although not in a food handling capacity) and had moderate persistent asthma well controlled on a combination long acting beta-agonist and medium dose inhaled corticosteroid. Contrast CT scan of the chest demonstrated several bilateral pulmonary nodules without infiltrate, adenopathy or effusion. The largest was 1.5 × 1.5 cm in the left upper lobe with cavitation (Fig. 1). Active tuberculosis was ruled out. Blood cultures were negative and transthoracic echocardiogram did not demonstrate valvular vegetations. Nonspecific inflammatory markers were elevated: C-reactive protein 61 mg/L (normal range: 0-3) and ESR 28 mm/h (normal range: 0-22), and ANA was positive with a titer of 1:80; however, ANCA were within the normal range. She did not have a leukocytosis (WBC 8300/uL). Serologic evaluation for fungal disease was negative. She had a mild transaminitis with alanine aminotransferase > aspartate aminotransferase, and imaging of the liver via ultrasound and CT scan revealed steatosis without discrete lesions. Expectorated sputum grew Mycobacterium avium complex; however, the clinical scenario seemed largely incompatible. Airway examination on bronchoscopy and bronchoalveolar lavage were also unrevealing. The patient underwent a surgical biopsy of the left upper lobe nodule; our leading diagnostic consideration at the time was seronegative pulmonary vasculitis. Pathology did not show granuloma, malignancy or AFB; rather, it showed an abscess (Fig. 3). Culture of the material grew Fusobacterium necrophorum. Notably, she had no upper respiratory symptoms, neck pain, febrile illness or dental work around the time of this presentation. On further detailed questioning she recalled a sore throat 5 months prior without fever, and enjoyed several months of good health in the intervening time. CT of the sinuses and neck was normal and, in fact, on repeat chest CT several weeks later, prior to any therapy, the lung nodules contralateral to the biopsy were smaller in size (Fig. 2). We elected to treat with ertapenem for 6 weeks and CT scan after therapy showed resolution of all dominant nodules. Of note, the chest pain resolved well prior to initiation of therapy and has not recurred.
Discussion
Lemierre's syndrome is a rare disease of healthy young individuals [1]. The case definition, though not universally agreed upon, includes a constellation of findings: persistent pharyngitis, internal jugular vein septic thrombophlebitis, and evidence of metastatic lesions with isolation of Fusobacterium necrophorum (or rarely other oral anaerobes [2]) from an infected site [3]. The lung is the site of metastasis in up to 97% of cases [4], appearing as nodules, infiltrate, empyema and ARDS [5]; occasionally joints, the CNS and the liver are involved as well. Although mortality is lower than originally described by Dr. Lemierre in 1936, infection can still be lethal, with a mortality rate around 5% [3]. Little is known about the pathogenesis; for example, there are limited data to support the traditional hypothesis that the organism is a normal oropharyngeal resident [6]. Infection may in fact be precipitated by consuming food or water contaminated with fecal material [7] during periods of pharyngeal epithelial inflammation from viral infection [8], although in most individuals, Lemierre's syndrome occurs without a clear precipitating condition or discernible risk factor.
Typically, invasive Fusobacterium necrophorum disease is dramatic and highly characteristic. Fever, tachycardia and marked leukocytosis are typical features, in contrast to our patient's largely asymptomatic presentation. Additionally, the presence of septic pulmonary emboli has been used as a surrogate for the presence of thrombophlebitis [3,9], which is thought to be required for the development of lung disease, either as the origin of emboli or through direct extension of infectious material to the lung or pleural space. This was also absent in our case.
One possible explanation for our patient's atypical presentation is that her initial bacteremia was the consequence of subclinical rather than overt pharyngitis. There is a suggestion in the literature that the disease has a male predominance and a peak age of incidence of 16-23 years [8], for reasons that are not clear. Perhaps disease, if present, is less likely to be severe and therefore underrecognized in a phenotype similar to our patient: female and slightly older. There is a small body of evidence supporting the existence of more minor forms of the disease; for example, in one study 10% of throat swabs in uncomplicated pharyngitis grew Fusobacterium necrophorum when cultured anaerobically (not common practice) [10], suggesting that full blown Lemierre's syndrome is not necessarily the only outcome of upper respiratory tract infection. Metastatic Fusobacterium necrophorum lesions separated from pharyngitis by weeks to months have rarely been described [11,12]: in one report, the organism was isolated from a liver abscess two months after a URI. Our patient's sore throat was five months prior and not associated with fever, and therefore hard for us to imagine as related to her lung lesions.
Another possibility is that the infection did not originate from the oropharynx. The GI or GU tracts have been speculated to be additional sites of origin [13]; such cases are often described as "necrobacillosis" when they do not fit the case definition of Lemierre's syndrome. However, the lung is often not the metastatic site in invasive Fusobacterium necrophorum disease without sore throat [13,14]; more commonly the liver, bone and joints are involved. This perhaps reflects the routes of venous and lymphatic spread from the tissue of origin. We doubt that the organism was a bystander in a polymicrobial abscess, as it was the sole organism that grew from the surgical biopsy specimen. Additionally, Fusobacterium necrophorum has never, to our knowledge, been isolated and thought to be a secondary passenger.
Though Fusobacterium necrophorum infections are curable, they are potentially lethal and mortality in the pre-antibiotic era was substantial. Their low incidence is thought in part related to widespread use of potent antibiotics; therefore, we were surprised to find resolution of nodules prior to antibiotics. Perhaps she would have cleared infection on her own (though we deemed it inappropriate to withhold treatment). It would be inconsistent based on our appraisal of the literature to recommend keen awareness of this organism in an otherwise unexplained pleuropulmonary syndrome given its rarity. Nevertheless, our patient had metastatic Fusobacterium necrophorum lung lesions, the source of which we can only speculate.
Conclusions
Metastatic spread of Fusobacterium is usually related to overt pharyngitis. Here we describe necrobacillosis with pulmonary nodules, one cavitary, as the sole manifestation of disease in a nontoxic patient without pharyngitis. Identifying her disease process was a somewhat chance occurrence, and it began to resolve prior to antibiotic therapy.
Contributorship
Rajiv Sonti and Christine Fleury cared for the patient; Rajiv Sonti wrote the manuscript with edits from Christine Fleury.
"year": 2015,
"sha1": "3ec7f60e39722ae18cd12ffc3d433065443caaad",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.rmcr.2015.05.011",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3ec7f60e39722ae18cd12ffc3d433065443caaad",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
EXPERIMENTAL STUDIES OF TWIST RATIO EFFECT TO THE HEAT TRANSFER ENHANCEMENT USING SQUARE CUT TAPE AND CLASSICAL TAPE INSERT
REFERENCES
Bergles, A.E., 1985, Techniques to Augment Heat Transfer. In: Rohsenow, W.M., Hartnett, J.P., Ganic, E. (Eds.), Handbook of Heat Transfer Applications, McGraw-Hill, New York.
Cengel, Y.A., and Cimbala, J.M., 2006, Fluid Mechanics: Fundamentals and Applications, 1st ed., McGraw-Hill, New York.
Holman, J.P., 2010, Heat Transfer, 10th ed., McGraw-Hill, New York.
Lavine, A.S., Incropera, F., and De Witt, D., 2006, Fundamentals of Heat and Mass Transfer, 6th ed., John Wiley & Sons.
Kreith, F., Manglik, R., and Bohn, M., 2010, Principles of Heat Transfer, Cengage Learning.
Manglik, R.M., and Bergles, A.E., 1992, Heat Transfer Enhancement and Pressure Drop in Viscous Liquid Flows in Isothermal Tubes with Twisted-Tape Inserts, Wärme- und Stoffübertragung, Vol. 27 (4), pp. 249-257.
Murugesan, P., Mayilsamy, K., and Suresh, S., 2010, Turbulent Heat Transfer and Pressure Drop in Tube Fitted with Square-Cut Twisted Tape, Chinese Journal of Chemical Engineering, Vol. 18 (4), pp. 609-617.
Murugesan, P., Mayilsamy, K., and Suresh, S., 2011a, Heat Transfer and Friction Factor in a Tube Equipped with U-Cut Twisted Tape Insert, Jordan Journal of Mechanical and Industrial Engineering, Vol. 5, pp. 559-565.
Murugesan, P., Mayilsamy, K., Suresh, S., and Srinivasan, P., 2009, Heat Transfer and Pressure Drop Characteristics of Turbulent Flow in a Tube Fitted with Trapezoidal-Cut Twisted Tape Insert, International Journal of Academic Research, Vol. 1 (1).
Ganorkar, P.P., and Warkhedkar, R.M., 2015, Heat Transfer Enhancement in a Tube Using Elliptical-Cut Twisted Tape Inserts, International Journal of Mechanical Engineering, Vol. 2.
Quazi, I., and Mohite, V., 2015, Heat Transfer Enhancement in a Heat Exchanger Using Punched and V-Cut Twisted Tape Inserts.
Suresh, K.P., Raju, K., Mahanta, P., and Dewan, A., 2005, Review of Passive Heat Transfer Augmentation Techniques, Proceedings of the Institution of Mechanical Engineers (I-Mech-E) Part A, Journal of Power and Energy, Vol. 218 (7), pp. 509-527.
Petukhov, B., 1970, Heat Transfer and Friction in Turbulent Pipe Flow with Variable Physical Properties, Advances in Heat Transfer, Vol. 6, pp. 503-565.
Salam, B., Biswas, S., Saha, S., and Bhuiya, M.M.K., 2013, Heat Transfer Enhancement in a Tube Using Rectangular-Cut Twisted Tape Insert, Procedia Engineering, Vol. 56, pp. 96-103.
White, F.M., 2003, Fluid Mechanics, McGraw-Hill.
Yunus, A.C., 2003, Heat Transfer: A Practical Approach, McGraw-Hill, New York.
INTRODUCTION
Heat transfer enhancement techniques, especially in heat exchangers, can substantially improve performance. The general objectives of these techniques are to reduce the heat exchanger size, increase its capacity, and cut down the required pumping power. They can be classified into three groups: passive, active, and compound techniques. Passive techniques achieve enhancement without providing additional energy to the flow. Active techniques supply extra energy to the fluid flow and therefore require higher costs than passive techniques. In compound techniques, two or more active and passive techniques are used simultaneously to generate a heat transfer enhancement higher than either technique operating separately.
Twisted tape inserts are one of the techniques used to passively enhance heat transfer in heat exchangers. They have become the most popular owing to their low cost and easy installation. A twisted tape insert acts as a continuous flow-twisting device that enhances the heat transfer rate.
Research on heat transfer enhancement in a heat exchanger has previously been performed using punched and v-cut twisted tape inserts. The test section had a length of 700 mm, an inner diameter of 26 mm, an outer diameter of 30 mm, and a thickness of 2 mm. The punched and v-cut twisted tapes were operated at twist ratio variations of 9, 10, and 11. The results showed that heat transfer increased, compared with the plain tube, by 3.34 to 14.4% for a twist ratio of 9 and by 13.35 to 25% for a twist ratio of 11. The maximum friction factor was 52% higher than the plain tube at a twist ratio of 9 and 66% higher at a twist ratio of 11 (Quazi and Mohite, 2015).
An experimental study was also previously conducted on the heat transfer characteristics, friction factor, and thermal performance of turbulent flow in a round pipe with a rectangular-cut twisted tape insert. The twisted tape was made of stainless steel, 2 mm thick and 20 mm wide, with a twist pitch of 105 mm, so the twist ratio was 5.25. The results showed that at the same Reynolds number, the Nusselt number in the pipe with the rectangular-cut twisted tape insert rose by a factor of 2.3 to 2.9 compared with the plain tube, with an average increase of 2.6 times. The friction factor for the rectangular-cut twisted tape insert was 39% to 80% higher than that of the plain tube. The thermal performance ranged from 1.9 to 2.3 (Salam et al., 2013).
An experimental study was carried out on the heat transfer characteristics of a heat exchanger with an elliptical-cut twisted tape insert at a twist ratio of y = 8.0 and five major-to-minor axis ratios (Z) of 5; 4; 3.3; 3; and 2.5.
The Reynolds number was varied from 10,000 to 19,000, with heat flux variations of 14-22 kW/m² for the plain tube and 23-40 kW/m² for the inserted pipes. The results exhibited that the average Nusselt number increased with decreasing Z = 5; 4; 3.3; 3; and 2.5 by 19.3%; 41.8%; 53.83%; 68.5%; 73.16%; and 84.5%, respectively, compared to the plain tube. The thermal performance of the elliptical-cut twisted tapes ranged from 0.91 to 1.25 for Z = 5.0; 4.0; 3.3; 3 and 2.5 (Ganorkar et al., 2015). The present study was conducted to examine the effect of Reynolds number variations and of adding a square-cut twisted tape insert in the inner tube of a concentric-pipe heat exchanger on the heat transfer characteristics and friction factor. It is expected that the addition of square-cut twisted tape inserts and classical twisted tape inserts with varying twist ratios can improve the convection heat transfer coefficient of the concentric pipe-in-pipe heat exchanger with an acceptable increase in pressure drop.
Testing Equipment and Research Procedure
The research equipment consists of three systems: the hot water flow loop in the inner pipe, the measurement system, and the cold water flow loop in the annulus. An electric water heater with a total power of 4,000 W was employed to heat the water in the hot water tank, where the hot water temperature was kept constant at 60 °C by a thermocontroller before the hot water entered the inner pipe of the concentric pipe heat exchanger. A hot water pump circulated the hot water from the hot water tank through the heat exchanger test section and back to the tank. The scheme of the testing equipment can be seen in Figure 1. The fluid flows in the pipe and the annulus were in opposite directions (counter-flow).
A bypass valve was used to regulate the flow rate of hot water entering the pipe, and its value was read with a rotameter. The cold water flow into the annulus was kept constant during the test. The cold water flowed by gravity from a cold water tank located above the test section, whose water surface elevation was kept constant by means of an overflow pipe. The cold water leaving the heat exchanger test section was immediately discarded.
A U-shaped manometer filled with water was employed to measure the pressure difference of the hot water between the pipe inlet and outlet. A water trap was used to catch entrained water during the pressure measurement, in order to keep it from entering the manometer.
The temperatures of the cold water at the annulus inlet and outlet, the outer wall temperature, and the hot water temperatures at the pipe inlet and outlet were measured using K-type thermocouples; the outer wall temperature was measured at 10 points read alternately. A thermocouple reader was used to read the thermocouples. The U-tube manometer with water as the working fluid was used to measure the pressure drop in the pipe. The concentric pipe heat exchanger, with a single-pass inner pipe, was made of aluminum. The dimensions of the concentric pipe heat exchanger can be seen in Figure 2.
The nomenclature of the square cut twisted tape inserts and classical tape inserts in a pipe can be seen in Figure 3, where w is the thickness, y is the pitch of the square cut twisted tape insert, d is the cutting height, and W is the width of the twist. In previous research, the inner pipe was made of aluminum with an inner diameter of 25 mm and a length of 2000 mm; the twisted tape inserts were made of aluminum with a thickness of 1.5 mm and a width of 23.5 mm, and the plain twisted tape and square-cut twisted tape had twist ratios of 2.0; 4.4; and 6.0 (Murugesan et al., 2010).
The Reynolds number of the water flow in the pipe was varied by adjusting the flow rate from 2 to 6 LPM, both for the inner pipe without a square cut twisted tape insert (STT; plain tube) and for the inner pipe with STT. The data collected were the annulus inlet and outlet water temperatures, the outer wall temperature, the water mass flow rate, and the pressure drop in the pipe. For each test variation, data were collected every 10 minutes until a steady state was obtained; the steady-state data were used in the computation and analysis. For comparison, tests were performed on the pipe without STT (plain tube) and with the addition of square-cut tape inserts and classical twisted tape (CTT) inserts.
Calculation of the Heat Transfer Characteristics, Friction Factor, and Enhancement Ratio of the Concentric Pipe Heat Exchanger
The heat transfer rate from the hot water on the inner tube side can be stated as formula (1):

Qh = mh cp (Th,in − Th,out) (1)

The heat transfer rate to the cold water on the annulus side can be defined as formula (2):

Qc = mc cp (Tc,out − Tc,in) (2)

The bulk temperature in the concentric inner pipe is defined as formula (3):

Tb = (Tin + Tout)/2 (3)

The heat loss percentage (%Qloss) is defined as formula (4):

%Qloss = |Qh − Qc|/Qh × 100% (4)

The overall heat transfer coefficient based on the inner tube surface area is defined as formula (5):

Ui = Qc/(Ai ΔTLMTD) (5)

in which the log-mean temperature difference of a counter-flow heat exchanger is defined as formula (6):

ΔTLMTD = [(Th,in − Tc,out) − (Th,out − Tc,in)]/ln[(Th,in − Tc,out)/(Th,out − Tc,in)] (6)

The average convection heat transfer coefficient on the annulus side is defined as formula (7):

ho = Qc/[Ao (Tw − Tb,c)] (7)

where the properties of the hot water in the inner tube (ρ, kf, and μ) were evaluated at the average bulk hot water temperature (Tb,h). The average convection heat transfer coefficient on the inner tube side then follows from the overall thermal resistance of the inner tube, formulas (8)-(10):

1/hi = 1/Ui − Ai ln(do/di)/(2π kp L) − Ai/(Ao ho) (10)

The average Nusselt number on the inner tube side is defined as formula (11):

Nui = hi di/kf (11)

The heat exchanger effectiveness is defined as formula (12):

ε = Qc/[Cmin (Th,in − Tc,in)] (12)

The pressure drop of the inner tube follows from the manometer reading, formula (13):

ΔP = ρm g Δh (13)

The pumping power is defined as formula (14):

P = V̇ ΔP (14)

The friction factor on the inner tube side is defined as formula (15):

f = ΔP/[(L/di)(ρV²/2)] (15)

The heat transfer enhancement ratio at constant pumping power is the ratio of the average convection heat transfer coefficient of the inner tube with an insert to that of the plain tube, formula (17):

η = (hi/hp)|pp (17)

The heat transfer characteristics, friction factor, and heat transfer enhancement were compared against empirical correlations. The friction factor characteristics of the plain tube and the inner tube (f) can be seen in Figure 5. The friction factor of the plain tube without inserts was validated with the Blasius equation (21):

f = 0.316 Re^(−0.25) (21)

valid for 4×10³ < Re < 10⁵.
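To illustrate this data reduction, the following Python sketch computes Re, Nui, and f for a single test point from the formulas above. It is not the authors' code; all numerical inputs (temperatures, flow rate, pressure drop, geometry) are placeholders chosen only to show the calculation path.

```python
import math

# Water properties near the bulk temperature (approximate values for ~55 C)
rho, cp, k_f, mu = 985.0, 4183.0, 0.645, 5.0e-4  # kg/m^3, J/kg.K, W/m.K, Pa.s

def heat_rate(m_dot, t_in, t_out):
    """Formulas (1)-(2): Q = m_dot * cp * |dT|."""
    return m_dot * cp * abs(t_in - t_out)

def lmtd_counterflow(th_in, th_out, tc_in, tc_out):
    """Formula (6): log-mean temperature difference for counter-flow."""
    dt1, dt2 = th_in - tc_out, th_out - tc_in
    return (dt1 - dt2) / math.log(dt1 / dt2)

d_i, L = 0.025, 2.0                       # inner tube diameter and length (m)
m_dot = 0.066                             # ~4 LPM of water (kg/s)
Q_h = heat_rate(m_dot, 60.0, 54.0)        # hot-side heat rate, formula (1)
dT_lm = lmtd_counterflow(60.0, 54.0, 28.0, 33.0)
A_i = math.pi * d_i * L                   # inner tube surface area
U_i = Q_h / (A_i * dT_lm)                 # formula (5)
V = m_dot / (rho * math.pi * d_i**2 / 4)  # mean velocity in the tube
Re = rho * V * d_i / mu
Nu_i = U_i * d_i / k_f                    # formula (11), treating U_i ~ h_i
dp = 26.0                                 # measured pressure drop (Pa), placeholder
f = dp / ((L / d_i) * rho * V**2 / 2)     # formula (15)
print(f"Re = {Re:.0f}, Nu_i = {Nu_i:.1f}, f = {f:.4f}")
```

Treating Ui as hi in the sketch is a simplification valid only when the wall and annulus resistances are small; formula (10) above removes that assumption.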
RESULTS AND DISCUSSION

a. Plain Tube Validation
In this research, the heat transfer characteristics of the plain tube were validated against the Gnielinski and Dittus-Boelter empirical correlations. Figure 4 shows the Nui versus Re correlation for the plain tube. The average deviation of the actual plain tube data was 13.8% from the Dittus-Boelter correlation and 3.4% from the Gnielinski correlation, while the deviation from the plain tube correlation was about 1.8%; these values are within the commonly accepted limits of 25% for Dittus-Boelter and 10% for Gnielinski. Since the average deviations of Nui from the Gnielinski correlation, the Dittus-Boelter correlation, and the plain tube correlation were still small, the plain tube measurements were considered valid.
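For reference, the validation correlations named above have the following standard textbook forms. The sketch below evaluates them over the tested Reynolds range; Pr = 3.2 is an assumed Prandtl number for hot water, not a value from the paper.

```python
import math

def f_blasius(Re):
    """Blasius equation (21), valid for 4e3 < Re < 1e5."""
    return 0.316 * Re**-0.25

def nu_dittus_boelter(Re, Pr):
    """Dittus-Boelter correlation for heating: Nu = 0.023 Re^0.8 Pr^0.4."""
    return 0.023 * Re**0.8 * Pr**0.4

def nu_gnielinski(Re, Pr):
    """Gnielinski correlation with the Petukhov friction factor."""
    f = (0.790 * math.log(Re) - 1.64) ** -2
    return (f / 8) * (Re - 1000) * Pr / (1 + 12.7 * math.sqrt(f / 8) * (Pr ** (2 / 3) - 1))

for Re in (5400, 10000, 17500):
    print(Re, round(f_blasius(Re), 4),
          round(nu_dittus_boelter(Re, 3.2), 1), round(nu_gnielinski(Re, 3.2), 1))
```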
b. Twist Ratio Effect on the Heat Transfer Characteristics of the Heat Exchanger with Square Cut Tape Insert and Classical Tape Insert

The heat transfer characteristics of the inner tube of the concentric pipe heat exchanger can be seen in Figure 6. Figure 6 portrays that the average Nusselt number (Nui) increases as the Reynolds number gets larger, both for the plain tube and for the inner tube with square cut tape inserts and classical tape inserts. The added square cut tape inserts made the Nui of the inner tube larger than the Nui of the plain tube, which proves that square cut tape inserts can increase the convective heat transfer rate of the tube. The added square cut tape inserts increased the turbulence intensity of the fluid flow passing along the tube wall, generating very good fluid mixing, which results in increased heat transfer (Murugesan, 2010).
The greater the twist ratio, the lower the Nui of the pipe. This is because a greater twist ratio decreases the turbulence intensity of the flow in the inner tube and shortens the fluid residence time, which causes Nui to decrease. In Figure 6, the test results show that for 5,400 < Re < 17,500 the average Nusselt number (Nui) of the inner tube with square cut tape inserts at twist ratio variations of 2.7; 4.5; and 6.5 increased by 79%; 58.2%; and 43.5%, respectively, compared to the plain tube.
c. Twist Ratio Effect on the Pressure Drop Characteristics of the Heat Exchanger with Square Cut Tape Insert and Classical Tape Insert
A comparison of the pressure drop characteristics between the plain tube and the tubes with square cut twisted tape and classical twisted tape inserts was made. Figure 7 shows the correlation of pressure drop with Re for each variation. For 5,400 < Re < 17,500, the smaller the twist ratio, the greater the pressure drop in the inner tube. This originates from the fact that a smaller twist ratio gives a larger contact surface area and greater flow obstruction, causing a higher pressure loss. The average pressure drops in the inner tube with square cut tape inserts at twist ratios of 2.7; 4.5; and 6.5 were 2.41; 1.96; and 1.44 times greater, respectively, than the pressure drop across the plain tube.
d. Twist Ratio Effect on the Friction Factor Characteristic of the Heat Exchanger with Square Cut Tape Insert and Classical Tape Insert
The friction factor characteristics of the inner tube of the concentric pipe heat exchanger with square cut tape inserts at twist ratio variations of 2.7; 4.5; and 6.5 can be seen in Figure 8. The larger the Reynolds number, the smaller the friction factor, both for the plain tube and for the square cut tape insert. This is because at higher Reynolds numbers the water flow velocity increases, and the friction factor is inversely proportional to the square of the flow velocity. The friction factor of the inner tube with a square cut tape insert is greater than that of the plain tube. The correlation of friction factor with Re for the STT and CTT inserts is shown in Figure 8, which displays that the smaller the twist ratio of the square cut tape insert, the greater the friction factor. This may be caused by the small twist ratio geometry and larger surface area reducing the free flow of the fluid, making the friction between the insert and the pipe wall larger (Suresh, 2012). For 5,300 < Re < 17,500, the friction factors of the inner pipe with square cut tape inserts at twist ratio variations of 2.7; 4.5; and 6.5 were 2.58; 2.22; and 1.73 times greater, respectively, than the friction factor of the plain tube.
e. Twist Ratio Effect on the Heat Transfer Enhancement Ratio of the Heat Exchanger with Square Cut Tape Insert and Classical Tape Insert

Adding square cut tape inserts with different twist ratios affects the heat transfer enhancement ratio of the heat exchanger at constant pumping power. Figure 10 shows the correlation of the enhancement ratio with the Reynolds number. The enhancement ratio increases as the twist ratio gets smaller, owing to the larger fluid flow turbulence generated at smaller twist ratios; this indicates that, for a given operating condition, the square cut tape insert with a smaller twist ratio performs better. Figure 10 also exhibits that the addition of square cut tape inserts to the inner tube significantly improved the heat transfer rate.
The heat transfer enhancement ratios of the heat exchanger with square cut tape inserts at twist ratios of 2.7; 4.5; and 6.5 were 1.3; 1.2; and 1.1, respectively. This means that at the same pumping power, the average convection heat transfer coefficient of the tube with a square cut tape insert was greater than that of the plain tube, consistent with previous research (Murugesan, 2009).
CONCLUSION
Based on the test results, analysis, and discussion in the previous sections of the heat transfer and friction factor characteristics of the concentric pipe heat exchanger with square cut twisted tape inserts at twist ratio variations of 2.7, 4.5, and 6.5, the following can be concluded:
1. For 5,500 < Re < 17,500, the square cut tape inserts gave Nui values 79%; 58.2%; and 43.5% higher than the plain tube, friction factors 2.58; 2.22; and 1.73 times the plain tube value, and heat transfer enhancement ratios of 1.3; 1.2; and 1.1, respectively.
2. The heat transfer characteristics, the friction factor, and the heat transfer enhancement ratio of the twisted tape inserts increased as the twist ratio decreased, compared to the plain tube without inserts.
3. Over the range 5,500 < Re < 17,500, the square cut tape insert had higher heat transfer characteristics, friction factor, and heat transfer enhancement ratio than the classical twisted tape.
"year": 2017,
"sha1": "882d6eadef6faa8922b3152a466513129bc1f551",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.20961/mekanika.v16i2.35055",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "536ccf5e263a05303c9ab39316015e04e6f47ef5",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
136521790 | pes2o/s2orc | v3-fos-license | Laser photothermolysis of single blood vessels in the chick chorioallantoic membrane (CAM)
Individual blood vessels in the chick chorioallantoic membrane (CAM) were selectively coagulated through photothermolysis, using pulsed laser irradiation at 585 nm. Pulse durations were chosen to be 0.45 ms and 10 ms, which correspond to the thermal relaxation times in blood vessels of 30 micrometers and 150 micrometers diameter, respectively. The dose vs diameter (D vs d) relationship for coagulation was calculated for the two pulse shapes. The energy deposited in a cylindrical absorber of diameter d by an optical field, incident perpendicular to the vessel, was expressed analytically and compared with the energy required to coagulate a blood vessel of the same lumen diameter. When thermal diffusion is incorporated into the model, our findings can be accounted for quantitatively. This information will be of use for improving the laser treatment of port wine stains and other vasculopathies.
1. INTRODUCTION
The objective of this study is to gain an understanding of the biophysical principles underlying permanent coagulation of blood vessels, through conversion of selectively absorbed radiant energy into thermal energy. Such understanding is of importance in dermatological laser treatments, such as port wine stains (PWS) [1-4], telangiectasias [5,6], or hemangiomas [7], and may also be of benefit in the treatment of choroidal neovascularization [8,9].
The chick chorioallantoic membrane (CAM) is an established in vivo model for studying vascular effects [10]. The CAM vasculature is located in a transparent matrix [11], which allows direct visualization of blood flow as well as real-time observation of photothermal effects on blood vessels, such as vessel dilation, constriction, hemostasis and rupture. The CAM matrix does not significantly absorb or scatter radiation. Thus, the influence of pertinent laser parameters (wavelength, pulse duration and energy density) can be studied conveniently. Moreover, the CAM is a self-contained system which lends itself to mathematical modeling of optical and thermal effects [12]. Preliminary considerations are presented here for the choice of laser parameters, which are fully described in the Materials and Methods section.
1.1. Wavelength (λ)
In the transparent CAM, light anywhere within the visible spectrum may be used to irradiate the blood vessels, without encountering the absorption in overlying epidermis which normally accompanies irradiation of dermal tissues [13,14]. The absorption of blood in the yellow-red spectral region around 585 nm was chosen for irradiation, in accordance with general considerations associated with increased light penetration in tissue at longer wavelengths [1,14-16]. A singular feature of the CAM is the possibility to study coagulation patterns in individual arterioles and venules. We seek, therefore, to equalize photothermal effects due to light absorption by the two endogenous chromophores, oxyhemoglobin (HbO2) and hemoglobin (Hb). Equal absorption is achieved at the isosbestic point (λ = 585 nm) in the absorption spectra of these two target chromophores. Because of deeper tissue penetration, this wavelength is of clinical interest, even though the absorption coefficient at 585 nm is 50% lower than the absorption peak of HbO2 at λ = 577 nm [15,16].
1.2. Pulse duration (tp)
The pulse duration governs the spatial confinement of the thermal energy within the targeted vessel [1,17]. Ideally, the pulse duration (tp) should be compatible with the diameter (d) of the vessel and be about equal to the thermal relaxation time (td) for that dimension (td = d²/16χ, where χ is the thermal diffusivity). This is defined as the time required for the instantaneous temperature, generated inside the target after exposure to the laser pulse, to decrease by 50% (see Discussion). Taking χ = 1.4×10⁻⁷ m²s⁻¹, typical values for td are 0.2 ms for d = 20 µm and 4.5 ms for d = 100 µm. If tp >> td, heat diffuses outside the vessel during the laser exposure, reducing the target specificity, and can cause additional thermal damage to surrounding tissue. A very short pulse, tp << td, will generate a high peak intravascular temperature rise, leading to localized explosive vaporization of tissue water, or to photoacoustic transients which will result in vessel rupture [18]. In such cases, repair mechanisms may revascularize the tissue.
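The quoted relaxation times can be checked directly. This short sketch (ours, not from the paper) evaluates td = d²/16χ:

```python
chi = 1.4e-7  # thermal diffusivity, m^2/s, as quoted above

for d_um in (20, 100):
    d = d_um * 1e-6                 # vessel diameter in metres
    td = d**2 / (16 * chi)          # thermal relaxation time
    print(f"d = {d_um:3d} um -> td = {td * 1e3:.2f} ms")
# ~0.18 ms for 20 um and ~4.46 ms for 100 um, matching the values in the text.
```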
2. MATERIALS AND METHODS

2.1. Laser parameters
A continuous wave (CW) Ar-ion pumped dye laser (model 920, Coherent, Palo Alto, CA) was used for single-vessel irradiation. The laser delivered a maximum power of 1.4 W at 585 nm, as measured with a power meter (model 210, Coherent). The laser beam was transmitted through an 80 µm core-diameter multimode fiber terminated with an adjustable focusing microlens positioned in a hand piece. The diameter of the beam at the focus was 500 µm, giving an energy density of about 700·tp (J/cm²). The CW laser was pulsed with a foot pedal controlled mechanical shutter; the pulse duration was preselected at the shortest attainable setting of tp = 10 ms. The resulting energy density D = 7 J/cm² was marginally sufficient to cause injury, and vessel damage was observed to occur only in the beam center (spot size 200 µm).
A flash lamp-pumped dye laser (model SPTL-1, Candela, Weyland, MA) was used for multiple-vessel irradiation. The laser was tuned to λ = 585 nm and delivered pulses of duration tp = 0.45 ms. The beam was coupled into a 1 mm core-diameter multimode fiber terminated with a microlens that focused the laser output to a 5 mm diameter circular spot of uniform light intensity. The optical energy exiting the fiber was varied between 0.7 and 1.3 J, as measured with a calibrated energy meter (Ophir, model bA-?), giving energy densities on the CAM membrane ranging between D = 3 and 6 J/cm².
2.2. Preparation of intact CAMs
The protocol for CAM preparation was a modification of a previously described technique [10]. Fertilized eggs (Hy-line W36 white leghorn) were washed with 70% alcohol, incubated at 37 °C in 60% humidity, and rolled over hourly. On day 3-4 of embryonic development, a hole was drilled in the apex and 2-3 ml albumin was aspirated from each egg to create a false air sac. On the following day, part of the CAM was exposed by opening a round window of 20 mm diameter in the shell, which was covered with a Petri dish. The eggs were placed in a stationary incubator until the CAM was fully developed and ready for experimentation. On day 10-12, sterile teflon O-rings (6.2 mm inner diameter, 9 mm outer diameter and 1.4 mm annular width) were placed on the surface of the CAM, each demarcating a location where individual blood vessels and capillaries were clearly visible and to which the laser beam was directed. A drop of normal saline was added within the ring area to reduce spurious light reflection and to prevent desiccation of the CAM during the experiment [12]. Outside of the incubator, eggs were kept at 37 °C in a heating block filled with glass beads. At the time of irradiation, the CAM was illuminated with a cold white-light fiber optic source (Volpi, Intralux, model 100 HL) and placed under a stereomicroscope (Olympus, model SZH), equipped with a video camera (Panasonic, model AC-2510), giving a total magnification of 10× on a color monitor (Sony, model KV-1393R).
2.3. Vessel Selection
It was convenient to subdivide the extensive microvascular network of the CAM according to the following branching pattern [19]. The capillaries served as a reference and were designated vessels of "order-0". The smallest precapillary vessels (arterioles, a) as well as the smallest postcapillary vessels (venules, v) were assigned "order-1".
The convergence of two order-1 vessels was assigned as an "order-2" vessel, and similarly two order-2 vessels formed an "order-3" vessel. Table I presents, for each order, the mean number N of blood vessels per cm², the mean vessel length l (µm) and diameter d (µm) in a mature CAM at day 10 [19]. The CAM area viewed at a magnification of 10× on the monitor during laser irradiation had a diameter of 3 mm and typically comprised 1-2 vessels of order-3, six of order-2 and approximately fifteen of order-1; these were about equally divided between precapillaries (a) and postcapillaries (v). The mean number of vessels in the capillary bed (order-0) in the field of view, at the magnification used, was estimated to be one hundred.
2.4. Irradiation Procedures
The long-pulse laser was used for precise microspot irradiation of individual target vessels of a given type (a or v) and order (1, 2 or 3). Focusing adjustment and diameters of specific vessels to be irradiated were ascertained in situ by videotaping the field of view with the aiming beam in place and comparing it with the 1.4 mm annular width of the teflon ring. Laser exposures were performed under standard conditions: tp = 10 ms, spot size 200 µm, and D = 7 J/cm². Each vessel was exposed 3 times at the same site, keeping the time interval between sequential exposures at 30 s, so that the subsequent irradiation interacted with a vessel that had cooled down to ambient temperature and in which the exposed blood had been replaced. Repeated exposures caused cumulative thermal damage to the vessel wall, eventually leading to occlusion or, inadvertently, to hemorrhage (at which point the exposures were stopped).
The short-pulse laser was used to irradiate a field of vessels located inside a teflon ring on the CAM. The energy densities ranged from a sub-threshold-damage dose D = 3 J/cm² to D = 6 J/cm², in 0.5 J/cm² increments. Each field was irradiated with a series of three laser pulses at 30 s intervals, unless hemorrhage occurred in an order-1 or higher-order vessel following the first or second exposure, at which point irradiation was stopped. When irradiating additional fields in the same CAM, care was taken to assure that the arterial and venous trees were not compromised by the previous exposures in an adjacent field.
After laser irradiation, the eggs were covered and returned to a stationary incubator. Selected specimens (among those that had not undergone massive hemorrhage) were inspected 24 h later for re-perfusion of the vessels.
2.5. Damage assessment and statistical analysis
The laser-induced vascular damage was graded on a scale from no damage (grade 0) through mild, moderate, and severe damage. Stepwise logistic regression analysis [20] was used to assess the statistical significance of vessel type (arteriole vs venule), vessel order (1 vs 2 and 3; 1 and 2 vs 3) and energy level (for the short-pulse laser only). The two dependent variables analyzed were the occurrence of any vessel damage (grade > 0) and of moderate or severe damage (grade > 1) after a given number of exposures. When occlusion or hemorrhage occurred after 1 or 2 pulses, irradiation was stopped. For the short-pulse irradiation, this resulted in some vessels in the same exposure field not being graded at all three exposures.
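The original analysis predates modern statistical software; purely as an illustration, a comparable (non-stepwise) logistic model could be fitted today as sketched below. The data are simulated and the effect sizes are invented for demonstration only; they mimic the short-pulse finding that arterioles, lower-order vessels, and higher doses carry more damage risk.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "arteriole": rng.integers(0, 2, n),           # 1 = arteriole, 0 = venule
    "order": rng.integers(1, 4, n),               # vessel order 1-3
    "dose": rng.choice([3.0, 4.0, 5.0, 6.0], n),  # energy density, J/cm^2
})
# Hypothetical true effects used to simulate outcomes
logit_p = -4.0 + 1.0 * df.arteriole - 0.8 * (df.order - 1) + 0.9 * df.dose
df["any_damage"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("any_damage ~ arteriole + C(order) + dose", data=df).fit(disp=0)
print(fit.params.round(2))
```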
3. RESULTS

Long-pulse laser

Order-1 vessels had significantly less damage compared to higher order vessels (p = .0001). Forty-three percent of order-1 vessels were damaged after a single pulse, compared to 75% for vessels of order-2 or 3; this differential was preserved after additional exposures (Figure 1). Arterioles were significantly more sensitive to moderate or severe damage than venules (p = .001). Fifteen out of 89 (17%) arterioles sustained moderate or severe damage after three exposures. By contrast, only 3/102 (3%) of the venules sustained comparable damage.
Short-pulse laser
Order-1 vessels were most sensitive to damage, while vessels of order-3 were most resistant. Arterioles were more sensitive than venules, and higher energy exposures resulted in more damage. Upon multiple exposures, injury occurred first in the capillary system, next in arterioles a1, and then in vessels a2 and v1. All p-values were less than .0001. Figures 2a and 2b show the percentage of arterioles and venules with any (grade > 0) damage and with moderate or severe (grade > 1) damage, respectively, after 1, 2 and 3 exposures. Notably, arterioles of order-3 have a similar damage profile as venules of order-2; nearly all arterioles of order-1 are damaged after a single exposure. Figures 2c and 2d show the effect of energy density on vessel (arteriole and venule) damage.
All damage to capillaries (less than order-1) was graded as either moderate or severe. Of 159 capillaries, 49 (31%) were damaged after one pulse, 6 additional capillaries (4%) were damaged on the second exposure and 2 (1%) were damaged on the third exposure; 102 (64%) were not damaged. Laser energy density was not statistically correlated with capillary damage.
4. DISCUSSION

Absorption by a cylindrical vessel in a uniform optical field
Consider a cylindrically shaped blood vessel, with outer diameter d and inner (lumen) diameter dl, lying along the y-direction and exposed over a length l to a pulsed, collimated light beam propagating in the z-direction, as illustrated in Fig. 3. For convenience, it is assumed that the diameter of the beam (and thus l) is larger than d.
Denaturation of the vessel wall occurs through heat conducted from the erythrocytes which have absorbed incoming light. To a first approximation, the energy q required for thermally induced coagulation of blood in a vessel of unit length is given in equation (1):

q/l = c ρ π (dl/2)² (Tf − Ti) (1)
Here ρ is the mass density (g cm⁻³) and c is the specific heat of blood (4.2 J g⁻¹ K⁻¹), taken to be equal to the corresponding values for water; Ti and Tf denote, respectively, the temperature before (35 °C) and immediately after the laser pulse. The critical temperature at the vessel wall required for coagulation of blood vessels is expected to be higher than that causing transient hemostasis (Tf = 70 °C) [21] and has to be maintained over relatively long times (> 0.01 s) at hyperthermic conditions [13,22]. In the sample calculations of q presented in Table II we have assumed Tf = 90 °C. When considering long-pulse microspot irradiation, values of q in Table II should be corrected for the cooling effect of blood flow in the axial direction during the time tp. Measured flow velocities (v) in the CAM vessels ranged from v = 0.7 mm/s when d = 40 µm to v = 1.4 mm/s when d = 100 µm [12]. For example, during tp = 10 ms, the blood flow in a vessel of l = 200 µm and d = 100 µm replaces about tp·v/l = 7% of the total volume in the exposed part of the vessel, and q is correspondingly larger (neglected in Table II). Perfusion is even less important with short-pulse exposure since tp is 20 times shorter and l is generally much larger than in the long-pulse microspot exposure. The cooling effect due to thermal diffusion at the vessel periphery will cause a non-uniform spatial distribution of temperature within the vessel, with a peak temperature in the center of the vessel (displaced somewhat toward the upper part facing the light beam) and lower temperatures at the periphery of the vessel. When we neglect light scattering and also reflections at the air/CAM and the CAM/vessel interfaces, the energy per pulse, QA, deposited in a blood vessel due to absorption from a uniform optical field can be expressed analytically. In a cylindrical vessel of lumen diameter dl and length l the net absorbed energy is

QA = D l dl (π/2) [I1(α dl) − L1(α dl)] (2)
Here D (J/cm²) is the incident energy density; l·dl is the target area intercepting the light beam; α = α(λ) is the absorption coefficient of blood; I1 and L1 are, respectively, the first-order modified Bessel and modified Struve functions. We define the fraction of incident optical energy absorbed,

f = QA/(D l dl) (3)

and plot f versus α·dl (see Fig. 4). The result is completely general. It gives the fraction of energy absorbed by a cylinder containing a homogeneous absorber, e.g. blood, for which α(585 nm) = 170 cm⁻¹ or α(577 nm) = 430 cm⁻¹ [16], or blood with added chromophores such as fluorescein or indocyanine green. Blood has a rather large absorbance at λ = 585 nm, and for d > 100 µm (α·d > 1.7) it can be seen (Figure 4) that more than 70% of the light incident on the upper surface of the vessel is absorbed. In Table II, QA denotes the energy absorbed by a blood vessel when irradiated with D = 1 J/cm² at λ = 585 nm. The values show that for a blood vessel with a lumen diameter dl = 20 µm, the coagulation energy q is equal to the energy intercepted by that vessel from an optical field having D ≈ 1.4 J/cm². When dl = 120 µm, an incident energy density D ≈ 2.9 J/cm² is predicted to effect coagulation.
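Equations (1)-(3) are easy to evaluate numerically. The sketch below (ours, not the authors') uses SciPy's modified Bessel and modified Struve functions and reproduces the quoted numbers to within rounding:

```python
import numpy as np
from scipy.special import iv, modstruve

def absorbed_fraction(alpha, d):
    """Eq. (3): f = (pi/2) [I1(alpha d) - L1(alpha d)]."""
    z = alpha * d
    return 0.5 * np.pi * (iv(1, z) - modstruve(1, z))

alpha = 170.0  # cm^-1, blood at 585 nm [16]
for d_um in (20, 100, 120):
    d_cm = d_um * 1e-4
    print(f"d = {d_um:3d} um: f = {absorbed_fraction(alpha, d_cm):.2f}")
# f exceeds 0.7 for alpha*d >= 1.7 (d >= 100 um), as stated above.

# Threshold dose: equate q/l (Eq. 1) with the absorbed energy per unit length.
rho_c, dT = 4.2, 55.0          # J cm^-3 K^-1 and Tf - Ti = 90 - 35 C
d_cm = 20e-4
q_per_l = rho_c * np.pi * (d_cm / 2) ** 2 * dT
D_min = q_per_l / (d_cm * absorbed_fraction(alpha, d_cm))
print(f"D_min ~ {D_min:.1f} J/cm^2")   # ~1.6, close to the ~1.4 of Table II
```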
Effects of thermal diffusion
The effect of thermal diffusion out of the heated vessel into the surrounding tissue will now be considered. It is particularly relevant for small diameter vessels and/or long irradiation times, i.e. tp >> td = d²/16χ. We make the ansatz that thermal energy diffuses out of a vessel in an exponential fashion, so that for t > t',

dQ(t,t') = dQA(t') exp[−(t − t')/td] (4)

Here dQA(t') denotes the incremental amount of optical energy absorbed in the exposed lumen during dt' at a time t'; dQ(t,t') denotes the corresponding thermal energy after the time interval (t − t'). The thermal energy remaining in the vessel at time t is found by integrating Eq. (4); for a constant absorption rate QA/tp during the pulse this gives

Q(tp) = QA (td/tp) [1 − exp(−tp/td)] (5a)

and the corresponding temperature rise at the end of the pulse,

ΔT = Q(tp)/(c V) (5b)

where c = 4.18 and 3.50 J cm⁻³ K⁻¹ are, respectively, the specific heats of the lumen and vessel wall cellular materials, V is the heated volume, and the mass density is taken to be ρ = 1 g cm⁻³.
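Combining Eqs. (2), (5a) and (5b) gives ΔT as a function of vessel diameter for each laser. The sketch below is our reconstruction, assuming uniform energy deposition during the pulse and the lumen specific heat throughout; it reproduces the qualitative behaviour described for Figure 5.

```python
import numpy as np
from scipy.special import iv, modstruve

chi, alpha, rho_c = 1.4e-3, 170.0, 4.2   # cm^2/s, cm^-1, J cm^-3 K^-1

def delta_T(d_cm, D, tp):
    td = d_cm**2 / (16 * chi)                               # relaxation time, s
    f = 0.5 * np.pi * (iv(1, alpha * d_cm) - modstruve(1, alpha * d_cm))
    QA_per_l = D * d_cm * f                                 # Eq. (2), per unit length
    Q = QA_per_l * (td / tp) * (1 - np.exp(-tp / td))       # Eq. (5a)
    V_per_l = np.pi * (d_cm / 2) ** 2                       # lumen volume per unit length
    return Q / (rho_c * V_per_l)                            # Eq. (5b)

print(" d(um)  long-pulse dT(K)  short-pulse dT(K)")
for d_um in (20, 50, 100, 130):
    d = d_um * 1e-4
    print(f"{d_um:5d} {delta_T(d, 7.0, 0.010):16.0f} {delta_T(d, 3.0, 0.45e-3):18.0f}")
# Long pulse: dT rises monotonically with d (~4, 23, 63, 72 K).
# Short pulse: dT peaks near d ~ 50 um (~38, 72, 65, 54 K), crossing the 55 K
# damage threshold only for the smaller vessels.
```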
In Figure 5 we have plotted the temperature rise ΔT given by Eq. (5b) for the long-pulse laser (tp = 0.01 s and D = 7 J/cm²) and for the short-pulse laser (tp = 0.45 ms and D = 3 J/cm²). For the two curves in Figure 5 a striking feature is the different dependence of ΔT on the vessel diameter d. The long-pulse exposure causes a monotonic temperature rise with d over the given range d < 130 µm. At larger d the temperature rise will reach a maximum and, eventually, decrease as 1/d. In contrast, the temperature rise due to the short-pulse exposure reaches its maximum at a smaller diameter. Consequently, for a critical temperature Tf = 90 °C (ΔT = 55 °C), Figure 5 indicates that the short-pulse exposure affects predominantly small diameter vessels (of order-1 and -2) whereas the long-pulse exposure will damage the larger diameter vessels. For a short laser pulse (tp << td), Q ≈ QA (in Eq. 5a). The absorbed optical energy gives rise to a non-uniform temperature distribution in the lumen,

ΔT(x,z) = (1/ρc) dQA/dV = (D α/ρc) exp[−α z(x)] (6)

where dQA/dV is the absorbed energy density (J cm⁻³) and the other parameters are as defined in Eqs. 1, 2. Upon substituting z(x), the path length traveled inside the vessel, as defined in equation (7) (see Fig. 3),

z(x) = z − d/2 + √[(d/2)² − x²] (7)

into Eq. 6, the spatial distribution of the temperature rise in the lumen is obtained. In Fig. 6, the cross sections of vessels of order-1, 2 and 3 with (see Table 1) d = 50, 80, and 110 µm, respectively, are diagrammed. Inside each diagram are plotted the isotherm curves (ΔT = 55 °C) for D = 3 and 5 J/cm². The curves define the transition zones between damaged (Tf > 90 °C) and non-damaged (Tf < 90 °C) lumen volume. It is evident that smaller caliber vessels are more easily coagulated than larger ones. According to the hemodynamic criterion, total occlusion occurs when at least 61% of the lumen has coagulated, i.e. the critical temperature Tf = Ti + 55 K = 90 °C reaches halfway through the vessel diameter [16].
Also, we note that at the lowest energy density (D = 3 J/cm²) only the small diameter vessels show thrombosis (> 61% of the lumen coagulated), whereas at D = 5 J/cm² the small and medium vessels will have coagulated completely while the larger vessels of order-3 will show partial occlusion. This behavior is borne out by the results for the short-pulse laser presented in Fig. 2.
Long-pulse photothermolysis
For the long-pulse exposures the results given above must be modified to include the full effects of thermal diffusion. This situation has been modeled mathematically [16] but is beyond the scope of this discussion. Our main aim has been to present analytical results which provide direct answers for single vessel exposure.
4.5. Arterial and venous response
One of the more salient phenomena observed in the present study was the higher vulnerability to thermal injury of arterioles as compared to venules. This occurred for the three vessel calibers considered and for both short-pulse and long-pulse exposures. In the CAM, arterial (oxygen poor) and venous (oxygen rich) blood possess equal light absorbance at 585 nm; thus, vessels of the same lumen diameter are expected to undergo similar thermal stress. In a study of prolonged heating of blood vessels it was reported that arteries showed less vasoconstriction (and faster post-heating recovery) than veins [22]. This is at variance with our observations. A possible explanation of our findings might be based on considerations of vascular anatomy. The arteriolar walls consist of three concentric layers: an endothelial tube, an intermediate layer of smooth muscle cells, and an outer coat of fibrous elements. The thickness of the arteriolar wall varies with vessel caliber and function; the walls of venules are always thinner than those of arterioles of equal caliber. When this difference is significant, a comparison of thermal damage may be made between arterioles of "order-n" and venules of "order n-1". However, in the CAM the difference in vascular cross section is generally small. If we take as typical wall thicknesses 0.06d and 0.035d for arterioles and venules, respectively, the ratio of the lumen volumes of arterioles and venules for two vessels with the same outer diameter will be (0.88/0.93)² = 0.9. This ratio is too close to unity to explain our observations. Moreover, contrary to the observations, the long-pulse laser would then be expected (see Fig. 5) to damage preferentially the venules, while the short-pulse laser would preferentially damage the arterioles.
Another point of difference is the platelet aggregation initiated by the chain of biochemical reactions triggered by thermal trauma, which differs between arterioles and venules. This seems to be consistent with reports of PDT-induced vasoconstriction, where it was shown that 90% of the arterioles were affected by photochemical injury versus 70% of the venules. However, on the time scale of photothermolysis (tp ≤ 10 ms) no platelet aggregation is expected to occur in real time.
Finally, it should be noted that blood clots can be transported downstream in venules, but not in arterioles since they get blocked in the capillaries. This difference in permanent clotting is a component in our interpretation for the lower threshold for arteriolar photothermolysis.
In conclusion, this study described the first controlled experiment of photothermolysis in single blood vessels in vivo. The observations of damage threshold vs vessel diameter were quantified. They were interpreted using a theoretical model which has wide applicability to a number of vasculopathies.

Table II. Thermal relaxation times td in blood vessels with lumen diameter dl. The coagulation energy q for a vessel of unit length (1 cm) is compared with the optical energy QA absorbed when that vessel is exposed to pulsed light (tp << td) at λ = 585 nm at an energy density D = 1 J/cm².
"year": 1994,
"sha1": "d6326bc74a43002a255ff34125185bf35db30962",
"oa_license": "CCBY",
"oa_url": "https://cloudfront.escholarship.org/dist/prd/content/qt70v7x7z7/qt70v7x7z7.pdf?t=pmf2bc",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "22ac5b604035a49d1270ea73a1fbc0b343c61261",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Materials Science",
"Engineering"
]
} |
Reliability and validity of multicentre surveillance of surgical site infections after colorectal surgery
Surveillance is the cornerstone of surgical site infection prevention programs. The validity of the data collection and awareness of vulnerability to inter-rater variation are crucial for correct interpretation and use of surveillance data. The aim of this study was to investigate the reliability and validity of surgical site infection (SSI) surveillance after colorectal surgery in the Netherlands. In this multicentre prospective observational study, seven Dutch hospitals performed SSI surveillance after colorectal surgeries performed in 2018 and/or 2019. When executing the surveillance, a local case assessment was performed to calculate the overall percentage agreement between raters within hospitals. Additionally, two case-vignette assessments were performed to estimate intra-rater and inter-rater reliability by calculating a weighted Cohen's Kappa and Fleiss' Kappa coefficient. To estimate the validity, answers of the two case-vignette questionnaires were compared with the answers of an external medical panel. A total of 1111 colorectal surgeries were included in this study, with an overall SSI incidence of 8.8% (n = 98). From the local case assessment it was estimated that the overall percent agreement between raters within a hospital was good (mean 95%, range 90-100%). The Cohen's Kappa estimated for the intra-rater reliability of case-vignette review varied from 0.73 to 1.00, indicating substantial to perfect agreement. The inter-rater reliability within hospitals showed more variation, with Kappa estimates ranging between 0.61 and 0.94. In total, 87.9% of the answers given by the raters were in accordance with the medical panel. This study showed that raters were consistent in their SSI-ascertainment (good reliability), but improvements can be made regarding the accuracy (moderate validity). Accuracy of surveillance may be improved by providing regular training, adapting definitions to reduce subjectivity, and by supporting surveillance through automation.
Introduction
Surgical site infections (SSI) are one of the most common healthcare-associated infections (HAI) [1], and are associated with substantial morbidity and mortality, increased length of hospital stay and costs [2][3][4][5][6]. The highest SSI incidences are reported after colorectal surgeries, possibly due to the risk of (intra-operative) bacterial contamination and post-operative complications [7][8][9]. Worldwide, incidence rates range from 5 to 30% and are affected by several risk factors, including the type of surgery, age, sex, underlying health status, diabetes mellitus, blood transfusion, ostomy creation, prophylactic antibiotic use [10][11][12] and by the definition used to identify SSIs [4,13]. Surveillance is an important component of prevention initiatives and most surveillance programs include colorectal surgeries [14]. Large variabilities in SSI rates between centres remain, even after correction for factors that increase the risk of SSIs. Previous studies reported significant variability in surveillance methodology and in inter-rater agreement, introducing uncertainty regarding whether observed differences in colorectal SSI rates reflect real differences in hospital performance [15][16][17][18][19][20][21].
For the purpose of comparing SSI rates between hospitals, accurate adherence to standardized surveillance protocols is required. Furthermore, case definitions should be unambiguous to avoid subjective interpretation. To reduce subjectivity the Dutch national surveillance network (PREZIES) has modified the case-definition on two criteria as compared to the definitions set out by the (European) Center of Disease Control and Prevention ((E)CDC) [22][23][24][25]. First, the diagnosis of an SSI made by a surgeon or attending physician only is not incorporated in the Dutch definitions. Second, in case of anastomotic leakage or bowel perforation, a deep or organ-space SSI can only be scored by purulent drainage from the deep incision, or when there is an abscess or other evidence of infection involving the deep soft tissues found on direct examination. A positive culture obtained from the (deep) tissue is not applicable in case of anastomotic leakage. Moreover, to increase standardization, the Dutch surveillance only includes primary resections of the large bowel and rectum, in contrast to the (E)CDC, who also allows biopsy procedures, incisions, colostomies or secondary resections.
Awareness of the correctness of applying the definition and vulnerability to inter-rater variation is crucial for correct interpretation and use of surveillance data. The aim of this study was to investigate the reliability and validity of SSI surveillance after colorectal surgery using the Dutch (PREZIES) SSI definitions and protocol. Secondary aims were to report the accuracy of determining anastomotic leakage and to provide insights in the SSI incidence and epidemiology in the Netherlands.
Study design
In this multicentre prospective observational study, seven Dutch hospitals (academic (tertiary referral university hospital) n = 2; teaching n = 3; general n = 2) collected surveillance data on the occurrence of SSI after colorectal surgeries performed in 2018 and/or 2019, according to the Dutch PREZIES surveillance protocol [23,25,26]. Three hospitals had no prior experience in performing SSI surveillance after colorectal surgery, and four hospitals had already performed this surveillance for more than five years as part of their quality programme. Participation in SSI surveillance after colorectal surgery is voluntary; hence, not all hospitals include it in their surveillance programme. While executing the surveillance, intra- and inter-rater reliability and validity were additionally determined by two case-vignette assessments and a local case assessment. Reliability refers to the consistency and reproducibility of SSI ascertainment and was determined by three agreement measures: (1) the intra-rater reliability, reflecting the agreement within one single rater over time; (2) the inter-rater reliability, i.e., the agreement between the two raters within one hospital; and (3) the overall inter-rater reliability between all 14 raters of the seven hospitals [27,28]. Validity refers to how accurately the surveillance definition is applied and was determined by the correctness of ascertainment compared with a medical panel, as described in detail below. The Medical Ethical Committee of the University Medical Centre Utrecht approved this study and waived the requirement for informed consent (reference number 19-493/C). All data were processed in accordance with the General Data Protection Regulation. Hospitals were randomly assigned the letters A-G for reporting of the results.
SSI surveillance after colorectal surgery
All hospitals included all primary colorectal resections of the large bowel and rectum performed in 2018 and/or 2019 in patients above the age of 1 year. Per hospital, two raters, mostly infection control practitioners (ICPs), retrospectively and manually reviewed the electronic medical records of all included procedures and classified them into three categories: (1) no SSI, (2) superficial SSI, or (3) deep or organ-space SSI within a follow-up period of 30 days post-surgery. SSIs were registered in each hospital's own surveillance registration system. All identified SSIs and questionable cases were validated and discussed with each facility's medical microbiologist or surgeon after completion of the assessments described below.
Case-vignette assessment
Case-vignettes were used to assess the validity and the intra-rater and inter-rater reliability. Four medical doctors developed standardised case-vignettes in Dutch, based on 20 patients selected from a previous study [29]. Each vignette described the demographics, medical history, type of surgical procedure and postoperative course. An external medical panel of seven experts in the field of colorectal surgery and surveillance classified the case-vignettes as superficial SSI, deep SSI, or no SSI according to the Dutch SSI definition, and indicated the presence or absence of anastomotic leakage. Their conclusion was considered the reference standard. Each rater who performed surveillance completed the case-vignettes individually through an online questionnaire. Three months later, the same vignettes were judged once more by the same raters, but presented in a different random order.
Local case assessment
The reliability of surveillance data also depends on the ability to find the information necessary for case-ascertainment in the medical records. As this is not measured by the case-vignettes, we additionally performed a local case assessment: within each hospital, 25 consecutive colorectal surgeries included in surveillance were scored independently by the two raters, on separate digital personal forms. After sending the completed forms to the research team, raters discussed the results and entered the final decision into their hospital's surveillance registration system.
Training
Before starting the surveillance activities, a training session was organized to ensure the quality of the data collection and to practise SSI case-ascertainment. In addition, before starting the reliability assessments, each ICP had to complete at least 20 inclusions for surveillance to ensure familiarity with the surveillance procedure. In case of any questions, the research team was available to provide assistance.
Statistical analyses
Descriptive statistics were generated to describe the surveillance period, number of inclusions and epidemiology. The number of SSIs per hospital was reported and displayed in funnel plots. The primary outcomes of this study were the reliability and validity of the surveillance. From the case-vignette assessments, the intra-rater and inter-rater reliability were analysed by calculating a weighted Cohen's Kappa coefficient (κ). The scale used to interpret the κ estimates was as follows: ≤ 0, no agreement; 0.01-0.20, slight agreement; 0.21-0.40, fair agreement; 0.41-0.60, moderate agreement; 0.61-0.80, substantial agreement; 0.81-1.00, almost perfect agreement [27]. For the inter-rater reliability within a hospital, we used the second questionnaire round of the case-vignettes to account for a possible learning curve over time. The overall inter-rater reliability among all 14 raters was estimated using a weighted Fleiss' Kappa. For all Kappas, 95%-confidence intervals were estimated using bootstrapping (1000 repetitions). Inter-rater reliability was also measured from the local case assessment, from which the overall percentage agreement was calculated per hospital. Validity was determined by comparing the answers to the two case-vignette questionnaires with the answers of the medical panel; the same comparison was performed to investigate the accuracy of determining anastomotic leakage. Analyses were performed with R version 3.6.1 (R Foundation for Statistical Computing, Vienna, Austria) [30], using the irr package [31] for inter-rater reliability and the boot package [32] for bootstrapping.
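As a minimal sketch of these agreement statistics (not the study's actual code), the following R snippet computes the overall percentage agreement, a weighted Cohen's Kappa with the irr package, and a bootstrap 95%-confidence interval with the boot package. The ratings are invented for illustration, and the use of squared weights is an assumption, as the protocol's weighting scheme is not specified here.

```r
library(irr)   # kappa2() for two raters; kappam.fleiss() for >2 raters
library(boot)  # boot() and boot.ci() for bootstrap confidence intervals

# Hypothetical ratings of 20 case-vignettes by two raters:
# 0 = no SSI, 1 = superficial SSI, 2 = deep/organ-space SSI
ratings <- data.frame(
  rater1 = c(0, 0, 1, 2, 0, 2, 1, 0, 0, 2, 1, 0, 2, 0, 1, 0, 2, 0, 0, 1),
  rater2 = c(0, 0, 1, 2, 0, 2, 0, 0, 0, 2, 1, 0, 2, 0, 1, 0, 2, 1, 0, 1)
)

# Overall percentage agreement, as used for the local case assessment
mean(ratings$rater1 == ratings$rater2) * 100

# Weighted Cohen's Kappa for the inter-rater reliability
kappa2(ratings, weight = "squared")

# Bootstrap 95%-CI over resampled vignettes (1000 repetitions)
kappa_stat <- function(d, i) kappa2(d[i, ], weight = "squared")$value
b <- boot(ratings, kappa_stat, R = 1000)
boot.ci(b, type = "perc")
```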
Reliability and validity
All 14 raters completed the two rounds of the online case-vignette questionnaire. Of these raters, two had less than one year of experience with HAI surveillance, six had 2-5 years, five had 6-15 years, and one had more than 25 years. The Cohen's Kappa for agreement within a rater (intra-rater reliability), calculated from the case-vignette assessment, varied from 0.73 to 1.00, indicating substantial to almost perfect agreement (Table 3). The inter-rater reliability within hospitals showed more variation, with the lowest estimate reported for hospital A (κ = 0.61, 95%-CI 0.23-0.83) and the highest for hospital C (κ = 0.94, 95%-CI 0.75-1.00). The overall inter-rater agreement of all 14 raters in the second round of case-vignettes was 0.72 (95%-CI 0.59-0.83). From the local case assessment it was estimated that the overall percentage agreement between raters within a hospital was good (mean 95%, range 90-100%).
Regarding the accuracy of determining SSIs, 87.9% (range 70%-95%) of the answers given by the raters were in accordance with the medical panel: three raters had SSI rates similar to those of the medical panel, five raters underestimated the number of SSIs, four had higher SSI rates because of incorrect ascertainment, and two raters overestimated SSIs in the first round and underestimated them in the second round. The presence of anastomotic leakage was accurately scored in the vignettes where it was present, but was often misclassified in cases where anastomotic leakage was absent (Table 3).
Discussion
In this study we observed good reliability of SSI surveillance after colorectal surgery in seven Dutch hospitals. Based on the case-vignette assessment, the intra-rater reliability was substantial to almost perfect (κ = 0.73-1.00) and the inter-rater agreement within hospitals was substantial, but varied between hospitals (κ = 0.61-0.94). The local case assessment showed 95% agreement within hospitals. Despite the fact that individual raters were consistent in their scoring, validity was moderate: in 12.1% (range 5%-30%) of cases the case-ascertainment was incorrect as compared with the conclusions of the medical panel. The SSI rate determined by surveillance would therefore be under- or overestimated.
To the best of our knowledge, there is only one other study assessing inter-rater reliability explicitly for SSI after colorectal surgery. Hedrick et al. [18] concluded from their results that SSIs could not reliably be assigned and reproduced: they demonstrated large variation in SSI incidence between raters with only modest inter-rater reliability (κ = 0.64). They therefore opted for alternative definitions such as the ASEPSIS score [33]. In the present study, similar estimates for inter-rater reliability were found in 2 of the 7 hospitals (κ = 0.61 in hospital A and κ = 0.65 in hospital E); for the other five hospitals we found estimates above 0.69. The higher reliability estimates found in the present study may be explained by several factors. First, the definitions and methods used in the Netherlands aim to be more objective: a previous study has shown that the surgeon's diagnosis, which is not included in the Dutch definition, leads to biased results [34,35]. Another factor that may influence reliability is the raters' years of surveillance experience and their ability to find the information needed for case-ascertainment in the electronic health records [36]. From Table 3 it seems that more experienced raters produce more consistent results; however, the design of this study did not allow us to investigate such causal relationships.
The reliability estimates of this study show that SSIs after colorectal surgery are an appropriate measure to use for surveillance: the same result can be achieved consistently, making it reproducible and suitable for monitoring trends and detecting changes in SSI rates within a hospital. However, at this moment, using SSI incidence as a quality measure for benchmarking may be hampered for three reasons. First, we found that on average 12.1% of patients in the case-vignettes were misclassified: one rater misclassified 6 out of 20 vignettes while another had only one misclassification. This will lead to unreliable comparisons of SSI rates, although in practice difficult cases may be discussed in a team, hence improving accuracy. As superficial SSIs rely on more subjective criteria, focusing on deep SSIs may improve accuracy and comparability. Additionally, we observed that anastomotic leakage was assigned too often when it was actually absent. This may lead to an underestimation, as such cases can no longer be scored by a positive culture according to the Dutch definition (as explained in the introduction). Second, Kao et al. [16] and Lawson et al. [15] investigated whether SSI surveillance after colorectal surgery has a good ability to differentiate high- and low-quality performance (i.e., the statistical reliability of SSIs). They both concluded that the measure can only be used as a hospital quality measure when an adequate number of cases has been reported, which can be challenging for some hospitals, as shown in Table 1. Third, another challenge in using SSI rates for interhospital comparisons is the lack of a sufficient method for risk adjustment. To obtain valid SSI comparisons, one has to correct for differences in the surveillance population and its risk factors; however, to date no method has been proven generalizable and appropriate [12,37]. The points raised above show that the overall SSI incidence of 8.8% in this study is difficult to compare with other reports. Overall, the SSI incidence was lower than in other studies, but in line with numbers previously reported to the Dutch national surveillance network [13,38,39]. When SSIs after colorectal surgery are used for monitoring and perhaps benchmarking, continuous training of raters is required to assure correct use and alignment of surveillance definitions and methodology. Reliability and validity of surveillance may be improved by automation, as it can help to support case-finding [40][41][42]. Furthermore, hospitals should perform a certain number of colorectal surgeries to generate representative estimates of performance. If no appropriate case-mix correction is available, comparisons should be made with caution, preferably between similar types of hospitals with comparable patient groups.
Strengths and limitations
This study was performed in multiple Dutch centres, including different types of hospitals. The 14 raters were trained according to standardized methods to minimise differences possibly caused by varying years of surveillance experience between hospitals. However, this design was not suitable for explaining which factors enhance SSI ascertainment or would improve reliability and validity estimates. Second, we aimed to produce Cohen's Kappa coefficients from the local case assessment as well; however, there was too little variation in outcomes and too few cases to allow this calculation.
Conclusion
Awareness of the validity of surveillance and vulnerability to inter-rater variation is crucial for correct interpretation and use of surveillance data. This study showed that raters were consistent in their SSI-ascertainment, but improvements can be made regarding the accuracy. Hence, SSI surveillance results for colorectal surgery are reproducible and thus suitable for monitoring trends, but not necessarily correct and therefore less adequate for benchmarking. Based on prior literature, accuracy of surveillance may be improved by providing regular training, adapting definitions to reduce subjectivity, and by supporting case-finding by automation.
"year": 2022,
"sha1": "127311f6269934c6fed409eea4fe5b54f6e96b8c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13756-022-01050-w",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "127311f6269934c6fed409eea4fe5b54f6e96b8c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Waste Peel of Durian as Solid Catalysts for Biodiesel Production
In this study, durian peel ash was used as a solid catalyst to convert rubber seed oil into biodiesel via a transesterification reaction. The catalyst was fabricated by simple burning and calcined at 600°C for 8 h. The morphology of the particles was examined by Scanning Electron Microscopy (SEM), and Energy-dispersive X-ray Analysis (EDX) was used to obtain the atomic composition and percentages of the catalyst. The reaction was carried out at 65°C for 1 h, with catalyst loadings of 1.0%, 2.5%, 5.0%, 10% and 15 wt.%. The EDX data confirmed that potassium oxide (K2O) is the dominant compound in the ash catalyst. The performance of the catalyst was evaluated in a transesterification reaction for biodiesel production. The highest biodiesel yield of 96.5% was obtained using a catalyst loading of 10 wt.%. The density obtained was 0.865 g/cm3, confirming that the produced biodiesel complies with the SNI standard.
Introduction
The widespread use of fossil fuels in power plants, transport vehicles, generators, and mining equipment has driven higher energy consumption [1]. Energy demand in the transportation sector has risen with population growth and is expected to keep increasing. This has led to rapid depletion of the world's fossil fuel reserves, and the environmental pollution generated by fossil fuels is also increasing [2][3]. We therefore need to look for alternative energy sources that are renewable, accessible, sustainable, and clean, and that can meet the challenges posed by fossil fuels; one of these is biodiesel [4][5][6][7].
Biodiesel is now used in most countries in the world. For industrial-scale manufacturing of biodiesel, triglycerides, which generally exist in vegetable oils (edible or non-edible) or animal fats, are reacted with methanol in the presence of either an alkali or an acid catalyst. Biodiesel can be produced from various natural sources including waste cooking oil, jatropha oil, palm oil, neem oil, soybean oil, rapeseed oil and other vegetable oils. The economics of biodiesel production is highly sensitive to the feedstock, which comprises a very substantial portion of the overall production cost. Hence, the use of non-edible oil sources or waste products of the edible oil industry as biodiesel feedstock that meets international standards is a recently focused area of research. In this research, rubber seed oil is used as a raw material for biodiesel production because the rubber tree is easy to cultivate [8][9][10][11][12][13][14][15][16][17].
Various catalysts have been used to accelerate the transesterification of vegetable oils into biodiesel. The catalyst can significantly affect the reaction rate and may also favour the formation of the product. Three types of catalysts are often used, namely homogeneous catalysts, heterogeneous catalysts and biocatalysts. The use of heterogeneous catalysts is preferred over homogeneous catalysts: homogeneous catalysts are limited by the separation process, which increases the total cost of biodiesel production, whereas heterogeneous catalysts offer advantages such as easy separation, recyclability, and high glycerol purity without corrosion in the reactor. Several heterogeneous catalysts, such as SrO, CaO, TiO2-based catalysts and K2O, have been studied for the conversion of vegetable oils into biodiesel and show relatively high activity [18][19][20][21][22].
Solid catalysts have generally been explored from non-renewable sources. Durian waste, a renewable source, can be used as a solid catalyst because it contains K2O. When durian peel is burned at a moderate temperature, it produces an ash residue whose inorganic fraction contains K2O as the dominant component. The present work aimed at exploring the possibility of using durian peel as a source of K2O for a biodiesel production catalyst. The ash catalyst was characterized by SEM and EDX, and the effect of the catalyst-to-oil loading ratio was also evaluated.
Material
Durian peel was obtained from waste durian in Banda Aceh City, Indonesia. Methanol was provided by Merck (Germany), and rubber seed oil was purchased from the market; the rubber seeds were obtained from a rubber farm in East Aceh, Indonesia, and the oil was extracted by pressing the seeds with a mechanical press.
Preparation of Catalyst
Waste durian peel was dried in an open area. After drying, the peel was cut and crushed using a mechanical crusher, placed on a stainless steel plate, and burned until ash formed. The durian peel ash was calcined at 600 °C for 8 hours and then cooled to room temperature.
Catalyst Characterization
The catalysts were analysed to determine their morphology and chemical composition: the morphology by scanning electron microscopy (SEM) and the elemental composition by energy-dispersive X-ray analysis (EDX).
Transesterification Reaction
Biodiesel was produced via the transesterification of rubber seed oil with methanol over the ash catalyst. The reaction was conducted in a three-neck round-bottom flask batch reactor equipped with a heater, a magnetic stirrer and a water-cooled condenser [23]. The rubber seed oil and methanol were put into the three-neck flask, and the catalyst loading was varied at 1.0%, 2.5%, 5.0%, 10%, and 15 wt.% relative to the rubber seed oil. The methanol-to-oil molar ratio was 8:1, and the reaction was kept at 65 °C for 1 h. The generated biodiesel was separated from the catalyst residue and glycerol using a separating funnel and filter paper, and then dry-washed in an oven at 80 °C for 12 h. The yield of biodiesel was determined using Eq. (1).
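The equation itself did not survive extraction; the yields reported below are consistent with the conventional mass-based definition of transesterification yield, given here as an assumption in place of the missing Eq. (1):

\[
\text{Yield}\ (\%) = \frac{m_{\text{biodiesel}}}{m_{\text{oil}}} \times 100 \tag{1}
\]

On this definition, recovering 9.65 g of methyl esters from 10 g of rubber seed oil would correspond to the 96.5% yield reported for the 10 wt.% catalyst loading.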
Scanning Electron Microscopy (SEM) and Energy-dispersive X-ray (EDX) Analysis
The morphology of the durian peel ash catalyst was determined by SEM analysis, as shown in Figure 1. As can be seen, the surface of the durian peel ash shows inhomogeneous particle sizes, with particles of around 300 nm to 2 µm. This irregular particle shape might be due to agglomeration during the burning of the durian peel. The EDX spectra showed potassium, magnesium, calcium, silicon, aluminium, carbon, oxygen, and phosphorus, as can be seen from Fig. 1, with the highest peak corresponding to potassium. The data analysis showed that the weight percent of potassium is 27.25%.
Influence of Catalyst Loading
Catalyst loading is one factor that greatly affects the effectiveness of biodiesel production [24]. In this experiment, the ratio of catalyst to rubber seed oil was varied at 1.0%, 2.5%, 5.0%, 10.0% and 15.0 wt.%. The effect of the catalyst weight ratio on the biodiesel yield is shown in Figure 3. The yield tends to increase with increasing catalyst loading: as the loading increased from 1.0% to 10%, the yield increased from 81.21% to 96.5%. With a further increase in catalyst loading above 10 wt.%, the biodiesel yield decreased due to emulsification; at 15.0 wt.%, the yield slightly decreased to 93.8%. Excess catalyst can increase the solubility of glycerol, leading to poorer separation of biodiesel from glycerol and a decreased biodiesel yield [25].
Biodiesel Properties
The physicochemical properties of the biodiesel produced using the durian peel ash catalyst are shown in Table 1. The kinematic viscosity of the biodiesel ranged from 4.42 to 5.9 mm²/s (cSt), and its density ranged from 865 to 890 kg/m³, both in accordance with SNI standards. Thus, the biodiesel produced in this study using the heterogeneous durian peel ash catalyst complies with Indonesian National Standards. Fuel properties strongly influence the injected fuel mass and hence the energy content delivered to the combustion chamber and the resulting engine performance [26][27].
Conclusion
A heterogeneous catalyst was prepared from durian peel ash by a simple combustion process and applied to the transesterification of rubber seed oil. The EDX data confirmed that potassium oxide (K2O) is the dominant compound in the ash catalyst. The prepared catalyst successfully produced biodiesel with a high yield of 96.5% using a 10 wt.% loading at 65 °C for 1 h with a methanol-to-oil ratio of 8:1. The potassium oxide in durian peel ash shows good activity and stability, making durian peel a potential raw material for catalyst production.
Acknowledgement
The authors thank the Ministry of Research, Technology and Higher Education of Indonesia for financial support through the Master Program research project.
"year": 2020,
"sha1": "d62eec55807dbb4d0f6829d7c2284cde8cec41bc",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/845/1/012033",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "26e2fc7fbccf6298c2130f9d960d103ddac9f206",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
Comparison of Tumour-Specific Phenotypes in Human Primary and Expandable Pancreatic Cancer Cell Lines
There is an ongoing need for patient-specific chemotherapy for pancreatic cancer. Tumour cells isolated from human tissues can be used to predict patients’ response to chemotherapy. However, the isolation and maintenance of pancreatic cancer cells is challenging because these cells become highly vulnerable after losing the tumour microenvironment. Therefore, we investigated whether the cells retained their original characteristics after lentiviral transfection and expansion. Three human primary pancreatic cancer cell lines were lentivirally transduced to create expandable (Ex) cells which were then compared with primary (Pri) cells. No obvious differences in the morphology or epithelial–mesenchymal transition (EMT) were observed between the primary and expandable cell lines. The two expandable cell lines showed higher proliferation rates in the 2D and 3D models. All three expandable cell lines showed attenuated migratory ability. Differences in gene expression between primary and expandable cell lines were then compared using RNA-Seq data. Potential target drugs were predicted by differentially expressed genes (DEGs), and differentially expressed pathways (DEPs) related to tumour-specific characteristics such as proliferation, migration, EMT, drug resistance, and reactive oxygen species (ROS) were investigated using the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. We found that the two expandable cell lines expressed similar chemosensitivity and redox-regulatory capability to gemcitabine and oxaliplatin in the 2D model as compared to their counterparts. In conclusion, we successfully generated expandable primary pancreatic cancer cell lines using lentiviral transduction. These expandable cells not only retain some tumour-specific biological traits of primary cells but also show an ongoing proliferative capacity, thereby yielding sufficient material for drug response assays, which may provide a patient-specific platform for chemotherapy drug screening.
Introduction
Recent studies have shown that pancreatic tumour cells can be singularised and cultured from tumour specimens. These may be used to predict the response towards systemic treatment in clinical settings [1]. To allow their use in therapy response prediction assays, it is important that cells retain and express most of the differentiated properties typical of their original source [2,3]. The culture efficacy of primary tumour cell lines depends on the availability of surgical materials [4], and the yields of cell cultivation are normally low, depending on the isolation techniques and tumour biology. In pancreatic cancer, some primary cell lines lose their original traits over time [5]. In particular, pancreatic ductal adenocarcinoma cells are highly sensitive and lack robustness against changes in the tumour microenvironment after isolation [6]. In addition, these primary cells divide only a limited number of times [7], which further complicates their use.
To yield sufficient material for experiments, the immortalisation of primary cells seems to be an alternative. The transformation of a primary cell into an immortalised cell can be induced by a second oncogene [8] or, at a low frequency, by chromosomal mutations [9]. Although immortalised cells grow faster than primary cells [10], traditional approaches for establishing immortalised cell lines usually require genome manipulation, which results in changes in essential biological and genetic characteristics [11]. Compared with immortalised cells, expandable cells, defined by the introduction of targeted genes into primary cells to gain robustness, can provide sufficient material while retaining the majority of their original cell biological characteristics. We have recently established a method using a small lentiviral gene library to expand primary cells derived from different tissues, donors, and species [12]. In only 6 weeks, personalised cell lines can be generated from only 1 × 10^6 primary cells. This novel approach therefore not only allows the reproducible expansion of primary cells but also overcomes the unpredictability typically associated with previous cell line development procedures [12].
Considering that expandable cells have also undergone some genomic modification, it remains to be shown whether they can be used for response prediction when planning individualised therapy, as their genome differs from that of primary cells [13,14]. Therefore, in our study, expandable cell lines were created from human primary pancreatic cancer cell lines via lentiviral transduction using a small gene library. We compared primary and expandable cell lines using RNA-Seq data and tumour-specific phenotypes, such as morphology, proliferation, migration, epithelial-mesenchymal transition (EMT), chemotherapeutic response, and redox-regulatory capability.
Growth Characteristics of Primary and Expandable Pancreatic Cancer Cell Lines
First of all, Figure 1 shows the flowchart of this study. The cell morphology was then examined in 2D culture (Figure 2A). Primary MaPac107 (Pri-MaPac107) exhibited epithelial monolayer characteristics with a fusiform-organised pattern. Cells grew as clusters before reaching 100% confluency, and irregularly shaped nuclei were present; the fusiform-organised pattern disappeared once the cells reached confluence. Expandable MaPac107 (Ex-MaPac107) cells shared the same cell and nuclear morphology. Primary PaCaDD159 (Pri-PaCaDD159) cells proliferated as an epithelial monolayer with a lumpy organised pattern and grew in cluster-like units; the cells were ovoid and characterised by small, round nuclei. Expandable PaCaDD159 (Ex-PaCaDD159) shared the same cell and nuclear morphology. Primary PaCaDD165 (Pri-PaCaDD165) cells grew in an epithelial monolayer as small polygonal cells with a cobblestone pattern, exhibiting prominent nuclei and rapid proliferation. No morphological differences were detected between expandable PaCaDD165 (Ex-PaCaDD165) and Pri-PaCaDD165. Figure 2B,C show the growth curves and doubling times of all cell lines; doubling times were derived from the logarithmic growth curve according to v = (lg N − lg N0)/(lg 2 × (t − t0)), with doubling time = 1/v, where N is the cell number and t is time. Ex-MaPac107 and Ex-PaCaDD165 expressed higher proliferation rates than their counterparts, whereas the proliferation rate of Ex-PaCaDD159 cells was significantly lower than that of Pri-PaCaDD159 cells. Although the lower proliferation rate of Ex-PaCaDD159 was unexpected, the remaining two expandable cell lines showed higher proliferation rates in 2D culture.
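As a worked example with hypothetical numbers (not study data): a culture growing from N0 = 1 × 10^5 cells at t0 = 0 to N = 8 × 10^5 cells at t = 72 h gives

\[
v = \frac{\lg(8\times10^{5}) - \lg(10^{5})}{\lg 2 \,(72 - 0)} = \frac{3\lg 2}{72\lg 2} = \frac{1}{24}\ \text{h}^{-1}, \qquad \text{doubling time} = \frac{1}{v} = 24\ \text{h}.
\]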
The cells were further evaluated in 3D cultures. Figure 3A shows that both Pri-MaPac107 and Ex-MaPac107 formed compact spherical structures on day 1. This structure became tighter over time, resulting in a smooth morphology on day 3. Moreover, a dark core formed in the centre of the spheroid on day 5 and gradually expanded until day 7. Although the diameters of both Pri-MaPac107 and Ex-MaPac107 spheroids increased over time, the Ex-MaPac107 spheroids remained larger throughout the observation period (Figure 3B). Pri-PaCaDD159 did not form spheroids but formed irregularly shaped cell aggregates, and Ex-PaCaDD159 did not form spheroids up to day 7. Pri-PaCaDD165 formed a relatively loose spherical structure on day 1, which subsequently became tighter and shrank in diameter, forming a smooth spheroid on day 3. Ex-PaCaDD165 formed loose spherical structures on day 1 and spheroids on day 3; dissociation of the outer layer of the Ex-PaCaDD165 spheroids was observed on day 7. Their size decreased on day 2 and gradually increased until day 7 (Figure 3B). In summary, neither Pri-PaCaDD159 nor Ex-PaCaDD159 formed spheroids in this study, and the diameters of Ex-MaPac107 and Ex-PaCaDD165 spheroids were larger than those of their primary counterparts from day 1 to day 7.
To compare the migration abilities of primary and expandable pancreatic cancer cell lines, scratch assays were performed and time-lapse images were taken at five time points (Figure 4A). Gap closure was faster in all three primary cell lines than in their expandable counterparts from 3 h to 9 h. At 24 h, all gaps were fully closed, except for those of Pri-PaCaDD165 and Ex-PaCaDD165 (Figure 4B). In addition to cell migration, the invasiveness of a cell line determines its aggressiveness in cancer progression, and EMT is a hallmark of tumour cell invasion. The expression levels of EMT-related genes in primary and expandable cells were therefore determined by qPCR. Figure 5 depicts the expression levels of the different EMT-related genes with GAPDH as an internal reference.
Figure 5. Expression of EMT-related genes was evaluated in primary and expandable pancreatic cancer cell lines by qPCR using SYBR Green I, with GAPDH as the internal reference. *** p < 0.001; ns: not significant.
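The paper does not state how relative expression was computed; assuming the common 2^(−ΔΔCt) method with GAPDH as the reference gene, a minimal sketch with invented Ct values looks like this:

```r
# Minimal sketch of 2^-ddCt relative quantification (assumed method).
# Ct values are illustrative, not study data; target could be e.g. SNAI2.
ct <- data.frame(
  sample = c("Pri", "Ex"),
  target = c(24.1, 26.0),  # Ct of the EMT gene of interest
  gapdh  = c(18.0, 18.2)   # Ct of the GAPDH internal reference
)

dct  <- ct$target - ct$gapdh           # normalise to GAPDH
ddct <- dct - dct[ct$sample == "Pri"]  # calibrate to the primary line
data.frame(sample = ct$sample, fold_change = 2^(-ddct))
# Pri is 1 by definition; fold_change < 1 indicates lower expression in Ex
```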
Bioinformatics Analysis Based on RNA-Seq Data
To determine differences in gene expression between primary and expandable pancreatic cancer cell lines, we performed bioinformatics analysis of the RNA-Seq data. Hierarchical clustering using heatmaps separated the primary and expandable samples into distinct clusters (Supplementary Figure S1A). In total, 2742, 649, and 1952 significantly differentially expressed genes (DEGs) were identified in MaPac107, PaCaDD159, and PaCaDD165, respectively; these were used to generate volcano plots (Supplementary Figure S1B) and Venn diagrams (Supplementary Figure S2). Three-dimensional principal component analysis (PCA) showed the distribution pattern of the primary and expandable cell lines (Supplementary Figure S3). Several differentially expressed pathways (DEPs) related to tumour-specific phenotypes were observed in a cell-line-specific manner (Table 1). The Hippo signalling pathway, related to drug resistance and proliferation, was enriched in MaPac107, as was focal adhesion, which influences pathways related to EMT and migration. The NF-kappa B signalling pathway was enriched in PaCaDD159 and is related to drug resistance, EMT, proliferation, reactive oxygen species (ROS), and migration; the PI3K-Akt signalling pathway, also enriched in PaCaDD159, correlated with the same processes. The AMPK signalling pathway, which affects drug resistance and ROS production, was enriched in PaCaDD165. The adjusted p-values for the aforementioned pathways were all <0.05.
Chemosensitivity of Primary and Expandable Pancreatic Cancer Cell Lines
Based on the DEGs derived from the RNA-Seq data with logFC > 3 or < −3 and adjusted p-values < 0.05, we predicted potential target drugs using DGIdb. This analysis showed that 94 potential target drugs were shared between MaPac107, PaCaDD159, and PaCaDD165 (Figure 6A). Fourteen of these compounds are in clinical use (Supplementary Table S1), and gemcitabine and oxaliplatin are administered in adjuvant or palliative settings in pancreatic cancer; we therefore focused on the effects of gemcitabine and oxaliplatin in this study. For both compounds, different concentrations (Supplementary Table S2) were chosen to perform IC50 assays. The dose-response curves of primary and expandable cells aligned for MaPac107 and PaCaDD165 after 48 and 72 h of incubation with gemcitabine and oxaliplatin (Figure 6B,D), whereas the dose-response curves for Pri-PaCaDD159 and Ex-PaCaDD159 were not aligned at these two time points after exposure to either compound (Figure 6C). The IC50 values are summarised in Supplementary Table S3, and the DEGs related to gemcitabine and oxaliplatin are listed in Supplementary Table S4. Overall, primary and expandable cell lines displayed largely identical drug response behaviour. A detailed comparison of the cell line responses to gemcitabine and oxaliplatin, together with the statistical calculations (hypothesis definition, group descriptions in Table S5, variances in Table S6, outlier percentage calculations in Table S7, an example of the regression methods employed in Figure S6, the descriptive parameters for the regression methods in Tables S8-S10, and an example of the t-test results for residuals in Figure S7), is presented in the Supplementary Materials under the section 'cell line comparison'.
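As a minimal sketch of how an IC50 can be estimated from such dose-response data (not the authors' code; doses and viabilities are invented), a four-parameter log-logistic fit with the R package drc could look like this:

```r
library(drc)  # dose-response curve fitting

# Hypothetical viability data for one cell line and one compound
ic50_data <- data.frame(
  dose      = c(0.01, 0.1, 1, 10, 100, 1000),       # e.g. nM gemcitabine
  viability = c(0.98, 0.95, 0.80, 0.45, 0.20, 0.10) # fraction of control
)

# Four-parameter log-logistic model (slope, lower/upper asymptote, ED50)
fit <- drm(viability ~ dose, data = ic50_data, fct = LL.4())

ED(fit, 50, interval = "delta")  # IC50 with a 95%-confidence interval
plot(fit, type = "all")          # fitted curve over the raw points
```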
Cellular Redox Status Assessment
To assess the cellular redox status of primary and expandable cells under oxidative stress, the cells were transfected with Grx1-roGFP3. The fluorescence intensity was expressed as the ratio EGSH = Ex395/Ex485. We initially monitored the dynamics of EGSH using H2O2 and DTT to validate the function of the sensor after sorting (Supplementary Figure S4). We then compared the redox-regulatory capability of primary and expandable cells after chemotherapy in 2D culture. Significant differences in EGSH were found between Pri-MaPac107-roGFP3+ and Ex-MaPac107-roGFP3+ cells after 48 h of treatment with gemcitabine and oxaliplatin (Figure 7A,B). Figure 7C,D show the responses of Pri-PaCaDD159-roGFP3+ and Ex-PaCaDD159-roGFP3+ cells to gemcitabine and oxaliplatin after 48 h of treatment, with similar fluorescence intensity ratios for EGSH. Figure 7E,F show that EGSH differed between Pri-PaCaDD165-roGFP3+ and Ex-PaCaDD165-roGFP3+ after 48 h of exposure to gemcitabine, while exposure to oxaliplatin led to similar EGSH values.
Subsequently, we compared the redox-regulatory capability of primary and expandable cells in 3D culture after incubation with gemcitabine and oxaliplatin for 48 h. Figure 8A,B show obvious differences between Pri-MaPac107-roGFP3+ and Ex-MaPac107-roGFP3+: a disaggregated outer layer of spheroids of Pri-MaPac107-roGFP3+ cells was observed after incubation with gemcitabine, which was not observed in Ex-MaPac107-roGFP3+ cells (Supplementary Figure S5A). Figure 8C,D depict clear differences between Pri-PaCaDD165-roGFP3+ and Ex-PaCaDD165-roGFP3+ cells; no obvious disaggregation of the outer spheroid layer was observed in either treatment group of Pri-PaCaDD165-roGFP3+ and Ex-PaCaDD165-roGFP3+ (Supplementary Figure S5B).
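As a minimal sketch of the ratiometric read-out (intensity values invented; the linear normalisation to the DTT/H2O2 controls is a simplification of the full roGFP calibration, which additionally corrects for intensity changes at 485 nm):

```r
# Minimal sketch: ratiometric E_GSH read-out for a Grx1-roGFP sensor from
# plate-reader intensities at 395 nm and 485 nm excitation.
# All intensity values are illustrative, not study data.
ratio_egsh <- function(i395, i485) i395 / i485

treated  <- ratio_egsh(i395 = 1800, i485 = 2200) # after drug exposure
reduced  <- ratio_egsh(i395 = 900,  i485 = 3600) # DTT control (max. reduction)
oxidised <- ratio_egsh(i395 = 2600, i485 = 1400) # H2O2 control (max. oxidation)

# Simple linear position of the treated sample within the calibration window;
# values near 1 indicate a strongly oxidised glutathione pool.
rel_oxidation <- (treated - reduced) / (oxidised - reduced)
```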
Discussion
Primary cells derived from tumour specimens have been widely used for studies of drug metabolism and toxicity in vitro because they endogenously express drug targets of interest at levels consistent with in vivo conditions [3]. One drawback is that primary tumour cells usually take a long time to expand [63], and some cell lines become static or senescent after a certain number of passages [6]. Although immortalised cells can be an option for unlimited proliferation, they generally require highly expressed viral oncogenes, leading to alterations in the cellular phenotype and chromosomal instability [64,65]. To overcome these shortcomings, targeted transgenic transfection may be an alternative. In our study, we used a lentiviral library consisting of 33 genes to establish three expandable primary pancreatic cancer cell lines, expecting them to maintain the prominent biological characteristics of their primary counterparts. This method has been used to expand human primary chondrocytes, epithelial cells, endothelial cells, and murine primary hepatocytes. Expansion targets diverse cellular processes, such as cell cycle progression and apoptosis, but allows cells to maintain stem cell properties and overcome the problem of chromosomal instability [12]. No signs of senescence or crisis, tumour formation, or a pluripotent phenotype were observed in the investigated cell lines during extended cultivation periods. Expansion enables primary tumour cells to survive and proliferate outside the cellular tumour microenvironment; the advantage is the ongoing proliferative capacity and, thus, the availability of sufficient material for therapy response assays. The question is whether these expandable cells can be used as reliable platforms for therapy response prediction. To evaluate the differences between primary and expandable cells, we compared the three pairs of primary and expandable pancreatic carcinoma cell lines in terms of morphology, proliferation, migration, EMT, RNA-Seq, susceptibility towards chemotherapy reagents, and redox status.
Although the morphologies of all expandable cell lines shared a similar pattern with their counterparts, we showed that Ex-MaPac107 and Ex-PaCaDD165 proliferated faster than their counterparts. Unexpectedly, Ex-PaCaDD159 exhibited slower proliferation and a longer doubling time than Pri-PaCaDD159. For optimal growth, cells in a solid tumour depend on the tumour microenvironment and interact with other cell types, such as fibroblasts, immune cells (T and B lymphocytes, natural killer cells, and tumour-associated macrophages), blood vessels, the extracellular matrix (ECM), and other signalling molecules [66,67]. Therefore, the relatively low proliferation rate of Pri-PaCaDD159 may be explained by its extraction history, that is, from a solid tumour, as compared with Pri-MaPac107 and Pri-PaCaDD165, which originate from pleural effusion and ascites. Subsequently, we investigated whether the primary and expandable cells could form 3D spheroids. In our study, both primary and expandable MaPac107 and PaCaDD165 formed homotypic (single-cell-type) tumour spheroids, whereas both Pri-PaCaDD159 and Ex-PaCaDD159 failed to form spheroids. For spheroid formation, tumour cells interconnect with each other through the formation of desmosomes and dermal junctions [68], as well as the secretion and deposition of proteoglycans and ECM proteins such as collagen, fibronectin, tenascin, and laminin [69]. The growth of spheroids normally shows an initial phase of volume increase, followed by a period known as the 'spheroidization/stabilization time' [70]. During spheroidisation, spheroids transform into a more regular shape, which was observed in primary and expandable MaPac107 and PaCaDD165. Moreover, spheroids have a well-defined spatial structure that encompasses an actively proliferative outer layer, owing to the high availability of oxygen and nutrients, a middle layer consisting of quiescent and senescent cells, and an inner apoptotic/necrotic core due to the restricted supply of nutrients and oxygen [71,72]. On day 7, the proliferative outer layer was clearly observed in the Ex-PaCaDD165 spheroids, and the apoptotic/necrotic core in the Pri-MaPac107 and Ex-MaPac107 spheroids. Thereafter, differences in cell migration and EMT between primary and expandable cells were examined. The migration of tumour cells is a prerequisite for tumour metastasis, and cell protrusions, chemotaxis, and cell polarity are important molecular bases for migration in 2D tumour cell cultures [73]. We found that the migratory ability of the three expandable cell lines was attenuated in 2D culture compared with that of the primary cell lines. The lentiviral transduction technology used in this study may affect the biological function of the expandable cell lines at the gene level; in addition, some of the DEPs listed below may play a critical role in the attenuated migratory ability. EMT is a biological process in which non-motile polarised epithelial cells undergo a series of biochemical alterations, turning into motile non-polarised mesenchymal cells with invasive capability, resistance to apoptosis, and adjusted biosynthesis of ECM components [74]. SNAI2, TWIST1, and ZEB1 are key regulatory genes contributing to the EMT process [75]: SNAI2 silencing substantially inhibits EMT [76], overexpression of constitutively active TWIST1 in tumour cells promotes the acquisition of conspicuous EMT characteristics [77], overexpression of ZEB1 results in enhanced migration, invasion, and EMT of pancreatic cancer cells [78], and TJP1 expression in tumour cells is repressed during activated EMT [79]. We found that the expression levels of SNAI2 and TWIST1 were significantly decreased and the expression of ZEB1 was significantly increased in Ex-MaPac107 compared with Pri-MaPac107. The expression patterns of SNAI2, TWIST1, and ZEB1 in Ex-PaCaDD159 were comparable to those in Ex-MaPac107. As for PaCaDD165, SNAI2 expression was significantly enhanced and ZEB1 expression significantly lower in Ex-PaCaDD165, while TWIST1 expression did not differ significantly between the primary and expandable cells; EMT was therefore not obviously affected in Ex-PaCaDD165. Interestingly, multiple DEPs in our KEGG analysis were related to proliferation, migration, and EMT, with adjusted p-values < 0.05. Examples include the Toll-like receptor [26,44,56], PI3K-Akt [27,35,54], Hedgehog [28,40,60], JAK-STAT [29,41,55], TGF-β [30,43,61], focal adhesion [31,58], Hippo [33,38,57], NF-kappa B [34,36,59], EGFR tyrosine kinase inhibitor resistance [37,53], MAPK [39], Ras [42], and chemokine [45,62] pathways found for MaPac107, and the Toll-like receptor [26,44,56], PI3K-Akt [27,35,54], JAK-STAT [29,41,55], focal adhesion [31,58], neutrophil extracellular trap formation [32], NF-kappa B [34,36,59], and chemokine [45,62] pathways found for PaCaDD159. All of these pathways may play a critical role in mediating proliferation, migration, and EMT. No related pathways were identified for PaCaDD165.
To predict drugs that might be therapeutically efficient, we analysed the RNA-Seq data using the DGIdb and identified fourteen potential target drugs related to cancer. Gemcitabine and oxaliplatin were used to determine the chemosensitivity of the cells. While Pri-MaPac107 and Ex-MaPac107, as well as Pri-PaCaDD165 and Ex-PaCaDD165, shared similar chemosensitivities towards gemcitabine and oxaliplatin, differences in chemosensitivity were found between Pri-PaCaDD159 and Ex-PaCaDD159. Interestingly, two DEGs were identified which may help explain this difference. Tumour necrosis factor-alpha (TNF-α) expression correlates with sensitivity towards gemcitabine, and its expression in Ex-PaCaDD159 was up-regulated compared with Pri-PaCaDD159. TNF-α enhances the invasiveness of pancreatic cancer cells in vitro and promotes tumour growth and metastasis in mouse models of orthotopic pancreatic cancer [80]. The combination of etanercept (an inhibitor of TNF-α) and gemcitabine failed to enhance gemcitabine efficacy in advanced pancreatic cancer [81], whereas the combination of AdEgr.TNF.11D (an adenoviral vector expressing human TNF-α) and gemcitabine enhanced antitumour activity in human pancreatic tumour models [82]. The IC50 values of Ex-PaCaDD159 for gemcitabine at 48 h and 72 h were higher than those of Pri-PaCaDD159, which correlates with the higher expression of TNF-α; in this context, gemcitabine treatment has been shown to increase TNF-α mRNA expression in tumour cells [83]. Another upregulated gene of interest in Ex-PaCaDD159 is CXCL10, which is related to sensitivity to oxaliplatin: high expression of CXCL10 mRNA in vivo correlates with higher sensitivity to oxaliplatin and capecitabine [84]. CXCL10 is an important angiostatic chemokine involved in tumour growth and new vessel formation. CXCL10-derived peptides not only inhibit vessel formation but also induce the involution of newly formed vessels [85]; such deformed and abnormal vessels may also decrease the uptake of chemotherapeutic drugs in tumour lesions. In addition, several DEPs related to drug resistance may help to explain the differences in the chemo-responsiveness of Pri-PaCaDD159 and Ex-PaCaDD159, including the NF-kappa B [15], PI3K-Akt [17], necroptosis [20], nucleotide excision repair [21], and JAK-STAT [23] pathways, all with adjusted p-values below 0.05.
Chemotherapy may cause oxidative stress, and high ROS levels may be detrimental to cancer cells.This is the dominant mechanism in many types of chemotherapeutics [86].ROS are not only continually produced and removed in biological systems but are also required to drive certain regulatory pathways which are also related to cell survival [87].We found that some DEPs in our KEGG database were also related to ROS production or ROS-induced apoptosis in pancreatic cancer cells, with an adjusted p-value of less than 0.05.The JAK-STAT [46], MAPK [47,48], PI3K/Akt [49], NF-kappa B [50], Lysosome [51], AMPK [52], and FOXO1 [25] pathways were found in the KEGG database of MaPac107.The JAK-STAT [46], PI3K/Akt [49], and NF-kappa B [50] pathways were identified in the KEGG database of PaCaDD159.The AMPK [52] pathway was identified using the KEGG database of PaCaDD165.A roGFP redox sensor was used to determine the influence of chemotherapeutics on the redox environment in primary and expandable cells.This sensor is a flexible platform for dynamic in situ measurements of cellular redox environment [88].In particular, the Grx1-roGFP2 fusion protein has been reported to allow the dynamic live imaging of E GSH to take place in different cellular compartments with high sensitivity and temporal resolution.The glutaredoxin1 (Grx1) confers dynamic responsiveness to the glutathione (GSH)/glutathione disulfide (GSSG) redox state [89].E GSH depends on the concentration of GSH and the ratio of GSH to GSSG [90], and an increased ratio of GSH/GSSG is indicative of greater oxidative stress.Furthermore, cells expressing Grx-roGFP3 showed an improved fluorescence signal intensity compared with cells expressing Grx-roGFP2 [91].In our study, tumour cells were transfected with Grx1-roGFP3 and exposed to gemcitabine and oxaliplatin for 48 h to monitor the variation in E GSH in 2D and 3D cultures.Significant differences were identified between Pri-MaPac107-roGFP3+ and Ex-MaPac107-roGFP3+ under exposure to the two drugs regardless of whether the cells were maintained in 2D or 3D culture.One explanation may be the differential expression of members of oxidative and antioxidative genes.We found that ABCG2 was upregulated in Ex-MaPac107 cells and was related to gemcitabine and oxaliplatin.It has been reported that cells expressing ABCG2 under oxidative stress elevate superoxide radical levels and further influence ROS production to the point of oxidant stress [92,93].This might be a protective mechanism of Ex-MaPac107, which can protect cells from death caused by excessive ROS.A recent study has proven that ABCG2 is capable of protecting cells from ROS-mediated cell damage and death [94].Interestingly, the fluorescence intensity of Ex-PaCaDD159 decreased after incubation with gemcitabine in 2D cultures.This decrease was correlated with increasing concentrations of gemcitabine.In this context, it has been shown that the treatment of tumour cells with Se-Gem, a selenoprodrug of gemcitabine, also triggers a dose-dependent decrease in the GSH/GSSG ratio [95], which is in line with our data.A possible explanation is that the ratio of GSH/GSSG decreases during cell death because of NADPH oxidation and GSH extrusion, and the exogenous addition of GSSG has also been shown to induce apoptosis [96,97].While oxaliplatin is usually applied in combination with 5-FU, leucovorin and irinotecan for pancreatic cancer treatment (FOLFIRINOX) [98], the treatment with oxaliplatin alone still triggered oxidative stress in our study since the 
fluorescence intensity ratios of Pri-PaCaDD159-roGFP3+ and Ex-PaCaDD159-roGFP3+, as well as of Pri-PaCaDD165-roGFP3+ and Ex-PaCaDD165-roGFP3+, increased in correlation with the concentration of oxaliplatin. It has also been reported that an altered GSH/GSSG ratio was found in the human ovarian cancer cell line A2780 after combination treatment with LH3, a new monofunctional planaramine platinum(II) complex, with curcumin [99]. This is also in accordance with our findings, as oxaliplatin is likewise a platinum compound.
Interestingly, more differences in the fluorescence ratios between Pri-PaCaDD165-roGFP3+ and Ex-PaCaDD165-roGFP3+ were observed in 3D culture than in 2D culture after incubation with oxaliplatin for 48 h. Generally, drugs or stimuli can readily reach cells in 2D culture, in which cells are exposed to drugs as a monolayer. In spheroids, the transport of nutrients, oxygen, and drugs depends on diffusion and concentration gradients, hydraulic conductivity, and pressure gradients [100]. The diameter of a spheroid is an essential factor defining the extent to which a substance can reach cells within it [101]. Drugs may easily penetrate the outer layer of a spheroid but hardly reach its middle or deeper layers, which implies that the drugs cannot affect cells deep within the spheroid. The differences observed in our study between 2D and 3D cell cultures may also be attributed to the different assays used, since the detector of the SPARK Plate Reader cannot capture all of the single-cell fluorescence from the spheroids.
Cell Culture
Human primary pancreatic cancer cell lines (MaPac107, PaCaDD159 and PaCaDD165) were originally derived from patient tumour tissues [102] (Table 2). The cells were cultured in Dresden medium [6], consisting of Dulbecco's Modified Eagle Medium (DMEM, 4.5 g/L glucose; Sigma-Aldrich, Taufkirchen, Germany) and Keratinocyte serum-free medium (KSFM; Thermo Fisher Scientific, Waltham, MA, USA) at a ratio of 2:1. DMEM was supplemented with 1% penicillin/streptomycin (P/S; Sigma-Aldrich, Taufkirchen, Germany) and 20% foetal bovine serum (FBS; Thermo Fisher Scientific, Waltham, MA, USA). KSFM was supplemented with human recombinant epidermal growth factor (rEGF) and bovine pituitary extract (BPE). The cells were maintained in an atmosphere of 5% CO2 at 37 °C and passaged at a 1:3 ratio; the culture medium was changed every two days. STR DNA profiling was carried out for the MaPac107 primary cell line after establishment, using fluorescent PCR in combination with capillary electrophoresis, as described previously [103]. Using alternate dye colours, the PowerPlex 1.2 system (Promega, Mannheim, Germany) was modified to run two-colour DNA profiling, allowing the simultaneous single-tube amplification of eight polymorphic STR loci and Amelogenin for gender determination. The STR loci CSF1PO, TPOX, TH01 and vWA and Amelogenin were amplified with primers labelled with the Beckman/Coulter dye D3 (green; Sigma-Aldrich, Taufkirchen, Germany), while the STR loci D16S539, D7S820, D13S317 and D5S818 were amplified using primers labelled with D2 (black). All loci in this set except the Amelogenin gene are true tetranucleotide repeats. All primers are identical to those of the PowerPlex 1.2 system except for the fluorescent labels. Data were analysed with the CEQ 8800 software (Beckman-Coulter, Krefeld, Germany), which enables automatic assignment of genotypes and automatic export of the resulting numeric allele codes into the reference DNA database of the DSMZ.
Expandable Cells
Expandable cells were produced by InSCREENeX via infection with a lentiviral library consisting of 33 genes [12]. Virus production was performed for each lentiviral vector individually by transient transfection of HEK 293T cells using plasmids encoding helper functions (gag-pol, rev, VSV-G) and the respective lentiviral vectors [104]. All expandable cells were cultured in Dresden medium at 37 °C in a 5% CO2 atmosphere and passaged at a 1:3 ratio.
Doubling Time
The doubling time was assessed by counting the number of viable cells from freshly trypsinised monolayers using a haemocytometer. Cell viability was determined by trypan blue staining (Sigma-Aldrich, Taufkirchen, Germany). A total of 50,000 cells from each cell line were seeded in 6-well plates with 2 mL of Dresden medium per well and counted every 24 h for 7 days. The culture medium was changed every three days. The doubling time was calculated from the logarithmic growth curve using the following formula [102]: v = (log N − log N0)/(log 2 × (t − t0)), with doubling time = 1/v, where N is the number of cells at time t and N0 the number of cells at time t0. Each well of the plate was counted three times. Experiments were performed in triplicate.
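The calculation above can be expressed as a short, self-contained Python sketch; the function name and the example counts below are illustrative, not values taken from this study:

```python
import math

def doubling_time(n0: float, n1: float, t0: float, t1: float) -> float:
    """Population doubling time (h) from two viable-cell counts.

    Implements v = (log N - log N0) / (log 2 * (t - t0)), doubling time = 1/v.
    """
    v = (math.log10(n1) - math.log10(n0)) / (math.log10(2) * (t1 - t0))
    return 1.0 / v

# Example: 50,000 cells seeded, 400,000 counted 72 h later -> 24 h doubling time
print(doubling_time(5e4, 4e5, 0, 72))  # 24.0
```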
Three-Dimensional Spheroid Establishment
In total, 3000 cells in 100 µL of Dresden medium containing 20% methylcellulose (Sigma-Aldrich, Taufkirchen, Germany) were seeded in 96-well U-bottom plates (Corning, NY, USA) and centrifuged at 2000× g for 15 min. The plates were incubated at 37 °C in a 5% CO2 atmosphere. Every two days, half of the Dresden medium was replaced to maintain proliferation and viability in all plates.
Scratch Assay
A total of 100,000 cells were seeded in 12-well plates with 1 mL of Dresden medium per well. The cells were allowed to grow to near-total confluence. A scratch was made by scraping the cell monolayer in a straight line using a 200 µL pipette tip. Debris and detached cells along the scratch margins were removed by washing the cells twice with 1 mL of Dresden medium. Images were taken with a Carl Zeiss Axio Vert.A1 microscope (431030-9040-000; Jena, Germany) at 0, 3, 6, 9, and 24 h. Wound closure was determined as follows: wound closure (%) = (original wound area − area at each time point)/original wound area. Experiments were performed in triplicate.
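A minimal Python sketch of this wound-closure calculation, with hypothetical areas for illustration:

```python
def wound_closure_pct(original_area: float, area_at_t: float) -> float:
    """Wound closure (%) = (original area - area at time t) / original area * 100."""
    return (original_area - area_at_t) / original_area * 100.0

# Example: a 1.20 mm^2 scratch shrinking to 0.42 mm^2 is 65% closed
print(wound_closure_pct(1.20, 0.42))  # 65.0
```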
RNA Isolation and Sequencing
RNA was isolated using an RNeasy Mini Kit (Qiagen, Hilden, Germany) following the manufacturer's instructions. For each cell line, three RNA samples were obtained from three sequential passages. RNA samples were quantified by nucleic acid quantification analysis. Purity was determined by measuring the 260/280 nm absorbance ratio, with acceptable values of 1.7-2.1, using a SPARK Plate Reader (Tecan V2.3, Männedorf, Switzerland). RNA integrity was assessed by capillary electrophoresis using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA); an RNA Integrity Number (RIN) ≥7.0 indicated sufficient RNA quality. All samples were then sent to BGI Tech Solution Co., Ltd. (Hong Kong, China) for RNA-Seq.
Real-Time Quantitative PCR (qPCR)
RNA (500 ng) was used for cDNA synthesis on a Bio-Rad T100 thermal cycler (Hercules, CA, USA) with the QuantiTect Reverse Transcription Kit (Qiagen, Hilden, Germany), following the manufacturer's instructions. qPCR was performed on a LightCycler® 96 (Roche, Basel, Switzerland) with the FastStart Essential DNA Green Master Kit (Roche, Basel, Switzerland) and primers from Qiagen (Hilden, Germany). Raw data were obtained using the LightCycler® 96 SW 1.1 software (Roche, Basel, Switzerland). Relative mRNA expression levels were determined using the comparative Ct method after normalisation to GAPDH expression levels. The expression level of each gene was analysed in triplicate. The primers used in this study are listed in Table 3.
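The comparative Ct (2^-ΔΔCt) normalisation can be sketched in Python as follows; the Ct values shown are hypothetical:

```python
def fold_change(ct_target_s, ct_gapdh_s, ct_target_c, ct_gapdh_c):
    """Comparative Ct (2^-ddCt) fold change of a target gene, normalised
    to GAPDH, for a sample (s) relative to a control/calibrator (c)."""
    dd_ct = (ct_target_s - ct_gapdh_s) - (ct_target_c - ct_gapdh_c)
    return 2.0 ** (-dd_ct)

# Example: target amplifies one cycle earlier relative to GAPDH -> 2-fold up-regulation
print(fold_change(24.0, 18.0, 25.0, 18.0))  # 2.0
```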
Table 3. Primers used in the study.
Gene Symbol | Qiagen Catalogue Number

RNA-Seq Data Bioinformatics Analysis

In brief, the R statistical programming language and Bioconductor tools were used, employing the NGS analysis package system PipeR [105]. Quality control of the raw sequencing reads was performed using FastQC (version 0.11.5). Low-quality reads were removed using Trim Galore (version 0.6.4). The resulting reads were aligned to the reference human genome (GRCh38.p13) from GENCODE and quantified using Kallisto (version 0.46.1) [106]. The count data were transformed to log2 counts per million (logCPM) using the voom function of the limma package [107]. Differential expression analysis was performed using the limma package in R (version 3.6.3). A significance level of α = 0.05 with FDR correction was applied. Volcano plots and heat maps were created using the ggplot2 (version 2.2.1) and ComplexHeatmap (version 2.0.0) packages [108]. Pathway analysis was performed using the fgsea [109] and EnrichmentBrowser [110] packages in R (version 3.6.3), using pathway information from the Kyoto Encyclopedia of Genes and Genomes (KEGG) database (accessed on 1 March 2021 from https://www.genome.jp/kegg/pathway.html).
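The statistical core of this pipeline can be illustrated in Python. Note that this is a simplified stand-in: it reproduces the logCPM transform and BH-FDR thresholding, but plain per-gene t-tests replace limma's moderated statistics, and the function names are illustrative:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def log_cpm(counts: np.ndarray) -> np.ndarray:
    """log2 counts-per-million with a +0.5 offset, analogous to the
    transform inside limma-voom. counts: genes x samples matrix."""
    lib_size = counts.sum(axis=0)
    return np.log2((counts + 0.5) / (lib_size + 1.0) * 1e6)

def differential_expression(logcpm, group_a_idx, group_b_idx, alpha=0.05):
    """Per-gene t-tests with Benjamini-Hochberg FDR correction
    (a simplified substitute for limma's moderated statistics)."""
    t, p = stats.ttest_ind(logcpm[:, group_a_idx], logcpm[:, group_b_idx], axis=1)
    reject, p_adj, _, _ = multipletests(p, alpha=alpha, method="fdr_bh")
    return t, p_adj, reject
```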
IC50 Assay
IC50 assays were used to evaluate the chemosensitivity of the primary and expandable cells. Gemcitabine and oxaliplatin were diluted to the desired concentrations. In total, 10,000 MaPac107 cells, 10,000 PaCaDD159 cells, and 5000 PaCaDD165 cells were seeded into 96-well flat-bottom plates. After 48 and 72 h of treatment, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assays were performed: 20 µL of 5 mg/mL MTT solution was added to each well and incubated for 4 h at 37 °C. The MTT formazan was dissolved in DMSO, and absorbance was measured at 560 nm with a reference wavelength of 670 nm using a SPARK Plate Reader (Tecan V2.3, Männedorf, Switzerland). Experiments were performed in triplicate. IC50 values were generated using the curve-fitting platform (Fit Logistic 4P model) of JMP 15 (SAS Institute Inc., Cary, NC, USA) [112].
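A hedged Python sketch of a four-parameter logistic fit for IC50 estimation is shown below; the exact parameterisation of JMP's Fit Logistic 4P model may differ, and the dose-response values are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_4p(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve; at conc == ic50 the
    response is halfway between the top and bottom plateaus."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

def fit_ic50(conc, viability):
    """Fit normalised MTT viability (%) against drug concentration; returns IC50."""
    p0 = [viability.min(), viability.max(), np.median(conc), 1.0]
    popt, _ = curve_fit(logistic_4p, conc, viability, p0=p0, maxfev=10000)
    return popt[2]  # fitted IC50 parameter

conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])  # e.g. µM
viab = np.array([98, 95, 80, 48, 15, 5], dtype=float)   # % of untreated control
print(f"IC50 ~ {fit_ic50(conc, viab):.2f} µM")
```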
Synthesis of Grx1-roGFP3 Redox Sensor and Transfection of Cells
The ratiometric sensor Grx1-roGFP3 was used to analyse the variation in the glutathione redox potential (EGSH). Grx1-roGFP3 was kindly donated by Dr. Manfred Frey (Steinbeis-Innovationszentrum Zellkulturtechnik, c/o University of Applied Sciences Mannheim, Germany). In brief, Grx1-roGFP3 was synthesised using the GENEWIZ service from Sigma-Aldrich and cloned into pHR'SIN-cPPTSEW [113] via the BamHI and XbaI restriction sites. Lentivirus particles were produced as previously described [114]. A stock of pHRSIN-Grx1-roGFP3 was prepared and stored at −80 °C. Subsequently, 50 µL of lentivirus was added to a T25 flask of primary or expandable cells at 50% confluence, followed by incubation at 37 °C in a 5% CO2 atmosphere for 24 h. The cells were washed with 1× Dulbecco's phosphate-buffered saline (DPBS; Biozym Scientific, Hessisch Oldendorf, Germany), and fresh Dresden medium was added; this step was repeated after 12 h. Afterwards, the cells were sorted for 80% GFP-signal positivity at 4 °C using a BD FACSAria™ IIIu cell sorter (BD Life Sciences, San Jose, CA, USA) and the corresponding BD FACSDiva 8.0.2 software. Vector stability over passages was confirmed by qualitative evaluation of GFP using the Carl Zeiss Axio Vert.A1 microscope (431030-9040-000; Jena, Germany); no changes in GFP expression were observed for up to three passages after transduction.
Evaluation of E GSH Variation in 2D and 3D Model
The function of Grx1-roGFP3 was verified after sorting. In total, 10,000 MaPac107 cells, 10,000 PaCaDD159 cells, and 5000 PaCaDD165 cells were seeded into 96-well flat-bottom black plates (Corning, Glendale, AZ, USA) and cultured for 24 h at 37 °C in a 5% CO2 atmosphere. The program on the SPARK Plate Reader (Tecan V2.3, Männedorf, Switzerland) was run for 15 cycles of 15 s each. The baseline was obtained over the first five cycles. The culture medium was then removed and 100 µL of 100 µM H2O2 (Carl Roth, Karlsruhe, Germany) was added to each well, with measurements taken over the next five cycles. Next, 100 µL of 1 mM dithiothreitol (DTT; NeoLab Migge, Heidelberg, Germany) was added directly to each well, and measurements were continued for a further five cycles. Cell fluorescence was detected using excitation wavelengths of 395 and 485 nm. Oxidation of the sensor caused an increase in the emission fluorescence at 528 nm when excited at 485 nm and a decrease in emission fluorescence when excited at 395 nm.
EGSH variations were determined after treatment with three different concentrations of gemcitabine and oxaliplatin. In 2D cultures, the same numbers of cells as for the functional verification were seeded into 96-well flat-bottom black plates. The next day, the culture medium was removed, and 100 µL of chemotherapeutic solution, H2O2 (100 µM), or fresh Dresden medium was added to the respective wells. After 48 h of treatment, the plates were analysed with the plate reader. In 3D cultures, a total of 3000 cells of each cell line were seeded into 96-well U-bottom plates and incubated for three days at 37 °C in a 5% CO2 atmosphere. The cells were exposed to the chemotherapeutics on the 4th day, and fluorescence was measured with the plate reader after 48 h of treatment. Experiments were performed in triplicate.
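The ratiometric readout can be post-processed as in the following Python sketch. Scaling the 485/395 excitation ratio linearly between the DTT (fully reduced) and H2O2 (fully oxidised) plateaus is a simplification of rigorous degree-of-oxidation and EGSH calculations, and all plate-reader values below are hypothetical:

```python
import numpy as np

def rogfp_ratio(f485, f395):
    """Excitation ratio R = F485/F395; per the sensor behaviour described
    above, oxidation of the sensor increases R."""
    return np.asarray(f485, dtype=float) / np.asarray(f395, dtype=float)

def degree_of_oxidation(r, r_red, r_ox):
    """Linearised degree of sensor oxidation, scaling R between the fully
    reduced (DTT) and fully oxidised (H2O2) calibration plateaus. Rigorous
    OxD and EGSH (Nernst) calculations also require instrument-specific
    intensity factors, which are omitted here."""
    return np.clip((r - r_red) / (r_ox - r_red), 0.0, 1.0)

# Hypothetical wells: DTT and H2O2 controls bracket a drug-treated well
r_red = rogfp_ratio(800, 1000)   # reduced plateau
r_ox = rogfp_ratio(1500, 600)    # oxidised plateau
r_drug = rogfp_ratio(1200, 800)  # drug-treated sample
print(degree_of_oxidation(r_drug, r_red, r_ox))  # ~0.41
```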
Statistical Analysis
Statistical analysis was performed using SPSS software (version 27.0.1; SPSS, Inc., Chicago, IL, USA). Data are expressed as the mean ± SD of three replicate assays. Comparisons between groups were performed using an independent t-test. Statistical significance was set at p < 0.05.
Conclusions
In summary, we have used a novel approach to establish expandable primary pancreatic cancer cells using a small lentiviral library.We found that expandable primary pancreatic cancer cells conserved some of the characteristics of primary pancreatic cancer cells, while other cell biological functions were impaired.Although they supply sufficient material for drug response prediction assays, their usefulness in a clinical setting remains to be demonstrated.
Author Contributions: F.G. performed cell culture, doubling time and qPCR experiments, established the 3D model, predicted the potential target drugs, performed the IC50 assay, evaluated cellular redox status after chemotherapeutic treatment, and wrote the manuscript; K.K. carried out the DEG and DEP analyses; F.R. provided the primary pancreatic cancer cell lines; W.R. conducted the statistical comparison of populations; L.L. performed the scratch assay; J.E. and C.R. reviewed and edited the manuscript; T.M. transduced the primary pancreatic cancer cell lines; C.S. analysed the RNA-Seq data; W.G.D. carried out the STR DNA profiling of the primary MaPac107 cell line; P.P. guided the experiments; M.K. provided the concept and supervised the study. All authors have read and agreed to the published version of the manuscript.
Figure 2. Morphology and growth characteristics of primary and expandable pancreatic cancer cells in 2D culture. (A) Morphology of primary and expandable cells. Representative pictures were taken using a Leica DMIRB inverted microscope at 63× magnification; scale bar, 50 µm. (B) Growth kinetic curves of primary and expandable cells. The cell number of each cell line was monitored over 7 days (units of 10^4 cells). (C) Doubling times were calculated from the logarithmic growth curve according to v = (log N − log N0)/(log 2 × (t − t0)), with doubling time = 1/v, where N = number of cells and t = time. ** p < 0.01, *** p < 0.001.
Figure 3. Morphology and growth characteristics of primary and expandable pancreatic cancer cell lines in 3D culture. (A) Representative pictures were taken using a Carl Zeiss Axio Vert.A1 microscope at 5× magnification; scale bar, 200 µm. (B) The average diameter was measured along the horizontal and vertical axes of each spheroid.
counterparts from 3 h to 9 h. At 24 h, all gaps were fully closed, except for those in Pri-PaCaDD165 and Ex-PaCaDD165 (Figure 4B).
Figure 4. Migration was evaluated in primary and expandable pancreatic cancer cells. (A) Time-lapse images at different time points were taken using a Carl Zeiss Axio Vert.A1 microscope at 5× magnification; scale bar, 200 µm. (B) Gap closure was analysed with ImageJ according to wound closure (%) = (original wound area − area at each time point)/original wound area. * p < 0.05, *** p < 0.001. ns: not significant.
Figure 5. Expression of genes related to EMT was evaluated in primary and expandable pancreatic cancer cell lines by qPCR using SYBR Green I, with GAPDH as the internal reference. * p < 0.05, *** p < 0.001. ns: not significant.
Figure 6. Chemosensitivity of primary and expandable pancreatic cancer cell lines. (A) Venn diagram showing the number of potential target drugs commonly shared by MaPac107, PaCaDD159, and PaCaDD165. (B-D) Dose-response curves of primary and expandable pancreatic cancer cell lines exposed to gemcitabine and oxaliplatin for 48 and 72 h.
Figure 7C,D show the responses of Pri-PaCaDD159-roGFP3+ and Ex-PaCaDD159-roGFP3+ cells to gemcitabine and oxaliplatin after 48 h of treatment, with similar fluorescence intensity coefficients for EGSH. Figure 7E,F show that, after 48 h of treatment, EGSH of Pri-PaCaDD165-roGFP3+ and Ex-PaCaDD165-roGFP3+ differed after exposure to gemcitabine, while exposure to oxaliplatin led to similar EGSH.
Figure 7. Redox regulation in response to incubation with gemcitabine and oxaliplatin in primary and expandable pancreatic cancer cell lines expressing Grx1-roGFP3 in 2D culture. Cells were incubated with three different concentrations of gemcitabine and oxaliplatin and with 100 µmol/L H2O2 for 48 h. Fluorescence intensity was measured with the SPARK Plate Reader. (A,B) Primary and expandable MaPac107. (C,D) Primary and expandable PaCaDD159. (E,F) Primary and expandable PaCaDD165. * p < 0.05, ** p < 0.01, *** p < 0.001. ns: not significant.
Figure 8. Redox regulation in response to incubation with gemcitabine and oxaliplatin in primary and expandable pancreatic cancer cell lines expressing Grx1-roGFP3 in 3D culture. Spheroids were incubated with three different concentrations of gemcitabine and oxaliplatin and with 100 µmol/L H2O2 for 48 h. Fluorescence intensity was measured with the SPARK Plate Reader. (A,B) Primary and expandable MaPac107. (C,D) Primary and expandable PaCaDD165. *** p < 0.001. ns: not significant.
Funding: Feng Guo was supported by the China Scholarship Council (CSC), No. 201808080101; P.P. and F.R. were awarded grants from the Müller Foundation Mannheim.

Institutional Review Board Statement: Not applicable.
Table 1. DEPs related to tumour-specific phenotypes selected from the KEGG analyses of MaPac107, PaCaDD159, and PaCaDD165 (expandable versus primary samples). For each cell line, two columns present the normalised enrichment score (NES) and the adjusted p-value. A. Drug resistance pathways. B. EMT pathways. C. Proliferation pathways. D. ROS-activated pathways. E. Migration pathways.
Table 2. Clinical and pathological characteristics of the three primary cell lines derived from patients with pancreatic cancer.
"year": 2023,
"sha1": "5b5bff99d114ee23cffe5f9c054a058301fd541b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/24/17/13530/pdf?version=1693546278",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dbec95aaf1fbfb7b33851b9c4dd4f0af5e107d59",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3D-printed Ti6Al4V scaffolds combined with pulse electromagnetic fields enhance osseointegration in osteoporosis
The loosening and displacement of prostheses after dental implantation and arthroplasty is a substantial medical burden due to the complexity of corrective surgery. Three-dimensional (3D)-printed porous titanium (pTi) alloy scaffolds are characterized by low stiffness, are beneficial to bone ingrowth, and may be used in orthopedic applications. However, owing to the bio-inert interface between host bone and implant, titanium alloy remains poorly osseointegrative, especially in disease conditions such as osteoporosis. In the present study, 3D-printed pTi scaffolds, with a pore size and porosity matching those of bone tissue, were combined with pulse electromagnetic fields (PEMF), an exogenous osteogenic stimulus, to evaluate osseointegration in osteoporosis. In vitro, external PEMF significantly improved the proliferation and osteogenic differentiation of osteoporosis-derived bone marrow mesenchymal stem cells on the surface of the pTi scaffolds by enhancing the expression of alkaline phosphatase, runt-related transcription factor-2, osteocalcin, and bone morphogenetic protein-2. In vivo, micro-computed tomography analysis and histological evaluation indicated that external PEMF markedly enhanced bone regeneration and osseointegration. This novel therapeutic strategy has potential to promote osseointegration of dental implants or artificial prostheses in patients with osteoporosis.
Introduction
Osteoporosis is a systemic osteopathy defined by insufficiency in bone repair due to the impairment of bone microstructure, the decrease of bone quality and density and the increase of bone fragility (1,2). Moreover, the osteoporotic microenvironment is not conducive to the proliferation and osteogenic differentiation of bone marrow mesenchymal stem cells (BMSCs), causing excessive bone loss and decreased osteogenic capacity (3,4). These factors lead to inadequate osseointegration within the bone and implant surfaces, thus causing an increased risk of complications in patients after joint replacement and dental implantation, due to implant loosening and displacement (5)(6)(7). Therefore, improving the osseointegration efficiency of implants would be of high clinical benefit to solve these problems.
Titanium (Ti) alloy is a widely applied material for orthopedic and dental implants, with promising application prospects due to its predominant mechanical strength and corrosion resistance; however, it is restricted by its high stiffness, which can result in stress-shielding-induced osteolysis (8,9). Three-dimensional (3D) printing technology is a promising method for the generation of individualized implants with complex components and porous sections in a single process (10,11). Interconnected porous implants with controlled pore size and porosity can significantly decrease the stiffness, better imitate the structure of natural bone tissue and promote bone regeneration. Moreover, bone ingrowth into the micropores of 3D-printed porous Ti (pTi) alloy could enhance osseointegration and improve the stability of the implants (12)(13)(14). However, considering the biological inertia, smooth surface and poor cellular adhesion of Ti alloys, pTi scaffolds may fail due to insufficient osseointegration with the surrounding bone, especially under pathological states such as osteoporosis (15,16). Therefore, improving the bioactivity of osteoblast-related cells on the surface of titanium alloy is expected to help prevent postoperative complications in patients with osteoporosis. Bai et al (17) constructed a pTi/poloxamer 407 hydrogel system loaded with zoledronate, a bisphosphonate, thus obtaining an optimized osseointegration effect in osteoporotic defect models. In addition, Vladescu et al (10) coated 3D-printed Ti6Al4V scaffolds with hydroxyapatite and calcium phosphate nanoparticles to improve the bioactivity and biocompatibility of the scaffolds. However, as it is difficult to release loaded drugs continuously and to maintain the stability of bio-coatings, implanting pTi scaffolds into bone defects followed by extracorporeal, non-invasive and repeatable therapy may be a potential strategy to promote osseointegration (18,19).
Pulse electromagnetic fields (PEMF), which are transient electromagnetic fields produced in a coil when a pulse current generated by a pulse generator passes through the coil, are considered a non-invasive treatment that promotes bone regeneration in clinical applications (20). In terms of cytology, external PEMF can promote early and late osteogenesis and enhance mineralization of BMSCs (21), improve cell viability and enhance alkaline phosphatase (ALP) activity of osteoblasts (22), and inhibit bone resorption by inhibiting the formation and maturation of osteoclasts (23). Although previous studies have indicated that PEMF therapy has beneficial effects on bone regeneration and significantly enhances the healing of fractures, non-unions and other orthopedic conditions, it has not been widely used to enhance osseointegration in osteoporotic bone defects (24,25).
In the present study, it was hypothesized that the combination of 3D-printed pTi and PEMF may have a positive effect on bone regeneration in vitro and in vivo in osteoporotic bone defects. The pTi implant provides an optimized pore size and porosity, which match the mechanical properties of bone tissue and decrease stress shielding. Additionally, a previous study demonstrated that PEMF therapy (50 Hz; 1 mT) could induce osteogenic differentiation and proliferation of MSCs (26). In the present study, PEMF were applied at the same frequency and intensity to determine whether PEMF therapy affects the fate of osteoporosis-derived BMSCs (OP-BMSCs) and enhances osseointegration in osteoporotic rabbits (Fig. 1). To the best of the authors' knowledge, this is the first study to investigate the combination of 3D-printed pTi and PEMF to promote osseointegration in osteoporosis and thereby reduce complications following implantation.

Materials and Methods

Preparation of pTi implants. Ti6Al4V porous scaffolds were fabricated by additive manufacturing using an electron beam melting (EBM) system (Q10, Arcam AB), as previously reported (27,28). Briefly, a pre-designed 3D template of the pTi alloy implants (porosity, 70%; pore size, 600 µm) was translated into a standard triangulation language (STL) file, which was loaded into the EBM system. Spherical Ti alloy powder was melted layer by layer according to the preset parameters and then solidified by cooling. Disk-shaped pTi implants (φ10 mm x L3 mm) were used for microstructure characterization and in vitro experiments, and columnar pTi implants (φ6 mm x L10 mm) for the in vivo osseointegration study. All prepared pTi implants were ultrasonically cleaned sequentially in acetone, ethyl alcohol and deionized water for ~15 min per step.
Characterization of pTi implants. To verify whether the prepared implants matched the pre-designed parameters, the porosity of the samples was determined by micro-computed tomography (micro-CT; SkyScan 1076 scanner, Bruker micro-CT NV) and analyzed using NRecon software (version 1.6.6; Bruker micro-CT). To calculate the average pore size and its distribution, the microstructure of the pTi scaffolds was imaged using a JSM-6700F scanning electron microscope (SEM; JEOL, Ltd.), and the images were quantitatively analyzed using ImageJ 1.50i (National Institutes of Health).
Establishment of osteoporosis models. The osteoporotic rabbit model was generated by bilateral ovariectomy (OVX). Five-month-old female New Zealand White rabbits (n=30; 2.5 kg) were randomly separated into two groups, the OVX group (n=27) and the sham group (n=3). Surgical operations were conducted under general anesthesia with 3% (w/v) pentobarbital (50 mg/kg) administered intravenously. A median incision of the lower abdomen was made after skin shaving and sterilization. The rabbits in the OVX group underwent bilateral ovariectomy, whereas those in the sham group were sham-operated, with the abdominal cavity closed after leaving the ovaries in situ. All rabbits were housed individually and fed standard chow (15-25˚C; humidity, 60-70%; 12-h light/dark cycle; fed twice a day, with water available ad libitum). Penicillin (1.5 mg/kg) was administered to post-operative rabbits by intramuscular injection for three consecutive days to prevent infection.
Ten months after surgery, the concentration of serum estrogen was measured using an ELISA kit. Furthermore, three animals from each group were sacrificed by intravenous injection of air under anesthesia with 3% (w/v) pentobarbital (150 mg/kg), and the distal femurs were harvested for micro-CT assessment to confirm the establishment of osteoporosis.
Scaffold implantation and PEMF treatment. At 10 months post-OVX, the osteoporotic rabbits were used for the in vivo osseointegration experiments. After general anesthesia with 3% (w/v) pentobarbital (50 mg/kg) and surgical preparation, a longitudinal incision was made in the distal femur of the left hind limb. After exposing the bony surface of the lateral condyle, cylindrical bone defects with a diameter of 6 mm and a depth of 10 mm were prepared with a bone drill. The defects were filled with pTi implants, and the incisions were closed with absorbable sutures. Post-operative management was carried out as aforementioned.
On the fourth day after implantation, the 24 osteoporotic rabbits were randomly divided into two groups: i) pTi group, which received pTi implants without PEMF; and ii) pTi + PEMF group, which received pTi implants with PEMF therapy. The rabbits of the pTi + PEMF group were kept in a plastic fixation device, with their left legs placed within the scope of the coils of the PEMF machine in order to expose them to the pulsed electromagnetic waves. The treatment parameters of PEMF were 50 Hz, 1 mT, 2 h per day. After 6 and 12 weeks of treatment, the rabbits were sacrificed by intravenous injection of pentobarbital sodium (100 mg/kg), and femur samples were collected and fixed in 4% polyformaldehyde solution for further micro-CT analysis and histological evaluation.
Isolation and culture of OP-BMSCs. OP-BMSCs were extracted from female rabbits (n=3) 10 months after bilateral OVX and cultured as previously reported (27). Briefly, BMSCs derived from the OVX rabbits were harvested from the marrow cavity of the long bones of the extremities after centrifugation (996 x g, 20 min, 20˚C) and cultured in LG-DMEM containing 10% (v/v) FBS and 1% penicillin/streptomycin in a humidified environment at 37˚C with 5% CO2. Once the OP-BMSCs had grown to ~80% confluence, the adherent cells were treated with 0.25% (w/v) trypsin/EDTA at 37˚C for 3 min. The cell suspension was collected and centrifuged (377 x g, 5 min, 20˚C), and the obtained cell pellets were resuspended and passaged. The third generation of OP-BMSCs was used in the subsequent in vitro experiments.
In vitro cell experiments. To evaluate cell attachment, proliferation, survival rate, morphology, and osteogenic differentiation of OP-BMSCs, CCK-8 assays, Calcein-AM/PI and rhodamine-phalloidin staining, and reverse transcription-quantitative PCR (RT-qPCR) were carried out. Briefly, the pTi implants were immersed in a 24-well plate with DMEM, and OP-BMSCs (2x10^5 cells/well) were seeded onto each sample. The experiments comprised two groups: i) pTi group, OP-BMSCs seeded on pTi implants in basal culture medium without PEMF; and ii) pTi + PEMF group, OP-BMSCs seeded on pTi implants with PEMF therapy. The PEMF device (Prima PFM61009; Shanghai Prima Electronics Co., Ltd.) consisted of a generator and its connected coils. The OP-BMSCs in the pTi + PEMF group were placed at the center of the coils in an incubator and received PEMF stimulation during culture (50 Hz; 1 mT; 2 h per day). The medium was changed every 3 days.
Evaluation of cell viability. After incubation for 24 h, the medium was discarded, CCK-8 solution was added to each well, and the plates were incubated for 2 h at 37˚C and 5% CO2. The number of OP-BMSCs adhering to the surface of the implants was quantified by measuring the absorbance at 450 nm using a microplate reader (Multiskan EX; Thermo Fisher Scientific, Inc.). To evaluate cell proliferation, CCK-8 assays were conducted after 1, 4 and 7 days of OP-BMSC culture on the implants with or without PEMF. Live/dead staining of the OP-BMSCs was performed on day 3 using a Calcein-AM/PI Double Stain kit according to the manufacturer's protocol and observed under an FV1000 confocal laser scanning microscope (CLSM; Olympus Corporation). For morphological evaluation on day 3, the samples were fixed with 4% paraformaldehyde for 10 min at room temperature, permeabilized with 0.1% Triton X-100 for 5 min at room temperature, and washed three times with PBS. Finally, the samples were stained with rhodamine-phalloidin for 30 min and DAPI for 5 min at room temperature in the dark, according to the manufacturer's instructions. Photomicrographs of the stained cells were collected under the CLSM.
Evaluation of osteogenic differentiation in vitro.
To assess the osteogenic ability of OP-BMSCs in the presence of PEMF, the basal culture medium was replaced by osteogenic induction medium consisting of LG-DMEM with β-glycerophosphate (10 mM), ascorbate-2-phosphate (50 µM), and dexamethasone (0.1 µM). The protocols for cell culture and PEMF treatment were the same as in the cell viability investigation described in the previous section. Following osteogenic culture for 14 or 21 days, Alizarin Red S (ARS) staining (15 min, room temperature) was conducted to assess calcium deposition according to the manufacturer's protocol and observed with a stereoscopic microscope (ZOOM-750; Tuming Optical Instrument Co., Ltd.). Subsequently, 10% cetylpyridinium chloride was added to dissolve the stained calcium nodules for semi-quantitative analysis. The resulting solution was measured at 450 nm with a microplate reader to account for calcium nodules not visible inside the micropores.
Moreover, after treatment with osteogenic induction medium for 14 and 21 days, the expression levels of alkaline phosphatase (ALP), runt-related transcription factor-2 (Runx-2), osteocalcin (OCN), and bone morphogenetic protein-2 (BMP-2) in OP-BMSCs of the different groups were measured using RT-qPCR. The primer sequences are listed in Table I. Briefly, total RNA was extracted with TRIzol® (Thermo Fisher Scientific, Inc.) and 1 µg of total RNA per sample was reverse-transcribed to cDNA using a PrimeScript RT reagent kit according to the manufacturer's instructions. The expression of target genes was quantified by RT-qPCR with the SYBR Premix Ex Taq II kit according to the manufacturer's protocol, using 2X Fast SYBR-Green Master Mix. Amplification was performed in 96-well optical reaction plates (Roche Diagnostics) on a LightCycler 480 (Roche Diagnostics) using the following program: 94˚C for 1 min to activate the polymerase, followed by 40 cycles of 94˚C for 30 sec, 57˚C for 20 sec and 72˚C for 20 sec; melting curve analysis was performed after every run by heating to 95˚C to monitor the presence of nonspecific products. Relative mRNA expression levels were normalized to that of GAPDH and calculated using the 2^-ΔΔCq method (29).
Micro-CT analysis. To explore the efficiency of bone formation, the samples were scanned by micro-CT (voltage, 90 kV; current, 114 mA; pixel size, 18 µm). The cylindrical region of the pTi implant was targeted as the region of interest (ROI; a cylinder with a diameter of 6 mm and a height of 10 mm) for 3D reconstruction and further parameter analysis. Quantitative morphometric analysis, including the bone volume/tissue volume ratio (BV/TV, %), trabecular number (Tb.N, 1/mm), trabecular thickness (Tb.Th, mm) and trabecular separation (Tb.Sp, mm), was conducted using the micro-CT auxiliary software (NRecon version 1.6.6).
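The BV/TV parameter can be illustrated with a short Python sketch operating on segmented micro-CT voxel data; this is a conceptual example rather than the scanner software's implementation, and the trabecular metrics require dedicated distance-transform methods not shown here:

```python
import numpy as np

def bv_tv_percent(bone_mask: np.ndarray, roi_mask: np.ndarray, voxel_mm: float) -> float:
    """BV/TV (%) from segmented micro-CT data within a cylindrical ROI.
    bone_mask: boolean voxels classified as mineralised bone;
    roi_mask: boolean voxels inside the ROI (here, 6 mm diameter x 10 mm height);
    voxel_mm: isotropic voxel edge length in mm (18 µm pixels -> 0.018)."""
    bv = np.count_nonzero(bone_mask & roi_mask) * voxel_mm ** 3  # bone volume
    tv = np.count_nonzero(roi_mask) * voxel_mm ** 3              # tissue volume
    return bv / tv * 100.0
```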
Histological evaluation. The femur specimens containing the pTi implants were embedded in methyl methacrylate without decalcification and cut into thin sections (150-300 µm thickness). The sections were ground and polished to 40-50 µm using a transverse saw and polishing machine (EXAKT Apparatebau GmbH & Co. KG). After polishing, these hard-tissue sections were stained with Masson's stain for 150 min at room temperature to evaluate bone regeneration and osseointegration with the porous implants.
Statistical analysis. All data are presented as the mean ± standard deviation (SD). Two independent groups were compared using an unpaired Student's t-test, while >2 groups were analyzed using one-way ANOVA followed by Tukey's post hoc test in SPSS 19.0 (SPSS Inc.). P<0.05 was considered to indicate a statistically significant difference. All experiments were performed independently at least three times; exact numbers are given in the figure legends.
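The statistical comparisons described above map onto standard library calls, sketched in Python below with invented example values (SPSS was used in the study itself, and the group labels here are illustrative):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

a = np.array([10.3, 11.1, 9.8])    # e.g. BV/TV (%), pTi group (hypothetical)
b = np.array([14.4, 13.9, 14.8])   # pTi + PEMF group (hypothetical)

# Two groups: unpaired Student's t-test
t, p = stats.ttest_ind(a, b)
print(f"t = {t:.2f}, p = {p:.4f}")  # significant if p < 0.05

# >2 groups: one-way ANOVA followed by Tukey's post hoc test
c = np.array([12.0, 12.5, 11.7])   # a third, hypothetical group
f, p_anova = stats.f_oneway(a, b, c)
values = np.concatenate([a, b, c])
labels = ["pTi"] * 3 + ["pTi+PEMF"] * 3 + ["sham"] * 3
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```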
Results and Discussion
Characterization of pTi implants. pTi implants were successfully manufactured by EBM technology. Representative optical images of the pTi implants are presented in Fig. 2A. Micro-CT quantitative analysis revealed a porosity of 70.08±0.55%. The pore size, determined from the SEM images, was mainly distributed at 580-620 µm (Fig. 2B), with an average of 600.11±7.21 µm (Fig. 2C). Therefore, the actual porosity and pore size were in accordance with the pre-designed parameters (70% and 600 µm, respectively). The osseointegrative capacity of implants is usually determined by the osteoconductivity of the prosthesis, which in turn depends on the porosity, pore size and pore distribution of the scaffolds. The porosity of porous implants for bone tissue engineering is expected to be >50%, particularly within the range of 65-75%, which is mechanically and structurally similar to human trabecular bone. Additionally, a 300-700 µm pore size is beneficial for osteoblast adhesion, differentiation and proliferation (30). Thus, the porosity and pore size of the pTi implants generated in this study were optimal for bone tissue engineering. It has been reported that increased surface area and porosity of implants can enhance the initial stability, bone ingrowth and the friction coefficient between bone and scaffold, thus decreasing micromotion and accelerating osseointegration after implantation in vivo (31). Furthermore, the interconnected internal structure of the pTi scaffolds was favorable for oxygen and nutrient exchange, thereby improving cell growth and communication and further enhancing bone regeneration (32).
Cell attachment, proliferation, survival and morphology.
Under normal conditions, bone is continuously renewed through a coordinated process involving complex stem cell behaviors, including adhesion, proliferation, differentiation, maturation and mineralization (33). By contrast, under osteoporotic conditions, BMSCs present a decreased ability to proliferate and differentiate into osteoblasts, leading to a limited capacity for bone regeneration (34). External PEMF have been shown to promote cell proliferation, osteogenesis and mineralization (21,22). Therefore, it was hypothesized that PEMF could be applied as a potential tool to treat osteoporosis by modulating BMSC behavior. To test this hypothesis, OP-BMSCs harvested from the OVX rabbits were seeded onto the pTi implants and received PEMF therapy.
The osseointegration between implants and bone tissue begins with osteogenesis-related cells adhering to the surface of the prosthesis (28). As shown in Fig. 3A, the absorbance, which reflected the number of attached OP-BMSCs, was significantly increased in the pTi + PEMF group compared with that in the pTi group. Moreover, cell proliferation in the different groups was evaluated using a CCK-8 assay. In the pTi + PEMF group, OP-BMSC proliferation was significantly increased compared with that in the pTi group on day 4, and this difference was more distinct on day 7 (Fig. 3B). OP-BMSCs seeded on the implants treated with or without PEMF were stained with Calcein-AM/PI to measure cell viability. The fluorescence images demonstrated that the OP-BMSCs possessed good cell viability in both the pTi and pTi + PEMF groups on day 3 (Fig. 3C). Furthermore, the viability rate of OP-BMSCs was quantified: the viability rates in the pTi and pTi + PEMF groups were 89.52±1.64 and 92.34±2.00%, respectively, with no significant difference between the two groups (Fig. 3D). These results suggested that the pTi has good biocompatibility, and that external PEMF could promote cell attachment and proliferation while maintaining the viability of OP-BMSCs.
Because the morphology of cells on the implants can dramatically influence cellular behavior, the cytoskeletal distribution of cells attached to the implants was then analyzed (35). OP-BMSCs were double-stained with rhodamine-phalloidin (indicating the F-actin filaments) and DAPI (indicating the nucleus). As demonstrated in Fig. 3E, the F-actin morphology of OP-BMSCs on the pTi implants with PEMF therapy showed better spreading and more lamellipodial extensions than that of cells on pTi scaffolds without PEMF therapy, indicating that external PEMF could induce cell maturation (36). The intensity of F-actin in OP-BMSCs was further quantitatively analyzed in the immunofluorescence images. As shown in Fig. 3F, OP-BMSCs in the pTi + PEMF group showed increased F-actin fluorescence (1.38-fold) compared with the pTi group. It is well recognized that F-actin filaments play a fundamental role in the early maturation of BMSCs and help improve bone cell function by modulating cell proliferation and differentiation (36,37). Therefore, PEMF therapy can significantly enhance the expression of F-actin filaments and improve cytoskeletal organization, which is beneficial for the early osteogenic differentiation of BMSCs.
Osteogenic differentiation of OP-BMSCs. In addition to cell attachment, proliferation, survival and morphology, osteogenic differentiation of OP-BMSCs is a critical factor for the initiation of bone regeneration (28). To explore the effect of external PEMF on osteogenic differentiation on the pTi implants, ARS was used to assess mineralized matrix synthesis. ARS is a histological staining method for mineralization that indicates calcium deposition. As demonstrated in Fig. 4A, calcium deposits in the pTi group were limited at day 14, whereas in the pTi + PEMF group visible calcified nodules were detected, which were more prominent on day 21. Furthermore, the semi-quantitative analysis confirmed that the cells in the pTi + PEMF group exhibited increased calcium deposition compared with the pTi group at 14 and 21 days (Fig. 4B). Therefore, these results indicated that PEMF could promote the formation of mineralized matrix and further enhance the efficiency of mineralization.
To verify the effect of external PEMF on the osteogenic differentiation of OP-BMSCs on the pTi implants at the mRNA level, multiple osteogenic differentiation markers, including ALP, Runx-2, OCN and BMP-2, were examined using RT-qPCR. In general, ALP is a marker of osteogenic differentiation, and the stimulation of ALP activity is a critical event in early osteogenesis (38). The results indicated that the mRNA level of ALP was significantly increased in the pTi + PEMF group after treatment for either 14 or 21 days, compared with that of the pTi group (Fig. 4C). Runx-2 is indicative of early osteoblastic differentiation and can induce the expression of key osteogenic genes, such as OCN and osteopontin (39). Compared with the pTi group, the expression of Runx-2 in the pTi + PEMF group was upregulated 1.39-fold and 1.31-fold on days 14 and 21, respectively (Fig. 4D). BMP-2 is an essential osteoinductive factor recognized to induce the differentiation of BMSCs into osteoblasts (40). Compared with the pTi group, the transcriptional level of BMP-2 was significantly upregulated at days 14 and 21 in the presence of external PEMF (Fig. 4E). OCN is a bone-specific protein synthesized by osteoblasts that enhances osteogenic maturation and bone formation; it is expressed most abundantly at the late stage of osteogenesis (41). The expression level of OCN was measured to evaluate the osteogenic maturation of OP-BMSCs. The expression of OCN in the pTi + PEMF group did not differ significantly at day 14 but was significantly upregulated at day 21, compared with the pTi group (Fig. 4F).
Based on the ARS results and the quantification of osteogenic gene expression, it may be concluded that external PEMF could enhance the osteogenic differentiation of OP-BMSCs on the pTi implants. However, the mechanism through which PEMF promote osteogenic differentiation remains unclear. Regarding the osteogenic induction of BMSCs by PEMF, previous studies have provided partial mechanistic explanations. Bagheri et al (42) confirmed that PEMF activate the osteogenic differentiation of BMSCs through the Notch pathway. Zhou et al (43) suggested that PEMF induce the osteogenic differentiation of BMSCs by activating Wnt/β-catenin signaling. In addition, Selvamurugan et al (44) showed that PEMF induce the TGF-β signaling pathway and increase the expression of microRNA-21-5p in BMSCs during bone metabolism. Similarly, beneficial osteogenic induction effects were observed in the present study in OP-BMSCs treated with external PEMF.

Establishment of the osteoporosis model. In osteoporosis, the imbalance between bone formation and bone resorption leads to reduced bone mineral density (BMD), which increases the risk of fracture and impairs bone regeneration (1,2). It has previously been reported that estrogen deficiency can result in increased bone resorption, thus impairing osteoblast function (45). The serum level of estrogen was significantly decreased in the OVX group compared with that of the sham group (Fig. 5A). Moreover, 2D micro-CT slices showed that the trabecular bone structure became looser and thinner in the OVX group (Fig. 5B). The statistical analysis indicated that the BMD (Fig. 5C) and BV/TV (Fig. 5D) in the OVX rabbits were significantly decreased compared with those in the sham group. Thus, these results indicated that the osteoporosis model was successfully established 10 months after bilateral OVX.
Evaluation of bone regeneration and osseointegration in vivo.
Osteogenesis-associated growth factors, such as BMP-2, are downregulated in the osteoporotic microenvironment (3). Furthermore, BMSCs derived from osteoporotic bone are few in number and deficient in osteogenic activity (4). The microenvironment with low osteogenic activity and the imbalance between bone formation and resorption in the osteoporotic state increase the risk of implant loosening and displacement after implantation. Developing a novel combination therapy that promotes the osteogenic differentiation of OP-BMSCs and enhances osseointegration to decrease implant loosening and displacement remains a challenge. In order to evaluate bone regeneration and osseointegration induced by PEMF, an in vivo study was performed in OVX rabbits.
New bone regeneration in the osteoporotic bone defects was assessed using micro-CT. Reconstructions of the porous implants and surrounding bone tissues are shown in Fig. 6A. The reconstructed spatial distribution of new bone formation revealed that, over an implantation time of 6-12 weeks, bone formation on the surface of the pTi in the pTi + PEMF group was increased significantly compared with the pTi group. To quantify bone ingrowth, ROI scanning of the implanted region and the empty defect area was carried out. The BV/TV values of the pTi and pTi + PEMF groups were 10.35±1.88 and 14.36±1.18% at 6 weeks, and 15.02±1.20 and 20.37±1.39% at 12 weeks (Fig. 6B), respectively, consistent with the 3D reconstruction results. In parallel, the pTi + PEMF group exhibited higher Tb.Th and Tb.N values (Fig. 6C and D) and a lower Tb.Sp value (Fig. 6E) compared with the pTi group at 6 and 12 weeks. Overall, the micro-CT results indicated that PEMF could significantly enhance bone ingrowth into the pTi implants in osteoporotic bone defects.
Good osseointegration between host bone tissue and implants involves a series of complex biological processes, including the migration and differentiation of BMSCs. The optimal prosthesis is expected to conform to the surrounding bone tissue to avoid complications, including loosening and displacement after implantation, and to restore bone function. In the present study, hard-tissue sections were stained with Masson's stain to evaluate osseointegration at the bone-implant interface. As shown in Fig. 7A, improved bone regeneration was observed at the interface between newly formed bone and the pTi implants, without gaps, in the pTi + PEMF group; regenerated bone almost completely surrounded the surface of the implants, with part of the bone tissue filling the inner pores, especially after 12 weeks of PEMF treatment. A certain amount of new bone tissue also formed around the implants of the pTi group, but this was not as evident as in the pTi + PEMF group. To calculate the ratio of bone area/total area (BA/TA), the histological images were analyzed using ImageJ. As shown in Fig. 7B, the BA/TA of the pTi + PEMF group was significantly higher than that of the pTi group at both 6 and 12 weeks.
PEMF are considered a clinical physiotherapy that can enhance bone regeneration (46). In the present study, pTi scaffolds were implanted into bone defects, and the addition of external PEMF had a positive effect on bone regeneration in osteoporosis models. This novel combination therapy of 3D-printed pTi and PEMF exhibited potential to promote bone regeneration and osseointegration of dental implants or artificial prostheses for patients with osteoporosis. To summarize, 3D-printed pTi implants can be customized according to desired parameters; external PEMF can effectively promote the viability and osteogenic differentiation of OP-BMSCs on the pTi surface and have great prospects for application in bone tissue engineering; and the combination of pTi and PEMF therapy resulted in improved repair of osteoporotic bone defects. These findings provide insight into the promotion of osseointegration in the osteoporotic state.
"year": 2021,
"sha1": "0d6e88056974ab376ce4fe94020e354d32533396",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/mmr.2021.12049/download",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "7a6eb252fc21d0ffbcee97fc89c18133be33a6dd",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
In vitro Study of Morphological Changes of the Cultured Otocyst Isolated from the Chick Embryo
The aim of this study was to observe morphological changes of cultured otocysts isolated from various stages of the chick embryo. Otocysts were dissected at embryonic days (E) 2.5-4.5 of incubation (HH stages 16-26), covering successive stages of inner ear development. The chick otocyst exhibited an ovoid shape; its width and height were 0.2 mm and 0.3 mm, respectively. Elongation of a tube-like structure, the endolymphatic duct, was found at the dorsal aspect of the otocyst. The cultured otocyst was lined by the otic epithelium, and surrounding periotic mesenchymal cells started to migrate outwards from the lateral aspect of this epithelium. Notably, the acoustic-vestibular ganglion (AVG) was observed at the ventrolateral aspect of the otocyst. The appearance of the AVG in vitro can be applied to studying chemical-induced ototoxicity and sensorineural hearing loss. It was concluded that the organ-cultured otocyst of the chick embryo could be used as a model to study sensory organ development of the avian inner ear.
INTRODUCTION
Chicken has become a favorable model in developmental biology and stem cell research (Stern, 2005; Intarapat & Stern, 2013). There are several advantages of using chick embryos as a model system: the eggs are available all year round (Berg et al., 1999), the neuroendocrine system is well understood (Ottinger et al., 2001), the embryonic stages are well established (Hamburger & Hamilton, 1951), and they are recommended as a model for toxicant testing (OECD, 1984; Touart, 2004). Strikingly, chickens have been reported to regenerate new hair cells after exposure to noise or ototoxic drugs (Cotanche, 1987; Girod et al., 1991; Janas et al., 1995). This remarkable capacity has prompted researchers to seek the key factors underlying avian hair cell regeneration in order to overcome the corresponding limitation in mammals (Bermingham-McDonogh & Rubel, 2003; Rubel et al., 2013).
In mammalian species, the inner ear contains pluripotent stem cells whose regenerative capacity can be induced (Li et al., 2003a; Oshima et al., 2007). Identification of stem cell sources to generate hair cells in vitro for stem cell-based therapy has been proposed (Géléoc & Holt, 2014). Previous studies used both ESCs and iPSCs to study auditory organ regeneration and differentiation (Li et al., 2003b; Oshima et al., 2010; Ouji et al., 2012), and cultures of mammalian stem cells with chick embryonic tissues to study hair cell differentiation have been reported (Jeon et al., 2007; Oshima et al., 2010). Co-cultures of mammalian stem cells with the chick otocyst and its stromal cells produced hair cell-like cells with stereocilia-like protrusions (Jeon et al.; Oshima et al., 2010).
The inner ear of the chick embryo contains presumptive sensory hair cells and supporting cells (Sokolowski et al., 1993). These cells have the ability to replace damaged hair cells (Girod & Rubel, 1991; Bermingham-McDonogh & Rubel; Rubel et al.). Because this fascinating process occurs in chicks, it is of interest to study the embryonic development of the sensory organs of this species in vitro. Since avian embryos have been reported to regenerate their inner ear hair cells after exposure to mechanical and chemical insults (Cotanche; Girod et al.; Janas et al.), embryological study of avian inner ear development is required for medical application. Thus, the present study aimed to observe morphological changes of the developing inner ear using organ-cultured otocysts of the chick.
MATERIAL AND METHOD
Fertilized chicken eggs were obtained from the Department of Animal Science, Faculty of Agriculture, Kasetsart University. The eggs were cleaned and incubated for 62-108 hrs at 38 °C in a humidified incubator. Embryos that reached HH stage 16-26 (~E2.5-4.5 days of incubation) were staged according to the Hamburger and Hamilton normal stages of chick embryonic development (Hamburger & Hamilton, 1951). The otocysts (otic vesicles), with no periotic mesenchymal tissues, were carefully dissected from E2.5-4.5 embryos and then pooled in cold PBS, pH 7.2. Dissected otocysts were observed under a stereomicroscope (Olympus, Japan) and their shapes were measured and recorded. Briefly, the size of the otocyst was indicated by width x height. The height of the otocyst was measured from the base at the ventral aspect to the tip of the endolymphatic duct at the dorsal aspect of the otocyst. For organ culture, isolated otocysts were rinsed twice with chick Ringer solution and placed onto a culture dish. Cleaned otocysts were seeded into a 75 cm² flask containing Dulbecco's Modified Eagle Medium: Nutrient Mixture F-12 (DMEM/F12, GIBCO) supplemented with 10 % Fetal Bovine Serum (FBS, Merck Millipore). The otocyst-culture medium was changed twice a week until the migrating cells had reached confluence. The schematic isolation and culture of the chick otocyst is shown in Fig. 1.
RESULTS AND DISCUSSION
Embryonic otocysts were isolated from E2.5-4.5 embryos. We found that the E3-otocyst was easily dissected compared with the E2.5- and E4.5-otocysts. The E2-otocyst was too small to be dissected, while the E4.5-otocyst had developed more complex structures that made it difficult to isolate the entire otocyst. This suggests that the E3-otocyst from the HH19 stage chick embryo is suitable for culturing as a whole organ-cultured system. Several problems have been described regarding isolation of the developing inner ear from early to later stages of otic development in the chick (Honda et al., 2014). However, dispase treatment was suggested to reduce the mesenchymal tissue surrounding the otocyst (Honda et al.), indicating that this technique is required for isolation of the chick otocyst at later stages.
In the present study, the chick otocysts were studied from day 2.5 of incubation. At E2.5, we found that the developing inner ear could be recognized by a thickening of the ectodermal epithelium, the otic placode. Invagination of this placode contributed to otic cup closure, after which the otic vesicle is completely formed (Brigande et al.; Sai & Ladher, 2015). In vitro studies of the key factors that play a role in these processes using the organ-cultured otocyst might help to elucidate the molecular mechanisms underlying otic development in vivo.
The cultured E3-otocyst exhibited an ovoid shape, and its size was approximately 0.2 mm in width and 0.3 mm in height (Fig. 2). These characters are similar to the mammalian otocyst (Morsli et al., 1998), suggesting a conserved morphogenesis of the otocyst among vertebrates. Elongation of a tube-like structure, the endolymphatic duct (ED), was observed at the dorsal aspect of the otocyst (Fig. 2); moreover, the ED was first noticed from E2.5 of incubation. This structure is formed by otic cup closure at the anterodorsal rim of the pars superior (Bissonnette & Fekete, 1996; Brigande et al., 2000), giving rise to an endolymphatic sac, a swollen structure located at the end of the duct (Bissonnette & Fekete). The developing ED was well delineated from E3 onwards, indicating the rapid growth of the pars superior, which can be used to distinguish early and late otocysts (Bissonnette & Fekete). This suggests that the appearance of the ED can be used as a landmark of pars superior development, which will be useful for studying the development of the sense organs of equilibrium.
In culture, adherence of the E3-otocyst to the bottom of the flask was observed. Furthermore, the otic epithelium (OE) and the acoustic-vestibular ganglion (AVG) were also observed (Fig. 3). The otocyst is clearly lined by a stratified epithelium, and an epithelial invagination can be seen in the middle region of the otocyst (Fig. 3). AVG cells started to delaminate at the ventrolateral aspect of the otocyst, and migrating cells were predominantly found at the lateral aspect of the otocyst (Fig. 3). The AVG is a cluster of delaminating neuroblasts giving rise to auditory and vestibular neurons (Magariños et al., 2012). The markers for ganglion neuroblast nuclei (Islet-1) and neural processes (TuJ-1) were expressed in the chick AVG (Aburto et al., 2012; Magariños et al.). These studies indicate specification of sensory neurons in the otocyst.
The majority of the migrating cells surrounding the E3-otocyst found in this study were periotic mesenchymal cells (Fig. 3). Such mesenchymal cells are also found around the mammalian otocyst, where the retinoic acid nuclear receptor genes are expressed (Romand et al., 2006). It has been reported that these mesenchymal cells are transformed into the bony labyrinth of the inner ear (Lang & Fekete, 2001). Using the organ-cultured otocyst may be useful for identifying the cell lineage of the periotic mesenchyme that contributes to the bony part of the inner ear.
A heterogeneous population of migrating cells was noticed in the present study (Fig. 4). Two distinct cell types were observed: neuroblast-like cells and fibroblast-like cells (Fig. 4). Neuroblast-like cells were small, with prominent nuclei and short processes, whereas fibroblast-like cells were larger and spindle-shaped, with long processes (Fig. 4). Whether the different types of migrating cells express Islet-1 or TuJ-1 in the organ-cultured otocyst requires further study. This study provides basic knowledge of avian inner ear development that will enable us to propose in vitro organ engineering for hearing loss therapy. | 2017-11-03T12:34:57.565Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "9236a35ba95ef2c2572d71090c9fb8ee8b84969d",
"oa_license": "CCBYNC",
"oa_url": "https://scielo.conicyt.cl/pdf/ijmorphol/v35n1/art34.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "9236a35ba95ef2c2572d71090c9fb8ee8b84969d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
54951859 | pes2o/s2orc | v3-fos-license | The subgiant branch of omega Cen seen through high-resolution spectroscopy. I. The first stellar generation in omega Centauri?
We analysed high-resolution UVES spectra of six stars belonging to the subgiant branch of omega Centauri, and derived abundance ratios of 19 chemical elements (namely Al, Ba, C, Ca, Co, Cr, Cu, Fe, La, Mg, Mn, N, Na, Ni, Sc, Si, Sr, Ti, and Y). A comparison with previous abundance determinations for red giants provided remarkable agreement and allowed us to identify the sub-populations to which our targets belong. We found that three targets belong to a low-metallicity population at [Fe/H]~-2.0 dex, [alpha/Fe]~+0.4 dex and [s/Fe]~0 dex. Stars with similar characteristics were found in small amounts by past surveys of red giants. We discuss the possibility that they belong to a separate sub-population that we name VMP (very metal-poor, at most 5% of the total cluster population), which - in the self-enrichment hypothesis - is the best-candidate first stellar generation in omega Cen. Two of the remaining targets belong to the dominant metal-poor population (MP) at [Fe/H]~-1.7 dex, and the last one to the metal-intermediate (MInt) one at [Fe/H]~-1.2 dex. The existence of the newly defined VMP population could help to understand some puzzling results based on low-resolution spectroscopy (Sollima et al., Villanova et al.) in their age differences determinations, because the metallicity resolution of these studies was probably not enough to detect the VMP population. The VMP could also correspond to some of the additional substructures of the subgiant-branch region found in the latest HST photometry (Bellini et al.). After trying to correlate chemical abundances with substructures in the subgiant branch of omega Cen, we found that the age difference between the VMP and MP populations should be small (0+/-2 Gyr), while the difference between the MP and MInt populations could be slightly larger (2+/-2 Gyr).
One of the open problems concerns the region of the colour-magnitude diagram (hereafter CMD) that is most sensitive to age differences: the subgiant branch (SGB). A large number of photometric and low-resolution spectroscopic studies (Hughes & Wallerstein, 2000; Hilker & Richtler, 2000; Pancino, 2003; Hughes et al., 2004; Hilker et al., 2004; Rey et al., 2004; Ferraro et al., 2004; Stanford et al., 2006) found age spreads ranging from 2 to 6 Gyr, with a few exceptions and puzzles (see Section 5.3 for more details). One example of the difficulties encountered in the study of this complex region is posed by the two studies by Sollima et al. (2005b) and Villanova et al. (2007), who used the same ACS dataset and low-resolution spectra of similar quality, but reached opposite conclusions on the total age spread - and age distribution - of ω Cen. Still, even with the best ACS photometries (see, e.g., Bellini et al., 2010), it is not easy to understand which features of the SGB region correspond to each of the populations that are spectroscopically identified on the red giant branch (RGB), which are known in great detail thanks to the above cited works. This understanding is crucial to solve the relative ages problem in ω Cen, and to derive the age-metallicity relation, a fundamental ingredient of any model for the formation and evolution of this unique stellar system.
Fig. 1 (caption fragment): ... (Pancino et al., 2000) is shown as grey dots. The six UVES targets are marked (black filled circles) with their WFI catalogue numbers.
In this paper we present the analysis of a set of UVES high-resolution spectra of six SGB stars, selected from the wide field photometric catalogue by Pancino et al. (2000) and Pancino (2003). A preliminary analysis of the same dataset was presented by Pancino (2003). We describe the spectra reductions in Section 2; the abundance analysis in Section 3; and the abundance ratio results in Section 4. Our main results are discussed in detail in Section 5 and are summarized in Section 6.
Observations and data reduction
We selected our six targets from the WFI B and I photometry presented in Pancino et al. (2000), complemented with V magnitudes from Pancino (2003). The coordinates were obtained using the astrometric catalogue by van Leeuwen et al. (2000). As shown in Figures 1 and 2, the SGB region of ω Cen in the external parts of the cluster shows clear substructures (all our targets lie on WFI CCD#5). This is more clearly seen in other literature photometries (Bedin et al., 2004; Bellini et al., 2010), but those are obtained from space, with the HST, in the very centre of ω Cen, where UVES follow-up from the ground would be difficult. Three of the programme stars were selected towards the upper envelope of the SGB and another three towards the lower envelope.
Fig. 2 (caption): Location of the programme stars on the area of ω Cen. Grey dots mark stars belonging to the WFI photometry by Pancino et al. (2000). Filled circles mark the position of the six UVES targets.
Echelle spectroscopy was obtained on 18-20 March 2002, with UVES at the ESO Kueyen (VLT UT2) telescope, on Cerro Paranal, Chile. Table 1 reports the log of the observations, along with some basic target information. Each star was observed twice and on different nights, both to minimize the impact of cosmic rays and to identify possible radial velocity shifts. Given the faint magnitude of these stars (V≃17.5), the spectra were binned on chip (2×1), so that the resolution element is covered by ≃3 pixels. A final S/N≃50 per pixel was achieved around 550 nm.
The red spectra (upper and lower red CCDs) were reduced with the ESO-UVES pipeline (Ballester et al., 2000), which semi-automatically performs bias correction, flat-field correction, inter-order background subtraction, optimal extraction with cosmic-ray rejection, wavelength calibration (with rebinning), and final merging of all overlapping orders. However, since the S/N ratio is significantly lower in the blue part of the spectra (S/N≃25 around 450 nm), we decided to manually perform the echelle reduction for the blue CCD with the noao.imred.ccdred and noao.imred.echelle packages within IRAF. The two one-dimensional, wavelength-calibrated spectra obtained for each star were normalized by fitting their continua with a cubic spline, then corrected for the main telluric absorption features (noao.onedspec.telluric in IRAF), using as a reference a hot, fast-rotating star (HR 5206/HD 120640) selected from the Bright Star Catalog (Hoffleit & Jaschek, 1991) and observed each night at an airmass not too different from that of the targets. Finally, the spectra were summed - after correcting for radial velocity shifts - to produce a single spectrum for each of the six observed stars.
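As an aside, the cubic-spline continuum normalization mentioned above is easy to prototype outside IRAF. The following minimal Python sketch is not the authors' procedure: it fits a smoothing spline and iteratively excludes points lying well below the fit (absorption lines) before refitting; the smoothing and clipping parameters are invented for demonstration.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def normalize_spectrum(wave, flux, smooth=1e4, iters=3, clip=1.5):
    """Continuum-normalize a 1-D spectrum with a cubic smoothing spline,
    iteratively excluding points that fall well below the fit
    (absorption lines) before refitting."""
    keep = np.ones(flux.size, dtype=bool)
    spl = None
    for _ in range(iters):
        spl = UnivariateSpline(wave[keep], flux[keep], k=3, s=smooth)
        resid = flux - spl(wave)
        keep = resid > -clip * resid[keep].std()
    return flux / spl(wave)
```

The appropriate smoothing factor depends on the flux scale and line density of the spectrum; in practice it would be tuned per order.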
Radial velocities were obtained with DAOSPEC (Stetson & Pancino, 2008) and the procedure described by Pancino et al. (2010). In short, the laboratory wavelengths of selected absorption lines (Section 3.1) were used to measure the observed radial velocity. The heliocentric correction was computed with IRAF, and the telluric H2O and O2 absorption bands redward of 580 nm were used to correct for zeropoint shifts. All six stars were radial velocity members of ω Cen, considering an average V_r = 232.8 or 233.4 km s^-1, as determined by Meylan et al. (1995) and Pancino et al. (2007), respectively, with a central velocity dispersion of the order of 20 km s^-1. The resulting velocities and errors are listed in Table 1.
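A membership cut of this kind is a one-liner; the hedged Python sketch below (not the authors' code) flags radial-velocity members using the Pancino et al. (2007) cluster mean and the quoted central dispersion, with a 3σ window chosen here purely for illustration.

```python
def is_rv_member(v_obs, v_cluster=233.4, sigma=20.0, nsigma=3.0):
    """Flag a star as a radial-velocity member if it lies within
    nsigma central velocity dispersions of the cluster mean (km/s).
    v_cluster follows Pancino et al. (2007); sigma ~20 km/s as quoted."""
    return abs(v_obs - v_cluster) <= nsigma * sigma

# A star at the Meylan et al. (1995) mean passes; a field star does not.
print([is_rv_member(v) for v in (232.8, 233.4, 300.0)])  # [True, True, False]
```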
Equivalent widths and atomic data
We selected the majority of our lines and of their atomic data from the VALD database (Kupka et al., 1999). To identify reliable lines, high-S/N median spectra of the six UVES targets were created and the cleanest, unblended lines of the available elements were identified. Only lines that appeared in at least three of the six stars were retained in our preliminary selection. DAOSPEC was used to measure the equivalent widths (EW) of all the chosen lines. A first-pass abundance analysis was performed (Section 3.3): lines that showed systematically higher errors and bad Q parameters (see Stetson & Pancino, 2008; Pancino et al., 2010, for details), and lines that gave EW<15 mÅ or EW>100 mÅ (with a few exceptions, see Section 3.3), were not used to determine abundances. The DAOSPEC EW measurements used for the abundance analysis are shown in Table 2, along with the formal error δEW and the quality parameter Q for each line (Stetson & Pancino, 2008). A few lines were measured with the help of spectral synthesis because they were reported to have significant hyperfine structure (HFS). In particular, we used the atomic data by Martin et al. (1988) for the Mn I lines at 4030, 4033, 4034, 4041, and 4055 Å; the NIST atomic data for the Ba II lines at 4934, 5853, 6141, and 6496 Å; the atomic data by Lawler et al. (2001) for the La II lines at 3988, 4086, 4123, and 4238 Å; and the atomic data by Bielski (1975) for the 5105 Å Cu I line. For the CH and CN molecular bands, we used the Kurucz molecular linelists, but we had to revise the log gf values of C downwards by 0.3 dex, similarly to what was reported by Bonifacio et al. (1998), Lucatello et al. (2003), and Spite et al. (2005) (see also Section 3.3).
Atmospheric parameters and best-model search
A first guess of the atmospheric parameters was derived from the WFI photometry. Dereddened (B-V)_0 and (V-I)_0 colours were obtained from the B, V, and I magnitudes adopting E(B-V)=0.11 (Lub, 2002) and E(V-I)/E(B-V)=1.30 (Dean et al., 1978), and are listed in Table 1 along with the V magnitudes. The V-I colour was converted from the original V-I_C, based on the Cousins I magnitude, to V-I_J, based on the Johnson I magnitude, with the relations by Bessell (1979). Effective temperatures (hereafter T_eff) and bolometric corrections (BC_V) were obtained with the Alonso et al. (1999) calibration. Surface gravities (hereafter log g) were then derived by means of the fundamental relation

log g = log g_⊙ + log(M/M_⊙) + 4 log(T_eff/T_eff,⊙) + 0.4 (M_bol − M_bol,⊙),

where the solar values were assumed in conformity with the IAU recommendations (Andersen, 1999), i.e., log g_⊙ = 4.437, T_eff,⊙ = 5770 K, and M_bol,⊙ = 4.75. A typical mass of 0.8 M_⊙ was assumed for the programme stars (Bergbusch & VandenBerg, 2001), and we used (m-M)_V = 14.04±0.11 mag. An independent estimate of T_eff, which is the most influential parameter when determining abundances, was also derived from the profile fitting of the Hα line wings. We computed Kurucz atmosphere models with ATLAS9 (Kurucz, 1993, 2005), using gravities and global metallicities close to the photometric parameters of each target. We built synthetic spectra with a modified version of SYNTHE, exported for the Linux OS (Sbordone et al., 2004); the broadening theory adopted in this routine is that of Ali & Griem (1965) and Vidal et al. (1973). An example of a typical Hα profile fit is shown in Figure 4 (caption fragment: ... Table 3 except for the effective temperature, T_eff; the inset panel shows the run of the χ² of fits with different values of T_eff). We found general agreement between the photometric and the Hα T_eff estimates, but the scatter was large, with differences of up to 300 K in some cases. When averaging the estimates from B-V and V-I together, the scatter went down (see Table 3). We decided to rely on the Hα temperatures as a first guess for the spectroscopic analysis. First-guess log g values were derived from the Hα temperatures using the V magnitudes and the Alonso et al. (1999) calibration.
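As a quick numerical illustration of the fundamental relation above, the Python sketch below evaluates log g for a star with the paper's assumed mass (0.8 M_⊙) and distance modulus ((m-M)_V = 14.04); the V magnitude, bolometric correction, and T_eff used here are illustrative round numbers, not values from the paper's tables.

```python
import math

# Solar reference values as adopted in the text (IAU recommendations,
# Andersen 1999): log g_sun = 4.437, T_eff_sun = 5770 K, M_bol_sun = 4.75.
LOGG_SUN, TEFF_SUN, MBOL_SUN = 4.437, 5770.0, 4.75

def log_g(mass_msun, teff, mbol):
    """Fundamental-relation surface gravity from mass (in M_sun),
    effective temperature (K), and absolute bolometric magnitude."""
    return (LOGG_SUN + math.log10(mass_msun)
            + 4.0 * math.log10(teff / TEFF_SUN)
            + 0.4 * (mbol - MBOL_SUN))

# Illustrative inputs: V, BC_V, and T_eff are invented round numbers;
# the 0.8 M_sun mass and (m-M)_V = 14.04 follow the text.
V, BC_V, DM_V = 17.5, -0.25, 14.04
M_bol = V - DM_V + BC_V
print(round(log_g(0.8, 5700.0, M_bol), 2))  # ~3.7 dex, a subgiant gravity
```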
The final atmospheric parameters were then derived through the usual "spectroscopic method" (see Section 3.3 for the abundance calculations) and are reported in Table 3 along with the photometric and Hα temperatures, for comparison. We found that, on average, the spectroscopic T_eff estimates based on Fe were lower than the Hα ones by 17±52 K, and also lower than the photometric ones by 83±26 K.
Abundance calculations
For most chemical species we computed abundances with the help of the updated version of the original code by Spite (1967). Our reference solar abundance was that of Grevesse et al. (1996). Once the best set of atmospheric parameters was chosen for each star (see Section 3.2), we used the new MARCS model atmospheres with standard composition. We chose the closest available global model metallicity (taking into account α-enhancement) for each of the ω Cen sub-populations, as reported in Table 3. For all species we computed a 3σ-clipped average of the abundances resulting from each line. For elements that had both neutral and ionized lines, we computed the weighted (on the number of lines) average of the two ionization stages to obtain [El/Fe]. We typically rejected lines that had EW>100 mÅ, where the Gaussian approximation could fail, or EW<15 mÅ, since the relative error was too high. For elements with few lines, where we had to rely mostly on strong lines, we either performed spectral synthesis, or checked that the DAOSPEC measurements were not too underestimated by visually inspecting the spectrum and overlaying the DAOSPEC Gaussian fit on each strong line.
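To make the line-selection and averaging logic concrete, the minimal Python sketch below (not the authors' code) applies the EW window quoted above and an iterative 3σ-clipped mean to per-line abundances; the input values are invented for demonstration.

```python
import statistics

def clipped_mean(values, nsigma=3.0, max_iter=10):
    """Iterative nsigma-clipped average of per-line abundances."""
    vals = list(values)
    for _ in range(max_iter):
        mu = statistics.mean(vals)
        sd = statistics.pstdev(vals)
        kept = [v for v in vals if sd == 0 or abs(v - mu) <= nsigma * sd]
        if len(kept) == len(vals):
            break
        vals = kept
    return statistics.mean(vals)

# Invented (EW [mA], abundance) pairs for one element in one star:
lines = [(22.0, 5.31), (48.0, 5.28), (75.0, 5.35), (110.0, 5.60), (12.0, 5.90)]
# Apply the 15 mA < EW < 100 mA window used in the text, then clip-average.
abund = [a for ew, a in lines if 15.0 < ew < 100.0]
print(round(clipped_mean(abund), 2))  # 5.31
```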
A few element abundances (Mn, Cu, Ba, and La) were derived with the help of spectral synthesis, taking into account the HFS of single lines when needed (see also Section 3.1). We used the MOOG (Sneden, 1973) package in combination with the "best MARCS models" above to find the best-fitting spectrum. All atmospheric parameters and the abundances of other blended elements (when present) were kept fixed, and only the abundance of the element of interest was changed until the residuals of the fit were minimized. C and N abundances were derived by spectral synthesis of the molecular CN and CH bands at 3880 Å and 4300 Å, respectively. Examples of line fits with spectral synthesis are shown in Figures 5 and 6.
Fig. 6 (caption): Examples of spectral synthesis applied to the CN (top panel) and CH (bottom panel) molecular bands. Solid black curves represent the observed spectra for star WFI 512115, thick grey curves represent the best-fitting synthetic spectra, while thin grey lines represent synthetic spectra differing from the best fit by ±0.2 dex in the N (top panel) and C (bottom panel) abundances.
In more detail, carbon and nitrogen abundances were measured by minimizing the residuals between grids of synthetic spectra and the observed ones around the CN and CH bands.
In particular, C abundances were derived by fitting the CH G-band between 4300 and 4340 Å, including the band heads of the (0-0), (1-1), and (2-2) bands of the A^2Δ−X^2Π CH transitions; N abundances were derived with the CN B^2Σ^+−X^2Σ^+ (0,0) UV band at 3870-3890 Å. In the measurement process, we kept the atmospheric parameters and all the atomic abundances fixed to the values of Tables 3 and 4, and we used the molecular data from the Kurucz database although, as discussed in Section 3.1, a correction of -0.3 dex to the log gf values of C was necessary. For each star, we computed synthetic spectra around the G-band by changing the C abundance only; once the best fit was found, we employed the resulting C abundance in the fit of the CN band, where we changed only the N abundance. The procedure was then iterated a few times, repeating the fit for the CH band with the newly found N abundance, until the variations in both C and N were well below 0.1 dex.
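The iterative C-N scheme just described is a simple fixed-point loop. The sketch below is schematic only: fit_band is a hypothetical placeholder standing in for the actual spectral-synthesis minimization (MOOG plus the MARCS models in the text).

```python
def derive_C_N(fit_band, tol=0.1, max_iter=10, c0=0.0, n0=0.0):
    """Alternate CH G-band fits (for C) and CN-band fits (for N) until
    both abundances change by less than tol dex, as in the text.
    `fit_band(band, fixed)` is a hypothetical stand-in for the real
    synthesis machinery: it must return the best-fit abundance for
    `band` while holding the abundances in `fixed` constant."""
    c, n = c0, n0
    for _ in range(max_iter):
        c_new = fit_band("CH", {"N": n})       # 4300-4340 A G-band -> C
        n_new = fit_band("CN", {"C": c_new})   # 3870-3890 A UV band -> N
        if abs(c_new - c) < tol and abs(n_new - n) < tol:
            return c_new, n_new
        c, n = c_new, n_new
    return c, n

# Dummy fitter so the loop can be exercised end to end:
print(derive_C_N(lambda band, fixed: {"CH": 8.00, "CN": 7.50}[band]))
```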
The abundance calculation results are reported in Table 4, and are discussed in Section 4.
Abundance uncertainties
We estimated the internal, random uncertainty caused by imperfections in the EW measurement (or in the spectral synthesis process) and in the line atomic data as σ/√n, for those elements that had at least two surviving lines after a 3σ-clipping pass. For elements that relied on one line only, including those analysed with spectral synthesis, we derived a typical uncertainty of 0.05 dex by means of the Cayrel (1988) approximate formula, and we indicate this typical uncertainty in parentheses in Table 4.
The uncertainty owing to the continuum normalization procedure can be estimated from the average spread of the residual spectrum of each star after line removal, which is automatically computed by DAOSPEC. For our spectra, this was ∼1%. According to Figure 2 of Stetson & Pancino (2008), this propagates to an approximate uncertainty in the EW estimates of ±3 mÅ, and finally to an uncertainty of the order of 0.05 dex in the derived abundances.
Another factor that has a big impact on the abundance ratios is the choice of atmospheric parameters (Section 3.2). As discussed by Cayrel et al. (2004), T_eff, log g, and v_t are not strictly independent parameters when determined with the method of Section 3.3. Therefore, the best way to estimate the impact of the parameter choice on abundance ratios is to change the most influential parameter, T_eff, and to re-optimize the other parameters, which naturally re-adjust to accommodate the temperature change. The difference between abundances calculated with the "best model" and with the altered one is a robust estimate of the systematic uncertainty owing to the choice of stellar parameters.
As a consistency check, we analysed star WFI 507633 with MOOG, using the same linelist, EWs, atomic data, atmospheric parameters, and MARCS models. We found [Fe/H]=-2.02±0.02, which is well compatible with the abundance obtained with the Spite code (Table 4). We therefore chose our warmest and coolest stars and recomputed their abundances with models having T_eff altered by ±100 K, re-optimizing the other parameters according to the method described in Section 3.3. The final uncertainties are obtained by averaging the absolute abundance differences of the +100 K and -100 K altered models, and are reported in Table 5.
The global uncertainty (shown in all Figures from 8 to 13) is computed as the sum in quadrature of the random errors, the uncertainty owing to the continuum placement, and the one owing to the choice of atmospheric parameters.
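The quadrature sum just described is trivial to reproduce; the hedged Python sketch below (not the authors' code) combines the three terms, with the per-element parameter term replaced by a placeholder value.

```python
import math

def global_uncertainty(random_err, continuum_err=0.05, params_err=0.05):
    """Sum in quadrature of the three terms discussed above. The 0.05 dex
    continuum default follows the text; the parameter term actually varies
    per element and star (Table 5) and is only a placeholder here."""
    return math.sqrt(random_err**2 + continuum_err**2 + params_err**2)

# Example: sigma/sqrt(n) random term for 9 lines with a 0.12 dex spread.
print(round(global_uncertainty(0.12 / math.sqrt(9)), 3))  # 0.081 dex
```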
A comparison with NGC 6397
Usually, to test the goodness of an abundance analysis, a comparison with the Sun or Arcturus is performed. In our case, because our linelist was optimized for metal-poor subgiants, we would be performing an abundance analysis of the Sun relying only on strong lines, with EW>100 mÅ. These strong lines are rejected when analysing our UVES metal-poor stars, because at this resolution they start to significantly deviate from the Gaussian shape (see Figure 7 by Stetson & Pancino, 2008).
Therefore, while the general method was already tested on the Sun by Pancino et al. (2010), we preferred to compare our present results with another well-studied cluster, NGC 6397. We downloaded UVES archive spectra of three subgiants (stars 206810, 669, and 793) in NGC 6397, published by Gratton et al. (2001) and later re-analysed by Korn et al. (2007), which have temperatures, gravities, and metallicities similar to those of the three most metal-poor SGB stars in our sample. We re-analysed them adopting the same parameters as Gratton et al. (2001) and as Korn et al. (2007), respectively, as detailed in Table 6. Obviously, the differences in the resulting [Fe/H] are negligible within the quoted uncertainties. We note in passing that the major difference between the Gratton et al. (2001) and Korn et al. (2007) abundance determinations for these three stars lies in the v_t determination, which is approximately 0.5 km s^-1 higher in the Korn et al. (2007) paper. This difference alone is probably enough to justify the different abundances found in the two studies. Figure 7 compares the spectra of WFI 507633 and star 206810 in NGC 6397. The parameters of these stars are very similar, and indeed the calcium and iron lines are virtually identical, as confirmed by subtracting one spectrum from the other, where only noise remains. We are then confident that our analysis is correct and consistent with the one by Gratton et al. (2001), within the estimated uncertainties.
Abundance results
In the following sections we present and discuss the abundance ratios of the measured elements. To compare with homogeneous literature sets, we preferentially considered large (≥10 stars) samples of high-resolution measurements of stars belonging to ω Cen. Unfortunately, there are no high-resolution studies of subgiants (although Villanova et al. (2007) study stars in the same evolutionary phase as our targets, their resolution is lower, R≃6000), so we resorted to red giant surveys (Norris & Da Costa, 1995; Smith et al., 2000; Johnson et al., 2009). This will help us to cross-identify sub-populations in the SGB region with those already identified on the RGB. For some species there were no large literature surveys available, so we included articles based on smaller samples (<10 stars). The largest sample available to date is the one by Johnson & Pilachowski (2010), which contains more than 800 stars, but at moderate resolution (R≃18 000). Their abundance ratio plots are more populous, but also obviously more scattered compared with higher-resolution studies, so we decided not to plot them in our Figures 8 to 13, except for their stars at [Fe/H]<-1.9 dex, which are not as well sampled in other, less populous studies. We will also mention the abundance ratios by Villanova et al. (2010), a study of 38 stars observed in the external regions of ω Cen. Given the diversity of comparison studies, even if we reported all literature values to our solar abundance and log gf system (using lines in common) when possible, residual zeropoint differences among different studies will surely still be present.
Iron
Our abundance ratios for iron and the iron-peak elements are shown in Figure 8. We note that three of our six stars lie around [Fe/H]≃-2.0 dex and do not correspond to any of the major sub-populations identified on the RGB, being more metal-poor than any red giant studied with high-resolution spectroscopy before. The only exceptions are star ROA 213 by Smith et al. (2000), which has [Fe/H]=-1.97 dex, star 85007 by Villanova et al. (2010) at [Fe/H]=-1.98 dex, and a small group of 25 stars (out of more than 800) below [Fe/H]≃-2.0 dex in Figure 10 by Johnson & Pilachowski (2010). In their figure, the bulk populations have higher metallicities, with an abrupt drop in numbers below approximately [Fe/H]≃-1.9 dex. We tentatively classify our three stars as a separate subgroup - containing few stars - that we name VMP (from very metal-poor, see also Section 5.1), and we will consider ROA 213 by Smith et al. (2000) and the 25 stars by Johnson & Pilachowski (2010) as the VMP counterparts on the RGB in the following discussion. The other three targets apparently fall into the metal-poor and metal-intermediate populations (MP and MInt, according to the classification by Pancino et al., 2000; Sollima et al., 2005a). We will discuss the population identifications in more detail in Section 5.2.
We note that all comparisons in Figure 8 are between stars in different evolutionary phases (our targets are subgiants, while those in the literature are always giants). Therefore, the uncertainty on the zeropoint - especially as far as [Fe/H] is concerned - cannot be well quantified (see also Section 3.4). Possible factors that cause systematically discrepant abundances between giants and subgiants are:
- there is still no consensus on the NLTE corrections to Fe I abundances, which range from negligible (Gratton et al., 1999), to approximately 0.03-0.05 dex (Korn et al., 2007), to +0.3 dex (Thévenin & Idiart, 1999) for stars of the type presented here;
- the effect of diffusion could already be present for these subgiants, lowering abundances by an amount that could be negligible (Gratton et al., 2001) or of the order of 0.1-0.2 dex (Korn et al., 2007); however, diffusion is heavily influenced in all models by the amount of turbulent mixing, a phenomenon that is poorly constrained at present;
- there are also possible granulation inhomogeneities, which apparently go in the opposite direction compared with the above effects (Shchukina et al., 2005);
- finally, two studies that analysed these phenomena in NGC 6397 (Gratton et al., 2001; Korn et al., 2007) gave very different abundance results (see Section 3.5), so even from the observational point of view it is quite difficult to provide reliable constraints on these phenomena.
In spite of all these problems, we have the advantage that we do not use only [Fe/H]: we can also compare [El/Fe] abundance ratios - which should be more robust to measurement uncertainties and to variations caused by these physical processes - with a vast literature, to effectively cross-identify SGB and RGB populations.
Iron-peak elements
While Norris & Da Costa (1995) provided Cr and Ni measurements for 40 red giants, Johnson et al. (2009) gave Ni measurements for 66 red giants and Cr for a smaller subset. Cr and Ni measurements are also provided by Villanova et al. (2010) for 30-35 stars. We took the Cunha et al. (2002) copper measurements for the 40 red giants already analysed by Norris & Da Costa (1995) (the Cu measurements of six stars by our group, Pancino et al., 2002, are in substantial agreement with those by Cunha et al., 2002, so we do not plot them in the various Figures). For stars with [Fe/H]<-1.9 dex we used the Johnson & Pilachowski (2010) Sc and Ni measurements. No literature measurements in large samples of red giants were found for Co and Mn, except for the recent Mn study by Cunha et al. (2010), so we also compared with five giants by Cohen (1981), eight giants by Gratton (1982), and six giants by François et al. (1988).
There is general agreement for all iron-peak elements between our measurements for SGB stars and the literature RGB ones. Two odd elements, Sc and Co, would give better results if hyperfine splitting (HFS) were taken into account, but since they do not give any additional information with respect to Fe, traditionally no author has employed spectral synthesis to measure them. As a result, all ratios of Sc and Co in Figure 8 - including ours - are on average above solar. Our Sc values fall exactly on top of the literature measurements, and our Co values for the three most metal-rich SGB stars lie between the Gratton (1982) and the François et al. (1988) measurements. It is known that large NLTE corrections are required for Co (Bergemann, 2010), which explains the rising trend observed in all cited studies. A slight underabundance can be noticed in the chromium ratios for the three most metal-poor stars, but NLTE effects should again be the cause (Bergemann, 2010). It is interesting to note that [Ni/Fe] tends to be slightly higher for the VMP stars than for the remaining three UVES targets, by 0.1-0.2 dex, and that a similar trend can be noticed in Figure 10 by Johnson & Pilachowski (2010), who do not comment on this effect.
Fig. 9 (caption): Abundance ratios of α-elements. Symbols are the same as in Figure 8.
Mn and Cu are two very interesting elements with a vast dedicated literature because, although they belong to the iron peak, they behave differently from the other iron-peak elements (see McWilliam, 1997, for a classical review). In particular, Cu in ω Cen was first measured by Pancino et al. (2002) in six RGB stars, and later studied in detail by Cunha et al. (2002), who found it underabundant, similarly to field stars. Finally, it was theoretically modelled by Romano & Matteucci (2007), who found massive stars to be the most likely producers of Cu, in agreement with previous studies (Bisterzo et al., 2004). While we find only upper limits for the three most metal-poor stars (Figure 8), our measurements agree with those by Cunha et al. (2002).
Manganese, instead, was found to be extremely underabundant in our six stars. The determinations by Cohen (1981) and Gratton (1982) appear only slightly subsolar, but none of these estimates takes HFS into account. Better agreement was found with the measurements by Cunha et al. (2010), based on the 10 stars by Smith et al. (2000) and derived with a complete HFS analysis, of which we plot the LTE resulting abundances in Figure 8. Our determinations, like those of Cunha et al. (2010), are lower than the values found for field stars (McWilliam, 1997) or metal-poor stars (Cayrel et al., 2004) around [Fe/H]=-2.0 dex. In particular, three of the five analysed lines belong to the 4000 Å resonance triplet which, according to Cayrel et al. (2004), gives systematically lower abundances by approximately 0.4 dex. Even rejecting these three lines, we found [Mn/Fe]≃-0.7 dex for each of our six stars (Figure 8). The (marginal) discrepancy between our results and those by Cunha et al. (2010) could arise because we used the Mn lines around 4000 Å, while they used the lines around 6000 Å. The general result - that ω Cen has a lower Mn abundance than field and GGC stars - suggests that the source of Mn production in ω Cen should be low-metallicity supernovae, either of type II or Ia (Cunha et al., 2010).
α-Elements
It was not possible to measure the oxygen lines, or to put meaningful upper limits on the oxygen abundance of these subgiants. We were able to measure Mg, Si, Ca, and Ti (Figure 9): all four elements were present in the Norris & Da Costa (1995) and Smith et al. (2000) studies, but we also plotted the more recent Ca and Ti measurements by Johnson et al. (2009) and the Si, Ca, and Ti measurements for stars with [Fe/H]<-1.9 dex by Johnson & Pilachowski (2010). The measurements by Villanova et al. (2010) also include all four α-elements studied here.
All measurements are slightly dispersed for Si, because only a handful of lines is generally available, while for the other three elements they show a reasonably small spread. The three VMP stars do not seem to show any difference from the other three SGB stars in their α-enhancement, with a weighted average of [α/Fe]=0.36±0.08 dex for all elements in all six stars. Literature measurements are quite scattered for Mg, but for RGB stars the Mg lines are very strong and difficult to measure, and Mg also anti-correlates with Na, for example, so some spread should be expected. Apart from this exception, there is very good agreement among the various literature determinations for giants, and with our measurements for subgiants. In particular, we note that RGB stars with [Fe/H]<-1.9 dex are always very close to our three VMP subgiants. The NLTE effects on Mg for all our SGB stars should be around ≃0.1 dex in size (Gratton et al., 1999), but not all the lines analysed here were also analysed by Gratton et al. (1999), so we prefer not to apply any correction.
The α-enhancement of VMP, metal-poor, and intermediate stars in ω Cen is typical of the field stars of the Galactic halo, which is indicative of a chemical enrichment dominated by SNe II.
Heavy elements
We were able to measure Ba, La, Sr, and Y. For Ba and La we used spectral synthesis (see Sections 3.1 and 3.3) to take HFS into account. We plot the abundance ratios in Figure 10, along with the measurements by Norris & Da Costa (1995), Johnson et al. (2009), and Johnson & Pilachowski (2010) for stars with [Fe/H]<-1.9 dex.
The log gf values used by Norris & Da Costa (1995) are now outdated, which is why, using the stars in common between literature studies (including Vanture et al., 1994; Smith et al., 2000; Pancino, 2003), we estimated that all their ratios needed to be raised by ≃0.4-0.5 dex - depending on the element - for comparison with more recent studies. The Norris & Da Costa (1995) data appearing in Figure 10 are already corrected for this effect. Also, the solar Ba abundance used by Villanova et al. (2010) was 2.31, while we use 2.13, and their data are corrected for this difference in Figure 10 as well.
The heavy s-process elements Ba and La generally agree with the literature estimates. In particular, the VMP stars appear to have a lower average s-process enhancement, compatible with zero, as confirmed by the La measurements by Johnson & Pilachowski (2010). Thus, no s-process enrichment by AGB stars seems to have polluted the VMP stars, either on the RGB or on the SGB. This is supported by our Sr and Y measurements, which mainly agree with the literature measurements and which also show a low enhancement, compatible with zero, for our VMP stars.
It is also interesting that the most metal-rich star in our sample, WFI 512115, appears to have a slightly lower s-process enhancement than other stars measured in the literature at similar metallicity. If confirmed by larger samples of metal-rich stars, this could indicate that the s-process enrichment by AGB stars in ω Centauri was not completely homogeneous, as supported also by the small group of stars in the Norris & Da Costa (1995) sample, around [Fe/H]≃-1.7, which appears to have [Ba/Fe] 0.2-0.3 dex higher than other stars at similar metallicity. A larger spread in the s-process elements was found by Villanova et al. (2010) than in the other studies presented here, which they explain by assuming that a bimodality in these elements might be present at low metallicity, supporting the mentioned effect in the Norris & Da Costa (1995) data. Our three stars always share the same s-process enhancement, but larger samples are of course needed to see whether the supposed bimodality, or larger spread, extends to -2.0 dex stars as well. Finally, there is a large fraction of stars with solar Ba and Y enhancement in the Villanova et al. (2010) sample at all metallicities, which are not present in any of the other studies of red giants in ω Cen, and it is not clear yet whether this is an intrinsic feature of stars in the outskirts of the stellar system, or a spurious measurement effect.
Anti-correlations
Figure 11 shows the CH and CN band regions around 3880 and 4300 Å, respectively, for the three most metal-rich UVES stars. Clearly, two of them (namely WFI 503951 and 512115) show a deep CN band and almost no signal in the CH band. On the contrary, star WFI 503358 shows almost no CN band, and its CH band appears slightly deeper than in the other two stars. This evidence is not so clear in the three most metal-poor UVES stars, since the diatomic CN molecular band is much shallower.
We also have atomic lines of other elements that are found to (anti-)correlate within Galactic globular clusters, namely Mg, Na, and Al, besides C and N. Their abundance ratios are shown in Figures 9 and 12. An initial observation, looking at the literature abundance ratio plots (Norris & Da Costa, 1995; Johnson et al., 2009; Johnson & Pilachowski, 2010; Marino et al., 2010), is that [Al/Fe] and [Na/Fe] appear clearly bimodal at all metallicities (with the possible exception of the most metal-rich stars around [Fe/H]≃-0.6 dex), with two distinct sequences around [Al/Fe]≃0 and [Al/Fe]≃1 dex in the case of aluminium; the same bimodality is visible, although less clearly, in the C and N ratios. In our Figure 12 the Na data are confused, because the Villanova et al. (2010) data show a trend opposite to all the other literature data plotted, with Na decreasing with metallicity instead of increasing with it. We found no explanation for this trend. Apart from this, our data follow the literature trends reasonably well, with a similarly large scatter for [N/Fe]. We did not apply NLTE corrections, following a reasoning similar to that for Mg in Section 4.3, but they should again be around 0.1 dex for these subgiants (Gratton et al., 2001). Figure 13 reports the usual (anti-)correlation plots. A clear and bimodal Na-Al correlation is seen in the literature data, as well as the C-N anti-correlation. Our data compare well with the literature in both cases. Even the most difficult Mg-Al anti-correlation is clearly visible in the literature data, although not as clearly in our own data for the six UVES stars. Finally, we also report the C-Al anti-correlation in Figure 13.
In summary, two of the three most metal-rich stars show clear signs of CNO pollution, as well as a clear Na-Al (anti-)correlation, and this agrees with all past literature data on the subject.
Fig. 13 (caption): Anti-correlation plots for the six UVES stars. Symbols are the same as in Figure 8.
Our data for the three VMP stars are instead not conclusive about the presence of (anti-)correlations in the VMP population as a whole. We have two stars (WFI 507109 and 512938) with primordial composition and one star (WFI 507633) that seems to have high [N/Fe] but solar [C/Fe]. Literature measurements seem to point towards absent or reduced (anti-)correlations in VMP stars. The 25 stars by Johnson & Pilachowski (2010) with [Fe/H]<-1.9 dex all tend to have high sodium, and a much lower dispersion than the other >800 stars in their sample, but there are no Al, Mg, C, or N measurements in their study. On the other hand, Marino et al. (2010) show that, for their red giants with [Fe/H]<-1.8 dex, the Na-O anti-correlation appears less extended than for the other metallicity groups in ω Cen, but we do not know how the situation stands for [Fe/H]<-1.9 dex only.
While a complete discussion of the anti-correlations among light elements in ω Centauri is beyond the scope of the present paper - for the moment we just want to cross-identify our SGB stars with the RGB populations - we confirm that, whatever the cause of the anti-correlations in GCs, it must have been active in each sub-population of ω Cen, possibly excluding only the most metal-poor one, at [Fe/H]≃-2.0 dex, and the most metal-rich one (where all stars above -1.0 dex appear polluted, Marino et al., 2010). The presence or absence of (anti-)correlations in some sub-populations of ω Cen is a powerful tool to understand its chemical evolution (see Pancino, 2003; Carretta et al., 2010, and the discussion in Section 5.1).
Discussion
While comparisons between subgiants and giants are not entirely free from problems (see e.g., Bonifacio et al., 2009, and references therein), we measured abundance ratios of several species, in good agreement with past literature determinations for RGB stars. We found that three of our targets seem to belong to a separate population with typical [Fe/H]≃-2.0 dex, [α/Fe]≃+0.35 dex, and an s-process enhancement compatible with zero. Of the three remaining stars, two are consistent with the abundance ratios of the RGB-MP (nomenclature by Pancino et al., 2000), and the last star appears to belong to the RGB-MInt population (nomenclature by Pancino et al., 2000) or, more specifically, to the RGB-MInt2 (nomenclature by Sollima et al., 2005a).
With this identification as our compass, we will try in the following sections to contribute to the understanding of the SGB of ω Cen on a few crucial topics.
5.1. The first stellar generation in ω Cen?
As discussed in Section 4.1, the three most metal-poor stars at [Fe/H]≃-2.0 dex appear to belong to a separate sub-population, which we termed VMP (for very metal-poor). Stars as metal-poor as these were found in the past in small numbers, but were not considered a separate sub-population per se. In particular, several studies (unbiased with respect to metallicity), aimed at deriving the metallicity distribution of ω Cen either through photometry (Frinchaboy et al., 2002; Sollima et al., 2005a; Calamida et al., 2009), low-resolution spectroscopy of red giants (Norris et al., 1996; Suntzeff & Kraft, 1996), or subgiants (Sollima et al., 2005b; Stanford et al., 2006; Villanova et al., 2007), found stars as metal-poor as -2.0 dex or lower. Some high-resolution abundance studies also found a few VMP stars (Smith et al., 2000; Johnson et al., 2009; Johnson & Pilachowski, 2010; Marino et al., 2010; Villanova et al., 2010), and in general there appears to be an abrupt termination of the main MP population around [Fe/H]≃-1.9 dex, with a sparse group of stars around [Fe/H]≃-2.0 dex. This effect is clearly visible in Figure 10 by Johnson & Pilachowski (2010). This clean behaviour justifies the definition of VMP as a new, separate sub-population in ω Cen.
Estimating the fraction of such a minority population is not easy. From the cited studies we estimate that it should be at most 5% of the entire stellar content of ω Cen. Further support for the existence of this small VMP component comes from the recent work by Bellini et al. (2010), who used exquisite ACS photometry to reveal additional sub-structure in the SGB region of ω Cen. In particular, the upper SGB branch (branch A in the nomenclature of Villanova et al., 2007) appears split into two sub-branches in Bellini et al. (2010). The fact that three out of three of our upper SGB targets turned out to have [Fe/H]=-2.0 dex can thus be explained, because we chose them on the upper envelope of the upper branch, which turns out to be separated from the other branches, and which we can safely consider to be made of [Fe/H]=-2.0 dex stars, at least in the external region of ω Cen that we are sampling here (see Figure 2).
Our abundance ratios, based on high-resolution spectroscopy, allow us to hypothesize that these stars must be the best candidate remnant of the primordial population in ω Centauri, enriched primarily by type II SNe (given its [α/Fe]≃+0.35 dex), and most probably free from severe pollution by AGB stars. This last statement is supported by the (almost) solar s-process ratios (Figure 10). Also, while three stars are too few to rule out the presence of anti-correlations in this population, literature studies (Smith et al., 2000; Johnson et al., 2009; Johnson & Pilachowski, 2010; Marino et al., 2010) suggest that (anti-)correlations should be absent, or reduced in their extension, for VMP stars. It would be extremely interesting to study C, N, Mg, Na, and Al in larger samples of SGB stars in ω Cen. Indeed, although ω Cen is commonly considered the remnant of a dwarf galaxy accreted long ago by the Milky Way, all the sub-populations identified so far show clear anti-correlations (Figures 12 and 13) at all metallicities, except for - possibly - the VMP and RGB-a stars. Field populations of dwarf galaxies do not normally show any anti-correlation, which is only found in globular clusters (Gratton et al., 2004). Therefore, the absence (or existence) of anti-correlations in the VMP component could give us evidence for (or against) the dwarf galaxy hypothesis for the origin of ω Centauri, as discussed also by Pancino (2003) and Carretta et al. (2010).
Summarizing, given the chemical properties of the VMP stars studied here and in the cited high-resolution studies, we conclude that they belong to a small and distinct sub-population, which appears to be the best-candidate (remnant) population of the first stellar generation in ω Cen, but could also - if the presence of (anti-)correlations is excluded - be the field population remnant of its hypothesized parent galaxy.
SGB populations puzzle: who is who
Apart from the notation used above, which separates the ω Cen sub-populations by metallicity into VMP, MP, MInt (in turn divided into Int1, Int2, and Int3), and MR (or RGB-a and SGB-a), proposed by Pancino et al. (2000), Ferraro et al. (2004), and Sollima et al. (2005a), in this Section we will also use the nomenclature by Villanova et al. (2007), who photometrically divide the SGB into four sub-branches (A, B, C, and D), from the upper SGB envelope down to the SGB-a. This classification is also highlighted in the two leftmost panels of Figure 14.
The whole difficulty in the study of the SGB of ω Cen is to cross-correlate the five sub-populations defined photometrically and spectroscopically along the RGB with the four or five (and more, see Bellini et al., 2010) sub-branches that are visible on the SGB. This is because, while for the RGB there are exquisite high-resolution studies that can link photometric features with chemical abundance patterns, this is not yet possible for the SGB. One important observation suggested by past attempts to link the SGB and RGB populations is that the nicely combed structure that appears on the RGB and the similarly nicely combed one that appears on the SGB could be somehow scrambled and mixed, as pointed out by Villanova et al. (2007). While on the RGB the dominant factor that separates populations is metallicity, on the SGB it must be age (together with C, N, O, and He), and this could complicate things significantly if there were no clear (monotonically increasing) age-metallicity relation in ω Cen. The uncertainties involved in past low-resolution spectroscopic studies such as Sollima et al. (2005b) and Villanova et al. (2007) were apparently not sufficient to give a final answer to the problem. The present study, on the other hand, while having higher precision in the abundance determination, can only rely on a limited number of stars.
The VMP population
Concerning our newly defined VMP population, we must assume that it should lie: (i) on the bluest edge of the RGB colour distribution, because of its low metallicity, and (ii) on the upper edge of the SGB, and in particular on the upper edge of branch A by Villanova et al. (2007), where we find three stars out of three at [Fe/H]≃-2 dex. This is supported by Figure 12 of Bellini et al. (2010), where for the first time the upper SGB appears split into two separate branches, and where the lower base of the RGB shows a clearly bluer sub-sequence made of a small number of stars. These features are the most likely photometric counterparts of the VMP population found here.
Fig. 14 (caption fragment): ... for ω Cen are used to derive age differences; the oldest population turns out to be 16 Gyr old. Centre panel: a cosmological age of 13 Gyr is forced for the oldest populations, altering E(B-V) and (m-M)_V, but with no significant effect on the resulting age differences; the shape of the isochrones is now in worse agreement with the data. Right panel: NGC 6397 is shown for comparison. The letters A, B, and C, which appear in the two leftmost panels, indicate the approximate location of the SGB branches as defined by Villanova et al. (2007).
The MP population
The connection between the SGB-MP and the RGB-MP is not clear yet. While some other population should be present together with the VMP to account for the number of stars in the upper branch, or branch A, it is not clear whether this is really the counterpart of the RGB-MP, because the RGB-MP contains at least 30% (and likely more) of the stars in ω Cen. We cannot speculate too much with the data in hand - the only certain fact is that the two MP stars in this paper (WFI 503358 and 503951) both lie lower than branch A, possibly on branch B or even C - but we see two possible solutions:
- One possibility - supported by our analysis of stars WFI 503358 and 503951 - is that the RGB-MInt1 stays on branch A (lower half) together with the VMP, and the RGB-MP stays instead on branch B. This would allow for a better agreement between the population fractions estimated by Sollima et al. (2005a) and Villanova et al. (2007), while the uncertainties of both low-resolution studies are probably large enough to accommodate such a switch.
- The other obvious interpretation has already been mentioned by Villanova et al. (2007), i.e., that MP stars lie on both branches A (lower) and B, mixed with part of the metal-intermediate populations.
The MInt populations
Our star WFI 512115, which chemically belongs to the MInt population (and, more specifically, to the MInt2), also appears to lie either on branch B or - most probably - on branch C. Combining this evidence with that presented by both Sollima et al. (2005b) and Villanova et al. (2007), we tentatively conclude that the most likely photometric counterpart of the MInt2 population should be branch C. The presence of a few RGB-MP stars on branch C still cannot be ruled out, and it could be caused either by some scatter in the low-resolution abundances by Villanova et al. (2007), or by photometric errors in the WFI reference photometry (Pancino et al., 2000; Pancino, 2003), or even by a different SGB shape for the SGB-MP. We also speculate that the RGB-MInt3 should correspond to a sequence lying between branches C and D, as clearly indicated by Sollima et al. (2005b) in their Figure 4, and mentioned in passing by Villanova et al. (2007).
The MR or SGB-a population
While the cross-identification between the RGB-a (Pancino et al., 2000; Sollima et al., 2005a), the SGB-a (Ferraro et al., 2004; Sollima et al., 2005b), and branch D (Villanova et al., 2007) appears unquestionable, there is still some debate about its metallicity, ranging from [Fe/H]≃-1.1 dex, derived from low-resolution spectroscopy of SGB stars by Villanova et al. (2007), to [Fe/H]≃-0.6 dex, derived from both high-resolution spectroscopy of RGB stars (Pancino et al., 2000) and low-resolution spectroscopy of SGB stars (Sollima et al., 2005b). The metallicity of the SGB-a will be the subject of a following paper based on GIRAFFE data, but it is interesting to note here that Bellini et al. (2010) found the SGB-a (or branch D) split into two sequences, suggesting that two populations with slightly different properties could occupy these two very close loci.
Towards a solution of the age spread problem
The history of age spread determinations in ω Cen comprises a variety of studies, all focused on the SGB, the most age-sensitive region in the CMD. An age spread of 2-5 Gyr among the various sub-populations was first found by Hughes & Wallerstein (2000) and Hilker & Richtler (2000), based on the TO region morphology of their high-quality Strömgren photometries and colour-metallicity calibrations. Later, several other papers came out based on combinations of high-quality photometry and low-resolution spectroscopy (Hughes et al., 2004; Hilker et al., 2004; Rey et al., 2004; Sollima et al., 2005b; Stanford et al., 2006; Villanova et al., 2007), again finding various age dispersions all around 2-5 Gyr. The only studies reporting age differences below 2 Gyr are those by Ferraro et al. (2004), who find that the SGB-a cannot be fitted with any isochrone younger than the SGB-MP population, and Sollima et al. (2005b), who similarly found that the overall age spread of the SGB sub-populations cannot amount to more than 2 Gyr. It is interesting to note that Sollima et al. (2005b) and Villanova et al. (2007) used photometry from the same ACS data set and spectra of similar quality, and reached opposite conclusions, with Villanova et al. (2007) finding an age spread of at least 2 Gyr within the MP population alone.
On the one hand, spectroscopic abundances of both RGB and SGB stars find an α-enhancement of all populations (except the RGB-a) consistent with pure type II SNe enrichment which, in the self-enrichment scenario for ω Cen, would imply very fast enrichment, within 1 Gyr. On the other hand, the high s-process enhancement of all populations (except maybe the VMP found here) would imply enrichment by intermediate-mass (1-3 M ⊙ ) AGB stars on longer timescales (1-2 Gyr and possibly even more; Busso et al., 1999). Therefore, if ω Cen is a self-enriched system, we should find (and indeed many authors do find) some significant age spread. While we cannot give the ultimate solution to the relative ages (or age spread) puzzles listed above with the present data, we can nevertheless use our newly defined VMP population to shed some light on the problem.
We use isochrones from the Padova database of stellar evolutionary tracks and isochrones 15 , and in particular we chose the ones based on the Marigo et al. (2008) and Girardi et al. (2000) tracks. We preferred this set over BaSTI 16 (Pietrinferni et al., 2006) simply because they provide transformations to the WFI filters. We note that the resulting ages do not change significantly when using BaSTI (less than ≃1 Gyr overall shift), but the actual shape of the SGB is better reproduced when the WFI filters are used instead. While the absolute ages are somewhat uncertain, we show in Figure 14 that the age differences are relatively robust regardless of the absolute age scale calibration of the chosen isochrones set.
Firstly, we computed the Z values for the three populations using the formula log Z = [Fe/H] − 1.7 + log(0.638 × 10^[α/Fe] + 0.362), according to Salaris et al. (1993). We adopted the Lub (2002) reddening and the Bellazzini et al. (2004) distance modulus (see also Section 3.2). We used models with the cosmological helium abundance (for a discussion of higher He abundances, see Section 5.4). The uncertainties involved in the isochrone fitting procedures are large: we assume approximately ±2 Gyr. We find that the VMP and MP populations must be relatively close in age, with a difference of 0±2 Gyr, while the MInt component should be about 2±2 Gyr younger. If we force the age of the oldest population(s) to a cosmological value of 13 Gyr, we have to increase E(B-V) and (m-M) V by 0.03 and 0.16 mag respectively, and the fit becomes considerably worse when considering the isochrone shapes. The age differences, however, do not change. As a comparison, we show in Figure 14 the case of NGC 6397, using the WFI B, V, and I photometric catalogue described by Carretta et al. (2009) and the reddening and distance modulus from the revised Harris (1996) catalogue.
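For illustration, a minimal Python sketch of this Salaris et al. (1993) conversion; the [α/Fe] = +0.4 dex value plugged in below is an assumed, indicative enhancement, not a measurement from this work:

import math

def log_z(fe_h, alpha_fe, log_z_sun=-1.7):
    """Salaris et al. (1993) global metallicity:
    log Z = [Fe/H] + log Z_sun + log10(0.638 * 10**[alpha/Fe] + 0.362)."""
    return fe_h + log_z_sun + math.log10(0.638 * 10 ** alpha_fe + 0.362)

# [Fe/H] values of the three populations quoted in the text;
# [alpha/Fe] = +0.4 dex is an assumed, indicative enhancement.
for name, fe_h in [("VMP", -2.0), ("MP", -1.65), ("MInt2", -1.19)]:
    print(f"{name}: Z = {10 ** log_z(fe_h, 0.4):.1e}")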
The existence of the VMP at the SGB level could help in solving two of the paradoxes previously found in the literature. The first is the zero age difference - within the uncertainties - found by Sollima et al. (2005b) between the MP and the MInt populations, which is difficult to understand given the difference in s-process enhancement between the two populations. If one assigns [Fe/H]=-2 dex to the upper SGB envelope, and moves the MP population to branch B, this shift in metallicity would be allowed by the uncertainties in the Sollima et al. (2005b) low-resolution spectroscopy (see also Starkenburg et al., 2010, on the uncertainties of the calcium triplet calibration at low metallicity). As a result, the MP would become older, and the age difference between the MP and MInt populations would not be so close to zero anymore.
The second puzzle that could be alleviated by the existence of the VMP is the coexistence within the MP group of two separate populations with ages differing by ≃2-3 Gyr, found by Villanova et al. (2007), which could be explained if one admits that the MP population is in reality a mix of MP and VMP stars. The metallicity difference between the VMP and MP populations would be difficult to resolve with low-resolution spectroscopy, similarly to the case above. The younger MP group identified by Villanova et al. (2007) would become older if its metallicity was [Fe/H]≃-2.0 dex instead of -1.7 dex, thus recreating an almost monotonic age-metallicity relation, although the age difference between the VMP and the MP populations would be small. It could be argued that the younger MP in Villanova et al. (2007) is much more numerous than the estimated VMP fraction, i.e., 5% of the total stellar content in ω Cen. However, we must note that a large fraction of the targets in that study were selected a priori on the upper SGB branch (or branch A), which we suspect is dominated by VMP stars. In this case, the target selection would be biased preferentially towards the VMP population and the relative numbers in Figure 19 by Villanova et al. (2007) would not be representative of the respective population fractions anymore.
In a self-enrichment scenario, the small age difference between the VMP and the MP population (0±2 Gyr) poses no problem as far as the type II SNe are concerned, but could perhaps be too short to accommodate the ≃0.5 dex overabundance in s-process elements for the MP. On the other hand, if 2±2 Gyr elapsed between the MP and the MInt2 population, there could be enough time to enrich the MInt population in s-process elements up to the observed level of [s/Fe]≃1 dex. Detailed chemical evolution calculations would be extremely useful in understanding these details.
The helium abundance
Since the discovery of a double MS in ω Cen (Anderson, 2002; Bedin et al., 2004), and the evidence that the bluer sequence was more metal-rich than the redder one (Piotto et al., 2005), it has been suggested that some of the more metal-rich populations in ω Cen could have an abnormal He abundance (Norris, 2004), of about Y≃0.35-0.40. A recent review by Renzini (2008) discussed the possible scenarios and compared them with observations of ω Cen, but also of NGC 2808, which was found to possess a triple MS (Piotto et al., 2007), along with other massive GGCs.
Such an overabundance of helium would have some impact on the model atmospheres (Böhm-Vitense, 1979; Girardi et al., 2007) that are at the basis of any abundance analysis such as the one presented here. The first thing to note is that the most metal-poor populations of ω Cen do not require any He enhancement, and we assume this to be the case not only for the two MP stars (WFI 503358 and 503951), but also for the three VMP ones (WFI 507109, 507633, and 512938). The only star that could suffer from a helium-enriched atmosphere is the one belonging to the MInt2 population, WFI 512115.
A simplified treatment by Gray (2008) assumes that, to a first approximation, an overabundance in helium has a similar effect as an increase in gravity. Just to derive an order-of-magnitude effect, we translate a mass fraction increase from Y≃0.25 to Y≃0.35 into a number abundance increase from A(He)≃0.10 to A(He)≃0.15. This would be mimicked, in our star WFI 512115, by an increase in surface gravity from roughly 3.5 to 4.0 dex, corresponding to an increase in abundance of 0.08 dex in [Fe/H]. Thus, WFI 512115 would change from [Fe/H]=-1.19 to -1.11 dex, with basically no significant effect on the age difference determination, within the quoted uncertainties. We finally note here that, as discussed by Sollima et al. (2005b) in their Figure 6, this increase in the helium abundance would only change the shape of the isochrone's SGB, making it steeper, with a negligible (less than 1 Gyr) impact on the relative age determination.
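A minimal sketch of the mass-fraction-to-number-abundance conversion behind these figures, assuming X ≈ 1 − Y (i.e., neglecting the metal mass fraction) and atomic masses of 4 and 1 for He and H; the small offset from the rounded A(He) pair quoted above reflects the order-of-magnitude nature of the estimate:

def he_number_abundance(y):
    """He/H number ratio A(He) from the helium mass fraction Y,
    assuming X ~ 1 - Y and m_He ~ 4 m_H."""
    return (y / 4.0) / (1.0 - y)

for y in (0.25, 0.35):
    print(f"Y = {y:.2f} -> A(He) ~ {he_number_abundance(y):.2f}")
# Y = 0.25 -> A(He) ~ 0.08; Y = 0.35 -> A(He) ~ 0.13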
Summary and Conclusions
We analysed UVES high-resolution spectra of six stars on the SGB of ω Centauri. We compared our results with RGB high-resolution spectroscopy to identify the sub-populations to which our targets belong, and we found remarkable agreement with past abundance determinations. Three of our targets (WFI 507109, 507633, and 512938) have [Fe/H]≃-2.0 dex, are α-enhanced and show no significant s-process enhancement. Two of the remaining targets (WFI 503358 and 503951) belong to the MP population, with [Fe/H]≃-1.65 dex, α-enhanced and with [s/Fe]≃+0.5 dex. The last target, star WFI 512115, belongs to the MInt2 population, with [Fe/H]=-1.19 dex, α-enhanced and with an s-process enhancement similar to the MP targets, i.e., slightly lower than what is expected for MInt stars.
Our main result (see Section 5.1) is that there exists an additional, metal-poor population (that we name VMP) at [Fe/H]≃-2.0 dex, which has chemical properties that make it the ideal candidate for the (remnant of the) primordial population of ω Cen. The RGB star ROA 213 (Smith et al., 2000), star 85007 by Villanova et al. (2010), and 25 red giants studied by Johnson & Pilachowski (2010) have a similar chemical composition and could represent the prototype VMP members along the RGB. In particular, the s-process enhancement of the SGB-VMP population is not compatible with the RGB-MP stars, while it is compatible with the quoted RGB-VMP stars. Our conclusion is also supported by previous work on metallicity distributions of RGB and SGB stars and by the exquisite photometry by Bellini et al. (2010). We estimate that this VMP population should comprise at most 5% of the entire stellar content of ω Centauri, at present. The presence or absence of light element anti-correlations in this population would be a fundamental constraint on the nature of ω Cen, because anti-correlations are generally exclusively found in globular clusters and never in the field populations of galaxies. From the available literature (mainly Johnson et al., 2009; Johnson & Pilachowski, 2010; Marino et al., 2010) it appears that (anti-)correlations could be reduced in extent in VMP stars. Until the presence of (anti-)correlations in VMP stars is excluded by larger data samples, it looks more promising to interpret this as the primordial population of ω Cen instead of the remnant field population of its putative parent galaxy.
The high-precision abundance determinations obtained allowed us to try to shed some light on the relation between the SGB morphology and the spectroscopically identified RGB sub-populations. We conclude that there appears to be no one-to-one correspondence between the nicely combed substructures of the RGB and SGB. In particular, the MP and MInt populations could either be mixed along the (lower) A, B, and C branches, or be positioned in a not strictly monotonic order, with MInt1 occupying branch A (lower) and MP branch B, just as an example. As already said, VMP stars should occupy (and possibly dominate) the uppermost SGB branch, or branch A in the Villanova et al. (2007) terminology.
We also found (see Section 5.3) that the existence of the VMP population could alleviate some of the problems found in previous determinations of relative ages. In particular, the small metallicity difference between the VMP and MP populations could have escaped previous abundance analyses based on low-resolution spectra. The puzzling result by Villanova et al. (2007) that the MP population should contain two groups with different ages could indeed be explained by the metallicity difference between VMP and MP. Also, the (too) small age difference found by Sollima et al. (2005b) between the MP and MInt populations would become slightly larger when taking into account the existence of the VMP, which should dominate the uppermost SGB envelope.
Finally, there should be a small age difference between the VMP and MP populations (0±2 Gyr), while a slightly larger age difference (2±2 Gyr) should occur between the VMP and the MInt2 populations. Although this latter result is less secure because it relies on one star only, it agrees very well with the majority of past studies (see, e.g., Stanford et al., 2006). The use of different sets of isochrones (Padova, BaSTI) does not change the result significantly, and the helium abundance problem should have a negligible impact on the MInt2 star WFI 512115 (the only one which should have higher helium) because, to first approximation, it should change its [Fe/H] by ≃0.08 dex. The age distribution suggested by the present data would accommodate a fast enrichment between the VMP and MP populations, dominated by SNe type II, while the s-process enrichment of the MP ([s/Fe]≃+0.5 dex) could still pose a problem. The age difference between the MP and MInt populations could instead be sufficient to allow for some intermediate-mass AGB star enrichment (Busso et al., 1999), bringing [s/Fe] to +1.0 dex.
We conclude by noting that this is the only high-resolution-based abundance analysis published on SGB stars in ω Cen so far. Even if the precision of the abundances is higher than in past low-resolution studies, of course the number of stars examined is only six. To give the final answer to the relative ages problem in ω Cen, and to identify who is who in the CMD at the SGB level, a much larger sample (a few hundred) of relatively high-resolution spectra in the SGB region is absolutely necessary. | 2010-12-21T18:20:36.000Z | 2010-12-21T00:00:00.000 | {
"year": 2011,
"sha1": "3991df50e71f5fd65bac2e41cb5237db1a4e8a7e",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2011/03/aa16024-10.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "3991df50e71f5fd65bac2e41cb5237db1a4e8a7e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
240415317 | pes2o/s2orc | v3-fos-license | Reply on RC1
The paper concerns a topic consistent with the aim of the GMD journal, and I really appreciate the huge work made by the authors. The presented analysis and model application could be potentially useful in karst basins. In this study, a karst hydrological model, i.e., the QMG model-V1.0, was developed for karst flood simulation and forecasting. The model itself is a valuable improvement, and what interested me was the applicability of the model in karst areas, so I went through the entire process of modeling and validating the model myself (https://zenodo.org/deposit?page=1&size=20), and the model simulation results were satisfactory. I think the subsequent research should focus on validating the model in more karst areas to prove its general applicability in karst hydrological forecasting. However, a few drawbacks affect the manuscript and have to be addressed before the paper can be published in GMD.
Specific comments
1) English needs modification
I found several incorrect words, grammatical errors and unclear sentences, which make it very difficult to understand the analysis carried out and the results obtained. The authors need to carefully correct the language errors in the whole text.
2) More information about the potential of this new model, i.e., the QMG model-V1.0, for application in karst areas needs to be added in the Introduction part, especially the advantages and disadvantages compared to current numerical karst groundwater models.
3) In the Methodology part, the title of section 3.1, Hydrological model, is inappropriate here, as the section obviously also includes the Parameter Optimization in Section 3.2 and the Model Setting in Section 3.4. I suggest changing it to a model framework and algorithm.
4) In section 3.3 Uncertainty Analysis, it is not clear how to analyze uncertainty in the input data and model structure for this new QMG model-V1.0.
Other minor comments
1) All tables should be set as three-line tables.
2) The right side of Figure 3 seems to be a photograph, please explain the necessity of its existence.
3) Each variable in Figure 5 needs to be clearly labeled as to which parameter it refers to.
4) The horizontal axis in Figure 7 represents the date, but the interval is not one-to-one with the marked time, please check that.
General comment:
The paper concerns a topic consistent with the aim of the GMD journal, and I really appreciate the huge work made by the authors. The presented analysis and model application could be potentially useful in karst basins. In this study, a karst hydrological model, i.e., the QMG model-V1.0, was developed for karst flood simulation and forecasting. The model itself is a valuable improvement, and what interested me was the applicability of the model in karst areas, so I went through the entire process of modeling and validating the model myself (https://zenodo.org/deposit?page=1&size=20), and the model simulation results were satisfactory. I think the subsequent research should focus on validating the model in more karst areas to prove its general applicability in karst hydrological forecasting. However, a few drawbacks affect the manuscript and have to be addressed before the paper can be published in GMD.
Response:
We greatly appreciate the reviewer's comments. The reviewer confirmed the innovation and application value of this study, pointed out the potential of the proposed model (QMG model-V1.0) in karst areas, and suggested that subsequent studies should focus on applying this new model to more karst areas to test its general applicability in karst flood forecasting.
The next step of our research is indeed focused on model validation, for which we will apply this model (QMG model-V1.0) to flood simulation and forecasting in more karst areas, and improve the model's functions and algorithms to enhance its applicability and accuracy based on the application results.
The following is our point-by-point response to specific comments.
1) English needs modification
I found several incorrect words, grammatical errors and unclear sentences, which make it very difficult to understand the analysis carried out and the results obtained. The authors need to carefully correct the language errors in the whole text.
Response:
We have carefully revised the language errors in the full text, including incorrect words, grammatical errors and unclear sentences, and asked a professional English editing company (Charlesworth Advanced) to help fix the language problems in the manuscript.

2) More information about the potential of this new model, i.e., the QMG model-V1.0, for application in karst areas needs to be added in the Introduction part, especially the advantages and disadvantages compared to current numerical karst groundwater models.
Response:
More information about the advantages of the QMG model-V1.0 compared with other karst groundwater models has been added in the revised Introduction (Lines 103-115).
3) In the Methodology part, the title of section 3.1, Hydrological model, is inappropriate here, as the section obviously also includes the Parameter Optimization in Section 3.2 and the Model Setting in Section 3.4. I suggest changing it to a model framework and algorithm.
Response:
This advice is very pertinent. The title of section 3.1 has been replaced by "Hydrological model framework and algorithms" accordingly (Line 205).

4) In section 3.3 Uncertainty Analysis, it is not clear how to analyze uncertainty in the input data and model structure for this new QMG model-V1.0.
Response:
Uncertainty analyses of the input data and model structure have been added in the revised section 3.3 (Lines 428-446).
Other minor comments
1) All tables should be set as three-line tables.
Response:
The tables have been set as three-line tables accordingly (Lines 938-943).
2) The right side of Figure 3 seems to be a photograph, please explain the necessity of its existence.
Response:
It is a three-dimensional spatial model of KHRUs established in the laboratory to visually reflect the storage and movement of water in the karst water-bearing medium, with its spatial anisotropy, and to provide technical support for the establishment of the hydrological model. This description has been added to the revised version (Lines 230-233).

3) Each variable in Figure 5 needs to be clearly labeled as to which parameter it refers to.
Response:
The model parameter referred to by each variable in Figure 5 is already clearly reflected in Table 1 (Line 938).

4) The horizontal axis in Figure 7 represents the date, but the interval is not one-to-one with the marked time; please check that. | 2021-11-02T12:06:04.285Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "d12fa7e796d3dfaa8ae55df0e5b1395438aac30a",
"oa_license": "CCBY",
"oa_url": "https://se.copernicus.org/preprints/se-2021-58/se-2021-58.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d12fa7e796d3dfaa8ae55df0e5b1395438aac30a",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
258793888 | pes2o/s2orc | v3-fos-license | In-depth Teacher-student Interaction Research in Polytechnic Smart Classroom based on the Perspective of "San Jiao" Reform
Classroom reform is an important step in the "San Jiao" reform. The polytechnic classroom urgently needs to be deeply restructured and upgraded. A smart classroom that enables deep interaction embodies the multiple integration of teaching structure elements, deep learning in the smart classroom environment, and the deep integration of information technology.
1.Introduction
"San Jiao" reform is the starting point to deepen the mode of talent training. It has become an unavoidable topic of modern vocational education to explore the in-depth classroom interaction of intelligent classroom in vocational colleges. Smart classrooms can empower indepth classroom interaction in polytechnics.
2.1.Intelligent classroom enabling in-depth classroom interaction in vocational colleges
The classroom is the main field of teaching activities. We should pay attention to teacher development and strengthen classroom vitality. Compared with general education, vocational education is more obviously professional, cross-boundary and integrative, and it is closely connected with economic and social development. Therefore, the "colorful classroom" needs to focus on the compilation, selection and use of textbook resources, which cannot be separated from the guidance of relevant theoretical research.
2.2.Integrating multiple teaching methods to solve classroom problems
Teaching methods are an important starting point of classroom reform in vocational colleges, and the most essential feature of the "colorful classroom" teaching method reform is integration. That is to say, we should promote the "multi-dimensional" transformation of classroom teaching methods based on deep cooperation between schools and enterprises, so that the "colorful classroom" is more in line with the characteristics of vocational education as a distinct type of education.
3.Countermeasures and suggestions for classroom reform in vocational colleges
Some countermeasures and suggestions can be put forward from three aspects: path innovation, resource development and factor reconstruction, so as to lay a solid foundation for an innovative, collaborative new type of classroom.
3.1.Path innovation to break the path dependence of traditional classroom teaching reform
The reform path of traditional classroom teaching is mostly top-down, policy-driven reform; that is, experts are called together to put forward teaching reform suggestions based on research and experience, and these suggestions are then written into documents such as professional teaching standards, curriculum standards and talent training programs, which are widely disseminated and enter the classrooms of front-line teachers.
3.2.Resource development and improving the multi-party collaborative classroom resource construction path
The scientific development and rational application of classroom resources are the representational form of the "colorful classroom". Therefore, improving the multi-party collaborative mechanism for classroom resource development in vocational colleges helps promote the standardized development of related markets. This requires relevant departments to manage and organize teaching resources comprehensively.
4.The value and path of intelligent classroom enabling in-depth interaction in higher vocational classes
High-quality classroom teaching is the core content of constructing a high-quality vocational education system. Many studies have shown that interactive participation variables such as classroom discussion and innovation, peer interaction and cooperation, and teacher-student interaction and communication have an important impact on strengthening the classroom's role as the main position of teaching and effectively improving the quality of classroom teaching [1].
4.1.Implications of in-depth classroom interaction
Grasping the meaning of deep classroom interaction is the epistemological premise for clarifying the practical path of in-depth classroom interaction in vocational colleges empowered by the smart classroom. Classroom interaction has both situational and practical characteristics. It is not an abstract, isolated form of understanding; it must be translated into action to demonstrate its true value. In the smart classroom environment, the deep interaction of multiple elements, the deep driving of multiple tasks and the deep linkage of multiple platforms effectively enhance students' interactive participation, thus significantly improving the effect of in-depth classroom interaction in the smart classroom environment.
4.2.Smart classroom enabling the value of indepth classroom interaction in vocational colleges
In-depth classroom interaction in vocational colleges enabled by the smart classroom does not aim to change traditional classroom interaction, or to completely deny the concept of interaction in the traditional classroom, but to explore a more effective practical logic of interaction, so that every student in vocational colleges can participate in the in-depth exchange of ideas at all times. Its value is mainly reflected in the following three aspects.
Realizing multiple integration and interaction of teaching structure elements
In-depth classroom interaction in vocational colleges enabled by the smart classroom builds a form of deep classroom interaction based on the reform of the teaching structure. Interactive feedback is an important coupling variable: meaning construction and behavioral response regulate and control content processing through interactive feedback, so as to achieve multiple integration and interaction among the elements of the teaching structure.
Realizing deep learning in the smart classroom environment
The concept of in-depth classroom interaction centered on learning is emphasized. In-depth classroom interaction in vocational colleges under the smart classroom environment places learning at the center as the main focus, so that learning integrates the four elements of the teaching structure.
Technology plays a subjective role in the in-depth classroom interaction of vocational colleges
In the smart classroom, technology no longer plays an auxiliary and intermediary tool role in the in-depth classroom interaction of vocational colleges. The smart classroom environment becomes the medium and bridge connecting abstract interactive teaching theory and concrete interactive teaching practice, providing full support for the development of classroom interactive activities, interactive data collection, interactive behavior analysis, interactive feedback push, etc., to achieve the maximum effect of interaction.
5.Smart classroom enabling the path of in-depth classroom interaction in vocational colleges
Although the media in the smart classroom environment extend to virtual reality, in-depth classroom interaction in vocational schools cannot be examined solely from the perspective of technology, which would only fall into the pattern of instrumental thinking; it should instead be considered under the care of teaching structure, social interaction theory and the scientific theory of learning. On the basis of following the laws of educational science, in-depth classroom interaction in vocational colleges under the smart classroom environment should be considered comprehensively.
5.1.Relying on data thinking to realize scientific interactive decision-making
Interactive decision-making refers to the process in which teachers, under the guidance of educational theories and with the help of certain technical means, implement a number of satisfactory schemes to achieve teaching objectives [2]. It can be said that interactive decision-making is the normal behavior that determines the quality of classroom interaction.
5.2.Relying on smart environment to enhance interaction participation
The level of interactive participation is the primary condition that affects the depth of interaction. The stimulation of intrinsic participation motivation is the most fundamental way to enhance the interactive participation of vocational college students.
The smart classroom environment can give vocational college students a more vivid sense of presence and improve the immersion of their experience, thereby enhancing their interactive participation through deep situational experience. In addition, interactive participation can be quantitatively evaluated in the smart classroom environment, so that the frequency and quality of each student's interaction can be clearly recorded.
For example, consider the 317 students from different classes of the same department whom I taught last semester. In terms of average time invested, students with moderate participation had the highest willingness to invest, with a mean value of 1.87, followed by students with high participation, with a mean value of 1.70, while students with low participation had the lowest willingness, with a mean value of 1.65. In addition, since sig = 0.20 > 0.05, there was no significant difference in the real willingness of different types of students to invest time in weekly teacher development activities.
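A minimal sketch of the kind of comparison reported above, assuming a one-way ANOVA as the significance test (the original does not name the procedure) and using hypothetical Likert-scale responses in place of the 317 answers, which are not reproduced here:

import numpy as np
from scipy import stats

# Hypothetical 5-point Likert responses on weekly time-investment willingness
low = np.array([1, 2, 2, 1, 2, 2, 1, 2])
moderate = np.array([2, 2, 1, 2, 2, 2, 2, 2])
high = np.array([2, 1, 2, 2, 1, 2, 2, 2])

for name, group in (("low", low), ("moderate", moderate), ("high", high)):
    print(f"{name} participation: mean willingness = {group.mean():.2f}")

# sig (the p-value) > 0.05 means no significant difference among the groups
f_stat, sig = stats.f_oneway(low, moderate, high)
print(f"F = {f_stat:.2f}, sig = {sig:.2f}")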
5.3.Relying on high-quality interaction to improve the effectiveness of interaction
Interactive effectiveness is the core of in-depth classroom interaction in vocational colleges, and high-quality interactive feedback is an important way to achieve high-quality interaction. Students' right of discourse refers to students' right to express their thoughts, emotions and opinions in educational activities, especially in the classroom [3]. Each student should not only get the opportunity to answer freely, but also the right to question and speak freely. Only in this way can a polytechnic classroom truly become an ecological field of thought collision, emotional resonance and wisdom creation. Quality interaction is an interactive process aimed at deep learning.
5.4.Improving interaction accuracy relying on learning analysis
Interactive precision is a necessary condition for achieving in-depth classroom interaction, and the learning analysis technology integrated into the smart classroom matches it well. The smart classroom can collect data on teacher-student interaction behavior throughout the class. Learning analysis technology is used to measure, collect, analyze and report data related to students' learning behavior and their learning environment, so as to better understand and optimize students' learning status and environment. The use of learning analysis techniques aims to understand deep learning from the perspective of big data.
6.Conclusion
According to the above analysis, most of the students in the sample are willing to invest 1 hour in teacher development activities every week. However, in actual participation, each student took part in teacher development activities only 1.3 times on average in the last semester. The reasons affecting the participation of different types of higher vocational students in teacher development activities, and the value of that participation, deserve further exploration.
The author adopted a 5-point Likert scale to investigate 7 factors: "time", "credits", "volunteer hours", "remuneration", "teacher-student communication", "interest" and "skills". In the data processing, the author first assigned "1" to "very unimportant", "2" to "not important", "3" to "average", "4" to "important" and "5" to "very important", and then carried out the discussion. For the other dimensions, the survey was conducted in fill-in-the-blank form, completed by the surveyed students according to their personal situations. As an inclusive teaching environment, the smart classroom provides a unique natural condition for realizing in-depth classroom interaction in vocational colleges. However, the fundamental appeal of the smart classroom is to deepen the profound understanding of classroom interaction in practice, rather than simple technology superposition. Reason and sensibility are neither mutually exclusive nor opposite poles. Just as Dewey said, "Any theory that considers objective conditions to be important is only at the cost of exerting external control and restricting individual freedom" [4]. | 2023-05-20T15:10:00.998Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "9fecd5508da2dc09e4cc2290b630e88533d79f43",
"oa_license": "CCBY",
"oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2023/17/shsconf_clec2023_02010.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "86547e630b94b36c3e1bbb42e8e22fea2574fd9b",
"s2fieldsofstudy": [
"Computer Science",
"Education",
"Engineering"
],
"extfieldsofstudy": []
} |
11415975 | pes2o/s2orc | v3-fos-license | The double competition multigraph of a digraph
In this article, we introduce the notion of the double competition multigraph of a digraph. We give characterizations of the double competition multigraphs of arbitrary digraphs, loopless digraphs, reflexive digraphs, and acyclic digraphs in terms of edge clique partitions of the multigraphs.
Introduction
The competition graph of a digraph is defined to be the intersection graph of the family of the out-neighborhoods of the vertices of the digraph (see [6] for intersection graphs). The competition graph of a digraph D is the graph which has the same vertex set as D and has an edge between two distinct vertices x and y if and only if N + D (x) ∩ N + D (y) ≠ ∅. This notion was introduced by J. E. Cohen [2] in 1968 in connection with a problem in ecology, and several variants and generalizations of competition graphs have been studied.
In 1987, D. D. Scott [10] introduced the notion of double competition graphs as a variant of the notion of competition graphs. The double competition graph (or the competition-common enemy graph or the CCE graph) of a digraph D is the graph which has the same vertex set as D and has an edge between two distinct vertices x and y if and only if both N + D (x) ∩ N + D (y) ≠ ∅ and N − D (x) ∩ N − D (y) ≠ ∅. See [4,5,9,11] for recent results on double competition graphs.
The notion of competition multigraphs was introduced in [1] in 1990 as a variant of the notion of competition graphs. The competition multigraph of a digraph D is the multigraph which has the same vertex set as D and has m xy multiple edges between two distinct vertices x and y, where m xy is the nonnegative integer defined by m xy = |N + D (x) ∩ N + D (y)|. See [8,12] for recent results on competition multigraphs.
In this article, we introduce the notion of the double competition multigraph of a digraph, and we give characterizations of the double competition multigraphs of arbitrary digraphs, loopless digraphs, reflexive digraphs, and acyclic digraphs in terms of edge clique partitions of the multigraphs.
Main Results
We define the double competition multigraph of a digraph as follows.
Definition. Let D be a digraph. The double competition multigraph of D is the multigraph which has the same vertex set as D and has m xy multiple edges between two distinct vertices x and y, where m xy is the nonnegative integer defined by m xy = |N − D (x) ∩ N − D (y)| · |N + D (x) ∩ N + D (y)|.
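A small Python sketch of this definition may help fix ideas (a hypothetical helper written for this text, not code from the paper; a digraph is given as a vertex set and a set of arcs):

from itertools import combinations

def double_competition_multigraph(vertices, arcs):
    """Map each unordered pair {x, y} to its multiplicity
    m_xy = |N-(x) & N-(y)| * |N+(x) & N+(y)|."""
    out_nbrs = {v: {w for (u, w) in arcs if u == v} for v in vertices}
    in_nbrs = {v: {u for (u, w) in arcs if w == v} for v in vertices}
    edges = {}
    for x, y in combinations(sorted(vertices), 2):
        m = len(in_nbrs[x] & in_nbrs[y]) * len(out_nbrs[x] & out_nbrs[y])
        if m > 0:
            edges[(x, y)] = m
    return edges

# a and b have the common in-neighbor c and the common out-neighbor d,
# so they are joined by m_ab = 1 * 1 = 1 edge
V = {"a", "b", "c", "d"}
A = {("c", "a"), ("c", "b"), ("a", "d"), ("b", "d")}
print(double_competition_multigraph(V, A))  # {('a', 'b'): 1}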
Theorem 1. Let M be a multigraph with n vertices. Then M is the double competition multigraph of a digraph if and only if there exist an ordering (v 1 , . . . , v n ) of the vertices of M and a double indexed edge clique partition F = {S ij | i, j ∈ [n]} of M such that the following condition holds: (I) for any i, j ∈ [n], A i ∩ B j = S ij , where A i and B j are the sets defined by (1) and (2).

Proof. First, we show the only-if part. Let M be the double competition multigraph of an arbitrary digraph D. Let (v 1 , . . . , v n ) be an ordering of the vertices of D. For i, j ∈ [n], we define S ij := {v k ∈ V (D) | (v i , v k ) ∈ A(D) and (v k , v j ) ∈ A(D)}. (3) Then S ij is a clique of M. Let F be the family of S ij 's whose size is at least two, i.e., F := {S ij | i, j ∈ [n], |S ij | ≥ 2}. (4) By the definition of a double competition multigraph, F is an edge clique partition of M.

We show that the condition (I) holds. Fix i and j in [n] and let A i and B j be the sets as defined in (1) and (2).
There are four cases for v k arising from the definitions of A i and B j , and in each case one can check that v k ∈ A i ∩ B j if and only if v k ∈ S ij . Hence the condition (I) holds.
Next, we show the if part. Let M be a multigraph with n vertices, and suppose that there exists an ordering (v 1 , . . . , v n ) of the vertices of M and a double indexed edge clique partition F = {S ij | i, j ∈ [n]} of M such that the condition (I) holds.
We define a digraph D by V (D) := V (M) and A(D) := {(v i , v k ) | v k ∈ S ij for some j ∈ [n]} ∪ {(v k , v j ) | v k ∈ S ij for some i ∈ [n]}. (5) Let M ′ be the double competition multigraph of D, take any two distinct vertices v k and v l , and let t ′ := m M ′ ({v k , v l }). Then, for some nonnegative integers r ′ and s ′ with r ′ s ′ = t ′ , there are r ′ common in-neighbors v i 1 , . . . , v i r ′ and s ′ common out-neighbors v j 1 , . . . , v j s ′ of the vertices v k and v l in D.
Therefore, {v k , v l } ⊆ A i ∩ B j for any i ∈ {i 1 , . . . , i r ′ } and any j ∈ {j 1 , . . . , j s ′ }. By the condition (I), we have A i ∩ B j = S ij . Therefore {v k , v l } ⊆ S ij for any i ∈ {i 1 , . . . , i r ′ } and any j ∈ {j 1 , . . . , j s ′ }, and this implies that t ′ = r ′ s ′ ≤ m M ({v k , v l }), since F is an edge clique partition of M.

Theorem 2. Let M be a multigraph with n vertices. Then M is the double competition multigraph of a loopless digraph if and only if there exist an ordering (v 1 , . . . , v n ) of the vertices of M and a double indexed edge clique partition F = {S ij | i, j ∈ [n]} of M such that the condition (I) and the following condition hold: (II) for any i, j ∈ [n], v i ∉ S ij and v j ∉ S ij , where A i and B j are the sets defined as (1) and (2).
Proof. First, we show the only-if part. Let M be the double competition multigraph of a loopless digraph D. Let (v 1 , . . . , v n ) be an ordering of the vertices of D. Let S ij (i, j ∈ [n]) be the sets defined as (3), and let F be the family defined as (4). Then S ij is a clique of M, and F is an edge clique partition of M. Moreover, we can show, as in the proof of Theorem 1, that the condition (I) holds. Now we show that the condition (II) holds. Take any vertex v k ∈ S ij . Then (v i , v k ), (v k , v j ) ∈ A(D). Since D is loopless, we have v i ≠ v k and v j ≠ v k . Therefore it follows that v i ∉ S ij and v j ∉ S ij . Thus the condition (II) holds. Next, we show the if part: as in the proof of Theorem 1, we define a digraph D by V (D) := V (M) and A(D) given in (5); by the condition (II), D is loopless, and M is the double competition multigraph of D.

Theorem 3. Let M be a multigraph with n vertices. Then M is the double competition multigraph of a reflexive digraph if and only if there exist an ordering (v 1 , . . . , v n ) of the vertices of M and a double indexed edge clique partition F = {S ij | i, j ∈ [n]} of M such that the condition (I) and the following condition hold: (III) for any i ∈ [n], v i ∈ S i * ∪ S * i , where A i , B j , S i * , and S * i are the sets defined as (1) and (2).
Proof. First, we show the only-if part. Let M be the double competition multigraph of a reflexive digraph D. Let (v 1 , . . . , v n ) be an ordering of the vertices of D. Let S ij (i, j ∈ [n]) be the sets defined as (3), and let F be the family defined as (4). Then S ij is a clique of M, and F is an edge clique partition of M. Moreover, we can show, as in the proof of Theorem 1, that the condition (I) holds. Now we show that the condition (III) holds. Since D is reflexive, we have (v i , v i ) ∈ A(D) for any i ∈ [n]. Then it follows from the definition of D that there exists p ∈ [n] such that v i ∈ S ip or v i ∈ S pi . Therefore v i ∈ S i * ∪ S * i . Thus the condition (III) holds.
Next, we show the if part. Let M be a multigraph with n vertices, and suppose that there exists an ordering (v 1 , . . . , v n ) of the vertices of M and a double indexed edge clique partition F = {S ij | i, j ∈ [n]} of M such that the conditions (I) and (III) hold. We define a digraph D by V (D) := V (M) and A(D) given in (5). Fix any i ∈ [n]. By the condition (III), there exists p ∈ [n] such that v i ∈ S ip or v i ∈ S pi . Then it follows from the definition of D that (v i , v i ) ∈ A(D). Therefore D is a reflexive digraph. Moreover we can show, as in the proof of Theorem 1, that M is the double competition multigraph of D.
A digraph D is said to be acyclic if D has no directed cycles. An ordering (v 1 , . . . , v n ) of the vertices of a digraph D, where n is the number of vertices of D, is called an acyclic ordering of D if (v i , v j ) ∈ A(D) implies i < j. It is well known that a digraph D is acyclic if and only if D has an acyclic ordering.

Theorem 4. Let M be a multigraph with n vertices. Then M is the double competition multigraph of an acyclic digraph if and only if there exist an ordering (v 1 , . . . , v n ) of the vertices of M and a double indexed edge clique partition F = {S ij | i, j ∈ [n]} of M such that the condition (I) and the following condition hold: (IV) for any i, j, k ∈ [n], v k ∈ S ij implies i < k < j, where A i and B j are the sets defined as (1) and (2).
Proof. First, we show the only-if part. Let M be the double competition multigraph of an acyclic digraph D. Let (v 1 , . . . , v n ) be an acyclic ordering of the vertices of D. Let S ij (i, j ∈ [n]) be the sets defined as (3), and let F be the family defined as (4). Then S ij is a clique of M, and F is an edge | 2013-07-21T02:38:25.000Z | 2013-07-21T00:00:00.000 | {
"year": 2013,
"sha1": "0adf2702179006b6afeefe60be60f1b243399352",
"oa_license": "CCBY",
"oa_url": "https://dmtcs.episciences.org/2133/pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "aee62f3e10accf9c8741e486aba43dafad4e81f0",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
48028 | pes2o/s2orc | v3-fos-license | Population genetic structure of eelgrass (Zostera marina) on the Korean coast: Current status and conservation implications for future management
Seagrasses provide numerous ecosystem services for coastal and estuarine environments, such as nursery functions, erosion protection, pollution filtration, and carbon sequestration. Zostera marina (common name “eelgrass”) is one of the seagrass bed-forming species distributed widely in the northern hemisphere, including the Korean Peninsula. Recently, however, there has been a drastic decline in the population size of Z. marina worldwide, including Korea. We examined the current population genetic status of this species on the southern coast of Korea by estimating the levels of genetic diversity and genetic structure of 10 geographic populations using eight nuclear microsatellite markers. The level of genetic diversity was found to be significantly lower for populations on Jeju Island [mean allelic richness (AR) = 1.92, clonal diversity (R) = 0.51], which is located approximately 155 km off the southernmost region of the Korean Peninsula, than for those in the South Sea (mean AR = 2.69, R = 0.82), which is on the southern coast of the mainland. South Korean eelgrass populations were substantially genetically divergent from one another (FST = 0.061–0.573), suggesting that limited contemporary gene flow has been taking place among populations. We also found weak but detectable temporal variation in genetic structure within a site over 10 years. In additional depth comparisons, statistically significant genetic differentiation was observed between shallow (or middle) and deep zones in two of three sites tested. Depleted genetic diversity, small effective population sizes (Ne) and limited connectivity for populations on Jeju Island indicate that these populations may be vulnerable to local extinction under changing environmental conditions, especially given that Jeju Island is one of the fastest warming regions around the world. Overall, our work will inform conservation and restoration efforts, including transplantation for eelgrass populations at the southern tip of the Korean Peninsula, for this ecologically important species.
Introduction
Seagrasses, marine angiosperms, play a pivotal role in ecosystem functioning and services in coastal zones. For example, they are primary producers and seagrass beds also provide important habitats, serving as both nursery and grazing areas for other marine organisms [1][2][3]. They are often called "ecosystem engineers" as they can modify surrounding biotic and abiotic marine environments, creating their own habitats. The structural components of seagrass leaves, rhizomes, and roots alter water currents, buffer physical forces of waves [4], and filter organic nutrients or pollutants, which help to improve water quality [5], stabilize sediment bottoms [6], and enhance carbon sequestration [7][8][9]. Seagrasses are thus considered a valuable ecosystem component in coastal and estuarine habitats [10].
Unfortunately, seagrass populations have been disappearing recently worldwide, primarily due to anthropogenic pressure such as reclamation, dredging, and climate change [11,12]. According to a recent meta-analysis of quantitative data on seagrass coverage from 215 sites around the world, more than 51,000 km 2 of seagrass meadows have been lost during the past 127 years [11]. The rate of seagrass decline has accelerated from a median of 0.9% per year before 1940 to 7% per year since 1990. Therefore, seagrass meadows are now regarded as the most threatened ecosystem on earth amongst all coastal ecosystems (e.g., mangroves, coral reefs, and tropical rainforests) [11].
Zostera marina (common name "eelgrass"), which is the most wide-ranging seagrass species in the northern hemisphere, including the North Atlantic and North Pacific Oceans, is also the predominant seagrass species along the Korean coasts [13,14]. This species usually occurs from intertidal to subtidal areas (e.g., it typically occurs at depths of 1-7 m relative to the mean low tide point on the southern coast of Korea) [14].
Understanding the extent of intraspecific genetic diversity and population genetic structure of Z. marina provides important information for monitoring and conservation or restoration efforts of seagrass meadows [15][16][17]. Population genetics surveys allow for inferring the population demographic history of Z. marina (e.g., population bottleneck), assessing population connectivity (i.e., contemporary gene flow), which is especially important for determining the size of a management unit, and gauging the likelihood of population persistence and adaptive potential in response to anthropogenic pressure such as climate change [16]. A number of population genetics studies of Z. marina have been conducted to examine spatial and temporal variation in the population genetic structure as well as clonal patch dynamics [18][19][20]. The observed patterns of the genetic diversity and population connectivity of this species, however, appear to vary among coastal regions and also with sampling or geographic scales examined [21].
Information on genetic variation/diversity permits testing whether the current population has lost genetic variability through the effects of genetic drift, particularly when the target population is isolated from surrounding populations [22,23]. Genetic diversity is well known to play a significant role in the ecological performance of natural seagrass populations [24,25], and it therefore strongly affects the ultimate outcome of conservation and restoration efforts of seagrasses [17,26]. Enriched within-population genetic diversity safeguards an increase in seagrass population density and biomass, enhances coexisting faunal abundance through community-level positive feedback [25], ensures rapid recovery after disturbance events (e.g., geese grazing) [24], and helps withstand biotic and abiotic environmental changes [27]. As a consequence, taking into account the information on population genetic structure of Z. marina helps facilitate effective population restoration and management plans [17,26].
Z. marina occasionally displays complex reproductive strategies associated with environmental conditions, evolving divergent life history tactics [28,29]. Populations of Z. marina separated by water depth within the same locality have recently been suggested to adapt to different light conditions, evolving alternative life history strategies (e.g., annual or perennial life histories) [29]. However, how eelgrass populations have evolved different reproductive strategies along the depth gradient and whether these depth populations within a site share a similar genetic makeup or distinct genetic clusters remain unresolved. Nevertheless, a recent study found some detectable genetic divergence between shallow and deep zones of Z. marina in San Francisco Bay, California, USA [28]. Further studies are required to test the hypothesis that ecologically divergent populations of Z. marina isolated by depth also comprise different genetic sub-populations rather than a single gene pool.
In recent years, a number of studies on Korean populations of Z. marina have been carried out, focusing on their ecological and physiological characteristics such as distribution patterns, growth dynamics, recruitment, and photosynthetic capability [30][31][32][33][34]. Additionally, several transplantation projects have been successfully conducted for seagrass habitat restoration [35][36][37][38]. Although studies have highlighted the significant role of genetic diversity in the ecological performance of seagrass populations [24,25], little effort has been made to understand the genetic structure and genetic diversity in Z. marina populations in Korea for the purpose of their conservation and restoration efforts.
In the present study, we examined the level of genetic diversity and the population genetic structure of Z. marina on the southern coast of Korea (including Jeju Island which is located approximately 155 km off the southernmost region of the mainland) to assess the current population genetic status and conservation necessity. We also tested whether ecologically divergent populations by water depth within three localities differ in genetic composition. The specific objectives of this study were to (1) examine and compare the levels of within-population genetic diversity in Z. marina between five populations in Jeju Island and five populations in the South Sea on the southern coast of Korea; (2) examine the spatial genetic structure on different geographical scales; (3) test whether there was a change in genetic composition over a 10 year-period using temporal samples; and (4) investigate whether there was significant variation in the population structure between shallow and deep populations within each of three sites. The results of our study will provide a basic but significant guideline for designing effective management, conservation, and restoration plans of this ecologically crucial species, Z. marina, on the Korean coast.
Sample collection
We sampled 454 individuals of Z. marina from 10 different localities on the southern coast of Korea (including Jeju Island, which is located off the southern tip of Korea) in August 2015 (Table 1, Fig 1). Sampling sites at Jeju Island included Hamdeok (HD), Tokki-seom (TK), Ojo (OJ), Woljeong (WJ), and Siheung (SH), and those in the South Sea on the southern coast of Korea included Gamak Bay (GM), Jindong Bay (JD), Nampo Port (NP), Aenggang Bay (AG), and Koje Bay (KJ) (Table 1, Fig 1). Near WJ on Jeju Island, a human-mediated transplantation project was undertaken using other populations as source material in 2009 [39]. However, the source population was not reported, so it remains unknown. Note that the WJ samples used in this study were obtained from areas that differed from the sites where restoration efforts had been implemented. No specific permission to collect samples was required at the study sites, and the field study did not involve endangered or protected species. Samples were collected in monotypic meadows of Z. marina by both wading and diving at 1-2 m intervals between samples using a linear transect to obtain randomly chosen ramets within each location [40]. All the samples were collected at 1-2 m intervals within sites, and these sampling distances were kept identical among sampling localities. Z. marina meadows at five sites on Jeju Island were mapped in the field with Global Positioning Systems (OziExplorer program).
To test whether there was temporal variation in the population genetic structure of Z. marina in Koje Bay on the southern coast of Korea, plants were sampled during two separate sampling periods, which were 10 years apart (July 2005 and August 2015) (KJ05 and KJ15). Due to the reported local or fine-grained genetic structure of this species [18,19], we attempted to collect the second samples (i.e., 2015 samples) from exactly the same microhabitats as the previous samples from 2005 in order to rule out a confounding spatial effect. To investigate whether there was significant genetic structure between populations by water depth within sites, three (KJ, JD, and AG) of the five sites were chosen in the South Sea for depth-specific populations, which were collected at both shallow zone (S, water depth ranges from 0 to 0.6 m) and deep zone (D, from 1.6 to 8.5 m). At AG, an additional population was sampled at a middle zone (AG-M, from 1.8 to 2.3 m). Note that the KJ population was collected in the intertidal zone for the shallow population (intertidal zone is always above water at low tide and under water at high tide).
Collected samples were washed using freshwater and raked to remove epiphytic algae using a sterilized razor blade. Leaf samples were dried at 60˚C for 24 h and then ground using a TissueLyser II (QIAGEN). Powdered samples were transferred to a 1.5-ml microcentrifuge tube with silica gel and stored at -20˚C until genetic analysis. Samples were genotyped at eight microsatellite loci developed previously [41,42]. Each forward primer was labeled with a fluorescent dye (FAM, VIC, NED, TAMRA, and PET). PCR amplification was accomplished in a reaction volume of 15 μl containing 25 μM of each dNTP (Bio Basic), 0.6 μM each of the forward and reverse primers, 0.2 units of Taq DNA polymerase (Thermo Fisher Scientific), 1× PCR buffer, and approximately 5−10 ng of template DNA. PCR cycling conditions comprised an initial denaturation phase at 94˚C for 5 min, followed by 37 cycles of 94˚C for 20 sec (denaturation), 54−57˚C for 30 sec (annealing), and 72˚C for 30 sec (extension), followed by a terminal extension phase at 72˚C for 12 min in a 2720 thermal cycler (Applied Biosystems). Each PCR product was checked on a 2% agarose gel stained with Redsafe TM (iNtRON Biotechnology). Amplified PCR products were then electrophoresed in an ABI 3730xl automated DNA sequencer (Applied Biosystems). Fragment sizes were compared with that of a ROX 500 bp size standard (ABI) as determined using GeneMapper software v5.0 (Applied Biosystems).
Statistical analyses
Clonal and genetic diversities. To avoid resampling multiple examples of the same clonal individual, we defined an individual plant (genet) as having a unique multi-locus genotype (MLG) using GENALEX v6.5 [43]. Where multiple samples (ramets) shared a single MLG, all but one ramet was removed for further analyses. We calculated clonal diversity R = (G-1)/(N-1), where G = the number of genets and N = the total number of ramets sampled [44]. Higher values of R denote lower levels of clonality. Once replicate ramets were removed, we assessed the statistical power of the pruned dataset to detect clones using the probability of identity values, P IDunbiased (hereafter P ID ) and P IDsib [45] estimated in GIMLET v1.3.3 [46] for each population, and depth and temporal samples.
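As an illustration of the clone-pruning step and the index above, a minimal Python sketch with hypothetical multilocus genotypes (the study itself used GENALEX for this step):

def clonal_diversity(ramet_genotypes):
    """R = (G - 1) / (N - 1), where G is the number of unique
    multilocus genotypes (genets) and N is the number of ramets."""
    n = len(ramet_genotypes)
    g = len({tuple(mlg) for mlg in ramet_genotypes})
    return (g - 1) / (n - 1)

# Hypothetical three-locus genotypes (allele sizes) for five ramets
ramets = [
    (140, 142, 98, 98, 201, 205),
    (140, 142, 98, 98, 201, 205),  # same MLG as the first ramet (one clone)
    (138, 142, 96, 98, 201, 201),
    (140, 144, 98, 100, 203, 205),
    (138, 142, 96, 98, 201, 201),  # same MLG as the third ramet
]
print(f"R = {clonal_diversity(ramets):.2f}")  # (3 - 1) / (5 - 1) = 0.50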
To evaluate microsatellite diversity in the Korean eelgrass, the mean number of alleles per locus (N A ), observed (H O ) and expected (H E ) heterozygosity, inbreeding coefficient (F IS ) [47], and allelic richness (AR) corrected for unequal sample sizes were calculated using GENEPOP v4.3 [48] and FSTAT v2.9.3.2 [49]. We conducted two separate Mann-Whitney U tests to investigate if there were significant differences in the levels of clonal and genetic diversities (e.g., R, AR) between populations from Jeju Island (n = 5) and from the South Sea (n = 9) after the KJ samples obtained in 2005 (KJ05-S, KJ05-D) were excluded (see below). We tested for the presence of null alleles using MICRO-CHECKER v2.2.3 with 1000 randomizations at the 95% confidence level [50]. Genotypes at the eight microsatellite loci were tested for linkage disequilibrium (LD, nonrandom associations of alleles from different loci) using the entire pooled sample, and multilocus tests for Hardy-Weinberg equilibrium (HWE) were undertaken using GENEPOP. The 95% significance levels for every exact test for both LD and HWE were adjusted using a sequential Bonferroni correction.
Evidence of recent population bottlenecks was tested using BOTTLENECK v1.2.02 with the two-phase mutation model (TPM) [51]. A population bottleneck can be identified by the occurrence of a mode-shift (i.e., allele distribution shift) and/or a significant heterozygosity excess tested statistically by a Wilcoxon sign-rank test [52]. Contemporary effective population sizes (N e ) were also calculated for each of the samples based on the LD method in NeEstimator v2.01 [53].
Population genetic structure. To evaluate the spatial population genetic structure of Z. marina on the southern coast of Korea, a hierarchical analysis of molecular variance (AMOVA) was performed in ARLEQUIN v3.5 [54]. The spatial AMOVA was conducted by grouping the populations into two regions (Jeju Island and the South Sea). The KJ samples obtained in 2005 (KJ05-S, KJ05-D) were excluded from this analysis because detectable temporal variation was observed (see below). To further investigate spatial, temporal, and depth-specific genetic differentiation between populations, exact tests for population differentiation [55] as well as calculation of pair-wise estimates of F ST [47] were performed using GENEPOP. The 95% significance levels for pairwise comparisons were adjusted using a sequential Bonferroni correction.
Isolation by distance (IBD) among geographic populations was tested using the Mantel test. The KJ populations sampled in 2005 (KJ05-S, KJ05-D) were again omitted from this analysis.
The IBD analysis was carried out using two matrices, genetic distance (F ST) and geographic surface distance (in kilometers), in GENALEX v6.5 [43]. Geographic surface distance was calculated as the shortest distance between sampled populations via water (derived oceanographic distance) using the calculator at http://www.movable-type.co.uk/scripts/latlong.html. Geographic surface distance between depth populations within the three South Sea sites (KJ15, JD, AG) was set to zero kilometers because of the short surface distances (within approximately 100 m) between them. To analyze IBD at smaller geographic scales, we performed two independent Mantel tests, one for the Jeju Island populations and one for the South Sea populations.
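For readers unfamiliar with the Mantel test, a simple permutation version correlating a pairwise F ST matrix with a geographic distance matrix might look like the following; the matrix values are hypothetical, and GENALEX implements the actual test used here.

```python
# Hedged sketch of a permutation Mantel test between a genetic distance
# matrix (pairwise F_ST) and a geographic distance matrix (km).

import numpy as np

def mantel(genetic, geographic, n_perm=9999, seed=1):
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(genetic, k=1)        # off-diagonal entries
    r_obs = np.corrcoef(genetic[iu], geographic[iu])[0, 1]
    count = 0
    n = genetic.shape[0]
    for _ in range(n_perm):
        perm = rng.permutation(n)                  # permute rows and columns jointly
        r = np.corrcoef(genetic[perm][:, perm][iu], geographic[iu])[0, 1]
        count += r >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)       # one-tailed P value

fst = np.array([[0.00, 0.05, 0.20, 0.30], [0.05, 0.00, 0.18, 0.25],
                [0.20, 0.18, 0.00, 0.10], [0.30, 0.25, 0.10, 0.00]])
km  = np.array([[0, 11, 60, 95], [11, 0, 55, 80],
                [60, 55, 0, 30], [95, 80, 30, 0]])
print(mantel(fst, km))
```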
In addition, we analyzed the population structure of Z. marina using an individual-based Bayesian population assignment test in STRUCTURE v2.3.1 under a model of admixed ancestry among populations and correlated allele frequencies [56], with no a priori information on the geographic origins of the samples. STRUCTURE calculates a likelihood score when the data are forced into a given number of genetic clusters, K. We ran 10 iterations at each K = 1−14, with 50,000 burn-in steps followed by 500,000 Markov chain Monte Carlo (MCMC) generations. STRUCTURE analyses were also performed separately for the Jeju Island populations and for the South Sea populations. The "temporal" structure between the KJ samples (KJ05-S, KJ05-D, KJ15-S, and KJ15-D) was examined at each K = 1−4. For this temporal comparison, depth samples were not pooled because statistically significant differentiation was observed between the 2015 samples (KJ15-S vs KJ15-D; F ST = 0.031, P < 0.01). The most probable number of clusters (K) was estimated using the ΔK method [57] implemented in the web-based tool Structure Harvester (http://taylor0.biology.ucla.edu/structureHarvester), on the basis of the rate of change in the log probability of the data between successive K values [58]. In addition, the number of genetic clusters (populations) was set to K = 11 (after excluding the KJ05 samples), based on the results of the significance tests of pairwise F ST statistics. We ran the analysis three independent times to check for convergence on similar K values and found that the three runs arrived at identical values. Finally, genetic relationships among individuals with multilocus genotypes were assessed by factorial correspondence analysis (FCA) as implemented in GENETIX v4.04 [59].
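The ΔK statistic of Evanno et al. [57], as computed by Structure Harvester, is the absolute second-order rate of change of ln P(X|K) across successive K values, scaled by the standard deviation across replicate runs. A minimal sketch with hypothetical log-probabilities:

```python
# Illustrative computation of the Evanno DeltaK statistic from STRUCTURE
# log-probabilities; lnP holds hypothetical mean ln P(X|K) values over
# replicate runs, and sd their standard deviations across runs.

import numpy as np

def delta_k(lnP_mean, lnP_sd):
    """DeltaK(K) = |L(K+1) - 2 L(K) + L(K-1)| / sd[L(K)], for interior K."""
    lnP_mean, lnP_sd = np.asarray(lnP_mean, float), np.asarray(lnP_sd, float)
    second_diff = np.abs(lnP_mean[2:] - 2 * lnP_mean[1:-1] + lnP_mean[:-2])
    return second_diff / lnP_sd[1:-1]   # aligned with K = 2 .. K_max - 1

lnP = [-9500, -8200, -8150, -8140, -8145]   # K = 1..5 (hypothetical)
sd  = [5, 2, 40, 60, 90]
print(delta_k(lnP, sd))  # peaks sharply at K = 2
```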
Clonal and genetic diversities
Genetic diversity indices [e.g., clonal diversity (R), allelic richness (AR), mean number of alleles per locus (N A), heterozygosity] within the 16 populations of Z. marina on the Korean coasts are summarized in Table 1. The overall values of P ID and P IDsib were 0.0017 (range 0-0.0175) and 0.0436 (range 0.0074-0.2215), respectively, indicating reasonable power to identify unique clones with our eight microsatellite markers and sample sizes [45]. The values of P IDsib for the Jeju Island samples (mean = 0.0994) were generally higher than for the South Sea samples (mean = 0.0146) (Table 1), suggesting that values of R for the Jeju Island populations may be underestimated if matings among near relatives are common in these populations. Levels of within-population microsatellite diversity were significantly greater for the samples from the South Sea region than for those from the Jeju Island region (R: Mann-Whitney U = 7.5, P = 0.042; AR: Mann-Whitney U = 0, P = 0.001) (Table 1, Fig 2). Mean R for Jeju Island and the South Sea was 0.507 ± 0.288 (standard deviation; SD) and 0.822 ± 0.153, respectively. N A for Jeju Island and the South Sea ranged from 1.625 (HD) to 3.125 (SH) and from 3. (Table 1). Moreover, the number of private alleles detected was four times greater for the South Sea (n = 44) than for Jeju Island (n = 11). The KJ samples collected in 2015 showed a slightly higher level of AR than those collected in 2005 (mean AR for KJ15 = 2.582; KJ05 = 2.327; Table 1), as 12 alleles that had not been present in the KJ05 samples were newly found in the KJ15 samples.
The F IS values within the Jeju Island and South Sea regions ranged from −0.198 (HD) to 0.251 (SH) and from −0.129 (AG-D) to 0.243 (KJ15-D), respectively. Based on our multilocus tests against Hardy-Weinberg equilibrium (HWE) expectations, two populations (TK and SH) on Jeju Island and three populations (JD-D, KJ15-D, and KJ05-D) in the South Sea might be experiencing non-random mating (e.g., inbreeding or outbreeding) at the eight loci analyzed (Table 1). On Jeju Island, TK showed a negative F IS value, indicating outbreeding, whereas SH had positive values, indicating a significant deficiency of heterozygotes. In the South Sea, JD-D, KJ15-D, and KJ05-D had positive values, ranging from 0.053 (JD-D) to 0.243 (KJ15-D). The estimated frequencies of null alleles at the eight loci were low, ranging from 0.020 (GA1) to 0.1521 (CT20), suggesting a low probability of null alleles. Tests of LD between the eight loci were not significant after sequential Bonferroni correction, suggesting that all of the loci analyzed can be considered independent markers. BOTTLENECK analysis revealed that only two populations (HD and OJ) within Jeju Island and one population (KJ05-S) within the South Sea showed shifts in the allele frequency distribution (mode shift), which are typically regarded as evidence of population bottlenecks (S1 Table). Heterozygosity excess was detected only in the two Jeju Island populations; in both of these populations, however, the excess was statistically nonsignificant by the Wilcoxon signed-rank test (S1 Table).
The LD method gave median estimates of effective population size (N e) of only 0.9 [95% confidence interval (CI): 0.6-1.5] and infinity (95% CI: 1.6-∞) for the WJ and OJ populations on Jeju Island, respectively (Table 2). The estimates of N e for the populations in the South Sea were, however, generally larger, ranging from 2.7 (95% CI: 1.7-7.9) for KJ05-D to an infinite N e (95% CI: 50.1-∞) for JD-S (Table 2).
Population genetic structure
Spatial AMOVA revealed significant genetic structure between the Jeju Island and South Sea regions (Table 3). In addition, significant genetic variation among populations within the regions was detected. However, the percentage of variation accounted for between the regions (22.11%) was higher than that among populations within the regions (14.46%; Table 3). The overall genetic variation among populations, regardless of grouping, was also significant (Table 4).
For the depth samples at KJ, AG, and JD in the South Sea, we found weak but statistically significant genetic differentiation between shallow (or middle) and deep samples (Table 4). F ST statistics between the shallow and deep populations in the KJ05 samples showed no significant difference (F ST = 0.017, P > 0.05); after 10 years, however, these populations had become significantly genetically divergent (KJ15: F ST = 0.031, P < 0.01). Among the samples from three depths (shallow, middle, deep) within the AG population, one population pair was significantly genetically differentiated (AG-M vs AG-D, F ST = 0.036, P < 0.05). No significant genetic differentiation was detected between the shallow and deep populations at JD (JD-S vs JD-D, F ST = 0.013, P > 0.05) (Table 4). The pairwise F ST statistics also revealed significant genetic divergence between the KJ05-S samples collected in 2005 and the KJ15 samples collected in 2015 (Table 4).
The Mantel test showed a significant positive correlation between geographic (km) and genetic (F ST) distances across all 14 populations (R² = 0.4549, P < 0.01) (Fig 3). Within the Jeju Island region in particular, relatively high genetic distances were detected between populations that are geographically close together (e.g., HD vs WJ: 11 km; HD vs OJ: 30 km; Table 4). While the South Sea populations showed a significant positive correlation between geographic and genetic distances (R² = 0.3703, P < 0.01), the Jeju Island populations showed no significant correlation (R² = 0.0001, P = 0.33). STRUCTURE analysis found that the 14 populations of Z. marina most likely form two genetically distinct clusters (K = 2, supported by a sharp modal value of ΔK = 3674.59), corresponding well to the two geographic regions (Jeju Island and the South Sea) (Fig 4A). These two clusters were further supported by the results of our FCA (Fig 5). Individual-level assignment patterns within these clusters are shown in Fig 4C. [Table 3 caption: Hierarchical analysis of molecular variance (AMOVA) of spatial genetic structure for the 14 populations of Zostera marina based on eight microsatellite loci. The KJ populations sampled in 2005 (KJ05-S, KJ05-D) were excluded from this analysis since detectable temporal genetic variation was observed. The analyses were performed by grouping the geographic populations according to the respective regions (Jeju Island and South Sea; see Materials and Methods).] Similar to the weak genetic differentiation found between the temporal samples of the KJ population (KJ05 and KJ15), STRUCTURE suggested three genetic clusters (K = 3) distributed broadly across the depths and sampling years, perhaps reflecting low levels of genetic divergence between the samples (S1 Fig).
Lower levels of genetic diversity in island populations
Seagrasses play a central role in the ecosystem functioning of coastal and estuarine habitats [60]. Seagrasses worldwide are, however, under severe threat due to sharp population declines in recent decades, and one in five seagrass species is now at risk of extinction [11,12]. Natural recovery of perturbed seagrass meadows requires considerable time, even though seagrass die-off can be rapid [61]. Therefore, seagrass conservation and restoration efforts through transplantation are currently under way in many parts of the world, including Korea [36-38,62]. However, seagrass transplantation projects often focus on increasing the density, productivity, and area of meadow coverage, while information on the genetic diversity and genetic structure of source and recipient populations, suggested to be a key factor in the ultimate outcome of restoration efforts [63], is often overlooked. Transplantation should also be accomplished without perturbing the natural genetic structure, because natural seagrass populations typically show spatial genetic structure, which often appears to be associated with particular local environments (e.g., oceanic currents) [64] and/or spatially and temporally varying patch dynamics (e.g., 'genetic patchiness') [18,19]. Therefore, understanding the population genetic structure of a seagrass species in a given local environment should be an essential component of conservation and restoration efforts.
In the present study, we report for the first time the levels of within-population genetic diversity and the population genetic structure among 16 populations (including temporal and depth samples) of the temperate seagrass Z. marina from Jeju Island and the South Sea on the southern Korean Peninsula, based on eight nuclear microsatellite loci. We found that populations from Jeju Island, which lies between the southernmost region of the Korean Peninsula and Japan, where water masses from the Kuroshio Current meet the Yellow Sea, and which serves as a biological hotspot in Korea [65], harbor significantly lower levels of within-population genetic diversity (e.g., AR, R) than those from the South Sea. The lower levels of genetic diversity in the five populations from Jeju Island (mean AR = 1.92), located approximately 155 km off the mainland, relative to those from the South Sea (AR = 2.69), on the southern coast of the mainland, suggest that effective population sizes (N e) have been smaller for the former than for the latter. Multiple lines of evidence support the hypothesis that the Jeju Island populations have a smaller current N e. Our microsatellite-based LD estimates indicate that the mean N e at Jeju Island [17.55 when OJ (N e = infinity) was excluded] is approximately eight times smaller than that in the South Sea [143.50 when NP and JD-S (N e = infinity) were excluded], providing direct support for this hypothesis (Table 2). Two of the five Jeju Island populations (HD and OJ) show a genetic signal of a population bottleneck, as suggested by the occurrence of an allele-frequency mode shift or heterozygote excess, further supporting the idea that these particular populations have probably undergone a recent bottleneck [51]. Levels of within-population genetic diversity in plants are suggested to be positively associated with population size and fitness [66]. For eelgrass, a previous study experimentally demonstrated a positive correlation between genetic diversity, measured as the number of alleles per locus, and individual survivorship, and thus population size, over a 3-year period [26]. The positive relationship observed here between genetic diversity and population size or coverage on Jeju Island (HD = 138 m², WJ = 310 m², OJ = 841 m², TK = 4438 m², and SH = 275,736 m²; S.R. Park, unpublished data) is consistent with those previous reports. The SH population, which has the largest meadow area, shows the highest level of genetic diversity (AR = 2.26), whereas the HD population, which has the smallest area, shows the lowest (AR = 1.52). Although the WJ population's coverage is approximately half that of OJ, the level of genetic diversity in WJ (AR = 2.12) is higher than in OJ (AR = 1.69). This unexpectedly high genetic diversity in WJ is possibly due to a human-mediated transplantation project undertaken around this region in 2009 using other populations as source material [39]. Interestingly, the level of genetic diversity in the KJ population in the South Sea has slightly increased over the last decade, implying that N e has been augmented in this population, which is supported by our estimates of N e (Table 2). The elevated genetic diversity in KJ in 2015 relative to 2005 is apparent in both the shallow and deep populations.
The increased genetic diversity in KJ over those 10 years may contribute to the observed temporal genetic structure between KJ05 and KJ15 samples (see below).
Geographically disconnected and genetically isolated populations with relatively small N e on Jeju Island are more vulnerable to the effects of increased genetic drift, which causes the loss of rare alleles [67]. This diminishes the evolutionary potential of these populations to adapt genetically to novel environmental conditions such as climate change [68]. This is particularly critical given that the Jeju Island populations are situated off the southern end of the Korean Peninsula, a warmer area that has recently been classified as a subtropical climate zone; this region is also one of the fastest-warming regions worldwide [69]. Therefore, the Jeju Island populations should be conserved with high priority. According to a previous study [25], a high level of genetic diversity may enhance ecosystem recovery, such as biomass production, plant density, and faunal abundance, after perturbations. Loss of genetic diversity resulting from the increased effects of inbreeding and genetic drift may elevate the probability of extinction of small populations [70]. In this respect, the Jeju Island populations are perhaps at high risk of local extinction under changing environments, as they have not only a low degree of genetic diversity (AR, R) but also small N e.
Strong spatial but weak temporal genetic structure
We found that eelgrass (Z. marina) on the southern coast of the Korean Peninsula comprises genetically divergent populations at geographic scales of 0.7-250 km at a given time. Our multifaceted analyses of the microsatellite dataset clearly reveal a noticeable level of genetic structure between the Jeju Island and South Sea populations, as suggested by the spatial AMOVA, the individual-based Bayesian population assignment test, and the FCA. The observed significant genetic structuring between the Jeju Island and South Sea regions suggests a very low level of ongoing gene flow between populations across the South Sea of Korea. Even within each region, microsatellite genetic differentiation was fairly high (Jeju Island: F ST = 0.054-0.573; South Sea: F ST = 0.009-0.326; Table 4) and highly statistically significant (except for some comparisons between depth or temporal samples). The magnitude of genetic differentiation is, however, generally higher for the Jeju Island populations than for the South Sea populations, although the former localities (approximately 0.7-30 km apart) are geographically closer to each other than the latter (approximately 26-95 km apart). Isolation-by-distance (IBD) analysis further indicates that genetic distance among populations increases with geographic distance, meaning that the geographic proximity of populations contributes to shaping the observed spatial genetic structure. However, geographic distance cannot explain the genetic variation observed among populations on Jeju Island.
Most, but not all, other population genetic studies of spatial genetic variation in the eelgrass Z. marina have also found genetic heterogeneity over rather small geographic scales of a few to tens of kilometers [28,71]. This can be interpreted as restricted ongoing gene flow among geographically disconnected populations of Z. marina. At even smaller, fine-grained scales, a mosaic of clones originating from different source populations over space and time can also produce micro-geographic population structure in this species [18,19]. There appear to be two possible modes of dispersal across populations of Z. marina [21]: (1) short-distance dispersal of seeds from a nearby parental population by gravity [72] and (2) long-distance dispersal from a faraway parental population by rafting of shoots containing seeds over several tens of kilometers [73] and sometimes even up to a few hundred kilometers via local oceanic currents [74,75]. In this regard, the geographic distance between HD and WJ on Jeju Island is approximately 11 km, yet the degree of genetic differentiation between them is the second highest among the Jeju Island populations (F ST = 0.531), suggesting that dispersal via rafting is highly unlikely in this region, perhaps due to local currents. The restricted population connectivity, particularly among the Jeju Island populations, might at least in part explain why levels of within-population genetic diversity on Jeju Island are generally lower than in the South Sea; this may ultimately lead to a reduction in population fitness, thereby elevating the risk of local extinction. Alternatively, directional selection can rapidly reduce genetic diversity even while it increases average population fitness; yet this scenario seems unlikely, given the presumed selective neutrality of the microsatellite markers used in this study.
We also observed a statistically significant change in population structure between the temporal samples (2005 vs 2015) of the shallow KJ population in the South Sea (Table 4). Twelve alleles that were not present at KJ in 2005 were newly detected at the same sites 10 years later. In addition, the frequencies of 10 and 13 alleles in the shallow and deep populations, respectively, at KJ changed over the last decade. Accordingly, the level of genetic diversity in KJ15 (mean AR = 2.582) is higher than in KJ05 (mean AR = 2.327). In this area, the Z. marina population was maintained only through asexual reproduction (new shoot recruitment via lateral shoot production) before 2006; however, seedling shoots produced via sexual reproduction have been observed since 2006, and their density has gradually increased in both the shallow and deep zones (S.R. Park, personal observation). The observed weak temporal variation in genetic structure can also be explained by the temporally varying clonal dynamics and genetic mosaic of eelgrasses [19]. Our data and observations, however, suggest that the new alleles have come into KJ from other South Sea regions (e.g., GM, AG, and JD) by floating-seed dispersal over the 10 years, thereby changing its genetic composition.
Genetic divergence by water depth
The life history and reproductive strategy of Z. marina are known to be affected by various environmental parameters, such as light regime [29,76], salinity fluctuations, and water temperature [77]. According to a previous study [29], a deep population (water depth 4-7 m) at JD in the South Sea maintained its meadow only through sexual reproduction (a typical annual life cycle), whereas a shallow population (water depth 1-3 m) at the same site persisted through both sexual and asexual reproduction (a typical perennial life cycle).
We found weak but statistically significant genetic differentiation between the shallow (and middle) and deep populations of Z. marina at two (AG and KJ) of the three South Sea localities analyzed (Table 4). These results may suggest that life history differences between shallow and deep populations [29] hinder genetic exchange to some detectable degree along the depth gradient in these particular environments. Light attenuation along the depth gradient may act as a barrier to gene flow between the shallow and deep populations examined [29]; however, this hypothesis awaits future ecological investigation. Alternatively, other environmental factors, such as tidal changes, may serve as barriers to genetic exchange, leading to the observed genetic divergence among samples from different depths. In addition, differences in the disturbance regime along the depth gradient may lead to differences in the frequency of opportunities for subsequent (chaotic) recolonization, causing fine-grained population structure [18]. A previous study found only marginally significant genetic differentiation between shallow and deep populations at one of three sites tested in San Francisco Bay, California, USA [28]. Additional population genetic studies using samples from various depths at other localities would be required to generalize the observed patterns of genetic divergence along the depth gradient.
Conservation implications
Knowledge of seagrass population genetics is important for effective conservation and restoration management [15,25,28]. Multiple lines of evidence (depauperate within-population genetic diversity, small N e, and limited genetic exchange among populations on Jeju Island) suggest an urgent need to conserve these vulnerable populations. Other populations with higher levels of genetic diversity could be used as source material for restoration of the eelgrass populations through transplantation [78]. However, the possibility of disrupting local adaptation that may already be present in the recipient population should also be considered. For Posidonia oceanica, a seagrass species endemic to the Mediterranean Sea, enriched genetic diversity of source populations was significantly positively correlated with individual survival, increased rhizome length (i.e., growth), and the number of ramets in transplanted shoots [79]. Transplantation should therefore be undertaken carefully, considering not only genetic diversity but also the "natural" spatial population structure that may be related to particular local environments [64].
Conclusions
This study provides the first information on the genetic diversity and genetic structure of South Korean eelgrass (Zostera marina) populations, which will contribute to the establishment of appropriate management, conservation, and restoration plans for the future persistence of this ecologically valuable species. We genotyped eight microsatellite loci in 454 individuals sampled from 16 populations (including temporally replicated and depth samples) along the southern coastal regions of the Korean Peninsula (Jeju Island and the South Sea). We found significantly lower levels of genetic diversity, smaller N e, and more restricted population connectivity (i.e., contemporary gene flow among populations) for Jeju Island compared with the mainland populations, suggesting that the southernmost populations off the Korean Peninsula are more vulnerable to local extinction under future environmental change. We suggest that the Jeju Island eelgrass populations should be conserved with high priority, given that this region is known to harbor the highest level of biodiversity while also being one of the fastest-warming regions in the world. Further studies of Z. marina along the western and eastern coasts of the Korean Peninsula would help us to better understand the broader pattern of population genetic structure of Korean eelgrass and to develop effective restoration strategies for this species.
"year": 2017,
"sha1": "76e1257cf232dbc0d64cc028256e298ec28fdd3f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0174105&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "76e1257cf232dbc0d64cc028256e298ec28fdd3f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
Dispensing and determinants of non-adherence to treatment for non-complicated malaria caused by Plasmodium vivax and Plasmodium falciparum in high-risk municipalities in the Brazilian Amazon
In Brazil, 99.7 % of malaria cases occur in the Amazon region. Although the number of cases is decreasing, the country accounted for almost 60 % of cases in the Americas Region in 2013. Novel approaches for malaria treatment open the possibility of eliminating the disease, but suboptimal dispensing and lack of adherence influence treatment outcomes. The aim of this paper is to report results on dispensing practices, non-adherence, and determinants of non-adherence to treatment of non-complicated malaria. The study was conducted in six high-risk municipalities with Plasmodium vivax and Plasmodium falciparum transmission in the Brazilian Amazon and was based on the theoretical framework of the Mafalda Project, which included investigation of dispensing and adherence. The World Health Organization Rapid Evaluation Method was used to estimate sample size. Individuals over 15 years of age with malaria were approached at health facilities and invited to participate through informed consent. Data were collected using chart review forms focusing on diagnosis, Plasmodium type, prescribing, and dispensing (kind, quantity, labelling, and procedures). Follow-up household interviews complemented data collection at the health facilities. Non-adherence was measured during the implementation phase by self-reports and pill counts. Analysis was descriptive, and statistical tests were carried out. Determinants of non-adherence and quality of dispensing were assessed according to the literature. The study involved 165 patients. Dispensing was done according to the national guidelines. Labelling was adequate for P. vivax but inadequate for P. falciparum medicines. Non-adherence was 12.1 % according to self-reports and 21.8 % according to pill counts. Results point to greater non-adherence among P. falciparum patients and among malaria non-naïve patients. More patients reported understanding adverse effects than how to use the anti-malarials. Non-adherent patients were mostly those with a P. falciparum diagnosis and those in their second or later malaria episode. New taxonomies and concepts on adherence stress the importance of focusing on the individual patient. Interventions targeted to and tailored for malaria patients must be addressed by health policy and implemented by managers and clinicians.
Background
Despite the decrease in the number of malaria cases worldwide in the past decade, this disease remains a major public health problem. Brazil accounted for almost 60 % of cases in the Americas Region, with 178,613 cases reported in 2013. Transmission still occurs in 808 municipalities of the country [1,2]; 99.7 % of cases occur in the Amazon region, where Plasmodium vivax and Plasmodium falciparum co-exist and, to a lesser extent, Plasmodium malariae is present.
Following the recommendations of the Amsterdam conference in 1992, Brazil bases malaria control on early diagnosis and adequate treatment, while also favouring prevention strategies [3]. For P. vivax malaria, for instance, the recommended treatment is the combination of chloroquine plus primaquine and this protocol remains unchanged.
In 2006, the Brazilian National Malaria Control Programme (NMCP) changed P. falciparum malaria treatment from quinine plus doxycycline plus primaquine to artemisinin combination therapy (ACT). According to the NMCP, this treatment change was accompanied by a decrease in the proportion of P. falciparum cases, from 24.9 % of all registered malaria cases in 2006 to <16.2 % in 2011 [1]. Novel approaches for malaria open the possibility of eliminating the disease in specific situations [4]. In this context, the correct use of anti-malarials is paramount for disease control, which is one additional reason for studying treatment-based control policies.
Lack of treatment adherence is frequent in malaria [5], owing to a number of factors that may be present, such as lack of a prescription or written instructions, regimen complexity, and adverse effects [6]. Sub-optimal dispensing is an important service-related determinant that may also be present and influence treatment outcomes. These factors may lead to intermittent dosing and, eventually, to drug resistance [7].
Adherence has been measured by a series of direct and indirect methods and is usually expressed as a percentage of the total number of doses, according to the dosing regimen [6,8,9]. Controlled studies have recently characterized adherence as a multi-step or multi-phase process, involving initiation (the decision to start dosing), implementation (the actual dosing history), and discontinuation (cessation) of the treatment regimen. The maintenance of treatment after initiation and before discontinuation is called persistence [7,8]. In field conditions, adherence (or non-adherence) may be investigated by questionnaires to produce estimated measures such as self-reports or pill counts. Both of these methods measure implementation and may clarify essential details regarding this phase, even if not the other phases [10]. Overall, adherence remains an essential factor for malaria patients, and although considerable work has been done to assess the magnitude of adherence, several questions remain unanswered [11].
The main goal of this study is to show the results on dispensing practices, non-adherence to treatment during the implementation phase and determinants of nonadherence to treatment of non-complicated malaria in settings with P. vivax and P. falciparum transmission in high-risk municipalities of the Brazilian Amazon.
Methods
The methods employed in this work were based on the theoretical framework of the "Mafalda" Project ("Pharmaceutical services for non-complicated malaria by P. vivax and P. falciparum in high-risk municipalities of the Brazilian Amazon: organization of services, prescribing, dispensing and adherence to treatment"). Initially, to inform the project, a comprehensive review of pharmaceutical services for malaria was carried out and published [12]. The framework, comprising a logic model and 25 indicators, was developed and subsequently published [13]. The indicators included in the framework encompassed the following dimensions: context and organization of pharmaceutical services, prescribing, dispensing, and (implementation) adherence to treatment. Results for the first two dimensions have been published [14]. This paper focuses on the implementation phase and examines non-adherence through a careful examination of its determinants, which include those linked to the treatment regimen and those linked to health services and the care of malaria patients, including dispensing practices.
The method was based on World Health Organization (WHO) guidelines [15] as adapted by Management Sciences for Health (MSH) [16], in which, for evaluations of non-complicated malaria, a sample of no fewer than 600 patient registries (investigated at health facility level) is recommended [17], complemented by at least 150 patients at household level.
Data collection instruments
Questionnaires, observation forms, interview forms, and chart review forms were used during field work, according to the dimension being investigated [13]. Specific forms, for quantitative and qualitative data, were used for organization of services, prescribing, and dispensing. A chart review form compiled data on patient diagnosis, Plasmodium species, treatment characteristics (including prescribing), dispensing (dispensing process; medicines received, in kind and quantity; labelling), information given to the patient by the health worker (regarding use of medicines, adverse effects, and how to keep the medicine at home), and compliance with the national guideline. For the household survey, another data collection instrument, for objective data, was applied according to the treatment regimen.
Field study
The investigation was carried out in six high-risk malaria municipalities of the Brazilian Amazon, selected according to the Annual Parasitic Index (API) and population. Up to four high-coverage primary health care facilities were selected to guarantee a sufficient number of eligible individuals. Individuals of both genders, ≥15 years of age, excluding pregnant women, with parasitologically confirmed mild malaria were followed throughout the study. More detailed information on the field study and participants may be found elsewhere [14].
All procedures for the investigation of organization of services and prescribing were complemented by those designed to investigate dispensing and adherence. For information regarding these dimensions, patients were approached in a two-step process: data collection during consultation at the health facility, with observation of dispensing practices, was complemented by household interviews. Patients recruited for the first part of the investigation (consultation at the health facility) were asked whether a household follow-up visit would be welcome, which determined their inclusion in the next part of the investigation. Data were collected on the second or fifth day of treatment (D2 or D5) for P. falciparum or P. vivax treatment, respectively.
Analysis
Analysis was based on the theoretical framework developed for the Mafalda Project [13]. Dispensing was characterized by drugs dispensed according to prescription or indication; information given to the patient during dispensing; adequate labelling of medicines; and patients reporting knowledge of the treatment regimen and adverse effects. Adequacy of labelling was assessed by direct observation of the dispensing of treatment regimens at health facilities.
Adherence was approached under the assumption that all patients initiated treatment on D0 and discontinued treatment at the end of D2 (P. falciparum) or D5 (P. vivax). Household visits occurred during the last day of treatment. Non-adherence was measured during the implementation phase by self-reports (adherence accepted as no missed doses during the treatment period) and pill counts (adherence accepted when the quantity received served as a proxy for the quantity consumed), both expressed as percentages.
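As a concrete illustration (not the study's instrument), pill-count adherence under these assumptions reduces to comparing the tablets remaining at the household visit with the quantity expected for that treatment day; the quantities below are hypothetical.

```python
# Minimal sketch of pill-count adherence classification: a patient is
# classified as adherent if the tablets in their possession match the
# prescribed schedule up to and including the visit day (D0-indexed).

def pill_count_adherent(dispensed, remaining, daily_dose, day):
    expected_remaining = dispensed - daily_dose * (day + 1)
    return remaining == max(expected_remaining, 0)

# Hypothetical regimen: 18 tablets dispensed, 3 tablets/day, visit on D5
print(pill_count_adherent(dispensed=18, remaining=0, daily_dose=3, day=5))  # True
print(pill_count_adherent(dispensed=18, remaining=6, daily_dose=3, day=5))  # False
```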
Determinants of non-adherence related to the quality of dispensing were evaluated against in-place requirements at facilities suggested by the literature [6,7,9,18,19]. Other determinants, related to the disease (diagnosis, first malaria episode, general well-being during the present episode), patient characteristics, and treatment characteristics (first-line treatment, adverse effects, use of other medicines, care-seeking behaviour), were analysed by adherence group. Among non-adherent individuals, the reasons given for non-adherence during the household visit were described.
Statistical analysis
A simple test of differences between proportions was carried out to investigate possible discrepancies in the results, admitting a 95 % CI [20]. Concordance between the measures of adherence (patient self-reports and pill counts) was assessed by calculating the kappa index [21].
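The two statistics can be sketched as follows; the two-proportion counts use the non-adherence results reported below (20/164 by self-report, 36/165 by pill count), while the 2x2 agreement table for kappa is illustrative, since the full cross-tabulation is not reproduced here.

```python
# Hedged sketch of the two analyses: a two-proportion z-test and Cohen's
# kappa for agreement between self-reports and pill counts.

from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

def cohens_kappa(a, b, c, d):
    """2x2 table: a = both adherent, d = both non-adherent, b/c = discordant."""
    n = a + b + c + d
    po = (a + d) / n                                 # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

print(two_proportion_z(20, 164, 36, 165))
print(cohens_kappa(a=124, b=5, c=10, d=25))  # ~0.71 with these made-up cells
```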
Ethical considerations
All participants were asked to sign an informed consent form and furthermore to give written agreement for household visits. The Sergio Arouca National School of Public Health, Oswaldo Cruz Foundation (Fiocruz) Ethics in Research Committee gave study approval (Approval number 91/06; CAAE 0086.0.031.000.06).
Dispensing
The treatment regimen was examined for 165 adult patients, 10.0 % more than the minimum number required for the study of adherence as described in the study design. Of the 165 patients, 134 (81.2 %) had been diagnosed with P. vivax, all receiving first-line treatment (Table 1), and 31 with P. falciparum, of whom 16 (51.6 % of P. falciparum patients) received first-line treatment at the time of the study (Table 1). Twenty-eight patients (16.9 % overall) were experiencing their first malaria infection.
Observation of dispensing for these patients at the health facilities first yielded findings related to dispensing requirements. Dispensing of first-line regimens followed indication and was done in accordance with the national guideline, for P. vivax as well as for P. falciparum. Among the 165 patients, 134 (81.2 %) received verbal instruction at dispensing; 112 (67.9 %) reported understanding how to use the anti-malarials, and 137 (83.0 %) their possible adverse effects.
With regard to the labelling of medicines at dispensing, shortcomings related to the name of the medicine, dose, or dosage form were observed. Labelling of chloroquine and primaquine was adequate in 80 % of P. vivax regimens. At the time of data collection, the first-line regimen for P. falciparum included quinine sulfate, doxycycline, and primaquine; however, the labels for these medicines (quinine, n = 16; doxycycline, n = 16; primaquine, n = 16) were all inadequate. Labels also presented problems for the 15 patients receiving alternative P. falciparum treatment, of whom 14 received artemether-lumefantrine and primaquine. Inadequately labelled medicine was also dispensed to one patient receiving mefloquine and primaquine for P. falciparum (Table 1).
Adherence and non-adherence
One hundred and sixty-five patients were visited for household interviews during the investigation of adherence (134 P. vivax patients on Day 5 and 31 P. falciparum patients on Day 2). Non-adherence during implementation measured by self-reports revealed 144 adherent patients (one measure was missing among the P. vivax patients). Twenty patients (12.2 %), non-adherent by self-report, reported that they had stopped using at least one anti-malarial during treatment (Table 2).
Pill counts were conducted for 165 patients; the quantity in their possession, in relation to the day of treatment, was accurate for 129 (78.2 %) and inaccurate for 36 (21.8 %), the latter designated as non-adherent (Table 2). Concordance (kappa) between these two methods of measuring adherence was 0.74. Table 2 also shows the determinants of non-adherence among individuals. P. falciparum patients were more prone to being non-adherent, a finding significant for both measures of implementation adherence. Non-naïve malaria patients were also significantly more likely to be non-adherent according to pill counts (p = 0.012). All other variables were non-significant.
Regarding P. falciparum patients, of the 12 non-adherent by pill counts, 11 were on the first-line regimen and one was on artemether-lumefantrine and primaquine; of the eight non-adherent by self-reports, seven were on first-line treatment and one was on the alternative regimen with artemether-lumefantrine and primaquine. The distribution of non-adherence determinants by malaria type could not be analysed because of the sample size.
Discussion
The Mafalda Project ("Pharmaceutical services for noncomplicated malaria by P. vivax and P. falciparum in high-risk municipalities of the Brazilian Amazon: organization of services, prescribing, dispensing and adherence to treatment") was developed in order to provide data of the situation of pharmaceutical services for malaria in municipalities at high risk for the disease in the Amazon [13,14,22,23].
Previous results from the Mafalda Project have shown that in Brazil diagnostic procedures work well, but good prescribing practices are not followed in most municipalities. Other findings showed problems with the organization of pharmaceutical services, especially concerning stock management and drug storage [14]. There are no written instructions for the malaria patient, and only oral guidance is received from technicians in endemic areas. Few qualified professionals (physicians) actually prescribe [23]. Health workers have little formal education, and training is informal or insufficient [23]. In this context it seems unlikely that these workers can effectively contribute to the processes that lead to adherence [6]. This study focused specifically on dispensing and adherence. The main findings were that dispensing was carried out according to the national guidelines; a greater proportion of patients reported understanding adverse effects than how to use the anti-malarials; labelling was adequate for P. vivax but inadequate for P. falciparum medicines; and self-reports accounted for 144 adherent patients and pill counts for 129.
Well-reported studies in the Brazilian Amazon have shown differences in adherence. One measured adherence of P. vivax patients with a standardized scale and presented similar results as to the range of non-adherence, but an overall higher proportion of non-adherent patients (33.3 %) [9]. Other studies showed lower percentages of non-adherent individuals: 9.6 % for P. vivax patients in Pará State [24] and 16 % for P. vivax and P. falciparum patients in Mato Grosso [25].
This difference between methods and problems with questionnaire consistency have been described previously in the literature [6] and may account for the discrepancy. In this case, concordance between methods was 0.74. There was more implementation non-adherence among P. falciparum patients and among non-naïve patients. A history of multiple malaria episodes in the same patient may be a barrier to completing treatment: patients usually discontinue medication as they feel better, and malaria-savvy patients may disregard the need to finish the treatment regimen [26]. Adequate dispensing, with assertive information on the risks of not completing treatment, might have had a positive influence on these patients.
It is noteworthy that the lack of adequate labelling for P. falciparum medicines coincided with greater non-adherence in this group of patients. In regions where many patients are illiterate, family members who can read may support patients' understanding of treatment regimens, at least by reading the labels and instructions on how to use their medicines. During the Mafalda Project, ACT had been introduced by the NMCP in certain areas of the Amazon but was labelled in English, which evidently worsened the situation. This has recently begun to change: anti-malarials are now supplied in blisters labelled with symbols for better understanding [27]. The lack of written instructions for medicines, however, persists.
A greater number of patients (83 %) mentioned understanding adverse effects. These possible treatment outcomes may be acutely felt by patients [23] and are therefore valued as a worthwhile reason for non-adherence to treatment, as reported by ten individuals (27.8 %). Drugs with hazardous effects, even on the first dose, such as some anti-malarials, may cause 'off-on' episodes in treatment implementation, which produce abrupt changes in drug exposure, compromising treatment response and fostering resistance [28,29].
Results point to greater non-adherence among P. falciparum patients and among malaria non-naïve patients. For the first group, reasons may be associated with the change in treatment regimens, from a three-medicine, 5-day treatment to a single-medicine (ACT), 3-day treatment. Many patients had experienced several episodes of P. falciparum malaria and were already accustomed to the old regimen; with the lack of adequate instructions and labels, the switch might be confusing. Another possibility is the acuteness of adverse effects with the traditional P. falciparum regimen (quinine sulfate, doxycycline, primaquine), leading patients to discontinue treatment as soon as they feel a little better.
More than suboptimal dosing history (implementation of the treatment regimen), early discontinuation (or short persistence) is the largest single factor in decreased adherence [7]. Three patients mentioned 'feeling cured' as a reason for early discontinuation. Individuals who experience malaria for the first time are apt to feel fear and resort to treatment, while those who have had more than one episode may feel more confident and less treatment-dependent. Fourteen non-adherent patients were forgetful and/or unwilling to take their medicines. However, a precise understanding of discontinuation was not possible, due to methodological limitations: visits were conducted at the end of treatment, but actual treatment discontinuation was not observed or measured.
Adherence is a crucial step for any pharmacological treatment. Acute-phase, complex treatments, such as anti-malarial treatment, require prescriber-patient collaboration mainly in the initiation and implementation steps of adherence [8]. Programme-level interventions, such as those of the NMCP, may not be sufficient to secure adherence. Population-targeted approaches would need to be developed for non-adherent individuals, while tailored approaches would need to focus on the principal causes and determinants of non-adherence [28]. As such, these steps must begin to be addressed by health managers and clinicians. Relevant information on non-adherence by malaria patients is essential for health-based interventions that aim to decrease therapeutic failure and the emergence of resistance in P. falciparum and P. vivax.
Plasmodium falciparum patients and non-naïve patients constitute possible target groups for adherence interventions in malaria treatment [28]. Brazil has a low number of P. falciparum cases [1], and the goal is to eliminate P. falciparum malaria before resistance to ACT emerges [4]. However, this may prove difficult given the context in the Amazon. As such, alternative and concurrent strategies [29] must be adopted to improve treatment effectiveness and retard the emergence of resistance. One of these strategies is improving adherence in all its stages (initiation, implementation, and discontinuation) [8] and identifying non-adherent patients. Tailoring interventions for individual non-adherent patients, on the other hand, is only possible through understanding the determinants associated with non-adherence [18,28].
Studies on adherence cover many definitions of adherence and many measures [8,9]. Several employ self-reports or pill counts, or both, in order to improve validity. This study used both methods, with a concordance between them of 0.74, considered good [21]. Nonetheless, both methods may overestimate adherence (or underestimate non-adherence).
Concepts of adherence have traditionally followed a stepwise process. The WHO [19] has proposed five groups of adherence determinants: those linked to the health system, to the disease, to the individual, to socio-economic aspects, and to treatment-related aspects. Other authors [18,30] have also studied adherence-related determinants. Apart from variables associated with the various determinants of adherence, Kardas and colleagues [18] point out that the actual organizational processes (correct prescribing and adequate dispensing) have a direct impact on adherence and can invalidate control efforts (and, in the case of P. falciparum, elimination). This may be the case with malaria, mainly because of complex treatment regimens [6,30].
Through controlled studies, new concepts associated with adherence have emerged that subvert previous understanding of how adherence should be measured [6,8-10,28]. This has consequences for how studies on adherence are designed, and for how limited findings such as ours may be in actually measuring adherence in a given population. This study is limited to implementation adherence and, within that, to overall percentages of implementation; it cannot capture the actual links between prescribing and drug dosing histories, so well put by Blaschke and colleagues [7]. Nevertheless, by acknowledging the information gathered on overall implementation non-adherence and its determinants, the results may shed light on needs for policy interventions, such as close patient monitoring and preventive measures to curb lack of treatment effectiveness.
As the sample in this study was designed for a traditional measure of adherence, the numbers produced an overall picture of adherent and non-adherent patients with respect to treatment implementation. The determinants of non-adherence were consistent with the literature [10].
However, non-adherence caused by sub-optimal initiation or discontinuation could not be identified by this approach. The small sample also prevented us from distinguishing between non-adherent P. falciparum patients with respect to differences in treatment regimens.
"year": 2015,
"sha1": "6090d17e650bf83c763b7b2a6c758c51cffffc54",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12936-015-0998-3",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "6090d17e650bf83c763b7b2a6c758c51cffffc54",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
} |
The effect of erythropoiesis-stimulating agents on lung cancer patients: a meta-analysis
Previous studies have demonstrated that erythropoiesis-stimulating agents (ESAs) can reduce anemia and improve quality of life in cancer patients, but ESAs may increase mortality. We therefore conducted a meta-analysis of randomized controlled trials (RCTs) comparing the effects and risks of ESAs for the prevention or treatment of anemia in cancer patients. Four databases (PubMed, Embase, Web of Science, and the Cochrane Library) were searched for RCTs of ESAs in the treatment of anemia in lung cancer patients published from 2000 to 2023. Endpoints included mortality, incidence of thrombotic vascular events, blood transfusion requirement, and incidence of adverse events. Our meta-analysis included 8 studies, with a sample size of 4240 patients: 2548 patients in the ESA group and 1692 in the control group. The risk of mortality was lower in patients using ESAs than in the control group (RR 0.96, 95% CI 0.92–0.99, P = 0.02). However, after removing Pere 2020, there was no significant difference in the risk of mortality between patients using ESAs and controls (RR 0.99, 95% CI 0.92–1.06, P = 0.69). Subgroup analysis found no significant difference in mortality among patients diagnosed with small cell lung cancer (SCLC) (RR 1.00, 95% CI 0.92–1.08, P = 0.16) or non-small cell lung cancer (NSCLC) (RR 1.01, 95% CI 0.87–1.17, P = 0.13). Thrombotic vascular events increased in patients using ESAs compared with the control group (RR 1.40, 95% CI 1.13–1.72, P = 0.002). The blood transfusion requirement of the ESA group was lower than that of the control group (RR 0.56, 95% CI 0.44–0.72, P < 0.00001), and the darbepoetin alfa (RR 0.57, 95% CI 0.41–0.79, P = 0.003) and epoetin alfa (RR 0.68, 95% CI 0.47–0.99, P = 0.01) subgroups had lower transfusion requirements than the control group. In the SCLC subgroup (RR 0.51, 95% CI 0.40–0.65, P = 0.34), blood transfusion requirements were lower in the ESA group, but there was no significant difference in the NSCLC subgroup (RR 0.61, 95% CI 0.36–1.04, P = 0.009). There was no statistically significant difference between the two groups in the incidence of adverse reactions (RR 0.98, 95% CI 0.95–1.00, P = 0.10). In conclusion, ESAs do not increase the mortality of lung cancer patients (and may reduce the risk of death) and can reduce the need for blood transfusion, although ESAs can increase the incidence of thrombotic vascular adverse events. Registration PROSPERO CRD42023463582.
Introduction
Bone marrow suppression due to chemotherapy and radiation therapy can cause or exacerbate pre-existing anemia, making anemia a particularly common complication of cancer; it leads to hypoxia of tumor cells, reduces sensitivity to radiation and chemotherapy, and results in rapid disease progression and reduced quality of life [1].
In the European Cancer Anemia Survey (ECAS), a 39-month follow-up of 15,367 patients receiving chemotherapy, the incidence of anemia in cancer patients was 53.7%, with 29.3% having mild anemia and 1.3% severe anemia, and only 39% of these patients were treated [2]. Therefore, improving anemia will improve the quality of life of cancer patients. Current treatments include blood transfusions, erythropoietin, and iron [1]. For cancer patients, dietary therapies and drugs act more slowly [3], and red blood cell transfusion carries a risk of infection transmission and alloimmunization [1]. It has been shown that ESAs not only increase hemoglobin concentrations in cancer-related anemia and reduce the need for red blood cell transfusions, but also improve the quality of life of cancer patients [4]. However, erythropoiesis-stimulating agents have also been shown to increase mortality by 10% in chemotherapy patients [5], and they have been reported to increase the risk of thromboembolism and may stimulate tumor growth [4].
Erythropoietin (EPO) is a glycoprotein hormone naturally produced by peritubular cells in the kidneys that stimulates the production of red blood cells in the bone marrow [6,7], and ESAs are pharmacologically produced recombinant versions of EPO. The ESAs currently on the market are epoetin, darbepoetin, and methoxy polyethylene glycol-epoetin β [7]. A study of patients with lung cancer treated with platinum-based chemotherapy showed that 59% of patients in the darbepoetin alfa group died compared with 69% in the placebo group, and the median overall survival was 46 weeks in the darbepoetin alfa group versus 34 weeks in the placebo group [8], suggesting a potential survival benefit of ESA treatment in patients with cancer-related anemia. Many meta-analyses have also reported on the effect of ESAs on the quality of life of patients treated for cancer-related anemia. Bohlius et al. [9] performed a meta-analysis of survival in 42 ESA tumor trials involving 8167 patients that showed no statistically significant increase in the risk of death associated with ESA use. Similar findings were reported in the study by Ross et al. [10] and in an updated meta-analysis by Bohlius et al. [11]. However, a large meta-analysis by Bennett et al. (2008) showed that mortality was significantly higher in the ESA group than in the control group. Therefore, there is still much controversy about the effect of ESAs on mortality in cancer patients.
In fact, among all solid tumor types, lung cancer patients have the highest incidence of anemia and transfusion use: 50-60% of patients develop anemia, and 30-40% require transfusion after four to six cycles of chemotherapy [12]. Therefore, we performed a meta-analysis in lung cancer patients with anemia to investigate whether ESAs affect mortality, blood transfusion rates, the incidence of thrombotic vascular adverse events, and the incidence of adverse events.
Literature search
The English search terms were mainly "lung neoplasms", "lung tumor", "NSCLC", "SCLC", "erythropoietin", "cancer", and "anemia". The literature on clinical RCTs of ESAs for the treatment of anemia in patients with lung cancer published in the PubMed, Embase, Web of Science, and Cochrane Library electronic databases from 2000 to 2023 was searched, and a total of 3977 articles were retrieved. We searched for studies that included (1) patients aged >18 years with a diagnosis of lung cancer, including SCLC or NSCLC; (2) RCTs, with or without blinding or allocation concealment, that contained valid data and evaluation measures; (3) an experimental group treated with ESAs (darbepoetin alfa, epoetin alfa, or erythropoietin) and a control group treated with placebo or no placebo, with other anemia-related treatment measures and drug use (blood transfusion, iron therapy, radiotherapy and chemotherapy, etc.) consistent between the two groups. The main exclusion criteria were (1) primary hematologic disorders known to cause anemia; (2) unstable or uncontrollable disease or heart conditions related to or affecting cardiac function; (3) other known primary malignancies; (4) unstable or uncontrolled comorbidities such as diabetes mellitus or hypertension; (5) receipt of any erythropoietin therapy in the previous 8 weeks.
Data extraction and quality assessment
Two review authors independently searched and screened the literature, with any differences resolved by discussion. After excluding trials that clearly did not meet the inclusion criteria, the full text was read for those that might have met the inclusion criteria in order to determine eligibility. The extracted data mainly included (1) basic information of the included studies: study title, first author, nationality, date of publication, duration of follow-up, and source of the literature; (2) characteristics of the studies: general conditions of the study subjects, baseline comparability of patients in each group, sex ratio of the patients, average age of the patients, interventions, and drug dosages; (3) outcome measures: mortality rate, transfusion rate, incidence of adverse events, and incidence of thrombotic vascular events; (4) key elements of the risk of bias evaluation. The methodological quality of the included studies was assessed independently by 2 investigators, and the risk of bias was assessed using the risk of bias assessment tool for randomized controlled trials recommended by the Cochrane Handbook, covering generation of randomized sequences, concealment of allocation, double-blinding of implementers and participants, blinding in outcome assessment, incomplete outcome data, selective publication, and other biases. We used Review Manager 5.4.1 software to create the quality evaluation charts.
Statistical methods
Meta-analysis of the final included literature was performed using Review Manager 5.4.1 statistical software, and the relative risk (RR) and 95% confidence interval (CI) were selected as effect indicators for dichotomous variables. Heterogeneity between the included studies was judged using I²: values of I² > 50% indicate significant heterogeneity, in which case a random-effects model was used, further sensitivity analyses were performed, and sources of heterogeneity were explored using Stata 14.0 or subgroup analyses; otherwise a fixed-effects model was used. If there was statistical heterogeneity between studies (P < 0.1, I² > 50%), the source of heterogeneity was analyzed and subgroup analyses were performed for factors that might have contributed to it; a random-effects model was used if there was statistical heterogeneity without clinical heterogeneity between the two study groups or if the difference was not statistically significant. If the heterogeneity originated from low-quality studies, sensitivity analysis was performed. Descriptive analysis was used if the heterogeneity between the two groups was too large or if the data source could not be found.
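For illustration, the sketch below shows how per-study relative risks and a fixed-effect (inverse-variance) pooled estimate with Cochran's Q and I² can be computed. The study counts in it are hypothetical, and inverse-variance weighting is only one of the pooling options offered by tools such as Review Manager, so this is a minimal sketch rather than a reproduction of the analysis reported here.

```python
import math

def rr_ci(events_t, n_t, events_c, n_c):
    """Relative risk with a 95% CI from the log-RR normal approximation."""
    rr = (events_t / n_t) / (events_c / n_c)
    se = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    lo, hi = math.exp(math.log(rr) - 1.96*se), math.exp(math.log(rr) + 1.96*se)
    return rr, lo, hi, se

def pool_fixed(studies):
    """Inverse-variance fixed-effect pooling of log-RRs, with Cochran's Q and I^2."""
    logs, weights = [], []
    for et, nt, ec, nc in studies:
        rr, _, _, se = rr_ci(et, nt, ec, nc)
        logs.append(math.log(rr))
        weights.append(1.0 / se**2)
    pooled = sum(w*l for w, l in zip(weights, logs)) / sum(weights)
    q = sum(w*(l - pooled)**2 for w, l in zip(weights, logs))
    i2 = max(0.0, (q - (len(studies) - 1)) / q) * 100 if q > 0 else 0.0
    se_pooled = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - 1.96*se_pooled),
            math.exp(pooled + 1.96*se_pooled),
            i2)

# Hypothetical counts: (events_ESA, n_ESA, events_control, n_control) per study.
studies = [(180, 300, 150, 290), (95, 200, 80, 190), (60, 150, 55, 140)]
rr, lo, hi, i2 = pool_fixed(studies)
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")
```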
Literature search results
PubMed, Embase, Web of Science, and Cochrane Library were searched for publicly available RCTs, and a total of 3977 relevant articles were retrieved; after excluding duplicates, 3476 articles remained. After reading the titles and abstracts, 53 articles were left, and after reading the full texts, a total of 8 papers were finally included in the meta-analysis (Fig. 1).
Basic information about the included literature
The 8 included studies [8,13-19] were published in English between 2000 and 2020, with a total sample size of 4240 cases, including 2548 cases in the experimental group and 1692 cases in the control group. The main study populations of the 3 studies published by Pere, Jürgen, and James R. were NSCLC patients, the 4 studies published by Sylke, Robert, Hye-Suk, and Thomas enrolled SCLC patients, and Johan's study population included both NSCLC and SCLC patients. Four studies used darbepoetin alfa, 3 studies used epoetin alfa, and 1 study used RCHT+EPO. After organizing and summarizing the baseline data of the 8 included studies, the baseline characteristics were essentially balanced between the experimental and control groups. The basic information and characteristics of the included studies are shown in Tables 1 and 2.
Results of risk of bias assessment
The assessment of the methodological quality of the included studies is shown in Table 3. In the quality assessment, most of the assessed entries were rated as low risk, suggesting no significant risk of bias (Fig. 2).
Data analyses
Mortality

A total of 6 included studies (I² = 40%, P = 0.14) reported the mortality of patients from the start of treatment to the end of study follow-up, with 1892 (71.10%) of 2661 patients in the erythropoietin group and 1320 (72.17%) of 1829 patients in the control group. The result showed that the risk of mortality was lower in patients using ESAs than in the control group (RR 0.96, 95% CI 0.92–0.99, P = 0.02) (Fig. 3). After removing the data of Pere 2020, which had the largest sample size, there was no significant difference in mortality risk between patients treated with ESAs and controls (RR 0.99, 95% CI 0.92–1.06, P = 0.69) (Fig. 4).
Using Stata 14.0 statistical software for sensitivity analysis of the 5 remaining included studies, the studies were excluded one by one and the analysis repeated; the final results did not change much, indicating that the present results are reliable (Fig. 5).
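A leave-one-out sensitivity analysis of this kind can be expressed in a few lines; the sketch below reuses the hypothetical `pool_fixed` helper and `studies` tuples from the earlier pooling sketch, so it illustrates the procedure rather than reproducing the Stata output reported here.

```python
def leave_one_out(studies):
    """Re-pool after dropping each study in turn (leave-one-out analysis).
    Reuses pool_fixed() and the hypothetical study tuples defined above."""
    for i in range(len(studies)):
        subset = studies[:i] + studies[i+1:]
        rr, lo, hi, i2 = pool_fixed(subset)
        print(f"without study {i+1}: RR = {rr:.2f} "
              f"(95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")

leave_one_out(studies)  # stable RRs across rows suggest no single study drives the result
```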
Among the included studies, 2 enrolled patients with NSCLC and 2 enrolled patients with SCLC, so we performed subgroup analyses based on lung cancer type. The results showed no significant difference in the risk of mortality in either the NSCLC subgroup (RR 1.01, 95% CI 0.87–1.17, P = 0.13) or the SCLC subgroup (RR 1.00, 95% CI 0.92–1.08, P = 0.16) (Fig. 6).
Incidence of thrombotic vascular events
A total of 5 included studies (I² = 11%, P = 0.002) reported the occurrence of thrombotic vascular events, including 218 (8.9%) of 2445 patients in the ESAs group and 123 (7.7%) of 1593 patients in the control group. The result showed that the incidence of thrombotic vascular adverse events was higher in patients using ESAs than in the control group (RR 1.40, 95% CI 1.13–1.72, P = 0.002) (Fig. 7).
Blood transfusion requirements
A total of 7 included studies (I² = 70%, P = 0.002) reported patients with transfusion requirements during treatment, including 496 (21.3%) of 2328 patients in the ESAs group and 543 (34.17%) of 1589 patients in the control group. Our analysis showed that patients in the ESAs group had lower transfusion requirements than those in the control group (RR 0.56, 95% CI 0.44–0.72, P < 0.00001) (Fig. 8).
After sensitivity analysis of the 7 included studies using Stata 14.0 statistical software, the results showed that the 2 studies published by Jürgen 2014 and Robert 2008 had a large impact on the results, and the study by Pere 2020 had the largest impact (Fig. 9).
Of the 6 studies included in this analysis, 4 used darbepoetin alfa and 2 used epoetin alfa. The results showed that patients treated with ESAs had lower blood transfusion requirements than the control group in both the darbepoetin alfa subgroup (RR 0.57, 95% CI 0.41–0.79, P = 0.003) and the epoetin alfa subgroup (RR 0.68, 95% CI 0.47–0.99, P = 0.01) (Fig. 10). Of the 6 included studies, 3 enrolled NSCLC patients and 3 enrolled SCLC patients, so we performed analyses based on lung cancer type. The results showed that in the NSCLC subgroup (RR 0.61, 95% CI 0.36–1.04, P = 0.009) there was no significant difference in transfusion requirements between the ESAs group and the control group, whereas in the SCLC subgroup (RR 0.51, 95% CI 0.40–0.65, P = 0.34) the transfusion requirements of the ESAs group were lower than those of the control group (Fig. 11).
Incidence of adverse events
A total of 5 included studies (I² = 47%, P = 0.11) reported patients who experienced adverse events during treatment, including 1903 (81.8%) of 2326 patients in the ESAs group and 1205 (81.9%) of 1470 patients in the control group. The results showed that the difference between the erythropoietin and control groups was not statistically significant (RR 0.98, 95% CI 0.95–1.00, P = 0.10) (Fig. 12).
Discussion
Anemia is a common complication in patients with solid tumors, and the etiology of cancer-related anemia is multifactorial, ranging from direct effects of the cancer, such as tumor hemorrhage and bone marrow invasion, to the results of cancer treatment itself, such as cell death induced by chemotherapy, radiotherapy, tyrosine kinase inhibitors (TKIs), and monoclonal antibodies, and to chemical factors produced by the cancer, such as autoantibodies and inflammatory cytokines that affect erythropoietin production and block iron metabolism [20,21]. Following the cloning of the EPO gene in 1984, EPO was approved for use by the FDA in 1989 [22]. Currently, EPO and its analogs, the ESAs, are mainly used for the treatment of anemia in chronic renal failure and malignancy. Numerous clinical trials have shown [20] that the different ESAs have equivalent efficacy and safety, can reduce the need for blood transfusions in patients with chemotherapy-induced anemia, and improve quality of life. However, since 2005, the use of these drugs has decreased substantially as data related to declining survival rates have been published [20].
The study by Jürgen et al. [18] found no significant deleterious effect of ESAs on short- or long-term survival in patients with NSCLC at a median follow-up of nearly one year. The results of the study by Sylke et al. [17] suggested that the use of darbepoetin alfa during chemotherapy is still useful in some cases, so its efficacy in avoiding blood transfusions and improving quality of life does not appear to be overshadowed by shortcomings in thromboembolic safety. A previous study in lung cancer patients receiving platinum-based chemotherapy [8] showed a potential survival benefit in SCLC patients receiving darbepoetin alfa compared with placebo, with 59% of patients in the experimental group dying compared with 69% in the control group, consistent with our findings on mortality. However, the study by James et al. [15] showed reduced overall survival in patients with advanced non-small cell lung cancer treated with EPO.
In addition, a meta-analysis of survival and other safety outcomes with the use of ESAs published in 2010 [23] showed that ESAs did not lead to an increase in mortality in cancer patients. In a pooled analysis of individual-level data from randomized, double-blind, placebo-controlled trials of darbepoetin alfa in patients with chemotherapy-induced anemia [24], ESAs had no effect on mortality risk or disease progression. A meta-analysis of the effects of ESAs in lung cancer patients published in 2012 [25] showed that treatment with ESAs reduced transfusion rates and had no effect on overall survival. In contrast, in a meta-analysis of randomized trials of recombinant human erythropoietin and mortality in cancer patients [26], ESAs increased mortality in all cancer patients, and a similar increase in mortality may occur in chemotherapy patients.
In terms of efficacy against anemia, the results of most studies appear to be similar. Pere et al. [13] demonstrated in trials with NSCLC patients that ESAs increased hemoglobin and reduced the need for blood transfusions in patients with lung cancer or other cancers undergoing chemotherapy without increasing mortality or disease progression. Therefore, for supportive care of patients with lung cancer anemia, there is good evidence that the benefits of using ESAs actually outweigh the possible risks. The results of Robert et al. [8] did not show improved survival after treatment with ESAs; however, they reinforced the benefits of ESAs in reducing blood transfusions. The study by Hye-Suk et al. [16] showed that epoetin alfa was effective in preventing severe anemia during CCRT in patients with limited-disease small cell lung cancer (LD-SCLC). The results of Thomas et al. [14] showed that ESAs did not affect tumor response to chemotherapy or survival in patients with newly diagnosed small cell lung cancer.
Limitations
Our study has some limitations. Firstly, the studies we included may have selection bias, performance bias, and measurement bias, which affect the quality of the trials. In addition, different studies did not agree on the measurement time of the same indicator, which led to some measurement bias. Secondly, patients who used ESAs had a lower mortality risk than controls, which contradicted the findings of previously published studies. We believe that this may be due to differences in the design and reported endpoints of the studies: mortality and disease progression are often only safety endpoints, criteria for disease progression are inconsistent, patient conditions differ, and minimum survival times vary. While different hemoglobin thresholds have been used as the basis for initiation of ESA treatment, studies have also varied in the management of iron deficiency, which may affect mortality and disease progression but could not be systematically analyzed due to limitations in reporting. Accordingly, after the exclusion of Pere 2020, the study with the largest sample size, the results of the remaining five studies showed no significant difference in mortality risk between patients treated with erythropoietin and controls. Thirdly, in the meta-analysis we considered the potential for cross-over in the included studies. This phenomenon could introduce bias, affecting the accurate assessment of treatment effects and consequently influencing the accuracy and reliability of the study results. Despite our efforts to control for this situation in the analysis, its impact cannot be completely eliminated. Fourthly, lung cancer exhibits various subtypes and molecular characteristics, leading to different tumor responses to treatment and thereby affecting the consistency and generalizability of the study results. Non-small cell lung cancer includes several subtypes, such as adenocarcinoma, squamous cell carcinoma, and large cell carcinoma, each with distinct pathological and molecular features. However, due to limitations in the included study data and variations in study designs, the differential effects of erythropoietin treatment among different subtypes were not comprehensively assessed. In the meta-analysis, we only conducted subgroup analyses for non-small cell lung cancer and small cell lung cancer, without fully considering the distribution of tumor subtypes in the different studies. Meanwhile, the treatment regimen for lung cancer depends on factors such as tumor type, stage, overall health status of the patient, and the clinical judgment of the physician, leading to significant differences in the treatments received by patients in different studies. Although we attempted to control for this variability in the analysis, for example by conducting subgroup analyses or sensitivity analyses, potential confounding factors remain. Therefore, when interpreting and applying the study results, we must carefully consider the impact of treatment allocation and integrate other evidence from clinical practice to comprehensively assess the reliability of the results. In conclusion, we believe that more high-quality, large-sample, multicenter, fully randomized, double-blind controlled clinical trials are needed to determine whether erythropoietin affects the survival of lung cancer patients, so as to obtain more valuable meta-analysis results.
Conclusions
Our meta-analysis suggests that patients treated with ESAs have a lower risk of death than controls, and the exclusion of one large RCT, together with the sensitivity and subgroup analyses, further demonstrated that ESAs do not increase mortality in lung cancer patients. For patients with lung cancer, especially small cell lung cancer, ESAs may reduce the need for blood transfusion. These benefits are not accompanied by a significant increase in adverse drug events. However, our results also confirm that ESAs increase the incidence of thrombotic vascular events in lung cancer patients. Therefore, it is recommended that lung cancer patients with a low risk of thromboembolism be treated with ESAs when anemia occurs, in order to improve their quality of life.
Fig. 1 Flowchart for inclusion of literature
Fig. 3 Forest plot of total mortality between erythropoietin and control groups
Fig. 8 Forest plot of blood transfusion requirements between the erythropoietin group and the control group
Fig. 9 Sensitivity analysis of blood transfusion requirements
Fig. 10 Subgroup analysis of transfusion requirements-forest plot using different types of erythropoietin
Fig. 12 Forest plot of the incidence of adverse events between the erythropoietin group and the control group
Table 1 Basic information on included studies
Table 2 Basic characteristics of included studies | 2024-07-06T06:17:13.420Z | 2024-07-05T00:00:00.000 | {
"year": 2024,
"sha1": "e21c53827bcdd4191a02bd485cf1cce320b2cdcb",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "dab27c1d1eb593c2e3780d2f386192f7ccd40860",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
10420643 | pes2o/s2orc | v3-fos-license | Iodine deficiency: An under recognized problem
Sir,
We read with interest the original article by Palaniappan et al. on iodine excess and Hashimoto's thyroiditis in children, in which the authors reported a possible link between excess iodine intake in children and an increasing prevalence of autoimmune thyroiditis and, eventually, thyroid hypofunction. [1] In this regard, we would like to draw attention to recently published data on iodine status from several studies carried out in our country showing that iodine deficiency still continues to be endemic throughout India. [2,3] In the state of Tamil Nadu, the overall use of iodine-rich salt among households and children aged 6-12 years has been extensively evaluated by Pandav et al. with estimations of urinary iodine excretion (UIE) and goiter indices, respectively. Their study among school children aged 6-12 years reported consumption of iodized salt at 18%, a total goiter index of 13.5%, median UIE <100 mcg/L in 56%, and UIE below 50 mcg/L in 22% of the children. [4] As most reference ranges of TSH, free T4, and UIE levels are strongly influenced by diurnal and circadian variations, quality control standards for all biological samples, in particular for UIE status, become important. Several studies carried out in the states of Tamil Nadu and Chhattisgarh have utilized stringent external and internal quality standards, greatly adding to the quality of the data presented. [5] Because iodine-deficient children were not included and the prevalence of autoimmune thyroid disease in an iodine-deficient cohort was not assessed, it is difficult to accept a causal link between iodine excess and autoimmune thyroid disease. We would also like to know the details of the laboratory where the urine iodine assay was performed and the standardization procedures undertaken in this respect. The mean and standard deviation of urine iodine excretion in the two groups are presented with a P value, but confidence intervals for UIE in the two groups would better highlight the overlap.
Parathyromatosis Following Endoscopic Parathyroid Surgery: A Rare Occurrence
Sir, Parathyromatosis, a cause of recurrent hyperparathyroidism, is common in middle-aged women and chronic kidney disease patients. It involves multiple nodules of benign hyperfunctioning parathyroid tissue scattered throughout the neck and mediastinum. [1,2] Primary or Type 1 parathyromatosis is the result of hyperplasia of parathyroid rests from embryologic development. [3] Secondary or Type 2 parathyromatosis, a rare complication of parathyroidectomy first described in 1975 by Palmer et al., [4] is more common and arises due to seeding of parathyroid tissue during surgery. It has also been considered a low-grade malignancy. [5] Thirty-five cases have been reported so far. [6] Preoperative diagnosis is rare due to lack of awareness of this entity. Sonographic imaging provides clues to the diagnosis. [7] Medical and surgical interventions carry high failure rates. Cinacalcet and bisphosphonates are the mainstays of medical therapy. Alcohol ablation and several novel calcimimetics have been used in these patients. [8] Repeated neck explorations to remove parathyroid implants are often unsuccessful.
We came across a 55-year-old male with bone pains, pruritus, polyuria, and difficulty in getting up from a chair for the past year. He had been operated on in the past for renal stones. He was diagnosed with primary hyperparathyroidism (PHPT) due to a left superior parathyroid adenoma and underwent endoscopic surgery for it. Postsurgery, discharge medications included oral calcium and vitamin D supplements. Two years after the surgery, he presented with bone pains and increasing fatigue. Physical examination was unremarkable. Serum calcium was 14 mg/dl, intact parathyroid hormone (PTH) was 1400 pg/ml, and 24-h urinary calcium was 649.28 mg/dl, suggestive of recurrent hyperparathyroidism. Ultrasonography, sestamibi, and positron emission tomography scans failed to localize the lesion. Differential diagnoses of incomplete removal of the adenoma, hyperplasia, multiple/ectopic adenoma, malignancy, and parathyromatosis were considered. On exploration [Figure 1a], multiple nodules (<5 mm) were evident in the left-side neck compartment, embedded in the strap muscles and sternocleidomastoid, on the thyroid surface, and in the left central compartment. The right parathyroid glands were normal. The patient underwent left hemithyroidectomy, removal of the ipsilateral strap muscles and parts of the sternocleidomastoid, berry picking of superficial nodules, and clearance of tissue close to the entry of the ports. Postoperatively, serum calcium was 9.7 mg/dl and PTH (<2.5 pg/ml) was undetectable. Histopathology revealed multiple small hypercellular parathyroid glands along with normal-looking thyroid follicles, leading to a diagnosis of parathyromatosis [Figure 1b]. At 1-year follow-up, serum calcium was normal.
Parathyromatosis is a rare but clinically relevant disease. It is characterized by ectopic hormone-secreting parathyroid tissue scattered throughout the neck and mediastinum. It may be considered a benign entity with locally invasive, malignancy-like behavior. In our opinion, the cause of secondary parathyromatosis in this case was most likely rupture of the capsule leading to spillage of tumor cells during removal of the parathyroid adenoma by endoscopic parathyroid surgery. Secondary parathyromatosis following endoscopic parathyroid surgery has not been reported so far. The preoperative diagnosis of parathyromatosis poses great challenges, and needle aspiration may be helpful in selected patients. [9] The sonographic appearance of parathyromatosis may mimic that | 2018-04-03T00:41:37.747Z | 2017-07-01T00:00:00.000 | {
"year": 2017,
"sha1": "9eebd4c0604626be73af399d12270ffe638ef073",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ijem.ijem_39_17",
"oa_status": "GOLD",
"pdf_src": "WoltersKluwer",
"pdf_hash": "a945af3760ea09b7309bb4060e4255fe3623b213",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266993438 | pes2o/s2orc | v3-fos-license | Rheumatic Exercise for Menopausal Women in the Perumnas II Pontianak Health Center Area
Purpose: Menopause is a phase experienced by women due to the declining function of the ovaries, resulting in the cessation of the menstrual cycle. Various complaints arise in women entering menopause, especially joint and muscle pain, as well as psychological complaints. This condition can affect the quality of life of menopausal women. Therefore, it is necessary to educate women on how to reduce these complaints. This educational activity aims to increase women's knowledge about menopause and the appropriate self-management of menopausal complaints. Method: This activity is conducted in the community served by the Perumnas II Pontianak Health Center. Participants are members of the Pre-Elderly Prolanis Kejora Manis group. The activity methods include explanations, observational techniques, demonstrations, and practical exercises. Practical Applications: The outreach activity is divided into two sessions. The first session focuses on educating about menopausal symptoms and how to address them, followed by the second session, which involves demonstrations and joint exercises for rheumatism. Conclusion: Rheumatism exercises should be taught to pre-menopausal women for daily practice to improve comfort and adaptation to the menopausal condition.
Introduction
Menopause is a natural biological phenomenon that occurs at the end of the reproductive phase in every woman's life. Menopause occurs as a result of the declining function of the ovaries with increasing age (Erika & Fridayana Fitri, 2023; Juliana et al., 2023). The average age at which women experience menopause is 51 years, but it can occur between the ages of 40 and 45 and still be considered normal. With increasing life expectancy, women now spend almost one-third of their lives in the menopausal stage (Armini et al., 2018). Menopausal complaints include physical and psychological symptoms, such as vasomotor symptoms, genitourinary syndrome, musculoskeletal disorders, and sleep disturbances, as well as psychological issues like depression, anxiety, and mood swings (Hofnie-Hoëbes et al., 2018). According to research conducted by Ganapathy, 97.14% of menopausal women experience menopausal complaints that negatively impact their quality of life (Ganapathy & Furaikh, 2018). Currently, menopause affects the quality of life of millions of women worldwide, including in Indonesia, and can become an increasingly concerning issue.
One of the complaints of menopausal women is joint and muscle pain, which is caused by the reduction of estrogen. When estrogen levels decrease, damage to the collagen matrix and cartilage can occur (Fede et al., 2022). Joint pain is a common physical change experienced during menopause, leading to conditions like osteoarthritis and osteoarthrosis (Yi & Hwang, 2018). Pain management can be achieved through both pharmacological and nonpharmacological methods (Idris & Astarani, 2017). Pharmacological pain relief includes the use of pain relievers. However, pain relief does not always have to involve medication; nonpharmacological methods like cupping therapy and warm compresses can also be effective. Progressive muscle therapy is particularly effective in relieving joint pain (Richard & Sari, 2020).
Rheumatic exercise focuses on maintaining maximal joint range of motion. It is an alternative treatment that can have a positive impact on the health of menopausal women by enhancing joint and muscle flexibility to reduce joint pain (Sitinjak et al., 2016). This exercise also induces a sense of happiness, especially when done collectively. During exercise, the body releases endorphins. Higher levels of endorphins reduce or alleviate the pain experienced by an individual, making them feel more comfortable and happier and promoting oxygen delivery to the muscles (Bagar & Wilson, 2017; Conde Moreno & Ramalheira, 2022). Such exercise can increase the production of endorphins (natural painkillers in the body) and can increase serotonin levels. This exercise does not require expensive equipment, is easy to perform, and has no side effects when done regularly (Yufdel et al., 2022).
If rheumatic exercise helps with physical adaptation to menopausal changes, it can protect and improve the body's energy reserves for increased needs, such as dealing with illness. Additionally, rheumatic exercise is highly beneficial for menopausal women suffering from obesity, as gradual movement can lead to weight loss (Sukarni et al., 2022). Weight loss can reduce the workload on the joints, especially the knees. For maximum effectiveness, rheumatic exercises should be performed regularly and consistently (Rismawati & Hermalia Putri, 2022; Rusli, 2021). Based on a study conducted in 2020 on menopausal women in the working area of UPK Perumnas II Pontianak Health Center, it was found that there were two main complaints among participants: discomfort in joints, muscles, and bones (joint pain, rheumatic complaints, osteoporosis) and psychological complaints, such as feeling depressed (Juliana et al., 2021). Based on interviews with five menopausal women visiting Perumnas II Health Center, they frequently experienced physical and psychological complaints such as joint pain, irritability, anxiety, and depression. According to interviews with the health center staff, there is currently no specific program in place to address these menopausal complaints. Based on the information provided above, education and rheumatic exercise for menopausal women in the working area of Perumnas II Pontianak Health Center are highly appropriate. It is hoped that these activities can help participants address their menopausal complaints.
Method
This Community Service Activity (PKM) was carried out at the Perumnas II Pontianak Health Center, Idham Khalik Street, West Pontianak, Pontianak City, West Kalimantan Province. The education and rheumatic exercise activities took place on Friday, October 15, 2021, in the courtyard of the Perumnas II Health Center from 06:30 to 08:00 AM. The flow of the implementation of the PKM education and rheumatic exercise activities to address menopausal complaints can be seen in Figure 1. The first step taken was to determine the activity's target. Following up on previous research, the PKM team selected the Perumnas II Health Center as a partner and the location for the PKM activity. Perumnas II Health Center has a group called "Prolanis Kejora Manis", which consists of more than 100 elderly and pre-elderly members. The majority (70%) of Prolanis members are women who have already experienced menopause. Every week, on Friday mornings, Prolanis holds regular activities, often including health education sessions and aerobic exercises.
The next preparation stage involved coordinating with the Perumnas II Health Center and the management of Prolanis Kejora Manis. On September 01, 2021, a partnership agreement was signed with the Perumnas Health Center. Subsequently, the PKM team, together with the elderly program coordinator, planned the activities, including the participants, timing, and location. The team also prepared educational materials on how to address menopausal complaints and on rheumatic exercise. The PKM activity was planned to take place during the regular Prolanis activities, with 40 menopausal members of Prolanis as the target participants. The next step involved sending invitations to the participants through the elderly program coordinator at the health center. The PKM activity used an educational and practical demonstration method conducted directly in the field while adhering to health protocols throughout the event. The rheumatic exercise activity was divided into four stages: warm-up, low-impact aerobic exercise, strengthening and balance exercises, and cooling down. The exercise was led by instructors provided by the PKM team, with some instructors positioned in front of and among the participants to guide the exercise. The observation team was located at the rear. Evaluation of the activity was conducted using observation techniques. Subsequently, the team prepared the final accountability report on the implementation of the community service activity.
The activity methods included demonstrations of rheumatic exercises, presentations, discussions, and practical exercises. The community service activity was divided into two sessions: the first session focused on education about the benefits of rheumatic exercise, followed by a demonstration and a collective rheumatic exercise session. Participants were given the opportunity to ask questions about anything they found unclear. The evaluation was conducted using observation techniques.
Result
The education and rheumatic exercise activity for menopausal women in the Perumnas II Pontianak Health Center area was attended by more participants than targeted. Participants were enthusiastic about the rheumatic exercise activity: although the target number of participants was set at 40, actual attendance exceeded it. The activity began with greetings, an opening, a welcome address by the PKM team leader, and an educational presentation on addressing menopausal complaints. The presentation was followed by a demonstration of rheumatic exercise and then a collective exercise session. The outcome was that participants were able to follow the exercise to completion and could feel the benefits of rheumatic exercise.
Discussion
Preparations before the activity included sending invitations to participants through Prolanis Kejora Manis, creating banners, procuring souvenirs, preparing educational materials related to addressing anxiety during menopause, and gathering equipment for the exercise. Additionally, the team prepared instructors to lead the rheumatic exercise activity.
During the implementation, many participants had already gathered in the Health Center's courtyard before the scheduled time and appeared highly enthusiastic about participating in the rheumatic exercise activity; although the target number of participants was set at only 40, actual attendance exceeded it. The activity began with greetings, an opening, a welcome address by the PKM team leader, and educational presentations on how to address complaints during menopause, followed by a demonstration of rheumatic exercise and the collective exercise session. Participants who attended were able to follow the exercise to completion and could feel its benefits, which include training joint and muscle flexibility to reduce joint pain.
The rheumatic exercise session was led by an instructor positioned at the front and amid the participants. The exercise conducted during this activity consisted of four stages. Firstly, the instructor, together with the participants, conducted a warm-up session. According to Putra & Suharjana (2018), the purpose of warming up is to loosen or stretch the muscles, prepare the respiratory and circulatory systems, and adjust the body temperature so that it is ready for the subsequent exercise movements. Warming up is also performed to prevent or minimize injuries during exercise. Secondly, the instructor and all participants engaged in the core low-impact aerobic movements. Low-impact aerobic exercises are suitable for seniors; these movements are performed with light and slow intensity, such as walking in place, stepping, swinging the arms, and combining various body movements. The core movements aim to strengthen the muscles and train the coordination of movements between body parts. Thirdly, strengthening and balance exercises were performed to enhance flexibility, strength, and balance. Fourthly, cooling-down exercises were carried out to restore the body's flexibility after the various exercise movements, ensuring that the muscles do not become tense and stiff. The atmosphere during the rheumatic exercise session can be seen in Figure 3. The activity concluded with participants signing the attendance sheet and the distribution of souvenirs to all participants. Following the exercise, an evaluation of the activity was conducted through observation. Based on the observation results, participants expressed their intention to continue rheumatic exercise regularly after this community service activity.
Conclusion
Based on the series of activities carried out in this community service program, it can be concluded that education on managing menopausal complaints can increase knowledge among pre-elderly women. As a solution to menopausal complaints, especially joint pain, rheumatic exercises can be used: they stretch the body's muscles and reduce joint pain. Participants showed a high level of enthusiasm and participated in the activities until the end. Based on the observations conducted, participants expressed their intention to engage in rheumatic exercises regularly. Regular rheumatic exercises can have a more significant positive impact in addressing joint pain complaints in the elderly.

Acknowledgments

Sincere thanks go to the staff and management of the Perumnas II Pontianak Health Center for their cooperation and for generously providing us with the necessary facilities and resources to carry out this community service. I am grateful to the Prolanis Kejora Manis group for their active involvement in promoting our program and for their assistance in reaching out to participants. Special appreciation goes to our committed instructors, who led the rheumatic exercise sessions and provided valuable guidance to the participants. I would also like to acknowledge the researchers and experts whose work underpinned our educational materials and activities. Lastly, I express my gratitude to my colleagues and team members who dedicated their time and effort to plan, coordinate, and execute this community service project. Your collective efforts have had a positive impact on the well-being of menopausal women in our community, and I look forward to our continued collaboration in the future.
Figure 2. Opening and Welcome of the PKM Activity
Figure 4. Participants signing the attendance sheet | 2024-01-16T16:36:03.932Z | 2023-10-31T00:00:00.000 | {
"year": 2023,
"sha1": "6732c94f5ffc968f7a05ca116fafb7bbc251b566",
"oa_license": "CCBY",
"oa_url": "https://jurnal.stie.asia.ac.id/index.php/jpm/article/download/1786/468",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4020bad024f352a661954473d1f024bfe778cdb9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
219758702 | pes2o/s2orc | v3-fos-license | Lung function in adults born prematurely with bronchopulmonary dysplasia
Bronchopulmonary dysplasia (BPD), which interferes with pulmonary vascular and alveolar development, especially in preterm infants, usually results from hyperoxia, the degree of prematurity, prolonged mechanical ventilation, and other antenatal risk factors. The diagnostic criteria for BPD are persistent oxygen dependency up to 28 days of life and/or a need for supplemental oxygen at the postmenstrual age of 36 weeks (1).
Infants with BPD may not only suffer respiratory problems such as respiratory distress symptoms, increased bronchial secretions, and repeated lower respiratory tract infections, but may also develop feeding problems and delayed growth. Systemic hypertension, poor neurodevelopmental outcomes, pulmonary hypertension, left ventricular hypertrophy, and left ventricular dysfunction are complications attributed to BPD (2). BPD survivors may also be at greater risk of developing compromised lung defence, asthma-like symptoms, and exercise intolerance in the long term (3).
The study by Yang et al. (4) in Paediatrics was a national cohort study investigating alterations in lung function (i.e., spirometry, plethysmographic lung volumes, diffusing capacity, and single-breath nitrogen washout). They described how very low birth weight (VLBW) survivors aged 26-30 years (born in 1986) in New Zealand suffered a higher incidence of airflow obstruction, gas trapping, reduced gas exchange, and ventilatory inhomogeneity. Moreover, BPD (defined as receiving supplementary oxygen at 36 weeks' postmenstrual age) worsened the scenario. These findings suggest that BPD has long-term effects on lung function and raise awareness that late pulmonary sequelae might lead to a higher occurrence of pulmonary disease in future years.
Indeed, a considerable body of data has revealed that BPD contributes to deviations in lung function. Preterm infants with BPD had diminished functional residual capacity (FRC), depending on the severity of the disease (5,6), and reduced compliance (6). Extremely preterm infants with BPD at a post-conceptional age of 44 weeks had airway obstruction, measured by a lower peak tidal expiratory flow as a proportion of expiratory time (TPTEF/TE ratio), which reflected the increasing severity of BPD (7). Infants with BPD at the age of 36-42 postconceptional weeks had reduced respiratory function, demonstrated by a higher incidence of concave tidal breathing flow-volume loops (TBFVL) and an increased respiratory rate (8). Furthermore, survivors with BPD had abnormal airway patency in the first year (9) and third year (10) of life, measured by a decreased maximal flow at FRC (V'maxFRC). VLBW preterm infants with BPD also had lower forced vital capacity (FVC) (11), forced expiratory flow at 50% of vital capacity (FEF50) (11), forced expiratory volume in one second (FEV1) (11-13), and forced mid-expiratory flow (FEF25-75) (12,13), but an increased residual volume/total lung capacity (RV/TLC) ratio (14), at school age. Impaired lung function caused by BPD appears to persist not only into childhood but also into adolescence: a diminished FEV1/FVC in BPD survivors was observed in late adolescence (15). Furthermore, alterations in airway hyperresponsiveness, diffusing capacity, lung and chest wall mechanics, ventilation inhomogeneity, and exhaled nitric oxide have been observed in infancy, at preschool age, and in childhood (16). Yang et al. (4) then reported that adult VLBW survivors, especially those with BPD, had a higher incidence of airflow obstruction, gas trapping, reduced gas exchange, and increased ventilatory inhomogeneity versus controls. BPD might thus continue to have negative effects on lung function in adulthood.
However, the ventilation strategies used in 1986 were quite different from those used now. Lung-protective ventilation (i.e., moderate PEEP, low inspiratory pressure, low tidal volume, and short inspiratory time) is commonly used now in order to prevent volutrauma, barotrauma, and lung inflammation, which are potential risk factors for BPD. Non-invasive ventilation (i.e., nasal continuous positive airway pressure, nasal intermittent positive pressure ventilation, nasal high-frequency oscillatory ventilation, etc.) is also preferred over invasive ventilation in preterm neonates, as it helps increase the rate of survival without BPD. Non-invasive ventilation should be able to ensure maintenance of FRC, prevent cyclical reopening and closing, support fatigable ventilatory muscles, and provide respiratory stimulation, thereby improving gas exchange. Intubation and mechanical ventilation of preterm infants remain critical factors in subsequent BPD (17). Since Yang's study only mentioned the numbers of participants and days of assisted ventilation via an endotracheal tube, we cannot tell how different respiratory support strategies would have affected the number of participants diagnosed with BPD.
A better definition of the BPD diagnostic criteria was proposed in 2000: oxygen need for ≥28 days and at 36 weeks' postmenstrual age to identify BPD, with the oxygen concentration required at 36 weeks' postmenstrual age used to further grade the severity of lung injury. Therefore, the number of preterm infants diagnosed with BPD in Yang's study might differ under the newer diagnostic criteria, which in turn would affect the results (1).
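As a rough illustration, the severity grading commonly cited from that consensus definition can be written as a small classifier. The thresholds below follow the widely quoted scheme (mild = room air, moderate = FiO2 <30%, severe = FiO2 ≥30% or positive pressure at 36 weeks' postmenstrual age) and are included here only as an assumed sketch, not a clinical tool.

```python
from typing import Optional

def grade_bpd(oxygen_days: int, fio2_at_36wk: Optional[float],
              positive_pressure: bool) -> str:
    """Grade BPD severity under the assumed consensus-style criteria:
    >=28 days of supplemental oxygen is required for any BPD diagnosis,
    and severity depends on support at 36 weeks' postmenstrual age."""
    if oxygen_days < 28:
        return "no BPD"
    if positive_pressure or (fio2_at_36wk is not None and fio2_at_36wk >= 0.30):
        return "severe BPD"
    if fio2_at_36wk is not None and fio2_at_36wk > 0.21:
        return "moderate BPD"
    return "mild BPD"  # breathing room air at 36 weeks' PMA

print(grade_bpd(30, 0.25, False))  # -> moderate BPD
```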
In addition to exposure to invasive mechanical ventilation, surfactant deficiency in the immature lung and surfactant dysfunction due to oxidant injury and lung inflammation are contributing factors in the pathogenesis of BPD. Although late administration of exogenous surfactant did not reduce the incidence of BPD (18), early surfactant therapy helped reduce the need for aggressive ventilation strategies, thereby preventing BPD (19). It has also been recommended to use non-invasive ventilation strategies and less invasive surfactant administration/minimally invasive surfactant therapy (LISA/MIST) whenever feasible for BPD prevention (20). Exogenous surfactant therapy was not used in the participants recruited in Yang's study, which may affect the results, so they may not reflect current practice.
Despite the differences between past and present clinical management mentioned above, Yang's study demonstrated the long-term adverse effects of BPD on lung function and raises awareness that survivors should keep tracking their lung function throughout life and avoid potentially exacerbating risk factors. Neonatologists and respiratory therapists should thus work on reducing the incidence of BPD in very preterm infants by applying and closely monitoring appropriate respiratory support strategies and medical interventions, thereby preventing short-term and long-term impacts on health in later life. Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
"year": 2020,
"sha1": "a1265592b97ca983f4a4bf64c161d6851bca28a1",
"oa_license": "CCBYNCND",
"oa_url": "https://tp.amegroups.com/article/viewFile/42144/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ca2d1887d399c13a82ce8a84a591525fee028d19",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119591401 | pes2o/s2orc | v3-fos-license | Fermions in the background of mixed vector-scalar-pseudoscalar square potentials
The general Dirac equation in 1+1 dimensions with a potential with a completely general Lorentz structure is studied. Considering mixed vector-scalar-pseudoscalar square potentials, the states of relativistic fermions are investigated. This relativistic problem can be mapped into an effective Schrödinger equation for a square potential with repulsive and attractive delta functions situated at the borders. An oscillatory transmission coefficient is found and resonant state energies are obtained. In a special case, the same bound-energy spectrum as for spinless particles is obtained, confirming the predictions of the literature. We show that the existence of bound-state solutions is conditioned by the intensity of the pseudoscalar potential, which possesses a critical value.
Introduction
Since its formulation in 1928 [1], the Dirac equation has been widely investigated in various physical systems. Among the applications, we can highlight Klein's paradox [2][3][4], the description of electrons in graphene [5], the influence of the nuclear medium on nucleons [6], and the relativistic hydrogen atom [7].
Pseudospin symmetry (PSS) has been a topic of intense discussion, activity, and progress in the last decades. PSS was introduced in nuclear physics to explain the degeneracies of orbitals in single-particle spectra [8,9]. In order to understand the origin of PSS, we need to take into account the motion of the nucleons in a relativistic mean field and thus consider the Dirac equation [10][11][12]. The case in which the mean field is composed of a vector (V_t) and a scalar (V_s) potential with Σ = V_t + V_s = 0 (V_t = −V_s) is usually pointed out as a necessary condition for the occurrence of PSS in nuclei [10][11][12]. The study of symmetries in resonant states is certainly an interesting topic. The authors of Ref. [13] showed that PSS in single-particle resonant states in nuclei is exactly conserved, i.e., the pseudospin doublets with different quantum numbers κ and −κ + 1 have the same resonant state energy E_res and width Γ_res. That novel result was illustrated for single-particle resonances in spherical square-well and Woods-Saxon potentials. Additionally, the Dirac equation exhibits spin symmetry (SS) when the vector and scalar potentials have the same magnitude, which has been used to explain the small spin-orbit splitting in hadrons [14]. Recently, a comprehensive review of the progress on PSS and SS in various systems and potentials was reported in [15], including extensions of the PSS study from stable to exotic nuclei, from non-confining to confining potentials, from local to non-local potentials, from central to tensor potentials, from bound to resonant states, from nucleon to anti-nucleon spectra, from nucleon to hyperon spectra, and from spherical to deformed nuclei.
The four-dimensional Dirac equation with a mixture of spherically symmetric scalar, vector, and tensor interactions can be reduced to the two-dimensional Dirac equation with a mixture of scalar, vector, and pseudoscalar couplings when the fermion is limited to move in just one direction (p_y = p_z = 0) [16]. In this restricted motion, the scalar and vector interactions preserve their Lorentz structures, while the tensor interaction becomes a pseudoscalar. This kind of dimensional reduction is very useful because the two-dimensional version of the Dirac equation can be thought of as describing a fermion embedded in a four-dimensional space-time with either spin up or spin down [7]. Furthermore, the absence of angular momentum and spin-orbit interaction, as well as the use of 2 × 2 matrices instead of 4 × 4 matrices, allows us to explore the physical consequences of the negative-energy states in a mathematically simpler and more physically transparent way. Therefore, we can take advantage of the simplicity of the lowest dimensionality of the space-time.
Considering mixed scalar-vector potentials, the Dirac equation in (1+1) dimensions has been investigated for a sign potential [17] and a smooth step potential [18]. In the context of mixed scalar-vector-pseudoscalar potentials (the most general Lorentz structure), the bound-state solutions for fermions and antifermions in (1+1) dimensions have been studied for the harmonic oscillator potential [19], the Pöschl-Teller potential [20], the Cornell potential [21,22], and the Coulomb potential [23]. In those works, the relation between spin and pseudospin symmetries under charge-conjugation and chiral transformations was illustrated.
Square potentials, wells and barriers, are models widely used in low-dimensional systems such as quantum dots [24], Dirac fermions in graphene [25], electrons in semiconductor heterostructures [26] and theoretical studies [27,28]. Besides these applications, well and barrier potentials are prime examples of toy models used in textbooks (see, for example, [7]), further increasing their importance in quantum mechanics.
The main motivation of this paper is to approach the (1+1)-dimensional Dirac equation in the framework of mixed vector-scalar-pseudoscalar square potentials. The scattering solutions furnish an oscillatory transmission coefficient that never exhibits total reflection. The existence of bound-state solutions is conditioned by the intensity of the pseudoscalar potential, which possesses a critical value set by the mixed vector-scalar potential. These results are obtained as solutions of an effective Schrödinger equation for a square potential with attractive and repulsive delta functions situated at the borders. The results can support the study of PSS and SS in higher-dimensional space-times.
The Dirac equation in (1+1) D
The time-independent Dirac equation for a fermion of rest mass m in the background of vector (V_t), scalar (V_s) and pseudoscalar (V_p) potentials can be written as (in units in which ℏ = c = 1)

Hψ = Eψ,  H = σ1 p − σ2 V_p + σ3 (m + V_s) + I V_t,  (1)

where p = −i d/dx, σ1, σ2 and σ3 are the Pauli matrices and I is the 2 × 2 identity matrix. It is convenient to introduce the combinations Σ = V_t + V_s and Δ = V_t − V_s. The Hamiltonian is invariant under parity provided V_p changes sign whereas V_s and V_t remain the same. This is because the parity operator P = exp(iε) P_0 σ3, where ε is a constant phase and P_0 changes x into −x, changes the sign of σ1 and σ2 but not of σ3.
The charge-conjugation operation is accomplished by the transformation ψ_c = σ1 ψ* and the Dirac equation becomes H_c ψ_c = −E ψ_c, with

H_c = σ1 p + σ2 V_p + σ3 (m + V_s) − I V_t.  (2)

One sees that the charge-conjugation operation changes the sign of the energy and of the potentials V_t and V_p. In turn, this means that Σ turns into −Δ and Δ into −Σ. Therefore, to be invariant under charge conjugation, the Hamiltonian must contain only a scalar potential. The chiral operator for a Dirac spinor is the matrix γ^5 = σ1. Under the discrete chiral transformation the spinor is transformed as ψ_χ = γ^5 ψ and the transformed Hamiltonian H_χ = γ^5 H γ^5 is

H_χ = σ1 p + σ2 V_p − σ3 (m + V_s) + I V_t.  (3)

This means that the chiral transformation changes the sign of the mass and of the scalar and pseudoscalar potentials, thus turning Σ into Δ and vice versa. A chiral-invariant Hamiltonian needs to have zero mass and V_s and V_p equal to zero everywhere.
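A quick symbolic check of the sign flips stated above can be done with sympy. The 2 × 2 matrix form of H used below is the one reconstructed in Eq. (1); it is an assumption of this sketch rather than a formula quoted from the original manuscript:

```python
import sympy as sp

# p stands for the momentum operator -i d/dx; conjugating the operator
# flips the i inside sigma_2 and sends p -> -p.
p, m, Vs, Vt, Vp = sp.symbols('p m V_s V_t V_p', real=True)
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])

# Hamiltonian convention assumed by this sketch (cf. Eq. (1))
H = s1 * p - s2 * Vp + s3 * (m + Vs) + sp.eye(2) * Vt

# Charge conjugation: psi_c = sigma_1 psi*, so H_c = -sigma_1 H* sigma_1
H_c = (-s1 * H.conjugate().subs(p, -p) * s1).expand()
assert H_c == H.subs({Vt: -Vt, Vp: -Vp}, simultaneous=True).expand()

# Chiral transformation: gamma^5 = sigma_1, H_chi = sigma_1 H sigma_1
H_chi = (s1 * H * s1).expand()
assert H_chi == H.subs({m: -m, Vs: -Vs, Vp: -Vp}, simultaneous=True).expand()

print("charge conjugation flips V_t, V_p; chirality flips m, V_s, V_p")
```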
The equation (1) decomposes into two coupled first-order equations for the upper, ψ+, and the lower, ψ−, components of the spinor:

−i (dψ−/dx − V_p ψ−) = (E − Σ − m) ψ+,  (4)
−i (dψ+/dx + V_p ψ+) = (E − Δ + m) ψ−.  (5)

The components of the four-current are given by J^0 = ψ†ψ and J^1 = ψ†σ1ψ. If we write the spinor ψ in terms of its components, the four-current is expressed by J^0 = |ψ+|^2 + |ψ−|^2 and J^1 = 2Re(ψ+* ψ−), which are conserved quantities for stationary states. The Dirac spinor is then normalized as ∫dx J^0 = 1 over the whole line, so that ψ+ and ψ− are square-integrable functions. From the pair of coupled first-order differential equations (4) and (5), for Δ = 0 and E ≠ −m, ψ− can be expressed in terms of ψ+ by means of (5); inserting the result into (4), the Dirac equation for ψ+ becomes

−(1/2m) d^2ψ+/dx^2 + [ (V_p^2 − dV_p/dx)/(2m) + (E + m)Σ/(2m) ] ψ+ = [(E^2 − m^2)/(2m)] ψ+.  (7)

Therefore, the solution of the relativistic problem is mapped into a Sturm-Liouville problem for the upper component of the Dirac spinor. As discussed in Ref. [20], we can take advantage of the discrete chiral transformation (γ^5) to obtain the solutions for Σ = 0 from the Δ = 0 case. In other words, the chiral symmetry is invoked to obtain the equations obeyed by ψ+ and ψ− for Σ = 0 and E ≠ m; they follow from the previous ones through the substitutions ψ+ ↔ ψ−, m → −m, V_p → −V_p and Σ ↔ Δ. The possible isolated solutions, excluded from the Sturm-Liouville problem [22,29,30], are obtained directly from the original first-order equations (4) and (5). For Δ = 0 and E = −m, equation (5) furnishes ψ+(x) = N+ e^{−v(x)}, and the homogeneous solution of (4) is ψ−(x) = N− e^{+v(x)}, where N+ and N− are normalization constants and

v(x) = ∫_0^x V_p(y) dy.

Note that this sort of isolated solution cannot describe scattering states, and inasmuch as ψ+ and ψ− must be normalizable functions, a possible isolated solution requires V_p ≠ 0; therefore the presence of a pseudoscalar potential is a sine qua non condition for the existence of isolated solutions [22].
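The elimination of ψ− leading to Eq. (7) can be verified symbolically; the sketch below assumes the first-order equations (4)-(5) exactly as written above (our reconstruction of the garbled originals):

```python
import sympy as sp

x, m, E = sp.symbols('x m E', real=True)
Vp = sp.Function('V_p')(x)
Sigma = sp.Function('Sigma')(x)
psi = sp.Function('psi_plus')(x)

# Delta = 0: psi_minus from Eq. (5), then inserted into Eq. (4)
psi_minus = -sp.I * (sp.diff(psi, x) + Vp * psi) / (E + m)
eq4 = -sp.I * (sp.diff(psi_minus, x) - Vp * psi_minus) - (E - Sigma - m) * psi

# Schroedinger-like form of Eq. (7)
eq7 = (-sp.diff(psi, x, 2) / (2 * m)
       + ((Vp**2 - sp.diff(Vp, x)) / (2 * m) + (E + m) * Sigma / (2 * m)) * psi
       - (E**2 - m**2) / (2 * m) * psi)

# (E + m) * eq4 must equal 2m * eq7 identically
assert sp.simplify((E + m) * eq4 - 2 * m * eq7) == 0
print("reduction to the Sturm-Liouville form verified")
```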
The Square Potentials
Let us consider the square potentials

Σ(x) = −C_Σ [sgn(x + a) − sgn(x − a)]/2,  (14)
Δ(x) = 0,  (15)
V_p(x) = C_p [sgn(x + a) − sgn(x − a)]/2,  (16)

where sgn(x) = x/|x| (x ≠ 0) is the sign function, and C_Σ and C_p are constants with dimensions of energy, so that Σ = −C_Σ and V_p = C_p in the region |x| < a and both vanish for |x| > a. Due to the chiral symmetry we can focus the discussion on the Σ case (Δ = 0). The results for the case Δ ≠ 0, Σ = 0 and V_p still given by (16) can be easily obtained by just changing the signs of m and C_p in the relevant expressions.
For the potentials (14), (15) and (16) we have not found a normalizable isolated solution with E = −m; the absence of isolated solutions follows because V_p(x) = 0 for |x| > a, so that ψ+ is a constant in the exterior region and cannot be square integrable. For the case E ≠ −m, Eq. (7) takes the form

−(1/2m) d^2ψ+/dx^2 + V_eff(x) ψ+ = (k^2/2m) ψ+,  (18)

with

V_eff(x) = r [sgn(x + a) − sgn(x − a)]/2 + (C_p/2m)[δ(x − a) − δ(x + a)],  (19)

where δ(x) is the Dirac delta function, the effective particle energy is given by k^2/2m, with k^2 = E^2 − m^2, and the parameter r ≡ V_eff(|x| < a) is

r = [C_p^2 − (E + m) C_Σ]/(2m).  (20)

We note that the effective potential distinguishes particles from antiparticles if C_Σ ≠ 0, and so we cannot expect a spectrum symmetric with respect to E. The parameter r characterizes three different profiles for the effective potential, as illustrated in figure 1; in all three cases the attractive and the repulsive delta functions sit at x = −a and x = +a, respectively. If r < 0, the effective potential consists of a finite square well in the region −a < x < +a; if r = 0, it reduces to a pure double-delta potential; and if r > 0, it consists of a finite square barrier in the region −a < x < +a.
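As a concrete illustration of the three regimes, the following sketch evaluates and classifies r, assuming the expression in Eq. (20) exactly as reconstructed here (units with ℏ = c = 1; function and variable names are ours, not the authors'):

```python
import numpy as np

def r_parameter(E, m, C_sigma, C_p):
    """Interior value of the effective potential, Eq. (20) (as reconstructed)."""
    return (C_p**2 - (E + m) * C_sigma) / (2.0 * m)

def profile(E, m, C_sigma, C_p, tol=1e-12):
    """Classify the effective-potential profile in the region |x| < a."""
    r = r_parameter(E, m, C_sigma, C_p)
    if r < -tol:
        return r, "square well plus attractive/repulsive deltas at x = -a, +a"
    if r > tol:
        return r, "square barrier plus attractive/repulsive deltas at x = -a, +a"
    return r, "pure double-delta potential (r = 0)"

m, C_sigma, C_p = 1.0, 0.8, 0.5
for E in (1.2, C_p**2 / C_sigma - m, -1.5):   # middle value makes r vanish
    r, kind = profile(E, m, C_sigma, C_p)
    print(f"E = {E:+.4f}: r = {r:+.4f} -> {kind}")
```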
Scattering States
We focus our attention on the scattering solutions describing a fermion moving from left to right. In this way, ψ(x → −∞) contains an incident wave moving to the right and a reflected wave moving to the left, and ψ(x → +∞) describes a transmitted wave moving to the right or an evanescent wave. The upper component for scattering states is written as

ψ+(x) = A+ e^{+ikx} + A− e^{−ikx} for x < −a,
ψ+(x) = B+ e^{+iηx} + B− e^{−iηx} for |x| < a,  (21)
ψ+(x) = D+ e^{+ikx} + D− e^{−ikx} for x > +a,

with η = √(k^2 − 2mr). The group velocity of the exterior waves is

v_g = dE/dk = ±k/E,  (22)

where the double sign is related to the propagation direction. For the two exterior regions the probability current densities are given by

J^1(x < −a) = [2k/(E + m)] (|A+|^2 − |A−|^2),  (24)
J^1(x > +a) = [2k/(E + m)] (|D+|^2 − |D−|^2).  (25)

Note that J^1(−∞) = J_inc − J_ref and J^1(+∞) = J_trans, where J_inc, J_ref and J_trans are nonnegative quantities characterizing the incident, reflected and transmitted waves, respectively. If E + m > 0, then A+ e^{+ikx} (A− e^{−ikx}) describes the incident (reflected) wave, and D− = 0. On the other hand, if E + m < 0, then A− e^{−ikx} (A+ e^{+ikx}) describes the incident (reflected) wave, and D+ = 0. To determine the reflection and transmission coefficients we use the probability current densities given by (24) and (25). The x-independence of the probability current allows us to define the reflection and transmission coefficients as

R± = |r±|^2, T± = |t±|^2, with R± + T± = 1,  (26)

where r+ = A−/A+ and t+ = D+/A+ (r− = A+/A− and t− = D−/A−) for E + m > 0 (E + m < 0) are the reflection and transmission amplitudes, respectively. We demand that ψ+ be continuous at x = ±a, that is,

lim_{σ→0} [ψ+(±a + σ) − ψ+(±a − σ)] = 0.  (28)

Moreover, the effect of the delta-function potentials on dψ+/dx in the neighborhood of x = ±a can be evaluated by integrating (18) from ±a − σ to ±a + σ and taking the limit σ → 0. Thereby, we obtain the jump conditions

lim_{σ→0} [ψ+′(−a + σ) − ψ+′(−a − σ)] = −C_p ψ+(−a),
lim_{σ→0} [ψ+′(+a + σ) − ψ+′(+a − σ)] = +C_p ψ+(+a).  (29)

With ψ+(x) given by (21), conditions (28) and (29) fix the relative amplitudes. Omitting the algebraic details, we obtain the transmission coefficient

T = { 1 + [ (k^2 + η^2 + C_p^2)^2 − 4k^2η^2 ] sin^2(2ηa) / (4k^2η^2) }^{−1}.  (38)

The scattering process is only possible for energies in the range |E| > m, so that k ∈ ℝ. Since both the effective potential and the transmission coefficient depend on the energy, the transmission coefficient is not symmetric with respect to E. The transmission increases with energy, namely T → 1 as |E| → ∞. The transmission coefficient also furnishes the necessary condition for resonant states, T = 1, which occurs when

sin(2ηa) = 0, i.e., 2ηa = Nπ, N = 1, 2, 3, ….  (39)

Therefore, the energies of the resonant states can be written as

E_res = −C_Σ/2 ± √[ (m − C_Σ/2)^2 + C_p^2 + (Nπ/2a)^2 ],  (40)

and in the limit N → ∞ we obtain E_res ≃ −C_Σ/2 ± Nπ/(2a). For the case r = 0 the energy is fixed (E = C_p^2/C_Σ − m) and the pseudoscalar potential influences the transmission coefficient, as shown in figure 2. From figure 2 we note an oscillatory behavior and no full reflection, as expected. Since T does not depend on the sign of C_p, the transmission coefficient takes the same values for both barrier and well pseudoscalar potentials. Some profiles of the transmission coefficient are shown in figure 3 for different values of C_p and C_Σ. There is no full reflection in any of the cases. Furthermore, from figure 3 we note that the curves of the transmission coefficient for negative energies lie closer to each other than those for positive energies. As expected, for very high energies T → 1, and the pseudoscalar potential maintains the oscillatory behavior of T.
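To make the resonance condition concrete, here is a minimal numerical sketch. It assumes the reconstructed interior dispersion η^2 = E^2 − m^2 − C_p^2 + (E + m)C_Σ together with the condition 2ηa = Nπ of Eq. (39); the names are illustrative:

```python
import numpy as np

def resonance_energies(N, m, C_sigma, C_p, a):
    """Both roots of eta(E)^2 = (N*pi/(2a))^2, a quadratic in E."""
    q = N * np.pi / (2.0 * a)
    disc = np.sqrt((m - C_sigma / 2.0) ** 2 + C_p ** 2 + q ** 2)
    return -C_sigma / 2.0 + disc, -C_sigma / 2.0 - disc

m, C_sigma, C_p, a = 1.0, 0.8, 0.5, 3.0
for N in range(1, 7):
    for E in resonance_energies(N, m, C_sigma, C_p, a):
        if abs(E) > m:  # resonances live in the scattering continuum |E| > m
            print(f"N = {N}: E_res = {E:+.4f}")
# For large N both roots approach -C_sigma/2 +- N*pi/(2a), cf. the text.
```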
We also observe that, in the limit N → ∞, the energies of the resonant states are very similar to the bound-state energies of an infinite double-step potential. The similarity arises because the bound-state energies correspond approximately to the real part of the resonant energies obtained from the poles of the transmission amplitude for a square well [31]. A study of the correspondence between the behavior of T and the bound-state energies for nonrelativistic square wells and barriers was carried out by Maheswari and collaborators [32]; some mistakes in that work were corrected by Ahmed [33], who also discusses criteria under which a class of potentials yields an oscillatory T.
Bound States
The bound-state solutions (|E| < m) can be obtained from (21) using the prescription k = +i|k| with A+ = D− = 0, or k = −i|k| with A− = D+ = 0; that is, the bound-state solutions correspond to the poles of the transmission amplitude. Conditions (28) and (29) then provide the quantization condition

2η|k| cot(2ηa) = η^2 − |k|^2 + C_p^2.  (45)

We note that the above condition does not depend on the sign of C_p (pseudoscalar barrier or square well), and hence the energies do not depend on the localization of the attractive and repulsive delta functions. Using the trigonometric relation cot(θ) = −tan(θ/2) ± √(1 + cot^2(θ)), the quantization condition can be rewritten as

tan(ηa) = −c ± √(1 + c^2), with c = (η^2 − |k|^2 + C_p^2)/(2η|k|).  (46)

We can see that in the absence of the pseudoscalar potential (C_p = 0) the quantization condition is the same as that obtained for spin-0 bosons [34]. This equivalence confirms the results obtained by P. Alberto and collaborators [35], who showed that the spin and pseudospin symmetries in the Dirac equation produce equivalent energy spectra for relativistic spin-1/2 and spin-0 particles in the presence of vector and scalar potentials.
Case r ≥ 0.
For r = 0 we have η = ±i|k|, and therefore (45) can be written as

2|k|^2 coth(2|k|a) = C_p^2 − 2|k|^2.

The above condition, together with the fact that r = 0 fixes the energy, has only the solution |k| = 0, and therefore there are no bound-state solutions for r = 0. The case r > 0 corresponds to a repulsive barrier between the two delta functions and does not contain bound-state solutions either. Equation (20) and the condition |E| < m allow us to conclude that bound states can exist only for C_Σ > 0.
Case r < 0.
For r < 0 we have the condition 0 ≤ C_p^2 < 2mC_Σ, and the quantization condition provides the spectra shown in figures 4 and 5. The effective potential given by equation (19) has no defined parity, and therefore the parity of the solutions is not known a priori. Obviously, the case C_p = 0 shows energy levels with defined parity, with even (odd) solutions corresponding to the negative (positive) sign in (46); the same behavior is found in Ref. [34]. The ground-state energy E_g.s. in figure 4 (figure 5) is given by E_g.s. ≃ m − C_Σ (E_g.s. ≃ m − |C_p|); therefore we obtain the constraint C_Σ + |C_p| = 2m for the lowest-energy state. As we already know, there is always at least one bound-state solution provided the intensity of the pseudoscalar barrier or well does not exceed the critical value C_p^critic = √(2mC_Σ), for C_Σ > 0.
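As a numerical cross-check of the bound-state analysis, the sketch below locates bound energies of the effective problem by integrating across the well and imposing the jump conditions (29) together with decaying exterior solutions. It assumes the reconstructed V_eff of Eq. (19) with C_p > 0; all names are illustrative:

```python
import numpy as np
from scipy.optimize import brentq

m, C_sigma, C_p, a = 1.0, 0.8, 0.5, 3.0   # satisfies C_p**2 < 2*m*C_sigma

def mismatch(E):
    """Vanishes when the effective problem has a bound state at energy E."""
    kappa = np.sqrt(m**2 - E**2)                  # exterior decay rate, |E| < m
    eta2 = E**2 - m**2 - C_p**2 + (E + m) * C_sigma
    eta = np.emath.sqrt(eta2)                     # interior wavenumber
    c = np.cos(2 * eta * a)
    s = 2.0 * a if eta == 0 else np.sin(2 * eta * a) / eta
    psi, dpsi = 1.0, kappa                        # exp(+kappa x) branch at x=-a
    dpsi = dpsi - C_p * psi                       # jump at the attractive delta
    psi_a = psi * c + dpsi * s                    # propagate interior to x=+a
    dpsi_a = -psi * eta2 * s + dpsi * c
    dpsi_a = dpsi_a + C_p * psi_a                 # jump at the repulsive delta
    return float(np.real(dpsi_a + kappa * psi_a)) # decay at +infinity

E_grid = np.linspace(-m + 1e-6, m - 1e-6, 4000)
vals = [mismatch(E) for E in E_grid]
for E1, E2, v1, v2 in zip(E_grid[:-1], E_grid[1:], vals[:-1], vals[1:]):
    if v1 * v2 < 0:
        print("bound state at E =", round(brentq(mismatch, E1, E2), 6))
```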
Conclusions
The states of fermions in the framework of mixed vector-scalar-pseudoscalar square potentials were investigated. The condition for spin symmetry (Δ = 0) makes it possible to decouple the Dirac equation into an effective Schrödinger equation, for the upper component, with a square potential plus attractive and repulsive delta functions situated at the borders; the lower component is expressed in terms of the upper component in a simple way. An oscillatory transmission coefficient and the energies of resonant states were obtained. We showed that the existence of bound-state solutions is conditioned by the intensity of the pseudoscalar potential, which possesses a critical value C_p^critic = √(2mC_Σ) for C_Σ > 0. In the absence of the pseudoscalar potential we obtain the same spectrum as for spinless particles [34], confirming the predictions of Ref. [35].
This work illustrates some general conclusions drawn in previous works about spin and pseudospin symmetries: the solutions for Σ = 0 can be obtained from the Δ = 0 case by using the chiral transformation (changing the signs of m and C_p in the relevant expressions). Finally, it is well known that square potentials, wells and barriers, are of interest in solid-state physics, so our results could be applied to refine one-dimensional potential models for ions in a periodic crystal lattice, such as the Kronig-Penney model [36]. Another possible application of our results is neutron scattering on nuclei, where bound-state information is extracted for many isotopes in a well [37].
"year": 2015,
"sha1": "f955b7049bdb353478fdffff63cc68ed90b524fc",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1507.08318",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e063352d5964d122a2e9367a8524c38ef2ec30c2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
A Naturally Occurring Lgr4 Splice Variant Encodes a Soluble Antagonist Useful for Demonstrating the Gonadal Roles of Lgr4 in Mammals
Leucine-rich repeat containing G protein-coupled receptor 4 (LGR4) promotes the Wnt signaling through interaction with R-spondins or norrin. Using PCR amplification from rat ovarian cDNAs, we identified a naturally occurring Lgr4 splice variant encoding only the ectodomain of Lgr4, which was named Lgr4-ED. Lgr4-ED can be detected as a secreted protein in the extracts from rodent and bovine postnatal gonads, suggesting conservation of Lgr4-ED in mammals. Recombinant Lgr4-ED purified from the conditioned media of transfected 293T cells was found to dose-dependently inhibit the LGR4-mediated Wnt signaling induced by RSPO2 or norrin, suggesting that it is capable of ligand absorption and could have a potential role as an antagonist. Intraperitoneal injection of purified recombinant Lgr4-ED into newborn mice was found to significantly decrease the testicular expression of estrogen receptor alpha and aquaporin 1, which is similar to the phenotype found in Lgr4-null mice. Administration of recombinant Lgr4-ED to superovulated female rats can also decrease the expression of estrogen receptor alpha, aquaporin 1, LH receptor and other key steroidogenic genes as well as bring about the suppression of progesterone production. Thus, these findings suggest that endogenously expressed Lgr4-ED may act as an antagonist molecule and help to fine-tune the R-spondin/norrin-mediated Lgr4-Wnt signaling during gonadal development.
Introduction
Genomic studies of leucine-rich repeat containing G protein-coupled receptors (LGRs) from diverse species have indicated that LGRs can be subdivided into three groups [1,2]. The ligands for the group A and group C LGRs are glycoprotein hormones and relaxin/insulin-like peptides, respectively. Intriguingly, several studies have demonstrated that the group B LGRs are able to interact with R-spondins [3,4], whereas our recent study has shown that they are also able to bind to bursicon-like molecules such as bursicon and norrin [5,6].
The mammalian group B LGRs have recently gained prominence as potential stem cell markers and they seem to play crucial roles in maintaining stem cell functions in diverse tissues. For example, LGR4 is required for maintenance of stem cells in the intestine, mammary gland and prostate [7][8][9].
LGR5 is a marker of stem cells located in the crypts of the gastrointestinal tracts [10,11], the nascent nephrons of the kidney [12], and the hair follicles [13].
LGR6-positive stem cells in the hair follicles have been found to be capable of generating all cell lineages of the skin [14].
In addition to their vital roles in stem cells, studies using mutant animal models have also shown that the group B LGRs are essential during mammalian development. For example, it has been demonstrated that the Lgr4 gene displays a very wide expression, with stronger signals being present in the kidney, adrenal gland, bone/cartilage, gastrointestinal tracts, heart, reproductive tracts and nervous systems [15,16]. As a result of this wide distribution, the phenotypes of Lgr4-null mice are quite complicated. Disruption of the Lgr4 gene in mice on the C57BL/6J × Swiss Webster background led to perinatal lethality and intrauterine growth retardation; these effects were associated with pronounced decreases in the weights of the kidney and liver [16]. In contrast, Lgr4-null mice that have a CD1 background are viable; nevertheless, male Lgr4-null mice are sterile and have a number of major defects affecting their reproductive tracts, including a dilated rete testis and the absence of sperm in the epididymis [17]. Further studies have suggested that Lgr4 plays pivotal roles in regulating the expression of estrogen receptor alpha (Esr1) and aquaporin 1 (Aqp1), remodeling of the basement membrane, and regional differentiation of the male reproductive tracts via epithelial-mesenchymal interactions [17,18].
In addition to the testis, the Lgr4 gene is also abundantly expressed in ovarian follicles, with an even higher expression level in the corpus luteum [15]. However, the expression profiles of Lgr4 in the postnatal gonads, as well as how the Lgr4 signaling affects postnatal ovarian development, have not yet been well characterized. In the present study, we identified a naturally occurring Lgr4 splice variant encoding only the extracellular ectodomain of Lgr4 and named it Lgr4-ED. The recombinant Lgr4-ED protein was then generated to characterize its potential roles in the postnatal testis and ovary.
Ethics statement
All animals were housed under a controlled humidity, temperature, and light regimen in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Yang-Ming University. Animal treatments and sacrifice were approved by the Institutional Animal Care and Use Committee of the National Yang-Ming University (Permit Number: 1021229). All efforts were made to minimize animal suffering.
Animal treatments and progesterone assays

C57BL/6J mice and Sprague-Dawley rats were obtained from the laboratory animal center (National Yang-Ming University, Taipei, Taiwan) and randomly allocated into groups. For testing the Lgr4-ED effects on mouse testes, neonatal male mice were intraperitoneally injected with 10 µg of the purified Lgr4-ED protein every week, and the testes were harvested on day 22. For time-course analyses of Lgr4 expression in the superovulation model, immature female rats (26 days old) were primed with 15 IU pregnant mare serum gonadotropin (PMSG) at 0900-1000 A.M. and received an intraperitoneal injection of 10 IU human chorionic gonadotropin (hCG) 48 h later. For Lgr4-ED injection in the superovulation model, 20 µg of the purified Lgr4-ED protein was co-injected with PMSG intraperitoneally, and the ovaries were harvested 48 h after injection.
For analyzing the changes of progesterone, the harvested ovaries were weighed and homogenized in cold PBS. The protein contents in collected supernatants were measured for normalization using the Micro BCA protein assay kit (Pierce Biotechnology). The amounts of progesterone in the supernatants were measured by ELISA using the specific anti-progesterone antibody (Sigma).
Cloning of the Lgr4 splice variant and generation of the recombinant Lgr4-ED protein
The Lgr4 splice variant was identified during PCR amplification of Lgr4 from mature rat ovarian cDNA by Dr. Masataka Kudo using the primers 5′-TTGGAGAGTCTAACCTTG-3′ and 5′-TTAATAGCACTAAGGTCACAG-3′. The cDNA construct encoding the Lgr4-ED protein was kindly provided by Dr. Aaron Hsueh at Stanford University. To facilitate recombinant protein purification, the construct was designed with a FLAG epitope tag at the N-terminus. The constructed plasmid was purified and then transfected into human 293T cells using LipofectAMINE 2000 (Life Technologies). Transfected cells were selected in Zeocin-containing medium. The selected cells were allowed to reach confluence and then cultured for 72 h in serum-free medium. Conditioned media were collected, filtered, and then subjected to anti-FLAG M1 affinity gel (Sigma) for protein purification. Measurement of the protein content was carried out with the Micro BCA protein assay kit (Pierce Biotechnology). The purity and biochemical characteristics of the purified protein were analyzed by electrophoresis on a 10% SDS-polyacrylamide gel.
Immunoblotting and immunohistochemical analyses
The rabbit anti-Esr1 antibody was purchased from Santa Cruz Biotechnology. The mouse anti-FLAG monoclonal antibody was from Sigma-Aldrich Corp. The rabbit anti-Lgr4 antibodies were from Abcam and Dr. Aaron Hsueh. To extract full-length Lgr4 from transfected 293T cells, the cells were lysed in ice-cold RIPA lysis buffer (50 mM Tris-HCl, pH 7.4, 150 mM NaCl, 1 mM dithiothreitol, 1% NP40, 0.1% SDS) supplemented with protease inhibitor cocktails (Roche). The lysate was pelleted by centrifugation at 16,000 × g for 15 min at 4°C, and the supernatant was mixed with 5X sample buffer and incubated at 37°C for 1 h before being analyzed by SDS-PAGE. The bovine follicular fluid was collected from a local slaughterhouse. Rat ovaries and mouse testes were homogenized in ice-cold PBS. Following centrifugation, protein amounts in the supernatants were quantified with the Micro BCA protein assay kit (Pierce Biotechnology) before being subjected to electrophoresis and immunoblotting assays. For immunohistochemical analyses, the ovaries and testes harvested from treated animals were fixed in Bouin's fixative and embedded. Tissue sections were probed with specific primary antibodies. Substitution of the primary antibodies with rabbit preimmune serum or normal IgG served as the negative control. Staining was visualized using an HRP-conjugated secondary antibody followed by the Nova Red kit (DakoCytomation), or an Alexa Fluor 488-conjugated secondary antibody (Life Technologies) followed by observation under a fluorescent microscope.
Wnt reporter luciferase assays
TOPFLASH luciferase assays were performed by transfecting human LGR4, or LGR4 and human norrin, plasmids as described previously [5]. Briefly, for RSPO2 treatment, 293T cells were seeded into 24-well plates and transfected with human LGR4 (0.05 µg) and pCMV-β-Gal (0.01 µg) plasmids using LipofectAMINE 2000 (Life Technologies). At 24 h after transfection, cells were further treated with or without WNT3A (1 nM) and RSPO2 (3 nM) for another 16 h in serum-free medium. For norrin expression, 293T cells were transfected with LGR4 (0.05 µg), pCMV-β-Gal (0.01 µg) and/or increasing amounts of the norrin plasmid using LipofectAMINE 2000 (Life Technologies). At 24 h after transfection, cells were further cultured in serum-free medium for another 16 h. Luciferase activities were determined using luciferase assay kits (Promega) and normalized to β-galactosidase activities. All experiments were performed at least three times in triplicate.
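The normalization described above (firefly counts divided by β-galactosidase activity, then fold change over the untreated control) can be illustrated with a short sketch; all numbers are hypothetical triplicates, not data from this study:

```python
import numpy as np

luc  = np.array([[1200, 1350, 1100],    # basal TOPFLASH counts
                 [5200, 4800, 5600]])   # WNT3A + RSPO2
bgal = np.array([[0.95, 1.10, 0.90],    # beta-gal activity per well
                 [1.05, 0.98, 1.12]])

normalized = luc / bgal                    # per-well normalization
fold = normalized / normalized[0].mean()   # fold change over basal mean
print("basal:", fold[0].round(2), "WNT3A+RSPO2:", fold[1].round(2))
```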
Data analysis
For comparisons of significant differences, the values were subjected to analysis by Student's t-test. Significance was accepted at P < 0.05 and is indicated by asterisks in the figures unless noted otherwise.
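A minimal sketch of the stated test, using hypothetical triplicate values purely for illustration:

```python
import numpy as np
from scipy import stats

# normalized reporter activities (arbitrary units), hypothetical triplicates
control = np.array([1.00, 1.08, 0.95])
treated = np.array([2.10, 1.95, 2.30])

t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant: {p_value < 0.05}")
```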
Results
Identification of a Lgr4 splice variant encoding only the ectodomain of the Lgr4 protein

During PCR amplification of the Lgr4 fragment from mature rat ovaries, a previously uncharacterized splice form was found (Fig. 1A, left panel). The splice variant was then cloned and sequenced. Genomic structure analysis indicated that this splice form consists of the first 16 exons of Lgr4 (Fig. 1A, right panel). Sequencing data indicated that alternative splicing had resulted in deletion of exon 17 and recreation of a previously unidentified exon/intron junction in exon 18 of the Lgr4 gene, thus leading to a frame-shift and the introduction of an early termination codon before the translation of the seven-transmembrane domains of Lgr4 (Fig. 1B). Consequently, this splice variant encodes the first 500 amino acids of Lgr4, including the signal peptide, the N-flanking cysteine-rich and leucine-rich repeat sequence, 17 leucine-rich repeat motifs and the C-flanking cysteine-rich sequence (Fig. 1A, right panel). The resulting protein contains only the extracellular portion of Lgr4 and thus was named Lgr4 ectodomain, Lgr4-ED. Lgr4-ED was predicted to be a secreted protein due to the lack of the seven-transmembrane domains of the original protein. To confirm this, the cDNA corresponding to full-length rat Lgr4 or Lgr4-ED was constructed into pcDNA3.1 to allow mammalian cell expression. Western blot analyses indicated that the Lgr4 antibody specifically recognizes the full-length Lgr4 protein, which migrated as a band around 130 kDa. The recombinant Lgr4-ED protein was secreted from transfected 293T cells and had a molecular weight around 70 kDa (Fig. 2A, right panel). We further purified the recombinant FLAG-tagged Lgr4-ED protein from the conditioned media using affinity chromatography. Under reducing conditions, purified Lgr4-ED migrated as a band around 70 kDa (Fig. 2A, left panel, lane 5). The protein migrated faster after treatment with N-glycosidase F (Fig. 2A, left panel, lane 6), indicating N-linked glycosylation of the recombinant Lgr4-ED protein.
To further confirm the endogenous translation of Lgr4-ED, protein samples from the gonads of various species, including the follicular fluid collected from bovine ovaries and the supernatants from tissue homogenates of rat ovaries and mouse testes at different ages, were used (Fig. 2B). Western analysis using the anti-Lgr4 antibody showed a specific band around 70 kDa in the bovine follicular fluid. This signal did not overlap with prominent albumin bands as compared to the protein migration positions visualized by Coomassie blue staining, suggesting that the Lgr4 antibody was specific. In homogenates prepared from immature or mature rat ovaries and immature or mature mouse testes, signals of a similar molecular size were also detected. Taken together, these findings suggest that the Lgr4-ED protein is naturally secreted by both types of gonads of various mammals.
Functional tests of Lgr4-ED
We further evaluated the bioactivity of the recombinant Lgr4-ED protein. Lgr4 has been demonstrated to be a receptor for R-spondins that mediates Wnt signal enhancement [3,4]. The combination of WNT3A with RSPO2 led to a significant increase of the Wnt signaling as measured by the TOPFLASH reporter assay. In contrast, exogenous addition of the recombinant Lgr4-ED suppressed this response in a concentration-dependent manner (Fig. 3A). In addition, norrin is also able to stimulate the Wnt signaling mediated by human LGR4 [5], whereas such stimulation can be reversed by co-expression of Lgr4-ED (Fig. 3B). These findings suggest that the recombinant Lgr4-ED protein acts as a functional antagonist capable of neutralizing the Wnt signaling induced by R-spondins and norrin.
Testicular and ovarian expression of Lgr4
To determine the testicular cell types that express Lgr4, the testes from mice at 7 days of age were digested and the cell types were separated by magnetic-activated cell sorting (MACS) based on antibodies against Thy1 antigens expressed in spermatogonial stem cells [19]. As shown in Fig. 4A, the Lgr4 mRNA was mainly detected in the Thy1− somatic cell-enriched fraction but not in the Thy1+ spermatogonial stem cell-enriched germ cell fraction.
To determine the expression of Lgr4 in the ovary, ovaries were collected from superovulated immature rats primed with PMSG followed by hCG for different intervals. Quantitative real-time PCR analyses showed that the Lgr4 mRNA expression was elevated and remained at high levels 24 h after hCG injection (Fig. 4B). Subsequent analyses of the Lgr4 transcript in various ovarian compartments, including granulosa cells, theca shells, cumulus-oocyte complexes and corpora lutea, indicated that Lgr4 is widely expressed in these different ovarian cell types, with corpora lutea showing the highest expression level (Fig. 4C).
In addition, immunohistochemical analyses were also carried out to confirm the distribution of the Lgr4 protein. In mouse testes, the Lgr4 positive cells were mainly located around the periphery of the seminiferous tubules ( Fig. 5A and 5B). Considering the Lgr4 mRNA profile above and the cell morphology shown in the immunohistochemical staining, this data suggests that the Lgr4 protein is probably expressed in peritubular myoid cells but not in spermatogonia or Sertoli cells. In the ovaries harvested from adult rats (8 weeks old), a strong Lgr4 immunoreactivity was detected in corpora lutea (Fig. 5C and 5D), consistent with its mRNA profile.
Lgr4-ED treatment decreases the expression of estrogen receptors and aquaporin 1 in the postnatal testis
It has been shown that Lgr4 knockout leads to swelling and liquid accumulation in mouse testes as a consequence of down-regulation of the expression of estrogen receptor alpha (Esr1) and aquaporin 1 (Aqp1) [17,18]. To test whether Lgr4-ED is a functional antagonist during testicular development in vivo, newborn male mice were intraperitoneally injected with Lgr4-ED on days 1, 8 and 15 after birth, and their testes were harvested on day 22 for analyses. We found that Lgr4-ED administration did not affect the survival, body weights (controls: 8.4 ± 1.3 g; treated: 8.5 ± 1.4 g, n = 6) or testicular weights (Fig. 6A) of the treated mice. However, Lgr4-ED treatment was found to suppress the mRNA expression of estrogen receptors and Aqp1, with Esr1 showing the most significant decrease (Fig. 6B). To confirm this, immunochemical staining against Esr1 was carried out, and it was found that the Esr1 protein signal was significantly decreased in the testes of mice after Lgr4-ED injection (Figs. 6C and 6D). Taken together, our results suggest that Lgr4-ED administration can reproduce some of the phenotypes associated with Lgr4-knockout mice, likely via neutralization of the actions of endogenous Lgr4 ligands by Lgr4-ED.
Lgr4-ED treatment suppresses ovarian development and steroidogenesis
To test the Lgr4-ED effects on ovarian development and steroidogenesis, immature female rats were intraperitoneally injected with PMSG together with or without the recombinant Lgr4-ED protein, and their ovaries were then collected 2 days after injection. Although no significant changes in ovarian weight (control: 0.031 ± 0.0015 g; treated: 0.039 ± 0.0041 g, n = 7) were detected, Lgr4-ED administration did significantly decrease the expression levels of Esr1, Aqp1 and Lhr (Fig. 7A). In addition, Lgr4-ED injection also led to the suppression of PMSG-induced progesterone production (Fig. 7B), and this was accompanied by decreases in the transcript levels of various steroidogenic enzymes, including steroidogenic acute regulatory protein and 3-β-hydroxysteroid dehydrogenase (Fig. 7C).
Discussion
In the testis, undifferentiated spermatogonia are discontinuously scattered along the basement membrane of the seminiferous tubules. Although Lgr4 has been proposed as a stem cell marker in many tissues [7-9], our staining results showed that the Lgr4 signal is mainly located in connected cells that form a continuous sheet surrounding the basement membrane of the seminiferous tubules (Fig. 5A), suggesting that Lgr4 is unlikely to be located in spermatogonia. Real-time PCR quantification of isolated testicular cells further supported this hypothesis (Fig. 4A). Indeed, a previous study also proposed that the Lgr4 transcript is probably expressed in the myoepithelial cells of seminiferous tubules [15]. Using a gene trap approach, Qian et al. further demonstrated that the Lgr4 transcript is selectively located in peritubular myoid cells but not in spermatogonia of postnatal mouse testes [20]; these findings are consistent with our results.
Based on disruption of the Lgr4 gene with a trap vector carrying the β-galactosidase tracing enzyme, Lgr4 in mouse ovaries has been proposed to be expressed in corpora lutea but not in other follicular compartments [15]. Although our mRNA quantification and protein staining in rat ovaries indicated that corpora lutea did express the highest level of Lgr4, we also detected moderate levels of the Lgr4 transcript in other ovarian cells isolated from antral follicles (Fig. 4C). The stage difference between the appearance of Lgr4 mRNA in rat ovaries and the appearance of the translated β-galactosidase protein signal in mouse ovaries may be explained by the fact that different species were used, or by effects of as yet uncharacterized regulatory factors that are induced after luteogenesis to control the initiation of translation of the Lgr4 mRNA. In addition, sequence differences between endogenous Lgr4 and the tracing LacZ-containing cassette may also result in different effects on post-transcriptional or translational regulation.
Our finding of a Lgr4 splice variant encoding only the Lgr4 ectodomain is novel for the group B LGRs. Although it is well known that splicing errors may happen during the post-transcriptional processing of pre-mRNAs derived from complex genes, such erroneous mRNAs, which encode potentially toxic polypeptide fragments, are generally eliminated rapidly in the cytoplasm, leading to undetectable levels of the corresponding proteins [21]. Our findings showing the consistent presence of the Lgr4-ED protein signal in mouse, rat and bovine gonads strongly suggest that this Lgr4 transcript variant is unlikely to be due to a random splicing error (Fig. 2). Therefore, this naturally occurring Lgr4 splice variant seems to be highly conserved across a range of mammalian species, and the translated product may play important, but as yet uncharacterized, roles in modulation of the Lgr4 signaling.
Indeed, alternative splicing of G protein-coupled receptor (GPCR) genes has been demonstrated to be involved in the signal modulation of physiological processes and diseases. For GPCRs with a relatively large N-terminal ectodomain, alternative splicing may generate truncated receptors lacking the transmembrane region, which then serve as dominant-negative antagonists of the full-length receptors. Beyond our study of Lgr4, alternative splice variants encoding only the extracellular region of GPCRs have also been reported for several members of the LGR family. For example, among the group A LGRs, a truncated soluble luteinizing hormone receptor lacking the transmembrane region and a putative soluble fragment of the follicle-stimulating hormone receptor have been isolated from the turkey ovary and the ovine testis, respectively [22,23]. Furthermore, a human thyroid-stimulating hormone receptor mRNA variant encoding only the extracellular ligand-binding domain has also been found and reported to play a potential role in thyroid physiology and/or autoimmune thyroid disease [24]. Among the group C LGRs, an alternative Lgr7 transcript in rodents that encodes a secreted protein containing the low-density lipoprotein class A module has been identified; expression of this truncated fragment significantly decreases the relaxin-induced signaling of Lgr7 [25]. Beyond the LGR family, splice variants containing only the extracellular region of receptors have also been reported for other GPCRs that have a large ectodomain, such as the corticotropin-releasing hormone receptor [26], metabotropic glutamate receptors [27,28] and the gamma-aminobutyric acid B receptor [29]. In the majority of these cases, the truncated proteins seem to act as molecules that are able to bind ligands but unable to trigger signaling, allowing fine-tuning of the full-length receptor signals.
Using reporter assays, we demonstrated that the recombinant Lgr4-ED can indeed dampen the Wnt/β-catenin signaling in vitro (Fig. 3). Interestingly, a balanced Wnt/β-catenin signaling has been demonstrated to be crucial for normal development of the male and female reproductive systems. For example, deficiency in Wnt signaling, including in Wnt4, Wnt5a and Wnt7a, results in severe malformation of the genitals and infertility [30-32], whereas hyperactivation of the Wnt/β-catenin pathway can also lead to germ cell apoptosis and male infertility [33,34]. Thus, endogenously expressed Lgr4-ED may act as a decoy molecule that helps modulate the strength of the Wnt/β-catenin signaling in order to maintain appropriate developmental conditions for the gonads.
Lgr4-null mice show strong dilation of the rete testis and efferent ducts due to defects in liquid reabsorption. These phenotypes are accompanied by down-regulation of steroid receptors, water transporters and ion transporters, including estrogen receptor, androgen receptor, aquaporin 1, aquaporin 9, Na+-K+-ATPase and sodium/hydrogen exchanger 3 [17,35]. Of interest, in addition to its antagonizing effect against the Wnt/β-catenin signaling in vitro (Fig. 3), injection of the recombinant Lgr4-ED into mice also down-regulated the expression of Esr1 and Aqp1 in the testis in vivo (Fig. 6). Although there is still no consensus regarding the testicular expression and localization of Esr1 in different species, and previous studies on Lgr4-null mice indicated that the reduction of Esr1 immunostaining was mainly observed in the epididymis and efferent ducts but not in the testis, several recent reports have clearly demonstrated that both the mRNA and the corresponding protein of Esr1 can be detected in the Sertoli cells of mouse and rat testes [36-38]. In addition, Esr1-knockout male mice are sterile and show atrophic testes with a loss of germ cells in the dilated seminiferous tubules [39], consistent with the phenotypes observed in Lgr4-null male mice [17,20]. Therefore, the testicular effects of Lgr4 on controlling the Esr1 expression might potentially explain the infertility of Lgr4-null mice. In addition, although the expression of Aqp1 has been reported to be abundant in the epididymis, several studies have also demonstrated that it can be detected locally in some specific cells of the testis. For example, the Aqp1 protein signal can be detected in the endothelium of the blood vessels in the testicular interstitium of cats [40]. Similar results have also been reported in monkeys, rats and mice [41,42]. These observations might explain why we could observe the expressional change of Aqp1 in Lgr4-ED-treated mice. Taken together, these findings suggest that Lgr4-ED will be useful as a tool when exploring the possible physiological functions of Lgr4 in various organs. Taking advantage of this possibility, we used Lgr4-ED to study the potential roles of Lgr4 in the ovary. Although Lgr4-null female mice also show a significant reduction in fertility [35], no obvious histological change has ever been reported in the ovary. By using superovulated female rats for the Lgr4-ED injection, we showed that Lgr4-ED had suppressive effects on the PMSG induction of Lhr expression and on progesterone production in the ovary (Fig. 7). However, we also noticed that the changes of steroidogenic gene levels in Lgr4-ED-treated animals are not fully consistent with the dramatic down-regulation of progesterone production. This might be explained as follows. It is known that the levels of steroidogenic genes are elevated immediately in gonadotropin-primed superovulated rodents but then gradually decrease with time. Taking Star as an example, the Star transcript in superovulated rats has been reported to reach its highest level at 6 h after PMSG injection and to return to almost the basal level after two days [43]. However, in order to clearly compare the difference in progesterone levels between the control and treatment groups, we chose to harvest the ovaries at 48 h after treatment to provide enough time for progesterone production and accumulation. This may have caused us to miss the best time point for quantification of these genes.
In addition, down-regulation of Lhr, steroidogenic genes and other as yet uncharacterized candidates in Lgr4-ED-treated animals may also exert synergistic effects on the suppression of progesterone production.
Taken together, our findings suggest that the Lgr4 signaling may help to accelerate the ovarian luteinization process. However, there is a need to consider the possibility that the lack of apparent phenotypes in the Lgr4-null mouse ovaries may be due to signal compensation by other ovary-expressed receptors, such as Lgr5, Lgr6 and frizzled, which share with Lgr4 similar downstream activation of the Wnt/β-catenin signaling. One possibility to explain our data is that administration of the recombinant Lgr4-ED protein may absorb all the potential Wnt activators, such as R-spondins and norrin, thereby magnifying the ovarian effects of Lgr4-ED through the interruption of the Wnt/β-catenin signaling driven by the other group B LGRs and frizzled proteins. This hypothesis suggests a need to characterize the detailed expression profiles of all the group B LGR members and to identify their ligand signatures across the various ovarian compartments; this will greatly help our understanding of their interplay and relationships.
"year": 2014,
"sha1": "0099115ff5d85e2c0afbc757d214cdbe59d89697",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0106804&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0099115ff5d85e2c0afbc757d214cdbe59d89697",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
The amelioration of alcohol-induced liver and intestinal barrier injury by Lactobacillus rhamnosus Gorbach-Goldin (LGG) is dependent on Interleukin 22 (IL-22) expression
ABSTRACT Alcoholic liver disease (ALD) is a common clinical liver injury disease. Lactobacillus rhamnosus Gorbach-Goldin (LGG) has been shown to alleviate alcohol-induced intestinal barrier and liver injury; however, the underlying mechanism of LGG treatment for ALD remains unclear. To clarify this, a chronic plus binge ALD model was constructed using C57BL/6 mice following a chronic alcohol binge feeding protocol. Interleukin 22 (IL-22) levels were determined by quantitative real-time polymerase chain reaction and enzyme-linked immunosorbent assays. The effects of LGG in the model, or of IL-22 knockdown in the LGG-treated model, on liver injury and steatosis, as well as on intestinal barrier function, were assessed by hematoxylin-eosin (HE) staining. Levels of alanine aminotransferase (ALT), triglyceride (TG), and aspartate aminotransferase (AST) in serum were measured with the corresponding kits. Western blot analysis was conducted to detect the protein expression of the intestinal tight junction proteins ZO-1 and Claudin-1. Concretely, LGG elevated the IL-22 level in liver tissues and serum, while inhibiting ALT, TG, and AST levels in alcohol-exposed mice. Moreover, LGG alleviated the liver injury, steatosis, and intestinal barrier injury caused by alcohol, and enhanced ZO-1 and Claudin-1 expression. Furthermore, IL-22 knockdown increased serum ALT, TG, and AST levels, aggravated liver injury, steatosis, and intestinal barrier injury, and downregulated ZO-1 and Claudin-1 levels. Importantly, downregulation of IL-22 reversed the effect of LGG on liver and intestinal barrier injury. In conclusion, LGG protects against chronic alcohol-induced intestinal and liver injury via regulating the intestinal IL-22 signaling pathway.
Introduction
Excessive alcohol consumption can lead to a range of problems and seriously affects public health worldwide [1,2]. One of these is alcoholic liver disease (ALD), the main cause of chronic liver disease globally, which can bring about extensive liver damage, including simple fatty liver, alcoholic steatohepatitis, liver fibrosis, cirrhosis, and even hepatocellular carcinoma [3-5]. ALD also exhibits an extremely complicated pathogenesis, mainly involving the inflammatory immune response to injury, ethanol-mediated liver injury, and changes in the microbiome and intestinal permeability [6,7]. Scientists have long sought a safe and effective way to treat and prevent ALD, and despite remarkable progress, complete control of ALD remains elusive. Fortunately, new pathophysiology-based therapies, such as interleukin 22 and anakinra, have shed light on the treatment of ALD [8]. Previous studies have revealed that IL-22 treatment has antisteatotic, antiapoptotic, antioxidant, proliferative, and antimicrobial effects, suggesting that it may be a potential option for ameliorating ALD [9]. However, the underlying mechanism of its action still lacks in-depth investigation.
Probiotics have good antioxidant activity and the ability to improve intestinal barrier function [10]. Over the past decade, researchers have begun to explore probiotics as a new approach for the prevention and treatment of ALD, especially Lactobacillus rhamnosus Gorbach-Goldin (LGG) [11,12].
LGG is a Gram-positive bacterium that plays an important role in lipid modulation, immunoregulation, and gene expression in diseases, such as ALD, nonalcoholic-liver disease, and inflammatory bowel disease [13]. Wang et al. reported that LGG treatment ameliorates alcohol-induced liver injury, promotes intestinal integrity, and enhances intestinal hypoxia-inducible factor [14]. Moreover, Bruch-Bertani et al. found that LGG exerts a hepatoprotective effect on an alcoholic liver disease model in Zebrafish by regulating gut permeability and inflammasomes [15]. Also, LGG treatment could reduce alcohol-induced hepatic inflammation [16]. Although numerous studies have suggested that supplementation with LGG can effectively ameliorate or prevent alcoholinduced liver injury, the mechanism of the beneficial action has not been well defined.
Here, we hypothesized that LGG may effectively alleviate alcohol-induced liver injury via modulating the IL-22 signaling pathway. A wild-type mouse model of chronic plus binge ALD was established to explore the effect of LGG on the expression level of IL-22, liver injury, and the intestinal barrier. In addition, we sought to determine whether the protective effects of LGG on liver injury and intestinal barrier injury depend on IL-22 expression. Collectively, this study probed the potential molecular mechanism underlying the protective effect of LGG on ALD using animal experiments, providing a potential basis for the treatment of chronic plus binge ALD in clinical practice.
Wild-type mouse models of chronic plus binge ALD and LGG experiment
The present study used 8- to 10-week-old male C57BL/6J mice to construct the chronic plus binge ALD model. All animals were acquired from Beijing Vital River Laboratory Animal Technology Co., Ltd (Beijing, China) and received humane care. The animal experiments were performed in accordance with the protocols approved by the Institutional Animal Care and Use Committee of Nanfang Hospital (No. G202010235).
All mice were housed in an environmentally controlled room (temperature: 23°C ± 1°C, humidity: 55% ± 5%, 12-h light/12-h dark cycle beginning at 07:00) with unrestricted access to water and food. The C57BL/6J mice were subjected to the chronic plus single binge ethanol feeding protocol proposed by the National Institute on Alcohol Abuse and Alcoholism [17]. In brief, the mice were first given the Lieber-DeCarli liquid control diet (F1259SP, Bioserv, Flemington, NJ, USA) for 5 days and then pair-fed with the Lieber-DeCarli liquid control or ethanol diet (F1258SP, Bioserv, Flemington, NJ) for 10 days. On day 11, the ethanol-treated mice received a single gavage of 5 g/kg ethanol (140,029, JiuYi Reagent Co., Ltd., Nanjing, Jiangsu, China), while control-treated mice were given 9 g/kg maltose dextrin (00895, Sigma-Aldrich, St. Louis, MO, USA).
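For orientation, the binge dose can be turned into a gavage volume as in the worked example below; the 31.5% (v/v) ethanol solution strength is an assumption for illustration, since the protocol above specifies only the g/kg dose:

```python
# Convert a 5 g/kg ethanol binge dose into a gavage volume for one mouse.
ETHANOL_DENSITY = 0.789  # g/ml at room temperature

def gavage_volume_ml(body_weight_g, dose_g_per_kg=5.0, vv_fraction=0.315):
    dose_g = dose_g_per_kg * body_weight_g / 1000.0   # grams of pure ethanol
    ethanol_ml = dose_g / ETHANOL_DENSITY             # volume of pure ethanol
    return ethanol_ml / vv_fraction                   # volume of the solution

print(f"{gavage_volume_ml(25.0):.2f} ml for a 25 g mouse")  # about 0.5 ml
```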
For LGG treatment, mice received live LGG (suspended in physiological saline; ATCC 53103, American Type Culture Collection, Manassas, VA, USA) at a dose of 10^9 colony-forming units (CFU)/day for 15 days by gavage [18]. The control animals were fed the same volume of physiological saline (PB180353, Procell, Wuhan, Hubei, China).
On the last day of feeding, 8 h after the gavage, mice were euthanized by continuous exposure to 5% isoflurane (Keyuan Pharma, Jinan, Shandong, China) until 1 min after breathing had stopped, according to the 2013 American Veterinary Medical Association Guidelines for the Euthanasia of Animals. Then, blood, liver, and colon tissues were collected from the mice.
Enzyme-linked immunosorbent assay (ELISA)
IL-22 level was determined with an ELISA kit (PI591, Beyotime, Jiangsu, China) according to the manufacturer's protocol. Briefly, each well of the enzyme-labeled coated plate was loaded with 40 μl of sample analysis buffer, followed by 40 μl of sample and 100 μl of enzyme-labeled reagent in turn. After incubation with biotinylated antibodies, streptavidin-conjugated horseradish peroxidase (HRP) was added to each well and reacted with the HRP substrate solution. The plates were read at a wavelength of 450 nm using a Multiskan FC device (1410101, Thermo Fisher Scientific).
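ELISA readouts like the A450 values above are typically converted to concentrations through a standard curve. The sketch below fits a four-parameter logistic (4PL) curve and back-calculates an unknown; the standard concentrations and absorbances are entirely hypothetical, as the kit's actual standards are not given here:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL model: a = low asymptote, d = high asymptote, c = EC50, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([7.8, 15.6, 31.2, 62.5, 125, 250, 500, 1000])   # pg/ml
a450 = np.array([0.08, 0.15, 0.27, 0.48, 0.80, 1.25, 1.70, 2.05])

popt, _ = curve_fit(four_pl, conc, a450, p0=[0.05, 1.0, 100.0, 2.2], maxfev=10000)

def interpolate(od, a, b, c, d):
    """Invert the 4PL to recover a concentration from an absorbance reading."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print("sample at A450 = 0.60 ->", round(interpolate(0.60, *popt), 1), "pg/ml")
```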
To measure the levels of ALT and AST, the matrix solution was pre-warmed in a 37°C incubator and 20 μl was added to each well of a 96-well plate. Next, 5 μl of serum sample was introduced into the assay well, which was then placed in the 37°C incubator for 30 min. Subsequently, 2,4-dinitrophenylhydrazine solution was used to stop the reaction. After 20 min, sodium hydroxide solution was added to each well for color development, and the absorbance was measured at 505 nm using the Multiskan FC device (1410101, Thermo Fisher Scientific). For hepatic TG analysis, liver samples were homogenized in ice-cold phosphate buffer. The supernatant was obtained by centrifugation (12,000 × g) at 4°C for 10 min. Then, 2.5 μl of sample was mixed with 250 μl of working solution; after incubation for 10 min, the absorbance was measured at 510 nm with the Multiskan FC device (1410101, Thermo Fisher Scientific).
Hematoxylin eosin (HE) staining
The liver and colon tissues were fixed in 4% paraformaldehyde (C104190, Aladdin, Shanghai, China), embedded in paraffin (P100933, Aladdin), and sliced into 5-µm-thick sections. After baking at 37°C for one day, the sections were immersed in xylene (X139941, Aladdin) twice for 10 min, then in 100% ethanol (140,029, JiuYi Reagent Co., Ltd) for 5 min, 80% ethanol for 5 min, and 70% ethanol for 5 min. After rinsing with tap water, the sections were stained with hematoxylin (H292717, Aladdin) for 5 min and then soaked in tap water for 15 min. Afterward, the sections were subjected to 70% ethanol for 5 min and 80% ethanol for 5 min, stained with eosin (E301878, Aladdin) for 1 min, and sealed with neutral resin (N305043, Aladdin). Finally, an inverted microscope (XSP-8CA, Shanghai Optical Instrument Factory, Shanghai, China) was used to observe and photograph the pathological changes.
Statistical analysis
All statistical analyses were conducted using GraphPad Prism 8.0 software (San Diego, CA, USA), and the data are expressed as mean ± standard deviation. One-way analysis of variance followed by a Bonferroni post hoc test was adopted to compare differences among the four groups. A p value less than 0.05 was considered statistically significant.
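A minimal sketch of the stated analysis (one-way ANOVA with Bonferroni-corrected pairwise comparisons), using hypothetical serum ALT values for the four groups:

```python
import numpy as np
from scipy import stats
from itertools import combinations

groups = {  # hypothetical serum ALT values (U/l), four animals per group
    "control":   np.array([28.0, 31.5, 26.8, 30.2]),
    "model":     np.array([95.1, 88.7, 102.3, 91.0]),
    "model+LGG": np.array([55.4, 61.2, 49.8, 58.9]),
    "LGG":       np.array([27.5, 29.9, 25.4, 28.8]),
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_anova:.2e}")

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni correction over all pairwise tests
for g1, g2 in pairs:
    _, p = stats.ttest_ind(groups[g1], groups[g2])
    print(f"{g1} vs {g2}: P = {p:.4f} ({'sig.' if p < alpha else 'n.s.'})")
```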
Results
In this study, we hypothesized that LGG may effectively ameliorate alcohol-induced liver injury through regulating the IL-22 signaling pathway. A wild-type mouse model of chronic plus binge ALD was established to explore the effect of LGG on the expression level of IL-22, liver injury, and the intestinal barrier, and animal experiments were used to examine the underlying molecular mechanism of the protective effect of LGG on ALD. The results indicated that LGG alleviated alcohol-induced liver and intestinal barrier injury by modulating IL-22 expression, providing a potential basis for the treatment of chronic plus binge ALD in clinical practice.
LGG affected the expression of IL-22 in the chronic plus binge ALD model
Initially, we constructed a chronic plus binge ALD model to assess the effect of LGG on the level of IL-22 and liver function. As shown in Figure 1(a, b), qRT-PCR and ELISA results revealed that compared with the control, IL-22 level was notably reduced in the model (p < 0.01), whereas LGG could rescue such reduction (p < 0.01) and dramatically enhanced IL-22 level (p < 0.01, Figure 1 (a,b)). All these data signified that LGG increased the level of IL-22 in the chronic plus binge ALD model.
LGG improved liver injury in the chronic plus binge ALD model
We detected the levels of ALT, TG, and AST to evaluate the effect of LGG on liver function. As shown in Figure 1(c-e), these levels were notably elevated in the chronic plus binge ALD model compared with the control (p < 0.001), while the elevation could be offset by the introduction of LGG (p < 0.05). Additionally, compared with LGG treatment in the model, LGG treatment under basal conditions significantly reduced the levels of ALT and AST (p < 0.05, Figure 1(c-e)).
Moreover, we evaluated the effect of LGG on the histopathological morphology of the liver in the mouse model by HE staining, which revealed that alcohol treatment resulted in liver injury, inflammatory infiltration, steatosis, and increased lipid droplets (Figure 2(a)). LGG treatment reduced this liver injury, steatosis, and lipid droplet accumulation in the model (Figure 2(a)). Collectively, these findings indicate that LGG may play a protective role in alcohol-damaged liver tissues.
LGG alleviated intestinal barrier injury caused by alcohol
The effect of LGG on the colon tissues of alcohol-exposed mice was evaluated by HE staining. Compared with the control, the model mice showed severe intestinal barrier injury, with intestinal villi becoming thinner, shorter, and irregular, which was prominently alleviated after gavage administration of LGG (Figure 2(b)).
Zonula occludens-1 (ZO-1) and Claudin-1 are instrumental in mediating tight junction barrier function and are the main functional regulatory factors of tight junctions [20]. Therefore, we investigated the impact of LGG on the protein levels of ZO-1 and Claudin-1 in the colon using western blot analysis. The data demonstrated that the protein levels of ZO-1 and Claudin-1 were both markedly reduced in the model group (p < 0.001, Figure 2(c,d)), but were significantly restored in the model+LGG group and highest in the LGG group (p < 0.05, Figure 2(c,d)). These findings demonstrate that gavage administration of LGG can alleviate intestinal barrier injury in mice with chronic plus binge ALD.
The amelioration of alcohol-induced liver injury by LGG was dependent on IL-22 expression
Since our previous results showed that LGG regulated IL-22 expression and ameliorated liver injury in the chronic plus binge ALD model, we asked whether the amelioration of alcohol-induced liver injury by LGG was associated with IL-22 expression. As shown in Figure 3(a), the mRNA level of IL-22 in liver tissues was significantly reduced in the shIL-22 group compared with the shNC group (p < 0.001), and was also decreased in the model+LGG+shIL-22 group compared with the model+LGG+shNC group (p < 0.001, Figure 3(a)). Moreover, we observed that, compared with the shNC group, the serum levels of ALT, TG, and AST were elevated in the shIL-22 group (p < 0.05, Figure 3(b-d)). As expected, the levels of these three factors were also increased in the model+LGG+shIL-22 group in comparison with the model+LGG+shNC group (p < 0.05, Figure 3(b-d)).
The amelioration of alcohol-induced intestinal barrier damage by LGG was dependent on IL-22 expression
Given that the amelioration of alcohol-induced intestinal barrier damage by LGG treatment might be associated with IL-22 expression, we examined the effect of IL-22 silencing on intestinal barrier injury and on the protein expression of ZO-1 and Claudin-1 by HE staining and western blot, respectively. As shown in Figure 4(b), intestinal barrier injury was more severe in the shIL-22 group than in the shNC group. Similarly, intestinal barrier dysfunction, with intestinal villi becoming thinner, shorter, and irregular, was exacerbated in the model+LGG+shIL-22 group (Figure 4(b)). Meanwhile, western blot analysis revealed that the expression levels of ZO-1 and Claudin-1 were both downregulated in the shIL-22 group compared with the shNC group (p < 0.01, Figure 4(c,d)) and were also inhibited in the model+LGG+shIL-22 group compared with the model+LGG+shNC group (p < 0.01, Figure 4(c,d)). Taken together, these data show that the amelioration of alcohol-induced intestinal barrier damage by LGG also relied on IL-22 expression.
Discussion
In the present study, we constructed an alcohol-induced ALD model to evaluate the potential mechanism underlying the effect of LGG on the intestinal barrier and liver injury. The results showed that supplementation with LGG significantly ameliorated the liver function damage caused by long-term alcohol feeding. Moreover, the protective effect of LGG was associated with the IL-22 signaling pathway: LGG exerted its hepatoprotective activity by upregulating the expression of IL-22 and of intestinal barrier-related proteins. These data indicate that the IL-22 signaling pathway plays a crucial role in the treatment of ALD with the probiotic LGG.
Our results corroborate previous reports that probiotics ameliorate liver injury caused by chronic alcohol exposure and that the integrity of the intestinal epithelial barrier is a potential mechanism of liver injury [21,22]. Forsyth et al. reported, using an animal model of ALD, that the probiotic LGG effectively reduces alcohol-induced oxidative stress, maintaining intestinal barrier function and thereby improving liver injury [23]. Probiotics also prevented liver damage in acute alcohol-fed models [24]. In this study, a model of chronic plus binge ALD was established. The data revealed that supplementation of model mice with LGG diminished the levels of ALT, TG, and AST. The subsequent HE staining analysis confirmed that LGG reversed alcohol-induced liver damage and steatosis, consistent with previous studies.
Previous studies pointed out that, in a mouse model of ALD, chronic alcohol exposure brings about a loss of intestinal tight junctions and increased intestinal permeability [25,26]. Indeed, it has been shown that LGG effectively protects intestinal barrier function [27,28]. LGG treatment protects intestinal epithelial cells from oxidant stress, possibly by maintaining cytoskeletal integrity [29]. Short-term oral supplementation with certain probiotics is more effective than standard treatment alone in promoting the recovery of intestinal flora and improving alcohol-caused liver injury [30]. As early as 1993, it was confirmed that LGG alleviated the increase in intestinal mucosal permeability induced by milk in suckling mice [31]. In this study, LGG was confirmed by HE staining analysis to mitigate intestinal barrier damage, which supports the results of previous studies. Additionally, the disintegration of tight junction proteins was found to be a contributing factor in the pathogenesis of chronic alcohol-caused intestinal barrier dysfunction [25]. In this study, ALD induction was associated with downregulation of ZO-1 and Claudin-1 in colon tissues; however, LGG effectively inhibited these ALD-related decreases in the expression of intestinal barrier-related proteins. Taken together with previous reports, our results support the conclusion that LGG improves intestinal barrier function and ameliorates alcohol-caused liver injury.
Currently, the mechanism by which LGG protects against intestinal barrier and alcoholic liver injury remains unclear. Probiotics can alter the gut microbiota in such a way that the intestinal lumen is shifted toward an anti-inflammatory environment, thereby reducing the production of proinflammatory bacterial products and improving barrier integrity [23]. This prompted us to ask whether an inflammatory pathway is involved in the hepatoprotective effect of LGG, or in its restoration of intestinal barrier function, in alcohol-exposed mice. In the intestine, IL-22 has been found to support the regeneration of intestinal epithelial barrier functions [32]. Moreover, IL-22 is implicated in several aspects of intestinal epithelial barrier function, including epithelial cell growth and permeability, mucus and antimicrobial protein production, and complement production [33]. In our initial experiments, we observed that IL-22 expression was markedly upregulated after LGG treatment in chronic plus binge ALD mouse models, hinting that IL-22 may be involved in the protective role of LGG in the intestinal barrier. As expected, downregulation of IL-22 aggravated liver dysfunction and intestinal barrier damage, suggesting that LGG prevented chronic alcohol-induced intestinal barrier and liver injury partly by regulating the expression of IL-22, which may be the mechanism underlying its beneficial action.
Conclusion
In summary, this study corroborates that LGG treatment in a chronic plus binge ALD mouse model protects against alcohol-induced intestinal barrier and liver injury. Moreover, this is the first report to reveal that the IL-22 signaling pathway plays a vital role in the protection conferred by LGG in ALD. A limitation that should be acknowledged is that this study was conducted only on liver and colon samples from healthy male mice; further studies should be carried out in female, infant, and aged mouse models. Nevertheless, the present study explores the potential mechanism of the hepatoprotective role of LGG, providing a theoretical basis for the further optimization of probiotics in the prevention and therapy of ALD.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Data availability statement
The analyzed data sets generated during the study are available from the corresponding authors on reasonable request.
"year": 2022,
"sha1": "7e7a7d57deee9fad36fd6233661259e091529b71",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21655979.2022.2070998?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0f341cc61d88665ddb72234c5a520670f6d1875a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Relative entropy in diffusive relaxation
We establish convergence in the diffusive limit from entropy weak solutions of the equations of compressible gas dynamics with friction to the porous media equation away from vacuum. The result is based on a Lyapunov type of functional provided by a calculation of the relative entropy. The relative entropy method is also employed to establish convergence from entropic weak solutions of viscoelasticity with memory to the system of viscoelasticity of the rate-type.
Introduction
The relative entropy method of Dafermos and DiPerna [5,6,10] provides an efficient mathematical tool for studying stability and limiting processes among thermomechanical theories. It is intimately connected to the second law of thermodynamics and has been tested in various situations involving stability and asymptotic behavior of shocks (e.g. [10,3,19]), relaxation or kinetic limits in the hydrodynamic regime [27,1], stability and limiting processes among thermomechanical theories [5,16,17,8].
The method hinges on a direct calculation of the relative entropy between a dissipative solution and an entropy conservative (smooth) solution for the underlying thermomechanical process, which provides a remarkable stability formula [5,6]. In more complicated situations involving the comparison of two solutions with shocks it is supplemented with additional information, e.g. [10,3,19]. The objective of this article is to extend the relative entropy formula to situations where a dissipative solution of a thermomechanical system is directly compared to a dissipative solution of a limiting system. We use as test cases various paradigms of diffusive limits, the most significant perhaps being the validation of the limit from the Euler equations with friction to the porous media equation in the zero-relaxation limit. We consider the system of isentropic gas dynamics with friction in the so-called diffusive scaling, which captures the effective long-time response. In the limit ε → 0 this system approaches the porous media equation

ρ_t − Δ_x p(ρ) = 0.    (1.2)

This problem has served as a paradigm for the theory of diffusive relaxation [24,18,12] and has been justified either by asymptotic-in-time analysis [13,25,21,14,15], or via direct analysis of the relaxation limit, for weak solutions in [23,22,24] or for smooth solutions near equilibrium in [4,20].
In this paper we compare directly a weak entropy solution of (1.1) to a smooth solution of (1.2) using a relative entropy analysis (Proposition 2.1). This, in turn, provides a convergence result to solutions of the porous media equation that stay away from vacuum (Theorems 2.7 and 2.8). The novelty of the present work is the simplicity of the proof following a Lyapunov type of analysis; in addition some new situations are analyzed (for instance, solutions approaching different end-states at ±∞), plus a rate of convergence is obtained. Finally, in the spirit of [2,8], the relative entropy inequality is extended between entropy measure-valued solutions of the Euler equation and the porous media in Section 2.4.
We then test some other cases of diffusive relaxation using the relative entropy method. In Section 3, we consider the p-system with damping in Lagrangian coordinates and establish convergence to a parabolic equation, in the high-friction limit (Theorem 3.3). In Section 4, we consider the limiting process from viscoelasticity of the memory type (4.1) to the system of viscoelasticity of the rate-type (4.2) in the diffusive regime. We provide a relative entropy estimation between the two theories and a convergence result (see Proposition 4.1 and Theorem 4.2) thereby extending for quasilinear systems previous convergence results in the semilinear case from [9,11].
It is remarkable that in all those examples the dissipation of the approximating system can be split in two separate parts: the dissipation of the limit diffusion equation, and a second part that captures the dissipation of the approximating system relative to its diffusive-scale limit.
Isentropic gas dynamics in Eulerian coordinates with damping
We consider the system of isentropic gas dynamics in three space dimensions with a damping term, in the diffusive scaling:

ρ_t + (1/ε) div_x m = 0,
m_t + (1/ε) div_x ( m ⊗ m / ρ ) + (1/ε) ∇_x p(ρ) = − (1/ε²) m,    (2.1)

where t ∈ R, x ∈ R³, the density ρ ≥ 0 and the momentum flux m ∈ R³. The pressure p(ρ) satisfies p′(ρ) > 0, which makes the system hyperbolic. An important particular case is that of the γ-law: p(ρ) = kρ^γ with γ ≥ 1 and k > 0. In (2.1), the variables (x, t) are already scaled in the so-called diffusive scaling. In the diffusive relaxation limit ε → 0, solutions of (2.1) formally converge to the porous media equation

ρ_t − Δ_x p(ρ) = 0.    (2.2)

The goal of this work is to study this limit via the relative entropy method. We recall that (η, q₁, q₂, q₃)(ρ, m) : R⁺ × R³ → R × R³ is an entropy-entropy flux pair for the hyperbolic system (2.1) if it satisfies the differential relations

∇_{(ρ,m)} q_i = ∇_{(ρ,m)} η ∇_{(ρ,m)} f_i,  i = 1, 2, 3,    (2.3)

where f_i stands for the i-th component of the flux in (2.1), δ_ij stands for the Kronecker symbol (entering through the components of ∇f_i), and the summation convention is used. Moreover, the entropy η(ρ, m) is dissipative (for the underlying relaxation process) if ∇_m η(ρ, m) · m ≥ 0. An example of an entropy pair is provided by the mechanical energy

η(ρ, m) = |m|²/(2ρ) + h(ρ)    (2.4)

and the associated flux of mechanical work

q(ρ, m) = ( |m|²/(2ρ²) + h′(ρ) ) m.    (2.5)

Here, h(ρ) = ρe(ρ), where e(ρ) is the internal energy of the gas, connected to the pressure via e′(ρ) = p(ρ)/ρ². Accordingly, h″(ρ) = p′(ρ)/ρ. For the particular case of γ-law gases, h takes the form h(ρ) = (k/(γ−1)) ρ^γ for γ > 1 and h(ρ) = kρ log ρ for γ = 1.
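For the reader's convenience, the relation h″(ρ) = p′(ρ)/ρ and the form of h for γ-law gases follow in two lines from the definitions h(ρ) = ρe(ρ) and e′(ρ) = p(ρ)/ρ²:

```latex
\begin{align*}
h'(\rho) &= e(\rho) + \rho\, e'(\rho) = e(\rho) + \frac{p(\rho)}{\rho},\\
h''(\rho) &= e'(\rho) + \frac{p'(\rho)\rho - p(\rho)}{\rho^{2}}
           = \frac{p(\rho)}{\rho^{2}} + \frac{p'(\rho)}{\rho} - \frac{p(\rho)}{\rho^{2}}
           = \frac{p'(\rho)}{\rho}.
\end{align*}
% For p(\rho)=k\rho^{\gamma}, integrating h''=k\gamma\rho^{\gamma-2} twice gives
% h(\rho)=\tfrac{k}{\gamma-1}\rho^{\gamma}\ (\gamma>1) and h(\rho)=k\rho\log\rho\ (\gamma=1),
% modulo terms linear in \rho, which do not affect the relative entropy.
```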
Smooth solutions of (2.1) satisfy the identity

∂_t η(ρ, m) + (1/ε) div_x q(ρ, m) = − (1/ε²) |m|²/ρ.    (2.7)

In particular, the mechanical energy η(ρ, m) is dissipative for the relaxation process (2.1).
2.1. Hilbert expansion. We start by reviewing the Hilbert expansion associated to the relaxation process from (2.1) to (2.2). We introduce the asymptotic expansions

ρ = ρ₀ + ερ₁ + ε²ρ₂ + … ,  m = m₀ + εm₁ + ε²m₂ + …

into the balance of mass and momentum equations in (2.1) and collect together the terms of the same order. In particular, we recover the equilibrium relation m₀ = 0 for the state variables, Darcy's law m₁ = −∇_x p(ρ₀), and observe that ρ₀ satisfies (2.2). Next, we focus on the asymptotic expansion of the entropy equation (2.7), and in particular on how the hyperbolic entropy (the mechanical energy) captures in the ε → 0 limit the entropy structure of the porous media equation. Introducing the Hilbert expansion into (2.7), using m₀ = 0, and again collecting together terms of the same order, we find, since m₁ = −∇_x p(ρ₀), that the leading order term ρ₀ in the diffusive limit satisfies the energy identity

∂_t h(ρ₀) + div_x ( h′(ρ₀) m₁ ) = − |m₁|² / ρ₀ .    (2.8)

Equation (2.8) captures the entropy dissipation of the porous medium equation (2.2), and h(ρ) is indeed the entropy selected by Otto [26] in his gradient flow interpretation of (2.2).
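The bookkeeping behind these relations can be made explicit; collecting orders in (2.1) (a sketch, using the expansions above; at order ε⁻¹ the convective term div_x(m₀ ⊗ m₀/ρ₀) vanishes since m₀ = 0):

```latex
% Mass:      \rho_t + \tfrac{1}{\varepsilon}\,\mathrm{div}_x m = 0
% Momentum:  m_t + \tfrac{1}{\varepsilon}\,\mathrm{div}_x(m\otimes m/\rho)
%              + \tfrac{1}{\varepsilon}\nabla_x p(\rho) = -\tfrac{1}{\varepsilon^2}\,m
\begin{align*}
O(\varepsilon^{-2})\ \text{(momentum)}:&\quad 0 = -\,m_0
   &&\Rightarrow\ m_0 = 0,\\
O(\varepsilon^{-1})\ \text{(momentum)}:&\quad \nabla_x p(\rho_0) = -\,m_1
   &&\Rightarrow\ m_1 = -\nabla_x p(\rho_0)\ \text{(Darcy)},\\
O(1)\ \text{(mass)}:&\quad \partial_t\rho_0 + \mathrm{div}_x m_1 = 0
   &&\Rightarrow\ \partial_t\rho_0 - \Delta_x p(\rho_0) = 0 .
\end{align*}
```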
2.2. Relative entropy identity. Let (ρ^ε, m^ε) be a weak solution of (2.1) that satisfies the weak form of the entropy inequality

∂_t η(ρ, m) + (1/ε) div_x q(ρ, m) ≤ − (1/ε²) |m|²/ρ.    (2.9)

(We drop the ε-dependence of (ρ^ε, m^ε) except where emphasis makes it necessary.) Let ρ̄ be a smooth solution of the porous media equation (2.2); such a solution will also satisfy the entropy identity (2.8). Our goal is to devise an identity that monitors the distance between ρ^ε and ρ̄.
Such identities have been obtained via the relative entropy method in the context of problems of hyperbolic relaxation [17,27,1]. The relative entropy is defined as the quadratic part of the Taylor series expansion between two solutions (ρ, m) and (ρ̄, m̄); it takes the form

η(ρ, m | ρ̄, m̄) = η(ρ, m) − η(ρ̄, m̄) − ∇_{(ρ,m)} η(ρ̄, m̄) · ( (ρ, m) − (ρ̄, m̄) ),    (2.10)

while the corresponding relative entropy-flux reads

q_i(ρ, m | ρ̄, m̄) = q_i(ρ, m) − q_i(ρ̄, m̄) − ∇_{(ρ,m)} η(ρ̄, m̄) · ( f_i(ρ, m) − f_i(ρ̄, m̄) ),    (2.11)

where i = 1, 2, 3, f_i stands for the (vector of the) flux in (2.1), and I_i is the i-th column of the 3 × 3 identity matrix. Now the question arises of how to select m̄ in (2.10), (2.11). This relates to a significant difference between the hyperbolic relaxation and the diffusive relaxation frameworks: in the existing studies of hyperbolic relaxation limits one compares an energy-dissipative with an energy-conservative solution. The fact that the limiting solution is energy conservative (and smooth) is an important restriction in the derivation of the relative entropy identities available in the hyperbolic relaxation framework (see [27,1]). By contrast, by the nature of the diffusive relaxation framework, the solutions to be compared both have to be energy dissipative. To effect the comparison we select an ε-dependent solution (ρ̄, m̄) that adapts itself to the relaxation process.
A suitable selection of m̄ is proposed by rewriting (2.2) in the form of the conservation of mass equation in (2.1) together with (a rescaled form of) Darcy's law:

ρ̄_t + (1/ε) div_x m̄ = 0,  m̄ = − ε ∇_x p(ρ̄).    (2.13)

The energy identity (2.8) may also be expressed in terms of (ρ̄, m̄). In turn, (2.13) is embedded into the system of Euler equations with relaxation, plus additional terms purported to be higher-order errors. A simple calculation shows that (ρ̄, m̄) satisfies

ρ̄_t + (1/ε) div_x m̄ = 0,
m̄_t + (1/ε) div_x ( m̄ ⊗ m̄ / ρ̄ ) + (1/ε) ∇_x p(ρ̄) = − (1/ε²) m̄ + ē,    (2.14)

where (we use the convention of summation over repeated indices and) ē is given by

ē = m̄_t + (1/ε) div_x ( m̄ ⊗ m̄ / ρ̄ )    (2.15)

and is thus an error term.
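That ē is indeed of higher order can be seen by substituting m̄ = −ε∇_x p(ρ̄) and using ρ̄_t = Δ_x p(ρ̄); a sketch, valid for smooth ρ̄ bounded away from vacuum:

```latex
% Both contributions to \bar e are O(\varepsilon):
\partial_t \bar m = -\varepsilon\,\nabla_x\big(p'(\bar\rho)\,\bar\rho_t\big)
                  = -\varepsilon\,\nabla_x\big(p'(\bar\rho)\,\Delta_x p(\bar\rho)\big)
                  = O(\varepsilon),
\qquad
\frac{1}{\varepsilon}\,\mathrm{div}_x\!\left(\frac{\bar m\otimes\bar m}{\bar\rho}\right)
  = \varepsilon\,\mathrm{div}_x\!\left(\frac{\nabla_x p(\bar\rho)\otimes\nabla_x p(\bar\rho)}{\bar\rho}\right)
  = O(\varepsilon).
```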
The main result of this section is:

Proposition 2.1. Let (ρ, m) be a weak entropy solution of (2.1) satisfying (2.9), and let (ρ̄, m̄) be a smooth solution of (2.13). Then the relative entropy inequality (2.16) holds, with e(ρ̄, m̄) defined in (2.15).
Remark 2.2. The following remarks are in order concerning the terms appearing on the right of (2.16). (a) The coefficient of the quadratic term Q depends only on (ρ̄, m̄); it is given explicitly in (2.17). (c) The term R(ρ, m | ρ̄, m̄) captures the dissipation of the relaxation system (2.1) relative to its diffusive-scale limit (2.2). It turns out to be the quadratic part of the dissipative relaxation term |m|²/ρ with respect to (ρ̄, m̄); indeed, computing the Hessian of this term shows that its eigenvalues are nonnegative, so that R ≥ 0.

Proof of Proposition 2.1. By hypothesis, (ρ, m) satisfies the weak form of the entropy inequality (2.9), and (ρ̄, m̄) satisfies the energy identity. From (2.1) and (2.14) we obtain equations for the differences of the two solutions, and we use the smoothness of (ρ̄, m̄) and (2.14) to compute the evolution of the relative entropy (2.10), arriving at (2.21). To obtain (2.21) we used the identities resulting from the entropy consistency relations, with Q as in (2.17).
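The sign of R can be checked directly. In one space dimension, for simplicity (the 3-d case is analogous), the Hessian of the dissipation D(ρ, m) = m²/ρ reads:

```latex
% D(\rho,m)=m^{2}/\rho is jointly convex for \rho>0:
\nabla^{2} D(\rho,m)
= \begin{pmatrix} \dfrac{2m^{2}}{\rho^{3}} & -\dfrac{2m}{\rho^{2}}\\[6pt]
                  -\dfrac{2m}{\rho^{2}}   & \dfrac{2}{\rho} \end{pmatrix},
\qquad
\det \nabla^{2} D = \frac{4m^{2}}{\rho^{4}} - \frac{4m^{2}}{\rho^{4}} = 0,
\quad
\operatorname{tr} \nabla^{2} D = \frac{2(m^{2}+\rho^{2})}{\rho^{3}} > 0 .
% Eigenvalues: 0 and 2(m^{2}+\rho^{2})/\rho^{3} \ge 0, so the quadratic Taylor
% remainder R(\rho,m\,|\,\bar\rho,\bar m) of D about (\bar\rho,\bar m) is nonnegative.
```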
We conclude this section with two lemmas. The first establishes (under additional hypotheses on p) a bound of the quadratic term Q in terms of the relative entropy (2.10).
The second lemma indicates a relation between the "metric" induced by the relative entropy (2.10) and more traditional norms.
Lemma 2.3. Suppose p(ρ) = kρ^γ for some constant k > 0 and γ > 1. If ρ̄ ∈ K = [δ, M] with δ > 0 and M < +∞, then there exist positive constants R₀ (depending on K) and C₁, C₂ (depending on K and R₀) such that the quadratic term Q is bounded in terms of the relative entropy (2.10).

Proof. Since ρ̄ ∈ K, there exist positive constants A and B bounding the relevant quantities on a neighborhood of K; in view of (2.24), there exists R₀ depending on K such that the stated bound holds.

Note also that if p(ρ) = kρ^γ with k > 0, γ > 1, then h(ρ) defined through h″(ρ) = p′(ρ)/ρ verifies hypothesis (2.24), and the results of Lemma 2.4 apply in that case.
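The mechanism behind these comparisons is the Taylor expansion of h with integral remainder; schematically, writing h(ρ | ρ̄) for the relative form of h (a sketch):

```latex
h(\rho\,|\,\bar\rho) := h(\rho) - h(\bar\rho) - h'(\bar\rho)(\rho-\bar\rho)
= \left(\int_0^1 (1-s)\, h''\!\big(\bar\rho + s(\rho-\bar\rho)\big)\,ds\right)
  (\rho-\bar\rho)^{2} .
% For \bar\rho\in K and |\rho-\bar\rho|\le R_0 the integrand is bounded above and
% below by positive constants, so h(\rho|\bar\rho)\sim|\rho-\bar\rho|^{2} there,
% while for large \rho the \gamma-growth of h'' makes h(\rho|\bar\rho) grow like \rho^{\gamma}.
```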
2.3. Convergence in the diffusive relaxation limit. Proposition 2.1 is used in order to prove convergence from the Euler equations with friction, in the diffusive limit, towards the porous media equation. We carry out the analysis in two frameworks: • multi-d periodic solutions; • 1-d solutions on the real line with (possibly distinct) constant states ρ± at ±∞. In both cases, the main hypothesis is that ρ̄ is a smooth solution of (2.2) that stays away from vacuum.
Remark 2.6. It is worth observing that other possible frameworks can be analyzed with these techniques; we restrict ourselves to the aforementioned cases to avoid further technicalities. For instance, with small modifications in the arguments below, we can consider multi-d solutions (ρ̄, m̄) such that ρ̄ → ρ* > 0 as |x| → +∞, with m̄ = −ε∇p(ρ̄) and ρ ≥ 0.

2.3.1. Multidimensional periodic solutions. In the periodic case, we work within the following framework, collectively referred to as (H₁):

(i) (ρ, m) : (0, T) × T³ → R⁴ is a (periodic) dissipative weak solution of (2.1) with ρ ≥ 0, satisfying the weak form of (2.1) and the integrated form of the entropy inequality (2.9) over [0, +∞) × T³, tested against a nonnegative Lipschitz function θ(t) compactly supported in [0, T); this integrated inequality is referred to as (2.25). The family (ρ^ε, m^ε) is assumed to satisfy uniform energy bounds, referred to as (2.26), which are natural within the given framework and follow from corresponding uniform bounds on the initial data.

(ii) ρ̄ is a smooth solution of (2.2) that satisfies ρ̄ ≥ c > 0; m̄ is defined via m̄ = −ε∇_x p(ρ̄).

The relative entropy functional

φ(t) = ∫_{T³} η(ρ, m | ρ̄, m̄)(x, t) dx    (2.27)

will be used as a measure to control the distance between the two solutions. We prove:
Theorem 2.7. Let T > 0 be fixed and assume p(ρ) satisfies (A) and (B). Under hypothesis (H₁), the stability estimate (2.29) holds; in particular, φ(t) → 0 uniformly on [0, T] as ε → 0 whenever φ(0) → 0.

Proof. We proceed to establish the integrated version of (2.16) under the regularity framework (H₁). To this end, we introduce into (2.25) the choice of test function θ(τ) defined in (2.30), a Lipschitz approximation (with parameter κ) of the characteristic function of [0, t], and take the limit κ ↓ 0. Finally, to justify the calculations leading to (2.21), we start from the weak form of (2.20), tested against Lipschitz functions φ, ψ compactly supported in [0, T) × T³, with ψ vector valued. Using the test functions built from θ(τ) as in (2.30) and ω(x) = 1, and again taking κ ↓ 0, we obtain the integrated identity (2.33), with J as in (2.21). Combining the above inequalities leads to the integrated relative entropy inequality (2.34) and, by (2.15) and (2.26), to the error estimate (2.35), where the constant C₂ depends on K₁, T and on ρ̄ through norms of its space-time derivatives up to third order. Introducing these estimates into (2.34) and applying Gronwall's inequality, (2.29) follows.
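Schematically, the Gronwall step has the following form, where φ(t) is the functional (2.27) and E(ε) collects the accumulated error coming from ē (E(ε) → 0 as ε → 0; the display is a sketch, not the sharp rate):

```latex
\varphi(t)\ \le\ \varphi(0) + E(\varepsilon) + C\!\int_0^t \varphi(\tau)\,d\tau
\quad\Longrightarrow\quad
\varphi(t)\ \le\ \big(\varphi(0) + E(\varepsilon)\big)\,e^{C t},
\qquad t\in[0,T].
% Hence \varphi(t)\to 0 uniformly on [0,T] provided \varphi(0)\to 0 as \varepsilon\to 0.
```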
2.3.2. The Cauchy problem on the real line. Next, we consider the Cauchy problem in one space dimension for

ρ_t + (1/ε) m_x = 0,  m_t + (1/ε) ( m²/ρ )_x + (1/ε) p(ρ)_x = − (1/ε²) m.    (2.36)

To avoid unnecessary technicalities with the behavior as |x| → ∞, we assume the initial data (ρ₀, m₀) take constant values (ρ±, 0) outside a compact set [−R₀, R₀], for some ρ± > 0. By the finite speed of propagation property, any solution (ρ, m) will assume the same values outside the cones x < −R₀ − kt and x > R₀ + kt, respectively, with k calculated in terms of the maximum wave speed on the range of the data. Let ρ̄ > 0 be a smooth solution of

ρ̄_t − p(ρ̄)_xx = 0    (2.37)

with initial data ρ̄₀ taking constant values outside some compact set [−R₀, R₀], with ρ± > 0 as above. By standard theory for the porous media equation (see [28]), the solution of (2.37) satisfies ρ̄(x, t) ≥ c > 0 and ρ̄(x, t) → ρ± as x → ±∞ with sufficiently fast (in fact exponential) decay. Defining m̄ = −εp(ρ̄)_x, we obtain m̄ → 0 as x → ±∞. By modifying the entropy pair (2.4)-(2.5) (using a trivial linear pair), we define a pair (η̃, q̃) that is still an entropy pair and vanishes at the end states (ρ±, 0). We next summarize the framework (H₂) for the relaxation limit:

(i) (ρ, m) : (0, T) × R → R² with ρ ≥ 0 is a dissipative weak solution of (2.36); that is, it satisfies the weak form of (2.36) and the integrated form of the entropy inequality, with θ(t) a nonnegative Lipschitz test function compactly supported in [0, +∞). The family (ρ^ε, m^ε) is assumed to satisfy the uniform bounds (2.38), with K₁, K₂ independent of ε. Of course, this dictates analogous uniform bounds on the energy norm of the initial data (ρ^ε₀, m^ε₀).

(ii) ρ̄ is a smooth (C³) solution of (2.37) that satisfies ρ̄ ≥ c > 0; m̄ is defined via m̄ = −ε∇p(ρ̄).
We now denote by φ(t) the relative entropy functional built from the modified pair,

φ(t) = ∫_R η̃(ρ, m | ρ̄, m̄)(x, t) dx .    (2.39)

This will replace (2.27) as a yardstick for measuring the distance between solutions in the one-dimensional Cauchy problem. Then we have:

Theorem 2.8. Let T > 0 be fixed and assume p(ρ) satisfies (A) and (B). Under hypothesis (H₂), the stability estimate analogous to (2.29) holds for φ in (2.39).

Proof. Proceeding along the lines of the proof of Theorem 2.7, one derives the analog of (2.34) for φ in (2.39). There is, however, a difference in the derivation as it applies to the Cauchy problem: the equations (2.31) and (2.32) hold for test functions compactly supported in [0, T) × R. Thus we introduce the test functions built from θ(τ) as defined in (2.30) and a cut-off in x into (2.31) and (2.32). Sending R → ∞, using the asymptotic properties in x of (ρ, m) and (ρ̄, m̄), and subsequently sending κ ↓ 0, we obtain the analog of (2.33) and, through that, the analog of (2.34). A second difference lies in replacing (2.35) by an estimate in which, by (2.15), the constant C depends on T, on K₁ in (2.38), and on ρ̄ through the L∞ norms of its space-time derivatives up to third order.
Again using Gronwall's inequality, we deduce the desired stability estimate, which completes the proof.
2.4. Relative entropy for entropic measure-valued solutions. A variant of the relative entropy identity can be derived for comparing entropic measure-valued solutions of (2.1) with smooth solutions of (2.2). Such calculations are in the spirit of the recent works [2,8], the difference here being that two dissipative systems are compared. Let ν = (ν_{x,t}), (x,t) ∈ Q_T, be a parametrized family of probability measures (Young measures) acting on continuous functions, such that the integrals (when defined) are measurable in (x, t) ∈ Q_T. A measure-valued solution of (2.1) consists of a Young measure (ν_{x,t}) with averages

⟨ν_{x,t}, λ_ρ⟩ = ρ ,  ⟨ν_{x,t}, λ_m⟩ = m ,    (2.40)

that satisfies, in the sense of distributions, the measure-valued version (2.41) of (2.1). The Young measure ν = (ν_{x,t}) is called an entropy measure-valued solution if it also satisfies, in the sense of distributions, the averaged version of the entropy inequality, for η − q as in (2.4)-(2.5).
Proof. We use (2.10) to define the averaged relative entropy. The inequality (2.42) is then built by using (2.41), (2.19) and the averaged version of (2.20), following verbatim the steps and calculations in the proof of Proposition 2.1.
The p-system with damping
The p-system with damping in one space dimension is the system of conservation laws

u_t − (1/ε) v_x = 0,  v_t − (1/ε) τ(u)_x = − (1/ε²) v,    (3.1)

where τ satisfies τ′(u) > 0 to guarantee strict hyperbolicity. The system (3.1) is a model either for elasticity with friction or for isentropic gas dynamics in Lagrangian coordinates (denoted by (x, t)). Then u stands for the strain (or the specific volume for gases), v for the velocity, and τ for the stress.
In the high friction limit ε → 0, solutions of (3.1) converge towards a solution of the parabolic equation (see [23])

u_t − τ(u)_xx = 0.    (3.2)

We will indicate in this section a simple proof of that convergence using the relative entropy identity. For concreteness, we interpret (3.1) as a model for shear motions, so that u, v take values in R. We place the hypothesis that τ : R → R satisfies τ′(u) > 0 together with growth assumptions involving an exponent p ≥ 1.
3.1. Preliminaries. The approach uses the mechanical energy

E(u, v) = ½v² + W(u),  W(u) = ∫₀^u τ(s) ds,

where W(u) is the stored energy. The associated flux is

F(u, v) = − (1/ε) τ(u) v,

and together they satisfy the entropy inequality

∂_t E(u, v) + ∂_x F(u, v) ≤ − (1/ε²) v²,    (3.3)

indicating the dissipation of the mechanical energy. The minimum of the mechanical energy E(u, v) on the "equilibrium manifold" of the relaxation process M = {(u, v) : v = 0} is achieved and is given by E(u, 0) = W(u). Moreover, solutions of (3.2) satisfy the energy estimate

∂_t W(u) − ∂_x ( τ(u) τ(u)_x ) = − ( τ(u)_x )²,    (3.4)

or, equivalently, its spatially integrated form. Relation (3.4) captures the equilibrium version of (3.3), as can be seen by applying the Hilbert expansion to the relaxation system (3.1). Indeed, introducing the Hilbert expansion into (3.1) and collecting the terms of similar orders, we recover the equilibrium relation v₀ = 0, Darcy's law v₁ = τ(u₀)_x, and the diffusion equation (3.2) satisfied by u₀ at equilibrium. If the same expansion is introduced into (3.3), we obtain (3.4) at leading order.
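As a check on (3.4): for smooth solutions of (3.2), using W′(u) = τ(u), a one-line computation gives:

```latex
\partial_t W(u) = \tau(u)\,u_t = \tau(u)\,\tau(u)_{xx}
= \partial_x\big(\tau(u)\,\tau(u)_x\big) - \big(\tau(u)_x\big)^{2},
% hence \tfrac{d}{dt}\int W(u)\,dx = -\int \big(\tau(u)_x\big)^{2}\,dx \le 0,
% which is the dissipation structure captured by (3.4).
```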
3.2. Relative entropy estimate and study of the relaxation limit. To analyze the relaxation process, we consider the quadratic part of E(u, v) with respect to the "algebraic-differential equilibrium" (ū, v̄), where ū = u₀ and v̄ = εv₁ = ετ(ū)_x. Namely,

E(u, v | ū, v̄) = ½ (v − v̄)² + W(u) − W(ū) − τ(ū)(u − ū),

with the corresponding relative flux built analogously from F. As in the previous section, to simplify the calculations, we rewrite the equilibrium equation (3.2) as follows:

ū_t − (1/ε) v̄_x = 0,  v̄ = ετ(ū)_x.    (3.5)

In this way, we are able to treat the term v̄_t = ετ(ū)_xt as an error of order O(ε). A direct computation, along the lines of Proposition 2.1, gives:

Proposition 3.1. For any weak entropy solution (u, v) of (3.1) and any smooth solution (ū, v̄) of (3.5), the relative entropy inequality (3.6) holds.

The terms on the right-hand side of (3.6) are analogous to the terms in (2.16) of Proposition 2.1 for the Eulerian case: the first term is dissipative and is due to the damping of the relaxation system relative to its diffusion limit, the second is quadratic in the flux, and the last term is a linear error term. The quadratic term is estimated with the help of the following lemma from [8].
The convergence result, Theorem 3.3, then follows.

Proof. The proof proceeds along the lines of Theorems 2.7 and 2.8; here we shall just sketch it. We integrate (3.6) over R × [0, t], t < T. The right-hand side of (3.6) is estimated using Lemma 3.2 and Young's inequality, and Gronwall's inequality concludes the argument.

Viscoelasticity of the memory type

We consider the system of viscoelasticity of the memory type (4.1). The system is scaled appropriately so that it relaxes, as ε → 0, to the equations of viscoelasticity of the rate type (4.2). In the latter system, the total stress T = σ(u) + μv_x consists of an elastic part and a Newtonian viscous stress. We refer to [9,11] for studies of a corresponding semilinear relaxation framework, using energy bounds. Here, we focus on the quasilinear level and pursue a relative entropy analysis to explore the relation between the two systems. The mechanical energy for (4.1) is denoted by E(u, v, z). Weak solutions of (4.1) are required to satisfy the entropy inequality (4.3), where E(u, v, 0) = Σ(u) + ½v² is the equilibrium energy for E(u, v, z), and (4.4) is the corresponding equilibrium energy identity. Note that (4.4) is the leading-order (with respect to the relaxation parameter) asymptotic development of the energy dissipation inequality (4.3). This may be seen, as in the previous sections, by expanding (4.3) in terms of the Hilbert expansion; we omit the details here.
4.1. Relative entropy estimate and study of the relaxation limit. Following the general procedure outlined in Section 2.2, we recast the equilibrium system (4.2) and the corresponding stress-strain response in the variables (ū, v̄, z̄), with z̄ = εμv̄_x, where we shall treat the term z̄_t as an O(ε) error:

z̄_t = εμ v̄_xt = εμ ( σ(ū)_x + μ v̄_xx )_x = O(ε).
Theorem 4.2. If σ verifies (a₁), (a₂), then the convergence of (4.1) to (4.2) holds in the diffusive regime. The proof employs the relative entropy inequality (4.6) of Proposition 4.1 and proceeds following Theorems 3.3 and 2.7; the details are omitted here.
"year": 2012,
"sha1": "e84ba63ca8f8c8684f64d9ce9e6744f5a0b8af0f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1209.2843",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e84ba63ca8f8c8684f64d9ce9e6744f5a0b8af0f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
Exploring the Interplay Between Oral Diseases, Microbiome, and Chronic Diseases Driven by Metabolic Dysfunction in Childhood
Oral childhood diseases, such as caries and gingivitis, have much more than a local impact on the dentition and tooth-surrounding tissues and can affect systemic conditions. While the mouth is frequently exposed to microbial stressors that can contribute to an inflammatory state in the entire body, chronic disorders can also interfere with oral health. Sharing common risk factors, a dynamic interplay can arise between 1. dental caries, gingivitis, and type I diabetes mellitus, 2. early childhood caries and obesity, and 3. caries and cardiovascular diseases. Considering that there are ~2.2 billion children worldwide and that childhood provides unique opportunities for interventions targeting future health promotion, this review aimed to explore the relationship between the oral microbiome and oral and chronic diseases driven by metabolic dysfunction in childhood.
INTRODUCTION
The mouth is a part of the human body and cannot be considered independent. The oral cavity harbors a diverse microbiome and the second largest number of microorganisms after the gut (1), with ∼500-700 species (2). There are many distinct niches in the oral cavity that characterize a complex habitat providing shedding (soft tissues/mucosa) and non-shedding (teeth) surfaces for microbial colonization (1). The dysbiotic state of the oral microbiome triggers the most common biofilm-mediated oral diseases in children: caries and gingivitis (2).
Dental caries affects more than 530 million children worldwide (3) and is characterized by tooth demineralization due to the action of organic acids produced by bacterial fermentation of dietary substrates (4,5), while gingivitis is characterized by bleeding and swelling due to an initial inflammatory process of the gums, which can progress to the destruction of tooth-supporting tissues (periodontitis) (6). Both diseases can culminate in tooth loss, impairing mastication, phonetics, respiration, swallowing, and even quality of life.
Advances in the knowledge of how host-associated microbial communities promote or protect against pathogenic microbes and how microorganisms contribute to inflammatory diseases are extremely important. In light of this, studies targeting the oral microbiota in health and disease will provide valuable information on the functional and metabolic changes in diverse pathological states, as well as the identification of molecular signatures, which could lead to assertive therapies considering precision medicine (1). Interestingly, oral samples are easy to collect, and, therefore, studies in this regard have been increasing in the past few years. Progress in the field of molecular biology has led to culture-independent techniques, which have revealed many uncultivable microorganisms that better represent the oral microbiota and its complexities.
Systemic diseases such as obesity, cardiovascular problems, and type I diabetes mellitus (T1DM) have been shown to be influenced by dental plaque-associated oral diseases. It should be noted that oral bacteria are frequently swallowed along with saliva and with solid and liquid foods during digestion, reaching the stomach and gut (7). Moreover, studies have demonstrated that gut microbial communities are associated with obesity (8)(9)(10)(11) and with T1DM through the immune system (12). Immunological changes in the gut can be reflected in the pancreas, where insulin is produced in response to increasing glucose levels in the bloodstream (12). In addition, as tooth nourishment is derived from the pulp and from the blood vessels of the surrounding tissues, oral bacteria can also spread to many organs, such as the heart, via the bloodstream (13).
In light of the above, oral diseases have much more than a local impact on the dentition and tooth-adjacent tissues, interacting with systemic conditions; however, chronic disorders can also interfere with oral health. A dynamic interplay can arise between 1. dental caries, gingivitis, and T1DM; 2. early childhood caries and obesity; and 3. caries and cardiovascular diseases, as they share common risk factors (Figure 1). Thus, the present review investigates the relationship between the oral microbiome and oral and chronic diseases driven by metabolic dysfunction in childhood.
T1DM
Diabetes mellitus is a group of chronic metabolic diseases characterized by elevated levels of blood glucose as a result of defects in insulin production, insulin action, or both (14). The most common type of diabetes mellitus in children and adolescents is type 1, or insulin-dependent diabetes (juvenile or childhood-onset diabetes). T1DM is caused by genetically predisposed autoimmune destruction of β-cells in the pancreas, in which all or a subset of pancreatic islets lack insulin-secreting β-cells, leading to hyperglycemia and a decrease in insulin production (15). The production of multiple islet autoantibodies can be precipitated by several environmental factors, including enterovirus infections, nutritional factors (vitamin D deficiency, excessive consumption of cow milk proteins and nitrates), and excessive amounts of glucagon, epinephrine, growth hormones, glucocorticoids, and thiazides, among others (16)(17)(18). According to the WHO, there are large differences in the incidence and prevalence of T1DM, ranging from over 60 to under 0.5 cases annually per 100,000 children aged under 15 years (19). The clinical symptoms of T1DM include polydipsia, polyphagia, polyuria, weight loss, blurred vision, difficulty concentrating, hypotension, abdominal pain, and dehydration, among others. Laboratory findings are hyperglycemia, glycosuria, and ketonuria (20).
Caries and T1DM
The oral cavity comprises several ecosystems, such as the teeth, gingival tissues, tongue, mucosa, palate, and tonsils, which harbor diverse bacteria, fungi, and viruses that coexist in symbiosis to maintain a healthy state. When a disturbance in the diversity and proportions of species or taxa within the microbiota occurs (dysbiosis), disease-promoting microorganisms proliferate, causing pathologies such as dental caries, gingivitis, and periodontitis (21). The microbiota of the oral cavity can also play a role in many systemic diseases, such as diabetes, cardiovascular diseases, and obesity (22).
Dental caries is a biofilm-mediated, diet-modulated, multifactorial, non-communicable, dynamic disease resulting in enamel demineralization, determined by biological, behavioral, psychosocial, and environmental factors (23). Although longitudinal studies are still needed, recent meta-analyses have found that T1DM is associated with a high risk of dental caries (24,25). The prevalence of dental caries among 538 children and adolescents with T1DM from 10 different studies worldwide was 67%. The prevalence was highest in South America (84%) and lowest in patients with diabetes having good metabolic control (47%) (24). In another meta-analysis, T1DM patients had significantly higher levels of dental caries in permanent teeth, but not in deciduous teeth, than the non-diabetic group. However, no significant differences were found between patients with well-controlled and poorly controlled T1DM (25). Some studies have found correlations of metabolic control and diabetes duration with dental caries stages (26)(27)(28)(29). The divergent findings described above are probably related to the cut-offs of HbA1c as well as to the age strata used in the studies (25). Groups of children with T1DM and HbA1c > 10% exhibited more caries lesions and bleeding gums than the other groups (28).
Different species of bacteria, such as Streptococcus, Veillonella, Actinomyces, Granulicatella, Leptotrichia, Thiomonas, Bifidobacterium, and Prevotella, have been associated with the development of dental caries in children (30,31). The majority of studies on T1DM patients were conducted using laboratory culture techniques or polymerase chain reaction (PCR) analysis (22,(32)(33)(34)(35). Generally, patients with well-controlled diabetes have fewer decayed surfaces and lower counts of Streptococcus mutans, lactobacilli, and yeast than those with poorly controlled diabetes (33)(34)(35). Samples from the floor of the oral cavity and the dorsum of the tongue were collected from 50 T1DM children aged 10-18 years, assigned to two groups: well controlled and poorly controlled. Twenty-five children were used as healthy controls. Collected samples were analyzed for total bacteria and different species of Streptococcus, Enterococcus, Staphylococcus, Candida, and anaerobic bacteria. The authors found an increased amount of Streptococcus mitis in T1DM children compared with healthy children. A significantly higher number of different strains was isolated from the diabetic groups, mainly in poorly controlled diabetes (22). Another study revealed significantly higher levels of dental plaque and higher counts of S. mutans in T1DM children with poor glycemic control than in the healthy control group. Candida albicans levels were not statistically different among the groups, but those with poor glycemic control showed an increased frequency of detection (32). Some risk factors inherent to patients with diabetes could potentiate the development or progression of tooth decay. Of interest, diabetic children consume daily meals more frequently, which favors, in the saliva: more frequent episodes of low pH, a lower concentration of bicarbonate, reduced unstimulated and stimulated secretion flow rates leading to xerostomia, increased glucose levels, lower levels of antimicrobial proteins such as lactoferrin and lysozyme, and bacterial proliferation (36)(37)(38).

FIGURE 1 | Interplay between oral and systemic diseases. The oral cavity, particularly when caries and gingivitis/periodontitis are present, could act as a microbial reservoir, interplaying with cardiovascular disease (via the bloodstream), obesity (via the digestive system), and type I diabetes (via the immune system).
Given the importance of the disturbances mentioned above, further scientific evidence is necessary to elucidate the relationship between the development of dental caries lesions in children with diabetes considering the associated factors.
Plaque-Induced Gingivitis and T1DM
Commonly, there is a symbiotic relationship between the host and the oral microbiome that maintains homeostasis, and a dysbiosis between the dental biofilm and the host's immune-inflammatory response may initiate gingivitis (39). In addition, poor nutrition can cause increased inflammation (40), and biofilm can accumulate rapidly on inflamed gingiva. The clinical signs (redness and edema) and symptoms of inflammation confined to the gingiva are reversible when the biofilm is disrupted or removed (39,41). However, if gingivitis is not controlled, it can progress, at older ages, to periodontal disease involving the periodontal ligament, cementum, and alveolar bone.
The primary parameter to evaluate the presence of gingivitis is bleeding on probing (BOP) (41,42). A patient with an intact periodontium is diagnosed with gingivitis when the BOP score is ≥10%. Localized gingivitis involves a BOP score of 10-30%, whereas a score of >30% is classified as generalized gingivitis (41). When only a few sites are affected by mild inflammation, the condition is referred to as incipient gingivitis (39,41).
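These extent thresholds lend themselves to a simple rule. The following is a minimal sketch in Python (the function name and structure are illustrative, not a clinical tool):

```python
def classify_gingivitis(bop_percent: float) -> str:
    """Classify gingivitis extent from a bleeding-on-probing (BOP) score,
    following the thresholds cited above for an intact periodontium.
    Illustrative only -- not a diagnostic tool."""
    if bop_percent < 10:
        # below the diagnostic threshold; a few mildly inflamed sites
        # would be described as incipient gingivitis
        return "below threshold / incipient gingivitis"
    elif bop_percent <= 30:
        return "localized gingivitis"
    else:
        return "generalized gingivitis"

for score in (5, 12, 45):
    print(f"BOP {score}% -> {classify_gingivitis(score)}")
```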
In adolescents, other local factors, such as dental caries, mouth breathing, crowding of the teeth, and tooth eruption can modify the incidence and severity of gingivitis. Significant changes in steroid hormone levels during puberty also have a transient effect on inflammation of the gingiva (39,43).
Clinical studies have demonstrated that the presence of diabetes may be considered a risk factor for periodontal disease in children and adolescents (44,45). Gingivitis is the predominant form of periodontal disease in childhood, and the level of glycemic control may be more important in determining the severity of gingival inflammation than the quality of plaque control (46)(47)(48).
Hyperglycemia causes a hyperinflammatory response in the presence of bacterial biofilm. Individuals with diabetes have impaired neutrophil and macrophage function, altered collagen production, exaggerated collagenase activity, hyperinflammatory responsive monocytes, and an increased release of proinflammatory cytokines (49,50). Another factor that may modify host responses is the accumulation of advanced glycation endproducts (AGE) and their interaction with AGE receptors in children with diabetes (49).
Previous studies have shown that gram-positive species (e.g., Streptococcus spp., Actinomyces viscosus, Peptostreptococcus micros) and gram-negative species (e.g., Campylobacter gracilis, Fusobacterium nucleatum, Prevotella intermedia, Veillonella spp.) are associated with gingivitis (51). A clinical study showed that Capnocytophaga sputigena and Capnocytophaga ochracea were associated with gingivitis in children with T1DM and that glycemic and lipid parameters were higher in patients with T1DM, albeit within normal values (46).
Periodontopathogenic bacteria can cause direct damage to periodontal tissues or indirect tissue damage by inducing the release of inflammatory cytokines and other mediators (51). The transition from health to disease follows the principles of primary ecological succession, rather than the acquisition of new organisms (41), suggesting that clusters of bacteria may be a more robust discriminant of disease (41).
Of interest, poorly controlled diabetes may cause xerostomia due to hyposalivation. Xerostomia is indirectly related to gingival disease activity through the accumulation of dental plaque in young adults (52).
Despite an increase in the number of studies that have assessed the association between diabetes and gingival inflammation, no consensus has yet emerged about a possible causal relationship (53,54). A recent systematic review and meta-analysis concluded that the severity of periodontal inflammation is higher in children and adolescents with T1DM than in healthy individuals. However, the authors did not provide strong evidence that periodontitis is a significant risk factor for T1DM in children (53). Other studies on childhood diabetes have also shown that gingival inflammation is higher in children with T1DM than in non-diabetic children (44,45) and suggested that periodontal destruction can begin early in children with diabetes.
Regarding the influence of glycemic control on the presence of gingivitis, the data are not conclusive, suggesting the involvement of other factors, such as those related to patients' immunological responses (55). A recent study showed no significant differences in periodontal status between controlled and poorly controlled diabetic patients and healthy children (56). In a case-control study involving 80 children and adolescents (aged 5-18 years) with T1DM, a significant effect of diabetes on an increased risk of oral and periodontal diseases in children was not confirmed (57). In the same context, a comparative cross-sectional study of children with T1DM and non-diabetic children with mixed dentition, of both sexes (7-13 years) and without distinction of race, demonstrated that periodontal conditions were similar between the groups, without statistical differences in any periodontal index (46). This study also demonstrated, through microbiological analysis, that red-complex bacteria were present at only a few sites; Fusobacterium nucleatum and Campylobacter rectus were more frequently detected, and interleukin (IL)-6 levels were similar between the groups (46). On the other hand, a recent study by Jensen et al. (58) demonstrated that worsening glycemic control is associated with increased severity of early markers of periodontal disease in children and adolescents with T1DM. In that study, it was also observed that the complexity and richness of the gingival plaque microbiota were related to glycemic control and to lower brushing frequency, independent of glycated hemoglobin (HbA1c) (58). Thus, well-designed clinical studies are still required to clarify the interplay between diabetes and inflammation of the gingival and periodontal tissues.
EARLY CHILDHOOD CARIES AND OBESITY
As mentioned above, dental caries is a major oral health problem; in early childhood, it is characterized by the presence of one or more deciduous teeth with a carious lesion, cavitated or not, in children under the age of 6 years (59).
It is important to highlight that primary teeth maintain the space for adequate development of the permanent dentition and are essential for the child's well-being, phonetics, esthetics, and mastication. Unfortunately, most early childhood caries (ECC) lesions remain untreated (59), leading to chronic pain, infections, and other comorbidities (60).
In the last 45 years, obesity has increased threefold worldwide, and ∼38 million children under the age of 5 years were overweight or obese in 2019 (61). While overweight is characterized by a body mass index (BMI) in the 97-99.9th percentile, obesity is defined by a BMI above the 99.9th percentile in those aged younger than 5 years (62). Overweight or obesity in childhood is considered a risk factor for adulthood obesity and might be directly related to diabetes and cardiovascular disorders.
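These cutoffs translate into a simple rule; a minimal sketch, assuming the BMI-for-age percentile has already been computed against a growth reference (names and structure are illustrative only):

```python
def classify_weight_status(bmi_percentile: float) -> str:
    """Classify weight status for a child under 5 years from a
    BMI-for-age percentile, using the cutoffs quoted above.
    Assumes the percentile was computed against a growth reference;
    illustrative only -- not a clinical tool."""
    if bmi_percentile > 99.9:
        return "obesity"
    elif bmi_percentile >= 97:
        return "overweight"
    else:
        return "not overweight by these cutoffs"

print(classify_weight_status(98.5))   # -> overweight
print(classify_weight_status(99.95))  # -> obesity
```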
The effect of obesity on functional and metabolic changes in the human body is an important topic to explore. Contrary to what was once believed, adipose tissue is not only responsible for energy storage but is an endocrine organ, producing adipokines (leptin, adiponectin, visfatin, resistin, apelin). As weight gain is connected to increased adipose tissue mass, these hormones are probably produced in higher concentrations, significantly affecting the metabolism of macronutrients (63) and causing a "metainflammation" (64). A common consequence of obesity is the metabolic syndrome, characterized by a clustering of risk factors (insulin resistance, hyperleptinemia, hypoadiponectinemia) predisposing individuals to the development of future comorbidities (64).
A recent systematic review showed that children with high BMI scores had about a two-times higher chance of experiencing ECC than lean children (65). Despite both diseases (ECC and obesity) being complex and sharing a common risk factor (diet), microbial dysbiosis also plays a critical role (9), profoundly affecting disease course/development. Remarkably, the human oral and gut microbiomes present enormous complexity and several functions such as the development of immunity and defense against pathogens. Gut microorganisms also produce short chains of fatty acids that are important for energy metabolism, synthesis of vitamins, and fat storage (66). Unlike the human genome, which is relatively constant, the microbiome is dynamic and is altered by changes in development, environmental factors such as diet and use of antibiotics, and the response to disease (67).
Harboring billions of microbes (68), the oral cavity microbiome is composed mainly of the following phyla: Proteobacteria, Bacteroidetes, Firmicutes, Actinobacteria, and Fusobacteria (69,70). Despite being a polymicrobial disease, the predominance of acidogenic and aciduric bacteria (4,5,71) favors the demineralization process of the dental tissues after carbohydrate fermentation, leading to white chalky spot lesions that further progress into dentin cavitation (4,5). Notably, it was estimated that children with severe ECC exhibited 94.5 phylotypes vs. 113.4 in caries-free children, suggesting that microbial variety and complexity in dental biofilm are significantly higher in healthy subjects (72). This is because carious lesions could act as retentive niches for cariogenic bacteria, which dominate as the disease progresses, leading to a decrease in the overall richness of the biofilm community (72).
While high numbers of mutans streptococci are significantly associated with early caries lesions, lactobacilli are linked to advanced-stage cavitation (73,74). Conversely, quantitative PCR analysis of biofilm bacteria according to different stages of ECC indicated that S. mutans was also present in higher numbers in dentine caries lesions/cavitations, as was Bifidobacterium spp. (75). Scardovia wiggsiae, a species belonging to the phylum Actinobacteria, has also been linked to ECC (30), as have Veillonella, Prevotella, Porphyromonas, and Actinomyces species and the fungus C. albicans (76)(77)(78)(79). Even with genetic sequencing using the 16S ribosomal RNA gene, and a better understanding of the richness and diversity of the oral microbiome, S. mutans has still been identified as the most discriminatory species between health and disease (80).
The classical main pathogens of dental caries, S. mutans and lactobacilli, belong to the Firmicutes phylum, which was found to be enhanced in samples collected from cavitated carious lesions (70). Interestingly, an increase in the abundance of the Firmicutes phylum, one of the largest in the gut microbiome, is commonly observed in childhood obesity (8,10,81). In this respect, bacteria belonging to this phylum may be related to weight gain, such as an increase in the species of Eubacterium halllii, Clostridium leptum, and certain Lactobacillus species. Clostridium leptum is an important carbohydrate-fermenting bacterium belonging to the Clostridial IV set. Along with other intestinal microorganisms, they are capable of fermenting fiber and unabsorbed sugars from the diet, producing short-chain fatty acids that can act as an energy source for the human host, and can also influence intestinal epithelial function (9,82). In line with this information, germ-free mice receiving a microbiota transplant increase their caloric uptake, energy harvest and body fat (83).
The microbiota could be considered an endocrine organ related to the maintenance of energy homeostasis and host immunity (84). It is understood that gut microorganisms are capable of 1. increasing energy production from food, 2. contributing to subclinical inflammation, and 3. regulating fatty acid tissue composition (85,86). Moreover, under dysbiotic conditions, the functioning of the intestinal barrier and gutassociated lymphoid tissues is altered, favoring the passage of lipopolysaccharides, which activate inflammatory pathways that might contribute to the development of insulin resistance (84). Additionally, the production of gastrointestinal peptides associated with satiety is also changed, leading to increased food intake.
It is important to highlight that the oral cavity and gut provide ideal niches for the largest microbiomes in the human body, due to the moist, warm, and nutrient-rich environments. The difference between them relies on the shedding characteristics of the mucosa vs. the non-shedding characteristics of the teeth. However, due to the arsenal of adhesive molecules, streptococci can colonize many types of surfaces (87). Intriguingly, some groups of bacteria could overlap in oral and stool samples (88)(89)(90), due to oral bacteria often being swallowed together with saliva and food during the digestion process. A recent study involving preschoolers investigated whether Firmicutes and Bacteroidetes levels in the mouth reflected the gut condition in obesity and ECC, demonstrating that Firmicutes phyla behave differently according to the nutritional status (obesity or eutrophy) and caries experience, and that dental biofilm and gut microbiome might share levels of similarity. In addition, the authors found significantly higher numbers of Firmicutes in obese children with ECC than in those with obesity and free of caries in both the mouth and gut (88).
The pivotal role of oral bacteria ectopically colonizing the gut remains unknown (91). In addition, it is challenging to distinguish between bacteria that truly reside in the gut and those that are temporarily present in the gut (92). In animal models, bacterial colonization success in the gut has been suggested to depend on their ability to metabolize dietary and host carbohydrates, as well as bile acids (93).
Although ECC and obesity are preventable, they continue to affect millions of children (59); therefore, studies involving common approaches should be conducted and will certainly be more effective. Moreover, the hypothesis that the mouth might act as a reservoir for intestinal pathogens that can aggravate diseases connected to the gut microflora (93) is of prime importance and should be further explored.
CARIES AND CARDIOVASCULAR DISEASES
The first common risk approach for cardiovascular pathologies and caries can be established by considering the individual's lifestyle, particularly eating habits. The high consumption of ultra-processed foods, fermentable carbohydrates, and saturated fats has led to an increase in the number of cases of hypertension, atherosclerosis, and cardiovascular diseases, as well as the number of individuals affected by caries (94).
Remarkably, bacteria found in cariogenic biofilms can synthesize an extracellular polysaccharide matrix from dietary sugars, favoring the adhesion of multispecies microorganisms (95,96). As described in the previous sections of the present review, when this biofilm is left undisturbed, that is, when brushing and flossing are not frequent, the propensity for carious lesions is enhanced. In addition, when dental caries progresses to a severe stage, the pulp, an organ rich in nerves and capillaries, is exposed, and there is a higher risk of bacteremia via the bloodstream. In this way, the typical pathogens associated with tooth decay, S. mutans and Lactobacillus spp., together with Veillonella spp., Scardovia spp., and other oral streptococci, gain access to other organs, such as the heart, causing an increase in the levels of systemic antibodies, with the possible development of a variety of cardiovascular disturbances, e.g., infectious endocarditis (97,98).
Infectious endocarditis is characterized by infection of the endocardial surface. Valves are the most affected structures, but other endocardial tissue locations might also be involved (99). Endocarditis is intimately linked to microorganisms in the group of oral streptococci, staphylococci, enterococci, gram-negative bacilli, some fungi (Candida spp.), fastidious microbes, and cultivable intracellular microorganisms such as Chlamydophila spp., S. mutans, and Staphylococcus aureus (99). Severe sepsis or septic shock has a mortality rate of 20-25% and is associated with microorganisms such as Staphylococcus aureus and non-hemolytic streptococci.
Curiously, S. mutans was the most frequently detected bacterium in atheromatous plaques and unhealthy heart valve tissues (100)(101)(102). When S. mutans and other oral bacteria enter the circulatory system (103) and reach the heart tissues, they easily adhere to heart valves, producing an insoluble dextran from blood glucose and forming biofilms (97). According to the composition and structure of the rhamnose-glucose polysaccharide connected to the cell wall, S. mutans can be divided into four different serotypes: c, e, f, and k. Although serotype c is the most common in the oral cavity, serotypes e and f have been shown to invade primary human coronary artery endothelial cells. Intriguingly, invasive strains carry the gene for the surface protein with collagen- and laminin-binding activity (cnm), which can bind to collagen and laminin in vitro, favoring adherence to endothelial tissues and triggering inflammatory responses, similar to other surface structures of S. mutans (104)(105)(106).
Another important oral disease that begins with the imbalance of the healthy microbiota in the subgingival environment is periodontitis. As already mentioned, it is an oral infectious disease that can develop in late childhood or adolescence, caused mainly by gram-negative bacteria, with the destruction of the tissues supporting the teeth as a result of an injury caused by the pathogenic biofilm. Hence, in the presence of periodontal disease, the junctional epithelium and connective tissue are not firm and the risk of bleeding is enhanced, favoring the access of oral microorganisms, especially S. mutans, to the capillaries and bloodstream (97,103).
Chronic periodontitis can alter the lipid profile, contributing to the progression of atherosclerosis (107). Furthermore, the host's response to the lipopolysaccharides of gram-negative periodontopathogenic bacteria is pro-inflammatory, with the production of IL-6, prostaglandin E2, and matrix metalloproteinases, culminating in tissue destruction. In addition, the production of IL-1 beta, IL-6, and tumor necrosis factor-alpha can promote hyperlipidemia, potentiating the risk of atherosclerosis, which is the main cause of heart disease. Studies have shown that cardiovascular problems, such as coronary heart disease, stroke, peripheral vascular disease, cardiomyopathy, atherosclerosis, and myocardial infarction, are linked to chronic infection and inflammation, as is the case in periodontitis (95,108).
Antibiotic therapy is used for the treatment of diseases such as infectious endocarditis and sepsis. In this regard, we have to be mindful that, due to the high resistance rate of some microorganisms in these infectious processes, a combination of antimicrobials may be necessary, as well as prolonged drug treatment to avoid recurrence. More than 50% of patients require surgery in cases of heart failure, uncontrolled infection, or for the prevention of embolism (99).
Finally, it is important to point out that the oral cavity is a reservoir for a complex commensal microbiota, whose dysbiosis favors the development of caries and periodontitis. Together with the presence of microbes in the mouth, relevant risk factors such as a sugar-rich diet and a lack of proper tooth brushing or flossing are also closely associated with the onset and progression of oral diseases. Regarding a common-risk approach, a balanced diet with a low-to-moderate intake of fermentable carbohydrates and ultra-processed foods not only reduces the chances of cariogenic biofilm formation, but also contributes to improving the general functioning of the body. Thus, healthy gums and teeth are associated with a low risk of developing infectious oral diseases, bacteremia, and associated cardiovascular disturbances (95,108).
FINAL CONSIDERATIONS
Altogether, from a critical point of view, all the diseases described above are highly complex and reinforce the holistic concept that the mouth cannot be separated from the body. Focusing too narrowly on single chronic diseases should be avoided. In this regard, a multidisciplinary approach should be emphasized, bringing together healthcare professionals from different fields with different expertise, such as dentists, physicians, nutritionists, psychologists, and nurses. The organization of, and interrelationship between, these professionals will favor everything from early diagnosis and effective preventive strategies to an assertive diagnosis and treatment plan, improving prognosis and the patient's quality of life.
It should be kept in mind that multidisciplinary teams have a higher chance of meeting the demands of patients with complex care needs, resulting in the development of a dedicated routine supporting their care goals. When the right attention is delivered in the communities, well-being is favored and unnecessarily complicated treatments or hospitalizations can be avoided, reducing oral and systemic health budget expenditure.
Of interest, clinical practice based on scientific evidence requires the ability to locate information and appraise it critically. Literature reviews play an important role in this regard.
In summary, considering the relationship between the oral microbiome and chronic diseases driven by metabolic dysfunction in childhood, it should be highlighted that:
- Microbe establishment is linked to biological, behavioral, and psychosocial factors associated with an individual's environment.
- A better understanding of the human microbiome could indicate the potential microorganisms connected to health or disease.
- Current molecular biology technologies favor the acquisition of knowledge concerning microbial diversity and its relationship with physiopathological conditions, but the exact mechanism connecting oral diseases and the microbiota to chronic diseases driven by metabolic dysfunction during childhood is far from being completely understood.
AUTHOR CONTRIBUTIONS
CD and AR: conceptualization. FS, TP, SB, LT, JH, and CD: writing-original draft preparation. FS, TP, and CD: writing-review and editing. TP and CD: supervision. All authors significantly contributed to the manuscript preparation and approved the final version.
"year": 2021,
"sha1": "cb1224f41c9ff9db510e3e21df254bb3114d4f01",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fdmed.2021.718441/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "cb1224f41c9ff9db510e3e21df254bb3114d4f01",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Low-Dimensional Spin Systems: Hidden Symmetries, Conformal Field Theories and Numerical Checks
We review here some general properties of antiferromagnetic Heisenberg spin chains, emphasizing and discussing the role of hidden symmetries in the classification of the various phases of the models. We also present some recent results that have been obtained through the combined use of Conformal Field Theory and of numerical Density Matrix Renormalization Group techniques.
Introduction and Summary.
For quite some time, low-dimensional magnetic systems (i.e. (quantum) spins on 1D and/or 2D lattices) were considered essentially only as interesting models in Statistical Mechanics with no realistic counterpart. It is only in recent times that systems that can be regarded, to a high degree of accuracy, as assemblies of isolated or almost isolated spin chains and/or of spin ladders (a few chains coupled together) have begun to be produced and have hence become experimentally accessible, thus renewing the interest in their study, which is by now one of the most active fields of experimental and theoretical research in Condensed Matter Physics.
In this paper we will discuss only some relevant properties of isolated spin chains, referring to the literature [15] for a general review of the properties of spin ladders.
More than a decade ago it was pointed out [20,31] that integer spin chains (more specifically, spin-1 chains, but extensions to different values of the spin have also been devised in the literature [43]) possess unexpected and highly nontrivial hidden symmetries, whose spontaneous breaking manifests itself through the appearance of unusual and highly nonlocal "string" order parameters. The string order parameters, together with the more conventional magnetic order parameters, can be used to classify the various phases that the phase diagram of one-dimensional magnets can display.
In the present paper, which is a slightly enlarged version of the talk presented by one of us (G.M.) at the XIIIth Conference on "Symmetries in Physics"¹, we will concentrate, without pretensions to full generality, on the discussion of a few models of antiferromagnetic Heisenberg chains, of their phase diagrams, and on the rôle of hidden symmetries in their explanation. The paper is organized as follows. In Sect. 2 we review some general facts concerning Heisenberg spin chains and discuss how, in the continuum limit, one can map a "standard" (see below for the terminology) Heisenberg chain onto an effective field theory described by a nonlinear sigma model, and how the presence in the latter of a topological term can account for the radically different behaviors of integer versus half-odd-integer spin chains. In Sect. 3, concentrating on spin-1 chains, we consider the effects of adding to the "standard" model biquadratic exchange terms and/or Ising-like as well as single-ion anisotropies, and how such terms can drive the model away from what is commonly called the "Haldane phase" (again, see below for an explanation) towards other phases. In this context we introduce in a more explicit manner the notion of hidden symmetries and discuss their rôle. Sects. 4 and 5 are devoted to the discussion of more recent results that have been obtained by some of us [17] with a careful and combined use of analytical (effective actions and Conformal Field Theory) and numerical (Density Matrix Renormalization Group) techniques. The final Sect. 6 is devoted to the conclusions and to some general comments.
General Features of Spin Chains.
Let us begin by discussing here what can be considered as the "standard" model of an isotropic antiferromagnetic (AFM) Heisenberg chain with nearest-neighbor (nn) interactions, which is described by the Hamiltonian

$$H = J \sum_{i=1}^{N} \vec{S}_i \cdot \vec{S}_{i+1}, \qquad J > 0, \qquad (1)$$

where, for each $i = 1, \ldots, N$, $\vec{S}_i$ is a spin operator²,

$$[S^\alpha_i, S^\beta_j] = i\hbar\,\delta_{ij}\,\varepsilon^{\alpha\beta\gamma}\, S^\gamma_i, \quad \alpha,\beta,\gamma = x,y,z; \qquad \vec{S}_i^{\,2} = \hbar^2 S(S+1)$$

($S$ integer or half-odd-integer), located at the $i$-th site of a one-dimensional lattice of $N$ sites and interacting with its neighbors via an AFM ($J > 0$) nn interaction of strength $J$. Later on we will consider more general models in which $J_\perp \neq J_z$ will be allowed³.
¹ The Conference, organized by Prof. B. Gruber, was held in Schloss Mehrerau in Bregenz (Vorarlberg, Austria), on 21-24 July, 2003.
² and $S^\pm_i = S^x_i \pm i S^y_i$.
³ $J_\perp = 0$, in particular, corresponds to the one-dimensional Ising model, a trivially soluble classical model. Notice however that an Ising model in a transverse magnetic field becomes a genuinely quantum and nontrivial model.

It may be useful to define a vector $\vec{n}_i$ as $\vec{n}_i := \vec{S}_i/(\hbar S)$, whereby

$$\vec{n}_i^{\,2} = 1 + \frac{1}{S}. \qquad (4)$$

Although one is ultimately interested in the thermodynamic ($N \to \infty$) limit, for finite $N$ one can adopt either periodic boundary conditions (PBC's), by imposing $\vec{S}_{N+1} \equiv \vec{S}_1$, or free boundary conditions⁴. In the classical limit ($\hbar \to 0$ and $S \to \infty$ with $\hbar S = $ const.) the spins (the $\vec{n}_i$'s) become (see Eq. (4)) classical vectors (and $\vec{n}_i \in S^2$, the unit sphere in $\mathbb{R}^3$). The minimum-energy configuration of the spins corresponds to $\vec{n}_i \cdot \vec{n}_{i+1} = $ const. $= -1$. Neighboring spins are then aligned antiparallel to each other and, in the absence of any external magnetic field, can point in a common but otherwise arbitrary direction on the sphere. This is the Néel state. Let us remark that, at variance with the ferromagnetic ($J < 0$) case, in which neighboring spins are all aligned parallel, at the quantum level the Néel state is not an eigenstate of the Hamiltonian (1). This points to the fact that quantum fluctuations will play a much more relevant rôle in the (quantum) antiferromagnetic case than in the ferromagnetic one.
The classical energy of the Néel state is of course $E_N = -J N (\hbar S)^2$. In this state the O(3) symmetry is spontaneously broken down to O(2)⁵, and the state exhibits long-range order (LRO).
Elementary excitations above the Néel state are well known to be spin waves [38]: coherent deviations of the spins with a dispersion $\omega(\vec{k}) \propto k$ in the long-wavelength limit ($ka \ll 1$, with $a$ the lattice spacing). Hence, the (classical) spectrum of the Hamiltonian (1) is gapless. We would like to stress that nothing of what has been said so far depends on the value of the spin. At the classical level, the spin $S$⁶ can simply be reabsorbed into a redefinition of the coupling constant ($J \to J(\hbar S)^2$) and contributes only an essentially irrelevant overall multiplicative scale factor.
All this is elementary and well known. Let us now turn to the quantum case⁷. In the early 30's Bethe [8] and Hulthén [30], employing what has since been known as the "Bethe Ansatz", were able to show that the quantum S = 1/2 Heisenberg chain is actually an integrable model. We will not discuss the Bethe Ansatz here in any detail [38], but will only summarize the main features of the solution of the S = 1/2 model. The (exact) ground state is nondegenerate; it exhibits only short-range AFM correlations, but no LRO.

⁴ In which case the Hamiltonian should actually be rewritten as $H = J \sum_{i=1}^{N-1} \vec{S}_i \cdot \vec{S}_{i+1}$.
⁵ Translational symmetry, if present, is also broken, as the Néel state is not invariant under translations by one lattice spacing, as the original Hamiltonian is, but only by twice the lattice spacing. This has important consequences on the location of the Goldstone mode [4,40,55] in momentum space, which we will not discuss here, however.
⁶ Or better, $\hbar S$.
⁷ From now on we will set $\hbar = 1$ for simplicity.
Parenthetically, this is in agreement with a general, and later, theorem [14]. The (staggered) static spin-spin correlation functions

$$G^\alpha(i-j) = (-1)^{|i-j|}\, \langle S^\alpha_i S^\alpha_j \rangle, \qquad \alpha = x, y, z,$$

where $\langle \ldots \rangle$ stands for the expectation value in the ground state, are all equal and decay algebraically to zero at large distances. We recall here that genuine LRO would imply (we omit here the index α)

$$\lim_{|i-j| \to \infty} G(i-j) = O_N \neq 0,$$

this defining the Néel order parameter $O_N$ (actually the square of the equilibrium staggered (i.e. sublattice) magnetization). At the other extreme, an exponential decay of the correlations of the form, say,

$$G(i-j) \approx P(|i-j|)\, e^{-|i-j|/\xi},$$

with $P(.)$ some inverse power of $|i-j|$, would imply a finite correlation length ξ and a mass gap (or, better, a spin gap) Δ in the excitation spectrum, roughly given by $\Delta \propto c\,\xi^{-1}$, with $c$ a typical spin-wave velocity. Algebraic decay of correlations (formally corresponding to $\xi \to \infty$) implies then that the system is gapless. Summarizing, the main features of the S = 1/2 Heisenberg AFM chain are that it has a (quantum) disordered ground state, with only short-range AFM correlations, and that it is gapless. It is therefore a (actually the first) prototype of a (quantum disordered and) quantum critical system [48]. It can then be said that, as compared with the classical limit, the system remains gapless but quantum fluctuations destroy LRO. About thirty years later Lieb, Schultz and Mattis [36] (LSM) proved an important theorem stating that an S = 1/2 chain has either a degenerate ground state or is gapless. No surprise that the Bethe solution obeys the Lieb-Schultz-Mattis theorem, which is however of much wider reach, as it covers models more general than the "standard" nn chain, such as, e.g., the Majumdar-Ghosh model [37], another integrable model that we will not discuss here, though. The results of LSM were later extended by other authors [3] beyond S = 1/2 to cover all half-odd-integer values of the spin. One can then take as rigorously proven that (at T = 0) isotropic half-odd-integer Heisenberg chains (with constant nn interactions) are all quantum disordered and quantum critical (i.e. gapless). This result was thought for quite some time to be "generic", i.e. valid for chains of arbitrary spin, until, in the early 80's, Haldane [28] put forward what has since become known as "Haldane's conjecture", according to which half-odd-integer spin chains should be quantum disordered and gapless, but integer spin chains should instead exhibit a spin gap and an exponential decay of correlations. This implied that, contrary to what happens in the classical limit, the physical behavior of spin chains should be a highly discontinuous function of the value of the spin.
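These statements are easy to check on small rings. The Python sketch below (our illustration, not part of the original paper; the chain length is arbitrary) diagonalizes the Hamiltonian (1) for S = 1/2 with PBC and compares the ground-state energy per site with the Bethe-Ansatz thermodynamic-limit value $1/4 - \ln 2 \approx -0.4431$ (in units of J).

```python
# Minimal exact-diagonalization sketch of Eq. (1) for S = 1/2 on a small ring.
import numpy as np
from scipy.sparse import identity, kron
from scipy.sparse.linalg import eigsh

N, J = 12, 1.0                            # illustrative size and coupling
Sx = np.array([[0, 0.5], [0.5, 0]])
Sy = np.array([[0, -0.5j], [0.5j, 0]])
Sz = np.diag([0.5, -0.5])

def site(op, i):
    """Embed a single-site operator at site i of the N-site ring."""
    out = identity(1, format="csr")
    for j in range(N):
        out = kron(out, op if j == i else identity(2), format="csr")
    return out

H = 0
for i in range(N):                        # PBC: S_{N+1} identified with S_1
    for op in (Sx, Sy, Sz):
        H = H + J * site(op, i) @ site(op, (i + 1) % N)

E0 = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
print(E0.real / N)                        # -> 1/4 - ln 2 = -0.4431... as N grows
```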
Completely rigorous proofs of (the second part of) Haldane's conjecture are still lacking. However, strong support for it comes from the analysis of the continuum limit of the Heisenberg chain, which we briefly describe now, referring to the existing literature [1,6,23] for more details.
The canonical partition function for the Hamiltonian of Eq. (1) at temperature $T = (k_B \beta)^{-1}$ (with $k_B$ the Boltzmann constant),

$$Z = \mathrm{Tr}\, e^{-\beta H},$$

can be written as a spin coherent-state path integral [32], whereby the spin variables get replaced, inside the path integral, by classical variables according to

$$\vec{S}_i \to \hbar S\, \vec{\Omega}_i(\tau),$$

with $\vec{\Omega}_i$ a classical unit vector: $|\vec{\Omega}_i| = 1$. The next (and perhaps the most important) step in Haldane's analysis is the parametrization of the $\vec{\Omega}_i$'s as⁸

$$\vec{\Omega}_i = (-1)^i\, \vec{n}_i\, \sqrt{1 - |\vec{l}_i/S|^2} + \frac{\vec{l}_i}{S},$$

with $|\vec{n}_i| = 1$ and $\vec{n}_i \cdot \vec{l}_i = 0$. The $\vec{n}_i$'s are assumed to be slowly varying (on the scale of the lattice spacing). In this way, capitalizing, so to speak, on the information gained from the Bethe-Ansatz solution of the S = 1/2 model, they incorporate the information that the system still retains some short-range AFM ordering, which would be global only for $\vec{n}_i = $ const. (and $\vec{l}_i = 0$). The $\vec{l}_i$'s can be shown [1] to be the (local) generators of angular momentum. In the semiclassical (large S) limit, an expansion of the action in the path integral up to lowest (second) order in the $\vec{l}_i$'s is justified. Taking then the continuum limit together with a gradient expansion, and integrating out the $\vec{l}_i$'s, one ends up with the following expression for the partition function:

$$Z = \int [D\vec{n}]\; \delta(\vec{n}^2 - 1)\; e^{-(S_{eff} + i S_B)},$$

where $[D\vec{n}]$ stands for the functional measure and the δ inside the integral is a functional δ. The first term in the action is given by

$$S_{eff} = \frac{1}{2g} \int_0^\beta d\tau \int_0^L dx \left[ \frac{1}{c} (\partial_\tau \vec{n})^2 + c\, (\partial_x \vec{n})^2 \right],$$

where $L$ (= N × lattice spacing) is the length of the chain, $g = 2/S$ is the coupling constant and $c = 2JS$ is the spin-wave velocity. This is simply the Euclidean action of an O(3) nonlinear sigma model [6,23,59] (NLσM). The second term is the integral of a Berry phase [50], and is given by

$$S_B = \theta\, Q[\vec{n}], \qquad Q[\vec{n}] = \frac{1}{4\pi} \int d\tau\, dx\; \vec{n} \cdot (\partial_\tau \vec{n} \times \partial_x \vec{n}),$$

with $\theta = 2\pi S$. The coefficient of θ is easily recognized to be the Pontrjagin index [11,41,44], or winding number, of the map $\vec{n}: S^2 \to S^2$ from spacetime, compactified to a sphere, to the two-sphere where $\vec{n}$ takes values, and it is an integer: $Q[\vec{n}] = n \in \mathbb{Z}$. $S_B$ is therefore a topological term, and $S_B = 2\pi n S$, $n \in \mathbb{Z}$. Therefore $\exp\{-i S_B\} \equiv 1$ for integer S (θ = 0 mod 2π), but $\exp\{-i S_B\} = (-1)^n$ (θ = π mod 2π) if the spin is half-odd-integer. This will generate interference between the different topological sectors, and it is at the heart of the different behaviors of the two types of chains.
The pure (θ = 0 in our case) (1+1)-dimensional O(3) NLσM is a completely integrable model [60]. It has a unique ground state, and the excitation spectrum is exhausted by a degenerate triplet of massive excitations separated from the ground state by a finite gap. On the contrary, the θ = π model was shown [45] to be gapless. Therefore, Haldane's conjecture is fully confirmed by the analysis of the continuum limit of the Heisenberg model.
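The same dichotomy can be glimpsed in small-chain exact diagonalization. The sketch below (ours, with illustrative sizes far from the thermodynamic limit) builds spin-S Heisenberg rings for general S and compares the lowest excitation gap for S = 1/2, which shrinks with N, with that for S = 1, which stays sizeable (the Haldane gap is about 0.41 J in the thermodynamic limit).

```python
# Gap comparison for small spin-1/2 and spin-1 Heisenberg rings (illustration).
import numpy as np
from scipy.sparse import identity, kron
from scipy.sparse.linalg import eigsh

def spin_ops(S):
    """Return (Sx, Sy, Sz) for spin S (hbar = 1), basis m = S, S-1, ..., -S."""
    d = int(2 * S + 1)
    m = S - np.arange(d)
    Sp = np.zeros((d, d))
    for k in range(d - 1):                # <m+1| S+ |m> = sqrt(S(S+1) - m(m+1))
        Sp[k, k + 1] = np.sqrt(S * (S + 1) - m[k + 1] * (m[k + 1] + 1))
    return (Sp + Sp.T) / 2, (Sp - Sp.T) / 2j, np.diag(m)

def lowest_two(S, N, J=1.0):
    d = int(2 * S + 1)
    ops = spin_ops(S)
    def site(op, i):
        out = identity(1, format="csr")
        for j in range(N):
            out = kron(out, op if j == i else identity(d), format="csr")
        return out
    H = 0
    for i in range(N):
        for op in ops:
            H = H + J * site(op, i) @ site(op, (i + 1) % N)
    return np.sort(eigsh(H, k=2, which="SA", return_eigenvectors=False).real)

for S, N in [(0.5, 8), (0.5, 12), (1.0, 6), (1.0, 8)]:
    E0, E1 = lowest_two(S, N)
    print(f"S={S}, N={N}: gap = {E1 - E0:.4f}")
```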
We would like only to mention in passing that quite a similar behavior occurs in spin ladders [15,19,46]: even-legged ladders are gapped, while odd-legged ladders are gapped for integer spin and gapless for half-odd-integer spin. This "even-odd" effect has been shown [19,51] to have the same topological origin as in single chains.
How do these results compare with the gaplessness (irrespective of the value of S) of the S → ∞ classical limit? The answer resides in the dependence of the spin gap on S. Already at the mean-field level, but more accurately from large-N expansions and/or renormalization-group analyses [42], it turns out that the spin gap Δ behaves as $\Delta \propto \exp\{-\pi S\}$ for large S⁹. Hence, integer-spin chains become exponentially gapless for large S, and the classical limit is recovered correctly.
3 More general Models. Hidden Symmetries and String Order Parameters.
In view of what has been said up to now, the second part of Haldane's conjecture is by far its most intriguing part. Integer-spin chains are therefore the most interesting ones, and from now on we will concentrate on S = 1 chains. What has been called in the previous Section the "standard" AFM Heisenberg model is actually a member of at least two larger families of models, which we briefly illustrate here. The first class of models, which we will call "θ-models", includes a biquadratic term in the spins and is described, setting J = 1, by the Hamiltonian

$$H(\theta) = \sum_i \left[ \cos\theta\, (\vec{S}_i \cdot \vec{S}_{i+1}) + \sin\theta\, (\vec{S}_i \cdot \vec{S}_{i+1})^2 \right], \qquad (16)$$

with θ = 0 corresponding of course to the "standard" model. Most of the phase diagram has been obtained numerically [31], except for the points θ = ±π/4, which correspond to integrable models. The point θ = π/4 is the Sutherland model [53], while θ = −π/4 is the integrable model [9,54] of Babujian and Takhtajan¹⁰. Both models are gapless, while the entire region −π/4 < θ < π/4 is known (numerically, again) to be gapped. This whole region has been called the "Haldane phase". It includes a particularly interesting point that has been studied extensively by Affleck, Kennedy, Lieb and Tasaki [2] (AKLT), namely θ = θ*, with tan θ* = 1/3. The corresponding Hamiltonian (omitting an irrelevant overall numerical factor) is given by

$$H_{AKLT} = \sum_i \left\{ \vec{S}_i \cdot \vec{S}_{i+1} + \frac{1}{3}\, (\vec{S}_i \cdot \vec{S}_{i+1})^2 \right\}.$$

This model is not completely integrable, but the ground state is known, is unique in the thermodynamic limit, and can be exhibited explicitly. The ultimate reason for this is that, apart from numerical constants, the i-th term in curly brackets is just

$$\vec{S}_i \cdot \vec{S}_{i+1} + \frac{1}{3}\, (\vec{S}_i \cdot \vec{S}_{i+1})^2 = 2\, P_2(i, i+1) - \frac{2}{3},$$

where $P_2(i, i+1)$ is the projector [39] onto the state of total spin $S_{tot} = 2$ of the pair of S = 1 spins located at sites $i$ and $i+1$. Therefore, the ground state of $H_{AKLT}$ must lie in the sector of the Hilbert space that is annihilated by all the projectors. It was shown by AKLT that the exact ground state (also called the "Valence-Bond-Solid" (VBS) state) can be constructed as a linear superposition of states $\Phi_\sigma$ with the following characteristics. Let $\sigma = \{\sigma_1, \ldots, \sigma_N\}$ be a given spin configuration, with $\sigma_i = 0, \pm 1$, $i = 1, \ldots, N$. Then $\Phi_\sigma$ is such that: i) $S^z_i \Phi_\sigma = \sigma_i \Phi_\sigma$, and moreover: ii) if a given spin is, say, +1, then the next nonzero spin must be −1, and vice versa. Typical such states correspond therefore to spin configurations of the form

$$\sigma = (\ldots, +1, 0, 0, -1, 0, +1, -1, 0, 0, 0, +1, \ldots).$$

In other words, "up" and "down" spins alternate in $\Phi_\sigma$, but their spatial distribution is completely random, as an arbitrary number of zeroes can be inserted between any two nonzero spins. So, if a given spin is nonzero, we can predict what the value of the next nonzero spin will be, but not where it will be located. There is therefore no long-range (Néel) order in any conventional sense in the VBS ground state, but rather a sort of "Liquid Néel Order" (LNO). Conventional Néel order would be characterized by the nonvanishing of (at least one of) the Néel order parameters

$$O^\alpha_N = \lim_{n \to \infty} (-1)^n\, \langle S^\alpha_0 S^\alpha_n \rangle, \qquad \alpha = x, y, z.$$

In the VBS state, and (numerically) in the whole of the Haldane phase, one finds instead [2,31] $O^\alpha_N = 0$, α = x, y, z, and this is consistent with the absence of a "rigid" Néel order.
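Since each term of $H_{AKLT}$ equals $2P_2(i,i+1) - 2/3$, the VBS state, being annihilated by all the projectors, has energy exactly $-2N/3$ on a ring. The sketch below (ours, with an illustrative N) verifies this and the uniqueness of the ground state by exact diagonalization.

```python
# Sketch: build the AKLT Hamiltonian for a small spin-1 ring and verify that
# the ground state is unique with energy -2N/3.
import numpy as np
from scipy.sparse import identity, kron
from scipy.sparse.linalg import eigsh

N = 6                                        # illustrative ring size
Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2) * np.diag([1.0, 1.0], k=1)   # spin-1 raising operator
Sx, Sy = (Sp + Sp.T) / 2, (Sp - Sp.T) / 2j

def site(op, i):
    out = identity(1, format="csr")
    for j in range(N):
        out = kron(out, op if j == i else identity(3), format="csr")
    return out

H = 0
for i in range(N):
    j = (i + 1) % N
    SS = sum(site(op, i) @ site(op, j) for op in (Sx, Sy, Sz))
    H = H + SS + (SS @ SS) / 3.0

E = eigsh(H, k=3, which="SA", return_eigenvectors=False)
print(np.sort(E.real), "expected ground energy:", -2 * N / 3)
```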
There remains, however, what we have called "liquid" Néel order, and it has been argued convincingly in the literature [20,31] that this is connected with the nonvanishing of a novel class of order parameters, which we now briefly discuss. Let us begin by defining the string correlation functions as

$$G^\alpha_S(n) = -\left\langle S^\alpha_0\, \exp\!\left( i\pi \sum_{l=1}^{n-1} S^\alpha_l \right) S^\alpha_n \right\rangle, \qquad \alpha = x, y, z. \qquad (21)$$

These are similar to the standard two-point correlation functions $(-1)^n \langle S^\alpha_0 S^\alpha_n \rangle$, whose asymptotic ($n \to \infty$) limit yields the Néel order parameter(s), except that a string of exponentials of the intermediate spins has been inserted between the leftmost and the rightmost spin.
The string order parameters (SOPs) $O^\alpha_S$ are then defined as

$$O^\alpha_S = \lim_{n \to \infty} G^\alpha_S(n). \qquad (23)$$

It turns out [2,25] that the string correlation functions are strictly constant in the AKLT ground state, namely $G^\alpha_S(n) = 4/9$ for all $n$. The ground-state spin-spin correlation functions have also been evaluated exactly for the VBS state [2], and they turn out to be given by

$$\langle S^\alpha_0 S^\alpha_n \rangle = \frac{4}{3} \left( -\frac{1}{3} \right)^{n}, \qquad n \geq 1.$$

In other words, $|\langle S^\alpha_0 S^\alpha_n \rangle| \propto e^{-n/\xi_{AKLT}}$, where the correlation length $\xi_{AKLT}$ is given, in units of the lattice spacing, by $\xi_{AKLT} = 1/\ln 3 \simeq 0.91$, less than unity, implying a rather large spin gap. So far for the ground state of the AKLT model. String and ordinary correlation functions, as well as Néel and string order parameters, have also been evaluated (numerically, away from the AKLT point) at other points of the Haldane phase [25]. For example, at the Heisenberg point, exact diagonalization methods¹¹ have shown that the string correlation functions are not strictly constant, but still decay exponentially to a value of the string order parameter that is somewhat smaller ($O^\alpha_S \simeq 0.36$) than the AKLT value ($O^\alpha_S = 4/9 \simeq 0.44$) but still nonzero. The spin correlation length was also found [25] to be slightly larger than the AKLT value, but still finite. So there is convincing evidence that the entire Haldane phase is characterized by vanishing Néel order parameters but nonzero SOPs. There is also convincing numerical evidence [25] that the string order parameters vanish at the integrable boundaries of the Haldane phase (i.e. for θ = ±π/4).

¹¹ With the Lanczos method and for chains with up to no more than 14 sites.
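These exact AKLT values can be reproduced in a few lines by writing the VBS state as a matrix-product state. The tensors below are the standard AKLT MPS representation (a known result, though not spelled out in this paper); diagonal single-site operators then become 4×4 transfer matrices on a ring, and the string correlator of Eq. (21) comes out flat at 4/9.

```python
# Sketch: string correlator of Eq. (21) evaluated on the exact VBS state
# via the standard AKLT matrix-product tensors (ring of N sites).
import numpy as np

sp = np.array([[0.0, 1.0], [0.0, 0.0]])
A = {+1: np.sqrt(2 / 3) * sp,                    # A^{+1}
     0: -np.sqrt(1 / 3) * np.diag([1.0, -1.0]),  # A^{0}
     -1: -np.sqrt(2 / 3) * sp.T}                 # A^{-1}

def transfer(weights):
    """4x4 transfer matrix of a diagonal operator with eigenvalue weights[s]."""
    return sum(weights[s] * np.kron(A[s], A[s].conj()) for s in (+1, 0, -1))

T1 = transfer({+1: 1, 0: 1, -1: 1})     # identity
Tz = transfer({+1: 1, 0: 0, -1: -1})    # S^z
Ts = transfer({+1: -1, 0: 1, -1: -1})   # exp(i*pi*S^z)

N = 60                                   # illustrative ring size
norm = np.trace(np.linalg.matrix_power(T1, N))
for r in (2, 5, 10):
    M = Tz @ np.linalg.matrix_power(Ts, r - 1) @ Tz \
        @ np.linalg.matrix_power(T1, N - r - 1)
    print(r, -np.trace(M).real / norm)   # flat at 4/9 = 0.444...
```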
That the nonvanishing of the SOPs is connected to the breaking of a symmetry, and hence to the onset of an ordering that is not apparent in the original Hamiltonian, was clarified in a seminal paper by Kennedy and Tasaki [31] (KT). With reference to a given configuration {σ}, and defining N(σ) as the number of odd sites at which the spins are zero, one defines a new configuration $\{\tilde\sigma\}$ and a unitary operator U connecting the corresponding basis states, up to signs determined by N(σ) (we refer to [31] for the explicit definitions). In a nutshell, the action of U amounts to leaving the first nonzero spin unchanged and to flipping every other nonzero spin, proceeding to the right along the chain. For example,

$$(+1, 0, -1, -1, 0, +1) \longrightarrow (+1, 0, +1, -1, 0, -1),$$

and so on. It is obvious that U is unitary¹². What is less obvious is that the transformation is a nonlocal one, in the sense that U cannot be written as a product of unitary operators acting at each single site. This has the important consequence that symmetries that are local (in the above sense) for the Hamiltonian H will of course remain symmetries of the transformed Hamiltonian $\tilde H = U H U^{-1}$ (as U is unitary), but need not survive as local symmetries of $\tilde H$. Specifically, the symmetry group of H is SU(2), which includes a discrete $Z_2 \times Z_2$ subgroup of rotations by π around the coordinate axes. Explicitly, the transformed Hamiltonian has the form given in [31], written in terms of local operators $h^\alpha_i$ (Eq. (31)) whose explicit expression we do not reproduce here, and it is evident there that $Z_2 \times Z_2$ is the only surviving local symmetry group of the transformed Hamiltonian $\tilde H$. Even more important is how the string order parameters transform. The result is [31]

$$O^\alpha_S(H) = O^\alpha_{ferro}(\tilde H), \qquad O^\alpha_{ferro}(\tilde H) = \lim_{n \to \infty} \langle S^\alpha_0 S^\alpha_n \rangle_{\tilde H},$$

where the average on the right-hand side is taken with respect to the ground state of the transformed Hamiltonian. The transformed order parameter is now a ferromagnetic order parameter. Therefore $O^\alpha_S(H) \neq 0 \Longrightarrow O^\alpha_{ferro}(\tilde H) \neq 0$, and this implies the onset of a spontaneous ferromagnetic polarization in the α-th direction in the ground state of $\tilde H$. This in turn entails a partial (if $O^\alpha_{ferro}(\tilde H) \neq 0$ for just one value of α) or total (if this happens in more than one direction) spontaneous breaking of the discrete $Z_2 \times Z_2$ symmetry. It is known [4,55] that the spontaneous breaking of a continuous symmetry is accompanied by massless excitations (the Goldstone modes), while the breaking of a discrete symmetry usually implies the opening of a gap (the most conspicuous and familiar example being the 2D Ising model). Therefore, KT were led to consider the spontaneous breaking of the $Z_2 \times Z_2$ symmetry as the origin of the Haldane gap.

¹² Notice also that $N(\tilde\sigma) = N(\sigma)$, as zero spins are mapped into zero spins.
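The nonlocal flipping rule is simple to state algorithmically. Here is a small Python sketch of the action of U on basis configurations (our illustration; the signs coming from $(-1)^{N(\sigma)}$ are omitted, only the spin-flip pattern is shown).

```python
# Sketch of the Kennedy-Tasaki spin-flip pattern described above: sweep left
# to right, keep the first nonzero spin, flip every second nonzero spin.
def kennedy_tasaki(sigma):
    """Map a configuration sigma (entries in {-1, 0, +1}) to its image."""
    out, nonzeros_seen = [], 0
    for s in sigma:
        if s != 0:
            nonzeros_seen += 1
            out.append(-s if nonzeros_seen % 2 == 0 else s)  # flip 2nd, 4th, ...
        else:
            out.append(0)
    return out

# A "liquid Neel" configuration becomes a ferromagnetic pattern of nonzero spins
print(kennedy_tasaki([+1, 0, -1, -1, 0, +1]))   # -> [1, 0, 1, -1, 0, -1]
print(kennedy_tasaki([+1, 0, -1, 0, +1, -1]))   # -> [1, 0, 1, 0, 1, 1]
```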
One has, however, to be a bit careful on this point. It appears to be true that spontaneous (partial or total) breaking of the $Z_2 \times Z_2$ symmetry implies the generation of a spin gap. But: i) the converse need not be true: we will see that there are spin models that exhibit gapped phases¹³ while retaining the full $Z_2 \times Z_2$ symmetry; and ii) the mere nonvanishing of (one or more) string order parameters is not enough to fully determine in which (gapped) phase the system is. It is the full set of order parameters, both string and Néel, that allows for a full characterization of the various phases. In particular, the Haldane phase is fully characterized by the vanishing of all the Néel order parameters together with all three string order parameters being nonzero.
We turn now to a different class of models, the so-called "λ−D" family¹⁴. They are described by the family of Hamiltonians (parametrized by two real parameters, λ and D)

$$H(\lambda, D) = \sum_i \left[ S^x_i S^x_{i+1} + S^y_i S^y_{i+1} + \lambda\, S^z_i S^z_{i+1} \right] + D \sum_i (S^z_i)^2. \qquad (34)$$

The "standard" (isotropic) AFM Heisenberg model corresponds of course to λ = 1 and D = 0. λ = −1 (and D = 0) can easily be shown¹⁵ to correspond to an (isotropic) ferromagnetic Heisenberg model. λ ≠ 1 introduces an "Ising-like" anisotropy, while D ≠ 0 introduces what is called a "single-ion" anisotropy.
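A minimal Python sketch of Eq. (34) for a small ring (our illustration; size and couplings are arbitrary) makes it easy to probe the limiting regimes discussed next, e.g. the gapped Haldane point versus the deep large-D regime.

```python
# Sketch: the lambda-D Hamiltonian, Eq. (34), for a small spin-1 ring.
import numpy as np
from scipy.sparse import identity, kron
from scipy.sparse.linalg import eigsh

Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2) * np.diag([1.0, 1.0], k=1)
Sx, Sy = (Sp + Sp.T) / 2, (Sp - Sp.T) / 2j

def lambda_D_hamiltonian(N, lam, D):
    def site(op, i):
        out = identity(1, format="csr")
        for j in range(N):
            out = kron(out, op if j == i else identity(3), format="csr")
        return out
    H = 0
    for i in range(N):
        j = (i + 1) % N
        H = H + (site(Sx, i) @ site(Sx, j) + site(Sy, i) @ site(Sy, j)
                 + lam * (site(Sz, i) @ site(Sz, j)))
        H = H + D * (site(Sz, i) @ site(Sz, i))
    return H

# Haldane point (lam=1, D=0) vs deep large-D phase (lam=1, D=5); N illustrative
for lam, D in [(1.0, 0.0), (1.0, 5.0)]:
    E = np.sort(eigsh(lambda_D_hamiltonian(8, lam, D), k=2, which="SA",
                      return_eigenvectors=False).real)
    print(f"lam={lam}, D={D}: E0={E[0]:.4f}, gap={E[1] - E[0]:.4f}")
```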
The model can be solved exactly for S = 1/2 [33], but no exact solutions are available for integer spin. There are obvious asymptotic limits when either λ (resp. D) is large and D (resp. λ) is not too large, so that the "λ-term" (resp. "D-term") can be considered as a zeroth-order Hamiltonian and the rest as a perturbation: i) |λ| ≫ 1: the reference ground state is either a Néel AFM state (λ > 0) or a ferromagnetic state (λ < 0).
ii) |D| ≫ 1: for D > 0 (the so-called "large-D" phase) the reference state becomes a planar state with $S^z_i = 0$ for all i's, while for D < 0 the reference state is one in which $S^z_i = 0$ is excluded, hence a state in which the S = 1 spins become effectively two-level systems, and a detailed mapping of the model onto an effective spin-1/2 model [17,47] can be successfully performed. For λ ≠ 1 and/or D ≠ 0 the symmetry group of the Hamiltonian is $O(2) \times Z_2$ (the $Z_2$ factor corresponding to a reflection in the x−y plane: $S^z_i \to -S^z_i$). Apart from these limiting cases, the model has been studied analytically [49] as well as numerically [10,13,26,52], and the corresponding phase diagram is displayed in Fig. 1.
The various sectors of the phase diagram can be characterized as follows [13,17,22]: i) The Haldane phase. The ground state is unique and gapped; all the Néel order parameters vanish, while all three string order parameters are nonzero. The isotropic Heisenberg point (λ = 1, D = 0) belongs to this phase, and lies on a line separating the two subphases that are denoted as H1 and H2 in the literature [10].
ii) The Néel phase. The ground state is doubly degenerate, and the order parameters are $O^z_N \neq 0$ and $O^z_S \neq 0$, with $O^{x,y}_S = 0$ (a partial breaking of $Z_2 \times Z_2$). iii) The large-D phase. The ground state is unique and gapped, but here all the order parameters, Néel as well as string, vanish (the $Z_2 \times Z_2$ symmetry is fully unbroken). iv) The two XY phases. These are both gapless phases. They are distinguished by the nature of the low-lying spin excitations (spin-1 in the XY1 phase, spin-2 in the XY2 phase).
v) The ferromagnetic phase. The ground state is doubly degenerate, with maximal magnetization $S^z_{tot} = \pm N$, and the phase is gapped. In this case it is the ferromagnetic order parameter that is nonvanishing [16].

[Figure: correlation functions at the point (D = 0.5, λ = 1). Note that with this choice the transverse correlation length is appreciably larger than the longitudinal one. The data have been obtained with finite-size DMRG on a chain of L = 100 spins (S = 1) with PBC and M = 216 states (see Sect. 5 for details).]
The "λ − D" model has also been studied by KT . Applying the same nonlocal unitary transformation that was discussed previously, they showed that the transformed Hamiltonian, whose explicit form we will not give here, is still given in terms of the operators h i (see Eq.(31)), and retains therefore Z 2 ×Z 2 as the only local symmetry,just as in the case of the Hamiltonian of Eq. (16). Therefore, the same conclusions as before apply concerning the connection of the nonvanishing of the string order parameters with the spontaneous breaking of the Z 2 ×Z 2 symmetry.
In the present paper we will mainly address the detailed nature of the Haldane-large-D and Haldane-Néel critical transition lines. It is known that the (large-distance) critical behavior of one-dimensional quantum systems is well described by Conformal Field Theory [12,21,24,27,33] (CFT). In the next Section we report a proposal for an effective CFT of the "λ−D" model on the Haldane-large-D critical line. This allows for the prediction of the operator content of the theory, and hence also of the structure of the conformal tower of excited states above the ground state. To confirm the predictions, we also report on extended numerical analyses, whose details will be presented elsewhere [18], which fully confirm the theoretical predictions.
Conformal Field Theory and Effective Actions.
Let us begin by recalling some basic results and examples of CFT that will be used in the forthcoming analysis of the critical properties of the spin-1 λ−D chain.
It is well known [21,27] that the critical properties of two-dimensional systems are completely classified by CFTs: since in 2D the conformal group is infinite-dimensional, the Hilbert space of a conformally invariant theory can be completely understood in terms of the irreducible representations of its algebra, the Virasoro algebra. We recall that the latter has an infinite number of generators, denoted by $L_n$, $\bar{L}_n$ ($n \in \mathbb{Z}$) for its holomorphic and antiholomorphic parts respectively, satisfying the commutation relations

$$[L_m, L_n] = (m - n)\, L_{m+n} + \frac{c}{12}\, m\,(m^2 - 1)\, \delta_{m+n,0} \qquad (35)$$

and similarly for the $\bar{L}_n$. The constant $c$ is called the central charge of the algebra, or the conformal anomaly. Since we are interested in a comparison between theoretical predictions and numerical data, which are obtained on a finite lattice, we will consider a CFT defined on a cylinder with a spatial dimension of finite length $L$. In this case [12,21], the energy and momentum operators are represented respectively by

$$H = \frac{2\pi v}{L}\left( L_0 + \bar{L}_0 - \frac{c}{12} \right), \qquad P = \frac{2\pi}{L}\,( L_0 - \bar{L}_0 ).$$

In order for $H$ to be bounded from below, we must restrict our attention to highest-weight representations of the Virasoro algebra, for which there exists a highest-weight (or primary) state $|\Delta, \bar\Delta\rangle$ satisfying

$$L_0 |\Delta, \bar\Delta\rangle = \Delta\, |\Delta, \bar\Delta\rangle, \qquad L_n |\Delta, \bar\Delta\rangle = 0 \;\; \text{for } n > 0, \qquad (38)$$

and analogous relations with respect to the $\bar{L}_n$ generators. Each of these representations is thus identified by the value of the central charge $c$ and by the pair $(\Delta, \bar\Delta)$ (the conformal dimensions). They fix both the energy and the momentum of the primary state $|\Delta, \bar\Delta\rangle$, according to

$$E_{\Delta,\bar\Delta} = \frac{2\pi v}{L}\left( \Delta + \bar\Delta - \frac{c}{12} \right), \qquad P_{\Delta,\bar\Delta} = \frac{2\pi}{L}\,(\Delta - \bar\Delta).$$

Notice that, in a finite geometry (with PBC), the vacuum state, corresponding to $\Delta = \bar\Delta = 0$, has a nonzero energy (Casimir effect):

$$E_0 = -\frac{\pi c v}{6 L}. \qquad (41)$$

Also, the two-point correlation function of the operator creating a given primary state out of the vacuum ($|\Delta,\bar\Delta\rangle = O_{\Delta,\bar\Delta}|0\rangle$) has an algebraic decay whose critical exponents are determined by the values of the conformal dimensions $(\Delta,\bar\Delta)$: one has [21,27]

$$\langle O_{\Delta,\bar\Delta}(z, \bar z)\, O_{\Delta,\bar\Delta}(0, 0) \rangle \propto z^{-2\Delta}\, \bar z^{-2\bar\Delta}.$$

Finally, from the primary state $|\Delta,\bar\Delta\rangle$ one can obtain all excited (or secondary) states by applying strings of powers of $L_n$, $\bar{L}_n$ with $n < 0$. It is easy to see that, for $m, n > 0$, the commutation relations (35) imply $L_0 (L_{-m})^j |\Delta,\bar\Delta\rangle = (\Delta + mj)\,(L_{-m})^j |\Delta,\bar\Delta\rangle$ and $\bar{L}_0 (\bar{L}_{-n})^k |\Delta,\bar\Delta\rangle = (\bar\Delta + nk)\,(\bar{L}_{-n})^k |\Delta,\bar\Delta\rangle$, so that the secondary states have energies and momenta

$$E = \frac{2\pi v}{L}\left( \Delta + \bar\Delta + r + \bar r - \frac{c}{12} \right), \qquad P = \frac{2\pi}{L}\,(\Delta - \bar\Delta + r - \bar r), \qquad (43)$$

with $r, \bar r \in \mathbb{N}$ and a degeneracy that can be explicitly calculated for each representation. It may happen that some of these states have null norm. In this case the true (non-degenerate) Hilbert space of states is obtained after projecting out these null vectors, which therefore do not contribute to the operator content of the corresponding CFT. The quantity in brackets on the right-hand side of Eq. (43) yields the coefficient with which the energy of the corresponding state scales to zero in the thermodynamic limit. It is therefore called the "scaling dimension" and will be denoted by

$$d = \Delta + \bar\Delta + r + \bar r. \qquad (44)$$

Unitarity restricts the possible values of the central charge to

$$c = 1 - \frac{6}{p(p+1)}, \quad p = 3, 4, 5, \ldots, \qquad \text{or} \qquad c \geq 1. \qquad (45)$$

The first set of values corresponds to the so-called minimal models [21,27], whose primary states are finite in number. Their conformal dimensions are given by the formula

$$\Delta_{r,s} = \frac{\left[ (p+1)\, r - p\, s \right]^2 - 1}{4\, p\, (p+1)}, \qquad 1 \le r \le p - 1, \;\; 1 \le s \le p. \qquad (47)$$

Theories with $c \geq 1$ have instead an infinite number of primary states. The simplest case of a CFT corresponds to c = 1/2 (p = 3 in Eq. (45)) and describes the universality class of the two-dimensional Ising model. According to (47), there are only three primary operators: the identity I, corresponding to the vacuum, $(\Delta,\bar\Delta)_I = (0, 0)$; the Ising spin σ, with $(\Delta,\bar\Delta)_\sigma = (1/16, 1/16)$; and the energy density ε, with $(\Delta,\bar\Delta)_\varepsilon = (1/2, 1/2)$. Notice that the spin-spin correlator $\langle \sigma(x)\sigma(0) \rangle$ decays with a critical exponent $\eta_z = 4\Delta_\sigma = 0.25$.
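As a quick check of the minimal-model content, the Python sketch below evaluates the Kac formula (47) in the unitary parametrization used above (an assumption about the paper's exact convention) and reproduces the three Ising dimensions 0, 1/16, 1/2 for p = 3.

```python
# Sketch: primary dimensions of the unitary minimal models from Eq. (47).
from fractions import Fraction

def kac_dimensions(p):
    """Set of h_{r,s} for the unitary minimal model with c = 1 - 6/(p(p+1))."""
    dims = set()
    for r in range(1, p):
        for s in range(1, p + 1):
            num = ((p + 1) * r - p * s) ** 2 - 1
            dims.add(Fraction(num, 4 * p * (p + 1)))
    return sorted(dims)

print("c =", Fraction(1) - Fraction(6, 3 * 4))   # 1/2 for p = 3
print("primary dimensions:", kac_dimensions(3))  # [0, 1/16, 1/2]
```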
In Table 1 we list the lowest conformal (primary and secondary) states, together with their scaling dimensions and momenta. As explained in the next section, a comparison with the numerical data given in the last column will allow us to conclude that the Haldane-Néel critical transition line is indeed of the Ising type. We now discuss briefly the c = 1 case, which exhibits a much richer structure. It corresponds to the field theory of a free compactified bosonic field, i.e. to a Gaussian model with Lagrangian

$$\mathcal{L} = \frac{1}{2}\left[ \frac{1}{v}\,(\partial_\tau \Theta)^2 + v\,(\partial_x \Theta)^2 \right], \qquad (48)$$

where Θ represents an angular variable spanning a circle of a given radius R, and the constant $v$, which has the dimensions of a velocity, is called the spin velocity. If we assume for Θ, and hence for its dual field Φ¹⁶, periodic boundary conditions, the Hilbert space of the theory splits into a direct sum of distinct topological sectors labeled by the winding numbers $n, m \in \mathbb{Z}$ of the fields Θ and Φ respectively. The primary fields are then vertex operators of the form [21,27]

$$V_{mn} = \exp\left\{ i\sqrt{4\pi K}\, n\, \Phi + i\sqrt{\pi/K}\, m\, \Theta \right\}, \qquad (49)$$

whose scaling dimensions are given by

$$d_{mn} = K n^2 + \frac{m^2}{4K}. \qquad (50)$$

Notice that the latter depend explicitly on the radius of compactification. Thus we obtain a different c = 1 theory for each value of R, i.e. of K. For example, K = 1 corresponds (via fermionization [21,27]) to a 1D model of free Dirac (FD) fermions. The K = 1/2 point is said to be self-dual (SD), since it is invariant under the duality transformation Θ ⇔ Φ, m ⇔ n, while the point K = 2 corresponds to the BKT critical theory. We remark also that the energy operator $(\partial\Theta)^2$ has conformal dimension 2 for any value of R, and hence is always marginal. The effect of adding it to the Lagrangian (48) results only in a change of the coupling constant in front, which, in turn, can be absorbed into a rescaling of the compactification radius of Θ. Thus we generate a continuous line of inequivalent critical c = 1 theories, corresponding to different values of K.
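The Gaussian tower of Eq. (50) is equally easy to tabulate. The sketch below lists the primary dimensions $d_{mn}$ with their multiplicities for a given Luttinger parameter K (the value used is the one predicted in Sect. 5 for one of the critical points, purely as an illustration).

```python
# Sketch: primary scaling dimensions d_{mn} = K*n^2 + m^2/(4K), Eq. (50),
# with multiplicities, up to a cutoff d_max.
from collections import Counter

def gaussian_spectrum(K, d_max=2.0, w_max=4):
    """Dimensions d_{mn} <= d_max with winding numbers |m|, |n| <= w_max."""
    levels = Counter()
    for m in range(-w_max, w_max + 1):
        for n in range(-w_max, w_max + 1):
            d = K * n**2 + m**2 / (4 * K)
            if d <= d_max:
                levels[round(d, 6)] += 1
    return sorted(levels.items())

# K = 1.52 is the value quoted in Sect. 5 at (lambda = 0.5, D = 0.65)
for d, mult in gaussian_spectrum(1.52):
    print(f"d = {d:.4f}  multiplicity {mult}")
```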
It is well known [27] that the Gaussian model (48) describes the continuum limit of the spin-1/2 XXZ chain with anisotropy parameter Δ, as long as −1 ≤ Δ ≤ 1. From the exact Bethe-Ansatz results, one can show [27] that the interesting cases Δ = −1, 0, 1 correspond to the SD, FD and BKT points of the bosonic theory, respectively. We would now like to show that the Gaussian model (48) also describes the critical properties of the spin-1 λ−D Hamiltonian (34) on the Haldane-large-D transition line. In doing so, we will also establish a relationship between the coupling constants D, λ of the discrete model and those of the continuum theory, namely the spin-wave velocity v and the compactification radius. This will allow us to make quantitative theoretical predictions to be compared, in the next section, with the numerical results.
In the spirit of Haldane's mapping, we start from a classical solution, which for D > λ − 1 is a planar state, in which the unit vectors $\vec\Omega_j(\tau)$ that represent our spins ($\vec{S}_j \to S\,\vec\Omega_j(\tau)$, see Sect. 2) are Néel-ordered in the xy-plane: $\vec\Omega_j(\tau) = (\cos(\theta_0 + j\pi), \sin(\theta_0 + j\pi), 0)$. Hence we make the Haldane-like ansatz

$$\vec\Omega_j(\tau) = (-1)^j\, \hat{n}_j(\tau)\, \sqrt{1 - l_j(\tau)^2} + l_j(\tau)\, \hat{z},$$

where $\hat{n}_j(\tau) = e^{i\theta_j(\tau)} \in O(2)_{xy}$ is a unit vector in the xy-plane, $\hat{z}$ is the unit vector (0, 0, 1), and the fluctuation field $l_j$ is supposed to be small. Thus, as in the isotropic case, it is possible to obtain an effective Lagrangian that describes the low-energy physics of the Hamiltonian (34) in the continuum limit. Carrying out this calculation as explained in Sect. 2, one obtains in this case a Gaussian model (48), where now $\Theta = \theta/\sqrt{g}$, with the coupling g and the spin velocity v explicit functions of λ and D (Eq. (52)). In other words, we have a free theory for a bosonic field Θ, compactified along a circle of radius $1/\sqrt{g}$. Thus the operator content of the theory can be read from Eq. (49): the list of primary operators is exhausted by the vertex operators $V_{mn}$, whose scaling dimensions are given by Eq. (50), with K = π/g.
In addition, the scaling dimensions (50) also fix the (non-universal) critical exponents of the correlation functions. For instance, it is easy to see that the transverse spin-spin correlator should decay according to

$$\langle S^x_0 S^x_r \rangle \propto (-1)^r\, |r|^{-1/(2K)},$$

the exponent being twice the dimension $d_{\pm 1,0} = 1/(4K)$.
The Density Matrix Renormalization Group and Spin Chains.
The code that we have used for density matrix renormalization group (DMRG) calculations follows rather closely the algorithms reported in White's seminal papers [56,57], with the following points to be mentioned: • The superblock geometry was chosen to be block–site–block–site, arranged on a ring. The rationale for adopting this configuration is that, being effectively on a ring, the two blocks are always separated by a single site, for which the operators are small matrices that are treated exactly (no truncation) [57]. In this way we expect better precision in the correlation functions calculated by fixing one of the two points on these sites and moving the other one along the block. Moreover, whenever the system has an underlying antiferromagnetic structure (typically when a staggered field is switched on), this geometry seems to be the one that preserves it best, both for even and odd values of s.
• We used the finite-system algorithm with three iterations. This prescription should ensure the virtual elimination of the so-called environment error [35], which is expected to dominate in the very first iterations for L < L*(M) (see below). Normally the correlations are computed at the end of the third iteration, once the best approximation of the ground state is available. This has the advantage of using less memory during the finite-size iterations, but requires the storage of all the matrices needed to represent, on the reduced basis of the last step, the operators entering the correlation functions of interest. At the moment, disk storage is the ultimate factor that limits the size of the systems that we are able to treat.
• We always exploit the conservation of $S^z_{tot}$. With the exception of the ferromagnetic phase, which we do not address now, the ground state(s) is (are) at $S^z_{tot} = 0$ [10]. In order to maximize their accuracy, the correlations are calculated targeting only the lowest-energy state within this sector. However, in order to analyze the energy spectrum, we had to target also the lowest-energy states in the other sectors $|S^z_{tot}| = 1, 2, \ldots$ and/or a few excited states within the $S^z_{tot} = 0$ sector, depending on the phase of interest. On the one hand, this requires a modification of the basic Lanczos method to go beyond the lowest eigenvalue of the superblock Hamiltonian. On the other hand, once the $N_t$ eigenvalues of interest are found, one can build the block density matrix as the average (mixture) of the matrices associated with the corresponding $N_t$ eigenvectors. At present we are not aware of any specific "recipe" other than that of equal weights.
Going back to the modified Lanczos routine, our DMRG code implements the so-called Thick Restart algorithm of Wu and Simon [58]. Once $S^z_{tot}$ is fixed, in a given run we want to determine simultaneously the first $N_t$ levels $|S^z_{tot}; b\rangle$ with b = 0, 1, 2, ..., $N_t - 1$ (the ground state being identified by ($S^z_{tot} = 0$, b = 0)). Then, as in the conventional Lanczos scheme, we have to push the iteration until the norms of the residual vectors and/or the differences of the energies in consecutive steps are smaller than prescribed tolerances ($10^{-9}$–$10^{-12}$ in our calculations). The delicate point to keep under control is that, once the lowest state $|S^z_{tot}; 0\rangle$ is found, if we keep iterating searching for higher levels, the orthogonality of the basis may be lost, simply because the eigenvectors corresponding to these levels tend to overlap again with the vector $|S^z_{tot}; 0\rangle$. As a result, the procedure is computationally more demanding, to the extent that one has to re-orthogonalize the basis from time to time. Typically, we have seen that this part takes 10–20% of the total time spent in each call to the Lanczos routine. We have also observed that if this re-orthogonalization is not performed, one of the undesired effects is that the excited doublets (generally due to momentum degeneracy) are not correctly computed. More specifically, it seems that while the two energy values are nearly the same in the asymmetric stages of the iterations, when the superblock geometry becomes symmetric (s = s′ in the notation of the preceding point) the double degeneracy is suddenly lost and only one of the two states appears in the numerical spectrum.
So far for the specific algorithm. Now, the crucial point to consider in accurate DMRG calculations is the choice of M, that is, the number of optimized states. White argued [57] that the convergence of the ground-state energy is almost exponential in M, with a step-like behaviour probably related to the successive inclusion of more and more complete spin sectors. Unfortunately, the effective accuracy gets poorer when we deal with energy differences and correlation functions, for which little is known about convergence. It must be said, however, that despite its name the DMRG performs somewhat better for systems with a definite gap than for gapless (critical) ones. We refer to the papers by Andersson, Boman and Östlund [5] and by Legeza and Fáth [35] where, for different systems and in terms of different observables, the following common feature emerges: even if the quantum system is rigorously critical in the limit L → ∞, the DMRG truncation introduces a spurious length, L*(M), which, as expected, diverges as M is increased. (Our analysis of the accuracy of the energy levels in some selected points of the λ−D chain near criticality leads to a similar conclusion [18].) Hence, even if we are technically able to deal with sizes L > L*(M) (at a given M), as far as criticality is concerned we cannot rely completely on the DMRG data, because the system experiences an effective length which should be absent in the critical regime.
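For concreteness, the step that generates the spurious length L*(M) is the density-matrix projection at the heart of White's algorithm. A minimal, self-contained Python sketch of that single step follows (our illustration, not the actual production code; shapes, M, and the random test vector are placeholders).

```python
# Sketch of one DMRG truncation: keep the M density-matrix eigenstates of
# largest weight for the block, given a superblock ground state.
import numpy as np

def truncate_block(psi, dim_sys, dim_env, M):
    """psi: normalized superblock ground state of length dim_sys*dim_env.
    Returns the M-column transformation matrix and the truncation error."""
    psi_matrix = psi.reshape(dim_sys, dim_env)
    rho = psi_matrix @ psi_matrix.conj().T          # reduced density matrix
    w, U = np.linalg.eigh(rho)                      # eigenvalues, ascending
    order = np.argsort(w)[::-1][:M]                 # M largest weights
    return U[:, order], 1.0 - w[order].sum()        # projector, discarded weight

# Toy usage on a random normalized "ground state" (placeholder data only)
rng = np.random.default_rng(0)
psi = rng.normal(size=64 * 16)
psi /= np.linalg.norm(psi)
O, err = truncate_block(psi, 64, 16, M=8)
print("truncation error:", err)
```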
Therefore, our strategy can be summarised as follows: we fix a rather high value of M, such that the trustable values of L are sufficiently large to see the scaling limit of CFT, but not too large as compared to L*(M). In other words, even in the study of (supposedly) critical systems, we prefer to exploit the computing resources to include as many DMRG states as possible, and to refine the calculations with finite-size iterations, rather than trying to take naïvely the limit L → ∞. In addition, to judge whether M is sufficiently large or not, we checked the translational and reflectional invariance properties that the correlation functions should have¹⁷. To be specific, if G(0, k) is a certain correlation function computed starting at j = 0, we have always increased M (at the expense of L) until the bound $|G(\ell, \ell \pm k) - G(0, k)|/G(0, k) \lesssim 0.05$ was met for k varying from 0 to ℓ = L/2, possibly with the exception of the ranges where G(0, k) is below numerical uncertainty ($10^{-6}$, say).
The quality of the numerical analysis of the critical properties depends heavily on the location of the critical points of interest. As far as the transitions from the Haldane phase are concerned, it is convenient to fix some representative values of λ and let D vary across the phase boundaries. This preliminary task of finding $D_c(\lambda)$ turns out to be crucial for the subsequent calculations and is divided into two steps. First, one has to get an approximate idea of the transition points using a direct extrapolation in 1/L of the numerical values of the gaps, computed at increasing L with a moderate number of DMRG states. Clearly, one may want to explore a rather large interval of values, and so the increments in D will not be particularly small (0.1, say). Then the analysis must be refined around the minima of the curves ΔE-vs-D, with smaller increments in D and a larger value of M. In our problem, the approach that seems to give the best results is standard finite-size scaling (FSS) theory [26,29] (for instance as compared to the phenomenological renormalization group).
Once the critical point is located, we take full advantage of the conformal structure by looking at the finite-size spectrum (see Eqs. (41) and (44)) of relevant and marginal operators. In practice, we select a number of states that tend to become degenerate with the ground state and look for straight lines in the ΔE-vs-$L^{-1}$ plot. Then, from a best fit, we expect a very small offset (ideally a zero gap in the thermodynamic limit) and a slope given by the scaling dimension d multiplied by the velocity prefactor v, which is absent in the field-theoretical formulation but has to be determined (in terms of the microscopic parameters) in a lattice system. In the latter case, Eq. (41) should also contain a term $e_\infty L$, $e_\infty$ being the energy density of the problem at hand. Actually, due to the prefactor v, we have to imagine a self-consistent procedure: depending on the type of transition we have in mind (that is, depending on the central charge c), we stick to one or more levels in the spectrum that have exactly d = 1. The slope of these is then nothing but v. Once the velocity is estimated, one uses Eq. (41) to best fit the product cv and see whether the value of c and the hypothesis on the universality class are self-consistent or not.
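As an illustration of this self-consistent fitting procedure (ours, not the actual analysis code), the sketch below implements the two fits. It assumes the gaps follow Eqs. (43)-(44) literally, $\Delta E = 2\pi v d/L$; the plotting convention in the paper may absorb the 2π factor. The self-test uses the ballpark numbers quoted in the next paragraph for the λ = 1 case, not real DMRG spectra.

```python
# Sketch of the velocity and central-charge fits described above.
import numpy as np

def fit_velocity(L, gaps, d=1.0):
    """Velocity from the slope of a level of known scaling dimension d,
    assuming gap = 2*pi*v*d/L."""
    slope = np.polyfit(1.0 / np.asarray(L, float), np.asarray(gaps, float), 1)[0]
    return slope / (2 * np.pi * d)

def fit_central_charge(L, E0, v):
    """Least-squares fit of E0(L) = e_inf*L - pi*c*v/(6L), i.e. Eq. (41)
    supplemented by the bulk term e_inf*L."""
    L = np.asarray(L, dtype=float)
    A = np.column_stack([L, -np.pi * v / (6 * L)])
    e_inf, c = np.linalg.lstsq(A, np.asarray(E0, float), rcond=None)[0]
    return e_inf, c

# Self-test on synthetic levels built from ballpark values (no real data)
e_inf, c, v = -1.62651, 0.5, 2.65
L = np.array([40.0, 60.0, 80.0, 100.0])
print(fit_central_charge(L, e_inf * L - np.pi * c * v / (6 * L), v))
```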
To clarify the matter, let us start with the simpler case of the Haldane-Néel transition, which is thought to be in the 2D Ising universality class. Fixing λ = 0.5 we find $D_c(0.5) = -1.2$, and the β-function method [29] yields ν(0.5) = 1.023 ± 0.009 for the gap exponent, $\Delta E \propto (D - D_c)^\nu$. Moreover, we observe the following nontrivial feature of the spectrum: the massless modes described by the CFT seem to be all and only the levels within $S^z_{tot} = 0$, while those with $S^z_{tot} \neq 0$ maintain a finite energy gap in the limit of large L. Hence, the reference state for the calculation of v will be the second excited state in $S^z_{tot} = 0$, corresponding to the primary field of conformal dimensions (1/2, 1/2). Using quadratic extrapolations in 1/L we get v = 2.44, and consequently $e_\infty = -2.0011961 \pm 0.0000006$ and c = 0.5008 ± 0.0008, thereby confirming the Ising universality class. The scaling dimensions can be estimated from the slopes of the straight lines in a plot like that of Fig. 4. In Table 1 the theoretical values anticipated in Sect. 4 are compared with these numerical estimates. The overall agreement is good (7% in the worst case). Note that all the marginal operators have nonzero momentum, and so they cannot represent a valid perturbation to the continuum Hamiltonian, inasmuch as they would break translational invariance. The absence of marginal operators suggests that each point of the Haldane-Néel transition corresponds to the same c = 1/2 theory and that the line in the phase diagram is "generated" by the mapping from the discrete spin model to the continuum CFT. Repeating the same steps at λ = 1 we get $D_c(1) = -0.315$, ν(1) = 1.003 ± 0.006, together with v = 2.65, $e_\infty = -1.62651$, c = 0.498 ± 0.002, that is, again a c = 1/2 continuum theory.
[Figure 4 caption: continuous lines are best fits whose slopes are given in Table 1, together with the theoretical predictions of the scaling dimensions; the labels on the right indicate the multiplicities, all correctly met.]

We turn now to the other family of critical points, namely the transition from the Haldane to the large-D phase. In the past [20], a similarity with the critical fan of the Ashkin-Teller model was suggested. The operator content of this model arises from Ginsparg's orbifold construction [24] and consists of a number of K-independent scaling dimensions plus the contributions coming from the pure Gaussian part (free boson) discussed in the previous Section. The fact that we do not observe K-independent dimensions (apart from trivial secondaries of the identity) indicates that the continuum description of our spin-1 Hamiltonian with PBC at the Haldane-large-D transition should be purely Gaussian rather than "orbifold-like".
In order to support this claim, we try again to match the whole spectrum of the relevant and marginal operators (d ≤ 2). The difference with respect to the c = 1/2 case is that here we have to fix not one but two nonuniversal parameters, v and K (see Eq. (50)). As regards the former, the velocity stems from the first and second excited states in $S^z_{tot} = 0$. Note that in choosing these levels we are assuming, self-consistently, that K > 1, so that the two secondaries of the identity (d = 1) come before the primaries with (m = 0, n = ±1), which have $d_{0,\pm 1} = K$. As far as the Luttinger parameter K is concerned, we have to inspect the spectrum in other sectors of $S^z_{tot}$ too. In particular, the first excited state lies in $|S^z_{tot}| = 1$, which corresponds to m = ±1, n = 0 in Eq. (50). The value of K is obtained from the slope $d_{\pm 1,0} = 1/(4K)$ in a plot similar to that of Fig. 4.
More generally, in order to check the self-consistency of the hypothesis c = 1, we have computed the finite-size spectrum of relevant and marginal operators in different sectors of $S^z_{tot}$ for a couple of critical points on the Haldane-large-D line (first two rows of Table 2). Once v and K are numerically determined, the structure of the Gaussian spectrum is correctly reproduced (including the multiplicities), and the overall comparison is satisfactory since, in the worst cases, the relative difference does not exceed 3% (see plots and tables of Ref. [17]). The agreement with the theoretical predictions of the mapping in the planar regime is also remarkable. If we plug the coordinates of the critical points into the formulae of g and v for the Gaussian model derived above (Eq. (52)), we obtain v = g = 2.07, K = π/g = 1.52 at (λ = 0.5, D = 0.65) and v = g = 2.45, K = π/g = 1.28 at (λ = 1, D = 0.99). Encouraged by these quantitative predictions, we try to approach the multicritical point where the c = 1 line meets the c = 1/2 one. Supposedly, the central charge at this point is c = 3/2, and it has been proposed [49] that the corresponding CFT is an SU(2)₂ Wess-Zumino-Witten-Novikov model. If this were true, the two lines should join at the point where the effective Gaussian theory has K = 1 [27] (FD point). Using λ ≃ D in the expression of g we find that K = π/g(λ) = 1 is satisfied for λ ≃ 2, while it is believed [13] that the multicritical point lies at λ ≳ 3. We guess that the two lines join at K < 1, and in order to test this conjecture we study two more points: (λ = 2.59, D = 2.30), again on the c = 1 line, and (λ = 3.20, D = 2.90), proposed in [13] as the multicritical point itself. Although the steps are conceptually the same as above, here we encounter two additional complications. First, due to the closeness (or almost coincidence, in the multicritical case) of the Ising transition, we observe the merging of the two (quasi)critical spectra. Hence, we have to target more states and separate the ones belonging to c = 1 from the ones belonging instead to c = 1/2. Second, we observe sizeable finite-size corrections from irrelevant operators. In fact, our analysis shows that we are moving at values of K smaller than one, towards K = 1/2, where certain irrelevant operators become marginal. As explained in [17], the last two rows of Table 2 are obtained by extracting K not from the first excited state, but rather from half the sum of the pair of levels with m = 0, n = ±1 in $S^z_{tot} = 0$, to get rid of finite-size corrections. As anticipated, moving to the right along the Haldane-large-D line, the value of K keeps decreasing towards 1/2 (SD point), where we argue that this line meets the Haldane-Néel one and a first-order transition starts.
We close the section with a few comments on the hidden topological order measured by the string order parameters (Eq. (23)). It is expected that, leaving the Haldane phase, the $Z_2 \times Z_2$ symmetry is partially or totally restored. More precisely, when the c = 1 line is crossed, both $O^z_S$ and $O^{x,y}_S$ vanish. As customary, we can introduce two off-critical exponents that control the closure of these order parameters; for instance, fixing λ and varying D about $D_c(\lambda)$,

$$O^{x,y}_S \sim (D_c - D)^{2\beta_S}, \qquad O^z_S \sim (D_c - D)^{2\beta^z_S}.$$

Now, according to FSS arguments (Sec. 5.1 of [24]), $\beta_S$ and $\beta^z_S$ are related, via the gap exponent ν, to their counterparts at criticality, that is, to the scaling dimensions of the operators entering the associated string correlation functions. These dimensions, in turn, can be extracted from the slopes, $\eta_S$ and $\eta^z_S$, of the log-log plots of $O^{x,z}_S(D = D_c)$ evaluated at half of the chain. Using the relation $2\beta_S = \nu\,\eta_S$ (and analogously for the z channel), we find the values reported in Table 3 for a couple of critical points already discussed above. We should observe that the scaling dimensions $\eta_S/2$ and $\eta^z_S/2$ are not contained in the c = 1 spectra cited above. However, we also notice that the numerical estimates of $\eta^z_S$ are rather close to the value $2\, d_{0,\pm 1/2} = K/2$, and that such levels actually exist in the effective continuum theory provided that half-integer values of n are allowed in Eq. (50). In the XXZ spin-1/2 formulation this is known to correspond to twisted boundary conditions on the chain. Thus, considering that the calculations presented here for the spin-1 case are performed with PBC, it is not surprising that the scaling dimensions associated with $O^{x,z}_S$ are absent in the numerical spectra. Nonetheless, we believe that the closeness to K/2 is not accidental, and in Ref. [17] we speculated on the possibility that the longitudinal string correlation function (Eq. (21) with α = z) acquires, in the continuum limit, the asymptotic form

$$G^z_S(r) \sim \left\langle e^{i\sqrt{\pi K}\,\Phi(0)}\; e^{-i\sqrt{\pi K}\,\Phi(r)} \right\rangle \sim r^{-K/2},$$

so that the lattice string $S^z_0 \exp\left( i\pi \sum_{l=1}^{r-1} S^z_l \right)$ is somehow related to the continuum twist operator $\exp[\pm i\sqrt{\pi K}\,\Phi(r)]$.

Conclusions.

In the present paper we have reviewed, to the best of our knowledge, part of the state of the art concerning Heisenberg spin chains, including biquadratic interaction terms and various kinds of anisotropies, concentrating on the rôle of hidden symmetries in the various families of spin models. We have discussed how the inclusion of anisotropy terms can drive the "standard" Heisenberg chain away from the Haldane phase, and how hidden symmetries (and their spontaneous breaking) are of great help in classifying the "massive" (gapped) phases of the model. The location of the critical lines of the model has been accurately obtained numerically, confirming and extending earlier predictions [13].
The combined use proposed here of analytical (CFT) and numerical (DMRG) techniques to investigate the critical properties of the models has proved to be a rather successful strategy to clarify the nature and structure of the critical phases of the models. Numerical simulation techniques (Monte Carlo and DMRG, to name only the best known) are being used ever more frequently and extensively in almost all branches of Theoretical Physics. Blind use of them can, however, be more dangerous than helpful in understanding the physical properties of the systems they are employed to study. We believe instead that an "educated" use of numerical techniques in support of analytical approaches, as described here, can result in a powerful synergy that can be of great help in understanding the physics of many problems in Theoretical Physics. | 2014-10-01T00:00:00.000Z | 2003-09-29T00:00:00.000 | {
"year": 2003,
"sha1": "49fa96859db8c9bc886465b8d3e59065108cce52",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0309658",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "49fa96859db8c9bc886465b8d3e59065108cce52",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
6504949 | pes2o/s2orc | v3-fos-license | What makes a phase transition? Analysis of the random satisfiability problem
In the last 30 years it was found that many combinatorial systems undergo phase transitions. One of the most important examples of these can be found among the random k-satisfiability problems (often referred to as k-SAT), asking whether there exists an assignment of Boolean values satisfying a Boolean formula composed of clauses with k random variables each. The random 3-SAT problem is reported to show various phase transitions at different critical values of the ratio of the number of clauses to the number of variables. The most famous of these occurs when the probability of finding a satisfiable instance suddenly drops from 1 to 0. This transition is associated with a rise in the hardness of the problem, but until now the correlation between any of the proposed phase transitions and the hardness is not totally clear. In this paper we will first show numerically that the number of solutions universally follows a lognormal distribution, thereby explaining the puzzling question of why the number of solutions is still exponential at the critical point. Moreover we provide evidence that the hardness of the closely related problem of counting the total number of solutions does not show any phase transition-like behavior. This raises the question of whether the probability of finding a satisfiable instance is really an order parameter of a phase transition or whether it is more likely to just show a simple sharp threshold phenomenon. More generally, this paper aims at starting a discussion where a simple sharp threshold phenomenon turns into a genuine phase transition.
Introduction
The analysis of phase transitions and the associated microscopic structures is a well-developed scientific approach in physics. In real systems, the observation of phases and their different macroscopic behavior comes first, and a subsequent analysis reveals how the structure of one phase is transformed into the structure of the other phase. This transition is associated with the change of a so-called control parameter, such as the temperature. Most interesting are abrupt changes in functions measuring the macroscopic behavior, e.g., the density or heat capacity, that happen with small changes in the control parameter. The function showing the non-analytic behavior or singularities is the order parameter of the system and can be seen as a fingerprint of the underlying phase transition. Starting with the analysis of random graphs [2] and simple percolation models [23,3], combinatorial objects came into the focus of statistical physicists. A thorough analysis revealed that these simple systems also show phase transitions.
Whereas in percolating systems the phases and their different behaviors are visually accessible, this is not the case for other combinatorial systems with a proposed phase transition. One of the most important of these systems is the so-called satisfiability problem (SAT). Given some Boolean formula, it asks whether there exists an assignment of Boolean values to its variables such that it is satisfied, i.e., such that it evaluates to true. SAT problems belong to the set of NP-hard problems, i.e., so far there is no algorithm to solve them in polynomial time [8]. As with many other NP-hard problems, satisfiability problems arise not only in theory but also in industry, e.g., in automotive configuration [22], in software and hardware design [12], biological sciences [7], and artificial intelligence [1]. Since satisfiability problems are so abundant, understanding when and why they are hard and developing better algorithms is crucial. A classic family for analyzing the hardness is the random k-SAT family in which the k variables of each clause are drawn uniformly at random and without repetition from the set of all variables. Each variable is negated with probability 0.5. The ratio between the number of clauses m and variables n, denoted by α ≡ m/n, parameterizes the probability P[UNSAT] of finding an unsatisfiable instance at a given α. It was observed early [4,17] that plotting P[UNSAT] against α shows a sharp threshold behavior at some critical α_c. Furthermore, around this α_c it also takes various algorithms the longest time to solve random 3-SAT problems, i.e., the problems are hard. To quantify the hardness, either the number of distinct steps of the solving algorithm is counted, or simply the time measured until the problem is solved. The divergence of the hardness together with the sudden jump of P[UNSAT] at some critical α resembles a phase transition-like behavior [9,21]. The numerical analysis of this sharp threshold behavior resulted in α_c = 4.15 ± 0.05 [10]. Kirkpatrick and Selman could also show that there is a non-trivial finite size effect, i.e., that the width of the window in which the transition takes place is proportional to n^{−2/3} for 3-SAT. It is thought that this sharp threshold phenomenon is of first order, i.e., in the limit of infinite system size, P[UNSAT] = 0 for α < α_c and P[UNSAT] = 1 for α > α_c. For 2-SAT, this could be rigorously shown [5], but for all k ≥ 3 it is an open question. Note that 2-SAT itself is not NP-hard [8]. To analyze the nature of this sharp threshold behavior, the k-SAT problem can also be represented as a spin-glass model, and different theoretical analyses have arisen from this approach [19,15,16,11]. Since these theoretical analyses rely on the thermodynamical limit whereas numerical approaches can only tackle system sizes of up to 100 or even be restricted to system sizes below 40 (depending on the specific question), it is not surprising that none of the theoretical approaches matches the numerical value of α_c = 4.15 ± 0.05. The approach that comes closest is based on the analysis of survey propagation, which results in α_c = 4.267, a value believed to be exact [16]. The applied order parameter is very technical and it is difficult to analyze how it relates to P[UNSAT].
To find out more about the behavior of 3-SAT, we first repeated the experiment of Kirkpatrick and Selman, and increased the then available system size from 100 to 200. A subsequent finite size scaling is much more in accordance with the old value of α_c = 4.15 ± 0.05 than with α_c = 4.267. In the second step, we aim at understanding a different parameter, namely the entropy of the system, i.e., the logarithm of the number of solutions a satisfiable instance has. It was shown by an approach from statistical physics that the entropy is still finite at α_c, i.e., the number of solutions is still exponential [18]. Monasson and Zecchina state that "hence (...) the transition itself is due to the abrupt appearance of logical contradictions in all solutions and not to the progressive decreasing of the number of these solutions down to zero." Such a sudden emergence of logical contradictions on a macroscopic level would be a good sign of a genuine phase transition.
In this paper we give numerical evidence that the explanation for the finite entropy at α_c is far simpler, namely that the average number of solutions of satisfiable instances is universally described by a lognormal distribution over a range of different system sizes and 4.0 ≤ α ≤ 4.5. This means that, although many of the instances are already unsatisfiable at α_c, some of the satisfiable instances have a large number of solutions left, which accounts for the high average number of solutions. A lognormal distribution can be the result of the iterative application of a factor drawn from some distribution. This raises the question of whether the phase transition of P[UNSAT] may be only a sharp threshold phenomenon that is not based on the non-trivial restructuring of interacting entities. In the following we will first discuss our numerical findings regarding the average number of solutions, then give an alternative explanation for the rise of the hardness at α_c, and finally discuss some simple models with different kinds of sharp threshold phenomena. The last model shows qualitatively the same behavior as P[UNSAT].
In summary, we do not attack the idea that k-SAT shows phase transitions in general, but we put on display some simple explanations and models that raise doubt about whether the proposed phase transition of P[UNSAT] is more than a simple sharp threshold phenomenon. In general, the once obvious border between first order and continuous phase transitions and their respective properties has become so blurred that scientists from neighboring disciplines, e.g., computer scientists or chemists, and even statistical physicists not specialized in spin-glasses, have difficulty finding out what properties define a phase transition. Our main contributions in this paper are thus the above mentioned toy models that are so simple that they cannot be considered to have a genuine phase transition. Still, they mimic some important properties of the 3-SAT system. With this we would like to open a discussion with the spin-glass community to understand what differentiates the simple models from 3-SAT and what exactly makes a phase transition. The paper thus aims at starting a discussion of the difference between a mere sharp threshold phenomenon and a genuine phase transition. We hope that a discussion of what properties are required for acknowledging a phase transition will help to support the interdisciplinary discussion in this area.
The paper is organized as follows: After giving some definitions in Sec. 2, we will discuss in Sec. 3 the question of whether the sharp threshold phenomenon of P[UNSAT] is directly caused by a continuous phase transition of an order parameter related to P[UNSAT]. We will furthermore discuss whether there is any evidence at all for the existence of two different phases. Sec. 4 finally introduces two simple statistical models that show similar phase transition-like behavior without any underlying interacting elements. The first one is clearly trivial, while the second shows a non-trivial finite size scaling effect. From these models, we develop a simple toy model that qualitatively shows the same properties as P[UNSAT] in random 3-SAT. Finally, we discuss our findings in Sec. 5.
Definitions
Let V be a set of n variables {v_1, . . . , v_n}. Each variable has two literals, a positive literal denoted by v_i and a negated literal denoted by −v_i. A Boolean formula in conjunctive normal form (CNF) consists of m subsets of or-connected literals, called clauses or constraints. The clauses are and-connected. An assignment is a function a : V → {true, false} that assigns each variable a Boolean value, i.e., true or false. With a given Boolean formula in CNF and a given assignment, the formula can be evaluated: a positive literal which is assigned true evaluates to true, and to false if it is assigned false. A negated literal which is assigned false evaluates to true and to false otherwise. A clause evaluates to true if at least one of its literals evaluates to true, and the whole formula evaluates to true if all clauses evaluate to true. The satisfiability problem, or SAT problem for short, asks whether a given Boolean formula has at least one assignment such that it evaluates to true. Such an instance is called satisfiable (sat), and one where no satisfying assignment can be found is called unsatisfiable (unsat). If all clauses contain k literals, we speak of k-SAT. If, moreover, the instance is created by choosing the k literals uniformly at random without repetition, we speak of random k-SAT. α denotes the ratio between the number of clauses m and the number of variables n in a random k-SAT instance.
For any two assignments a and a′, the Hamming distance d(a, a′) is defined as the number of variables to which they assign different values.
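To make these definitions concrete, the following short Python sketch (ours, not from the original paper; it assumes the common DIMACS-style convention of encoding the literal v_i as the integer +i and −v_i as −i) evaluates a CNF formula under an assignment and computes the Hamming distance between two assignments:

```python
from typing import Dict, List

Clause = List[int]      # a clause: or-connected literals, e.g. [1, -2]
Formula = List[Clause]  # a CNF formula: and-connected clauses

def evaluate(formula: Formula, a: Dict[int, bool]) -> bool:
    """True iff every clause has at least one literal that evaluates to true."""
    return all(any(a[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula)

def hamming(a: Dict[int, bool], b: Dict[int, bool]) -> int:
    """Number of variables to which a and b assign different values."""
    return sum(a[v] != b[v] for v in a)

# Example: (v1 or -v2) and (-v1 or v2 or v3)
f = [[1, -2], [-1, 2, 3]]
print(evaluate(f, {1: True, 2: True, 3: False}))   # True
print(evaluate(f, {1: False, 2: True, 3: False}))  # False: first clause fails
print(hamming({1: True, 2: True}, {1: True, 2: False}))  # 1
```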
The SAT problem can be solved by different algorithms, the most widely used being based on the following scheme, first proposed by Davis et al. [6]. It is a kind of trial-and-error procedure in which a growing subset of variables is assigned Boolean values until we either find a solution or encounter a contradiction. In each step, take one of the variables that is yet unassigned and assign either true or false to it. Say, variable v_i is assigned true. Now, the instance can be simplified by (temporarily) removing all clauses which contain the positive literal of v_i since they are already satisfied. Furthermore, we can temporarily remove the negated literal from all clauses since it cannot contribute to the satisfaction of the clauses it is contained in. If after this step all clauses have been removed, we have found a solution to the problem. If we encounter an empty clause, all of its originally contained variables have been assigned the wrong value and thus we have found a contradiction. In this case, we have to backtrack and restore the instance up to the point where v_i was unassigned. Then, the same procedure is tried, but assigning false to v_i. As long as there is no solution and no contradiction in the simplified instance, we simply proceed with the partial assignment. If all decisions lead to contradictions, the instance is unsatisfiable. There are many improvements to this basic scheme, e.g., specifying an order in which the variables are assigned [13] and learning [14]. One basic improvement is unit propagation: whenever a clause has only one literal left, it can only be satisfied when the variable's assignment is set accordingly. Note that the assignment of such a variable is called a dependent decision, while the assignment of Boolean values to all other so-called free variables is called an independent decision.
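A minimal sketch of this scheme in Python, with unit propagation (our illustration of the procedure described above, not the authors' implementation; clauses use the same integer-literal encoding as before):

```python
from typing import Dict, List, Optional

def dpll(clauses: List[List[int]],
         a: Optional[Dict[int, bool]] = None) -> Optional[Dict[int, bool]]:
    a = a or {}
    # Simplify: drop satisfied clauses, strip literals that evaluate to false.
    simplified = []
    for clause in clauses:
        kept, satisfied = [], False
        for lit in clause:
            var = abs(lit)
            if var in a:
                if a[var] == (lit > 0):
                    satisfied = True
                    break
            else:
                kept.append(lit)
        if satisfied:
            continue
        if not kept:                    # empty clause: contradiction
            return None
        simplified.append(kept)
    if not simplified:                  # all clauses removed: solution found
        return dict(a)
    for clause in simplified:           # unit propagation (dependent decision)
        if len(clause) == 1:
            lit = clause[0]
            return dpll(simplified, {**a, abs(lit): lit > 0})
    var = abs(simplified[0][0])         # independent decision on a free variable
    for value in (True, False):
        result = dpll(simplified, {**a, var: value})
        if result is not None:
            return result
    return None                          # both branches failed: backtrack

print(dpll([[1, 2], [-1, 2]]))   # e.g. {1: True, 2: True}
print(dpll([[1], [-1]]))         # None: unsatisfiable
```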
Random 3-SAT
It is well known that 3-SAT belongs to the set of the so-called NP-hard problems, i.e., problems for which so far no algorithm with polynomial runtime has been found [8]. In the worst case, finding a solution to these problems can take exponential time such that even relatively small instances cannot be solved within months. On the other hand, many real-world SAT problems can be solved in a short time despite their huge size. Since this behavior is not well understood, research has been dedicated to understanding why and how hard instances emerge and what their structure looks like.
It was observed early [17,4] that plotting P[UNSAT] against α shows a sudden jump at some value α_c independent of the system size n. Furthermore, around this value α_c it also takes various algorithms the longest time to solve random 3-SAT problems. This divergence of the hardness and the sudden jump of P[UNSAT] at some universal α resembles a phase transition-like behavior [9,21]. In their classic paper from 1994, Kirkpatrick and Selman used the well-understood model of percolation in growing random graphs and the techniques deployed in this area for the identification of critical phenomena in random 3-SAT: "We use finite-size scaling, a method from statistical physics in which the observation of how the width of a transition narrows with increasing sample size gives direct evidence for critical behavior at a phase transition." They scaled the curves for different k according to n^ν (α − α_c)/α_c and evaluated α_c to be 4.15 ± 0.05 and the critical exponent ν for k = 3 to be 2/3 [10]. Today a value of α_c = 4.267 is often cited for the P[UNSAT] threshold [16], but plotting P[UNSAT] against the rescaled parameter y = n^{0.66}(α − 4.12)/4.12 yields a much better scaling than that for the rescaled parameter y = n^{0.66}(α − 4.267)/4.267 (see Figure 1). The reason for this mismatch is not totally clear. It could be due to the still quite small system size in our experiments.
In this paper we suggest that the observed threshold phenomenon of P[UNSAT] is not so much a sign of criticality as simply a consequence of the law of large numbers. In general it is not easy to prove that an observed sharp threshold behavior is not caused by the critical behavior associated with a phase transition, since there are many possible interactions that could be causing it. In the next section we will first analyze the typical number of solutions, which is closely related to the entropy of the system.
Number of solutions
A first-order phase transition is deeply connected to a sudden increase in order. For example, when water freezes the molecules are fitted into a neat structure that shows high order. It is difficult to see intuitively what kind of order is measured by P[UNSAT]. However, when a continuous phase transition is studied using an existence parameter instead of a quantitative parameter, it may seem to be rather like a first order transition, as we will exemplify in the case of site percolation in 2D. Here, one can ask about the behavior of two different but related parameters: "Is there a biggest connected component (BCC) of size O(n)?" (this is the existence parameter) or "What is the size of the BCC?" (and this is the quantitative parameter). Plotting the relative size of the BCC shows a continuous phase transition at some critical value, i.e., the second parameter is a quantitative one that reveals the complex behavior of the system. At the critical value, a finite fraction of all vertices is spanned by the BCC, i.e., it has size O(n). Since the first parameter just asks for the existence of a BCC with size O(n), it will trivially show a first-order phase transition-like behavior at the same value [23]. Thus, in this system, the seemingly first-order phase transition-like behavior of the existence parameter is just a trivial implication of the true continuous phase transition concerning the quantitative parameter. Since P[UNSAT] asks whether there exists a solution or not, we first analyzed whether the seemingly first-order phase transition of P[UNSAT] also belongs to this type, i.e., whether it is an indicator of a more complex continuous phase transition of a related quantity like the behavior of the number of solutions.
An instance is unsat if and only if it has no solution - this is a typical existence parameter. A possible quantitative parameter of which this existence parameter could be an indicator is the average number of solutions. The logarithm of this quantity is the entropy of the system at a given α [18]. Figure 2 shows that the average number of solutions <s> can be fitted to a simple exponential law, i.e., <s> = 2^n (7/8)^{αn}. This simple behavior of the average number of solutions coincides with the so-called annealed estimate of the number of solutions [10], which is based on the fact that any solution will be 'killed' with a probability of 1/8 by a clause drawn uniformly at random. But although this estimate has been used for a long time, it is surprising that the average number of solutions follows it so closely, since it does not take into account that in reality the solutions' probabilities of being deleted are dependent: i.e., two very similar solutions have a higher probability of being killed by the same constraint, whereas two solutions that assign the opposite values to variables can never be killed by the same constraint. Thus, it is still surprising that the average number of solutions universally follows this simple law for all system sizes. Furthermore, Figure 2 reveals that at α_c there is - on average - still an exponential number of solutions, although we know that the probability of finding a satisfiable instance drops to zero for large system sizes. This has also been proven rigorously by [18]. It is clear that without the gap between the critical value α_c and the point α = 5.19 where the average number of solutions becomes 1, there would not have been much interest in the seemingly critical behavior of P[UNSAT]. The only possibility to achieve an exponential average number of solutions at α_c and P[UNSAT] → 0 for n → ∞ is to have a strongly right-skewed distribution of the number of solutions an instance has. Indeed, as Figure 3 shows, the distribution for satisfiable instances displays a universal behavior. Over an interval of α = 4.0 − 4.5 and different system sizes n = 30 − 100, the cumulative distribution of the number of solutions of satisfiable instances, P(s), can be fitted by the cumulative distribution function of a lognormal distribution, P(s) = (1/2)[1 + erf((ln s − µ)/(√2 σ))], where µ and σ correspond to the mean and the standard deviation of ln(s), and erf denotes the error function. The lognormal distribution of s explains that there is no need for a sudden drop of <s> at α_c, since the average is dominated by some instances with a high number of solutions, although most instances are already unsatisfiable. In summary, neither the typical number of solutions nor its distribution shows critical behavior around α_c. Since we know now that the distribution of s is highly skewed, another intuitive measure is the quenched average, i.e., the average <log(s + 1)> of the logarithm of the number of solutions, shown in Figure 4. Note that also this does not show any interesting behavior around α_c = 4.15.
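As a small illustration of these two formulas (our sketch; the numbers below follow directly from the annealed estimate, not from the paper's simulations):

```python
import math

# Annealed estimate: each random 3-SAT clause 'kills' a fraction 1/8 of the
# assignments, so <s> = 2^n * (7/8)^(alpha*n); work with log2 to avoid overflow.
def log2_avg_solutions(n: int, alpha: float) -> float:
    return n * (1 + alpha * math.log2(7 / 8))

# <s> crosses 1 where 1 + alpha*log2(7/8) = 0, independently of n:
print(-1 / math.log2(7 / 8))            # ~5.19, the value quoted in the text

# Lognormal CDF used to fit the distribution of solution counts:
def lognormal_cdf(s: float, mu: float, sigma: float) -> float:
    return 0.5 * (1 + math.erf((math.log(s) - mu) / (math.sqrt(2) * sigma)))
```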
In summary, it does not seem to be the case that the sharp-threshold phenomenon of P[UNSAT] is the simple indicator of a related, continuous phase transition of a quantitative measure.
Are there two different phases in k-SAT?
This leads us back to the question of whether we really have two phases in this system, one consisting of satisfiable instances and one consisting of unsatisfiable instances. In k-SAT, the main problem is that we cannot observe two different phases by eye. In this special case, the sharp threshold behavior was observed first, and this led to the definition of the "phases", instead of the phases being observed and defined first before analyzing the transition between them. This happened because the sharp threshold phenomenon divided the instances into two different groups that match our intuition. Maybe, however, an unsatisfiable instance is just an instance with 0 solutions and not substantially different from an instance with exactly 1 solution. The question is thus whether the two 'phases' are just a differentiation that is convenient for computer scientists or whether they relate to a small structural change in some interaction on a microscopic scale that leads to a huge change in macroscopic behavior.
Hardness has been used to argue that there are two different phases, since it shows a diverging behavior around α_c. Of course, hardness, measured as the number of independent decisions of a DPLL-like algorithm [6] or simply by the runtime, depends on the specific implementation. Nonetheless, the basic picture is always the same, namely that it peaks around α_c. The question is whether this maximum is genuine or directly dependent on the definition of a satisfiable and an unsatisfiable instance. We will give evidence here that the occurrence of a maximal runtime around α_c is directly implied by the definition of a decision algorithm. The problem is that a decision algorithm does different things in the two cases: if it runs on a satisfiable instance, it stops after the first solution is encountered. Otherwise, a proof has to be given that no solution exists. For DPLL-like algorithms [6], this means that in the first case only some fraction of the whole decision tree has to be searched, while for unsatisfiable instances the whole tree has to be traversed. We can assume two things: 1. the decision trees of typical satisfiable and typical unsatisfiable instances at a given α are of approximately the same size; 2. the locations of the solutions in the leaves of the tree are uniform.
Thus, let the size of a typical decision tree at a given α be denoted by t(α). Even if an instance has just one solution, we will on average traverse only half of the tree to find it. For an unsatisfiable instance at the same α, we will on average take twice as long, since the whole tree has to be traversed. Since at α_c there are more unsatisfiable than satisfiable instances, this is already an explanation for the increasing runtime at α_c. Of course, the behavior of the average hardness is a bit more complicated than this. The average hardness h(α) can be dissected into h_sat(α) and h_unsat(α), the hardness of satisfiable and unsatisfiable instances at α. With this, h(α) = (1 − P[UNSAT]) · h_sat(α) + P[UNSAT] · h_unsat(α). Note that the hardness h_unsat(α) is simply given by the average size t_unsat(α) of the decision tree of unsatisfiable instances at α. While h_unsat(α) = t_unsat(α) seems to be a simple, exponentially decreasing function of α (see Figure 5a), h_sat(α) is at a maximum around α_c (see Figure 5b). h_sat(α) can be approximated as the product of t_sat(α), the size of the average decision tree of satisfiable instances at α, and φ_T(α), the average fraction of the decision tree that is traversed before a solution is found. While the first is decreasing with α, the latter is increasing with α. Thus, the maximum around α_c in h_sat(α) is introduced artificially by stopping after the first solution is encountered. If we instead look at the runtime of an algorithm that counts all the solutions an instance has, we see no singularity of the hardness around α_c, as Figure 5 shows. We thus conclude that the hardness supports the view that there are no two phases, since the size of the decision tree decreases smoothly with growing α, at least for the system sizes that could be computed. Summarizing the results so far, we could not find a measure which is related to the existence question measured by P[UNSAT] and which shows a continuous phase transition. We also did not find any measure that is independent of P[UNSAT] and therefore proves that an unsatisfiable instance is indeed structurally different from an instance with 1 solution. Instead, we will now present results from two very simple statistical systems that show a sharp threshold phenomenon. We will then use these systems to develop a simple toy model that shows qualitatively the same behavior as 3-SAT and shows quite clearly that no phase transition is needed to produce a 3-SAT-like system.
Sharp threshold phenomena in simple statistical systems
In this section we discuss two simple stochastic processes. The first one is a simple coin-tossing example that is discussed in Sec. 4.1, and the second is a statistical problem, called the coupon collector's problem, discussed in Sec. 4.2.
Throwing a Biased Coin
In the book Computational complexity and statistical physics, the editors briefly discuss the question of whether sharp thresholds are more than just an effect of the law of large numbers. They contrast SAT with the following simple system [21, p.8]: a biased coin is tossed that shows heads with probability β and tails with probability 1 − β. Let an instance consist of n̂ tosses and let n̂ define the system size. We expect the chance P[#heads > #tails] of seeing more heads than tails in one of these instances to change from 0 for β < 0.5 to 1 for β > 0.5 with an ever-increasing sharpness with growing n̂. With this example, Percus et al. indicate that sharp threshold phenomena per se are not so surprising, but they do not settle the question of whether this simple system will already show finite size scaling. The question is thus whether the curve P[#heads > #tails] for low n̂ just fluctuates more strongly or is indeed less steep than that of a larger system. This question is settled by Figure 6. Figure 6a shows the fraction of 10,000 instances of n̂ tosses each where more heads than tails were shown. The curves meet approximately at β = 0.5.
Plotting them against the rescaled parameter y = n̂^{0.5}(β − 0.5)/0.5 shows a perfect universal scaling. This model is especially interesting since here, too, the sharp threshold behavior results from asking a peculiar kind of question. Instead of looking at the more natural question of P[heads], which is of course identical to β, the behavior artificially becomes a sharp threshold behavior by asking when it is more likely to see more heads than tails in any given system size. Moreover, this most simple system also displays a finite size scaling effect. Naturally, the corresponding exponent of 0.5 is the one dictated by the law of large numbers. Thus, although a finite size scaling effect can be seen, nobody would regard it as the effect of a phase transition since the exponent is a trivial one. The next example is much more interesting since it shows a non-trivial exponent.
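This threshold is easy to reproduce numerically; the short simulation below (our own sketch, not the book's code) estimates P[#heads > #tails] for a few system sizes:

```python
import random

def frac_more_heads(n_hat: int, beta: float, trials: int = 10_000) -> float:
    """Fraction of instances of n_hat biased tosses showing more heads than tails."""
    more = 0
    for _ in range(trials):
        heads = sum(random.random() < beta for _ in range(n_hat))
        more += heads > n_hat - heads
    return more / trials

# The jump at beta = 0.5 sharpens with system size; rescaling the abscissa
# as y = n_hat**0.5 * (beta - 0.5) / 0.5 collapses the curves.
for n_hat in (10, 100, 1000):
    print(n_hat, [round(frac_more_heads(n_hat, b), 2) for b in (0.45, 0.5, 0.55)])
```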
The Coupon Collector's Problem
The simple system of coin tossing cannot easily be likened to 3-SAT. We will thus introduce a second statistical problem called the coupon collector's problem: let there be a set of n′ distinguishable objects called coupons, identified by a coupon ID from 1 to n′. Each coupon is contained multiple times in a large multi-set, and collectors can purchase coupons from this multi-set by drawing one item uniformly at random. We will assume that each coupon ID has the same probability of being drawn. The coupon collector's problem asks how many draws are expected to be needed until each coupon ID has been drawn at least once, i.e., the question of when the collection is completed. In essence, once the collector has collected k different IDs, the chance of picking a new ID is (n′ − k)/n′ and thus the expected time to find a new one is n′/(n′ − k). Summing over these expected times gives n′/n′ + n′/(n′ − 1) + . . . + n′/1 = n′(1/1 + 1/2 + . . . + 1/n′) = n′ H_{n′}. This can be approximated as n′ ln n′ + Υn′ + 1/2 + O(1), where Υ ≃ 0.57722 denotes the Euler-Mascheroni constant. The variance is bounded from above by 2n′^2.
For a set of x collections, we now define P[full, t] to be the fraction of full collections after t draws. Of course, the number of draws depends on the system's size. We thus define γ := t/(n′ ln n′ + 0.577n′ + 0.5) and plot P[full, γ] against γ. Figure 7a shows the result for different system sizes from 10 to 1000 in dependence of γ. Interestingly, this looks like a phase transition at a critical γ_c = 1. Furthermore, we define a rescaled parameter z = n′^{0.17}(γ − γ_c) against which we plot the functions, as shown in Figure 7b.
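A direct simulation of P[full, γ] (again our own sketch of the experiment described above) takes only a few lines:

```python
import math
import random

def draws_to_complete(n_coupons: int) -> int:
    """Uniform draws until every coupon ID has appeared at least once."""
    seen, draws = set(), 0
    while len(seen) < n_coupons:
        seen.add(random.randrange(n_coupons))
        draws += 1
    return draws

def frac_full(n_coupons: int, gamma: float, trials: int = 1000) -> float:
    """P[full, gamma]: fraction of collections complete after gamma * E[T] draws."""
    t = gamma * (n_coupons * math.log(n_coupons) + 0.577 * n_coupons + 0.5)
    return sum(draws_to_complete(n_coupons) <= t for _ in range(trials)) / trials

for n in (10, 100, 1000):
    print(n, [round(frac_full(n, g), 2) for g in (0.8, 1.0, 1.2)])
```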
Note that the critical exponent is far away from the trivially expected 0.5. We can now define two phases: full collections and incomplete collections. With this, Figure 7 shows clearly that there exists a first-order phase transition between the two phases. Or does it? But of course, a system as simple as the coupon collector's problem does not meet the intuition about a system with a phase transition and it especially cannot exhibit any non-trivial collective behavior. Just defining that one condition of a system, i.e., whether a collection is complete or not, represents two phases does not make them different phases. Also, the finite size scaling effect cannot justify the notion of a phase transition since it seems to be mainly an effect of the law of large numbers.
In the following we will highlight the connection between the coupon collector's problem and the behavior of P[UNSAT] in 3-SAT.
Connection between random k-SAT and the coupon collector's problem
When α = 0, each random k-SAT instance has exactly 2^n solutions. Every added clause C = {l_1, l_2, . . . , l_k} excludes all solutions in which all negated literals l_i are assigned true and all positive literals l_j are assigned false. That is, each added clause extinguishes a fraction of 2^{−k} of all remaining solutions. Of course, some of the solutions might already have been extinguished by a clause added earlier. An instance becomes unsatisfiable when all of its possible assignments have been extinguished by some clause. Thus, the question is very similar to that of the coupon collector's problem: in each time step we draw uniformly at random k literals that extinguish a 2^{−k}-th of all possible assignments, and we want to know when all possible assignments are extinguished. Of course, there are two main differences: we draw more than one 'coupon' at once, namely 2^{n−k}, and moreover these are not independent of each other. The first condition alone would just reduce the expected completion time by some factor, but the effect of the second condition is harder to estimate.
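This 'assignments as coupons' picture can be checked exhaustively for small n. The sketch below (ours; brute force, so it only works for small systems) adds random 3-SAT clauses one by one and counts the surviving assignments, which fluctuate around the annealed value 2^n (7/8)^m:

```python
import itertools
import random

def random_3sat_clause(n: int):
    """Three distinct variables, each negated with probability 0.5."""
    return [v if random.random() < 0.5 else -v
            for v in random.sample(range(1, n + 1), 3)]

def surviving(n: int, clauses) -> int:
    """Exhaustive count of assignments not extinguished by any clause."""
    count = 0
    for bits in itertools.product((False, True), repeat=n):
        a = dict(enumerate(bits, start=1))
        if all(any(a[abs(lit)] == (lit > 0) for lit in c) for c in clauses):
            count += 1
    return count

n, clauses = 10, []
for m in range(1, 5 * n + 1):
    clauses.append(random_3sat_clause(n))
    if m % n == 0:
        print(f"m={m} (alpha={m/n:.0f}): {surviving(n, clauses)} solutions left")
```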
Note that there is really no kind of interaction between the clauses. Given a set of solutions S that are left for some instance I, adding a clause will lead to the following reduced set of solutions S′: let s ∈ S be any solution that does not satisfy the newly added clause. This cannot be a solution of the new instance, and thus it is removed from S. Let now s ∈ S be some solution that satisfies the newly added clause. Since it was contained in S, this means that the assignment given by s satisfies at least one literal in all the clauses added so far plus at least one in the newly added clause. Thus, this solution is in S′. The clauses are independent of each other in the sense that the only solutions extinguished by a clause are those that do not satisfy it. There is no cumulative effect of the clauses such that after adding some of them a whole avalanche of solutions is extinguished. Note, however, that the solutions in S are not independent of each other, since if s ∈ S, other solutions s′ with a low Hamming distance to s have a higher probability of being in S than those with a large distance.
A toy model for 3-SAT
Neither the coupon collector's problem nor coin tossing displays one of the main qualitative behaviors of 3-SAT. The main point of interest is the gap between α = 5.19, at which the average number of solutions reaches 1, and the point α_c at which most instances are already unsatisfiable. In the following, we introduce a toy model that shows this more involved behavior but is still quite simple and not likely to have a real phase transition. The toy model is based on the following idea: an instance is represented by a number, starting with 2^n. This represents the number of solutions left at a given α. Adding a clause is mimicked by multiplying this number by some reduction factor.
Of course, simply multiplying the number by 7/8 is already enough to produce the average number of solutions shown in Figure 2, and also a sharp threshold behavior of P[UNSAT]. But, unfortunately, the latter takes place at α = 5.19. Looking at the real reduction factor, it turns out that its distribution broadens with α and is shifted to the right. We used this observation for the toy model of random 3-SAT, in which we draw a multiplicative factor from a normal distribution with a standard deviation σ = 0.0585 · α and an α-dependent mean µ(α). If the drawn number is lower than 0 or higher than 1, we set it to 0 or 1, respectively. This factor is then multiplied with the current number of the toy model instance. An instance of the toy model represents an unsatisfiable instance if its number drops below 1. Thus, P_toy[UNSAT, α] gives the fraction of toy model instances at α whose number is below 1.
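A sketch of this toy model is given below. The fitted form of µ(α) is not reproduced in the text above, so the constant mean used here is a hypothetical placeholder (any mean near 7/8 gives the qualitative picture); the σ = 0.0585·α dependence follows the description:

```python
import math
import random

def toy_log_s(n: int, alpha_max: float, mu, sigma_slope: float = 0.0585) -> float:
    """Return ln(s) after alpha_max * n multiplicative reduction steps."""
    log_s = n * math.log(2)                 # start with 2^n 'solutions'
    for step in range(1, int(alpha_max * n) + 1):
        alpha = step / n
        f = random.gauss(mu(alpha), sigma_slope * alpha)
        f = min(max(f, 0.0), 1.0)           # clamp the factor to [0, 1]
        if f == 0.0:
            return float("-inf")            # the number dropped to 0
        log_s += math.log(f)
    return log_s

mu = lambda alpha: 7 / 8                    # HYPOTHETICAL stand-in for mu(alpha)
n, runs = 50, 500
unsat = sum(toy_log_s(n, 5.0, mu) < 0 for _ in range(runs))
print(f"P_toy[UNSAT] at alpha = 5.0: {unsat / runs:.2f}")  # s < 1 <=> ln(s) < 0
```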
In Figures 8 and 9 we show our simulation results for the toy model defined above. According to Figure 9a, the average number of solutions follows the same exponential behavior as expected from (1), and s drops below 1 at α = 5.19. Surprisingly, a sharp threshold behavior can be observed when plotting P[UNSAT] as a function of α, as shown in Figure 9b-c. Similar to 3-SAT, the transition point of the threshold behavior at α = 4.76 is separated from the point where s = 1 by a non-negligible gap. Furthermore, the distribution of the numbers P_toy[s, α] is best described by a lognormal distribution and shows the same universal scaling behavior as the real P[s, α] distribution, as displayed in Figure 9.
In summary, this toy model shows the same qualitative properties as the real 3-SAT system.
Summary
In this article we have raised the question of whether or not the sharp threshold phenomenon displayed by P[UNSAT] around α = 4.2 is a mere statistical event that does not relate to a phase transition in the classical sense. Our intuition is that there is no interaction of the elements of a Boolean instance, i.e., clauses, variables, or solutions, that leads to this phenomenon. We also see no principal difference between instances with at least 1 solution and those with no solution. We thus believe that the sharp threshold behavior of P[UNSAT] can rather be likened to the sharp threshold phenomena in simpler systems, like the coupon collector's problem. Of course, it is obvious that approaches from statistical physics have been successful in describing 3-SAT, and that some of these results led to the most powerful SAT-solvers based on survey propagation [19]. It is important to stress that we do not question the phase transitions shown for other order parameters like backbone size [20], clustering of the solution space [11], or the order parameter associated with the messages in survey propagation [16]; we question only P[UNSAT] as an order parameter of a real phase transition.
We conclude by describing one of the possibly many examples where asking somewhat different questions about the states of the same system may easily lead to the conclusion that more than one observable transition (and of different kinds) takes place in the system, even though it is widely accepted that there is only a single relevant transition in it.
Consider the Ising model on a face-centered cubic lattice. As the system cools down from high temperatures, we ask two simple questions (without loss of generality we can assume that for low temperatures the up spins take over): 1. What is the total spontaneous magnetization of the system? (ratio of up spins minus the ratio of down spins) 2. Is there a percolating cluster of down spins present?
The (textbook level) answers are: 1. Below a critical temperature T^I_c, the spontaneous magnetization sharply increases as the number of up spins starts to grow quickly. The associated transition is a prototype of continuous phase transitions (involving fluctuations, etc.). 2. At a temperature T^I_p < T^I_c, the probability that a percolating (connected infinite) cluster of down spins is present suddenly drops from 1 to 0 (as if a first-order transition was taking place).
We suggest that the lesson from this analogy is the following: the answer one gets depends very much on the question. Our conclusion is that it remains to be demonstrated that asking "What is the probability of having a satisfiable instance in 3-SAT?" is the right question. We argue that this particular question (order parameter) is not closely related to the variety of possible rich transitions taking place in this paradigmatic satisfiability problem.
Is there more to the question of whether or not P[UNSAT] actually undergoes a phase transition than a simple matter of naming? In this interdisciplinary field it is very important to be careful with terms; a phase transition is more than just a sharp threshold phenomenon and requires proof that the supposed phases behave differently in some aspect that is independent of their definition. The simple stochastic systems presented here stress the point that a sharp threshold phenomenon, even if accompanied by a non-trivial finite size scaling effect, is not enough to show a genuine phase transition - an independent proof of two different phases is needed in addition. We hope that this article will trigger a discussion about the observations to be made in categorizing a sharp threshold phenomenon as a non-trivial phase transition, and thereby support ongoing interdisciplinary research in this field. | 2010-02-01T10:21:37.000Z | 2010-02-01T00:00:00.000 | {
"year": 2010,
"sha1": "cea127aab8157fd3225e4d73d70f7718dfaa96b0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1002.0217",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "cea127aab8157fd3225e4d73d70f7718dfaa96b0",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science",
"Physics"
]
} |
247183625 | pes2o/s2orc | v3-fos-license | Radiotherapy-Related Fatigue Associated Impairments in Lung Cancer Survivors during COVID-19 Voluntary Isolation
The main objective of this study was to investigate the impairments presented after COVID-19 voluntary isolation by lung cancer survivors that experienced radiotherapy-related fatigue. In this observational study, data were collected after COVID-19 voluntary isolation. Patients were divided into two groups according to their fatigue severity reported with the Fatigue Severity Scale. Health status was assessed by the EuroQol-5D, anxiety and depression by the Hospital Anxiety and Depression Scale, and disability by the World Health Organization Disability Assessment Schedule 2.0. A total of 120 patients were included in the study. Patients with severe fatigue obtained higher impairment results compared to patients without severe fatigue, with significant differences in all the variables (p < 0.05). Lung cancer survivors who experienced severe radiotherapy-related fatigue presented higher impairments after COVID-19 voluntary isolation than lung cancer patients who did not experience severe radiotherapy-related fatigue, and showed high levels of anxiety, depression and disability, and a poor self-perceived health status.
Introduction
Concurrent chemo-radiation remains the standard treatment for most cancer patients [1]. Radiotherapy is an integral part of the multidisciplinary treatment of thorax and lung cancer [2], being indicated before and after surgery [3], after chemotherapy in unresectable tumors staged as extensive disease [4], and for frail patients for whom surgery is not recommended [5].
Radiotherapy is an oncological treatment that implies the apoptosis of both tumoral cells [6] and normal cells due to radiation toxicity [7], resulting in several side effects. The side effects of radiotherapy are an important factor that explains to a large extent the poor survival compared to surgery [8]; these treatments can lead to musculoskeletal and neuromuscular complications, or the dysfunction of a visceral organ such as the heart or the lungs [9]. Among all side effects (pain, cough, dyspnoea, insomnia, oesophagitis, weight loss, nausea, erythema [10]), fatigue is one of the most common symptoms reported [11,12]. Jones et al. (2016) concluded that one third of cancer survivors suffer clinically relevant levels of fatigue up to 6 years post-radiotherapy treatment [13].
Cancer-related fatigue (CRF) damages the quality of life of cancer patients [14]. It interferes with their daily activities [15], is associated with high levels of disability [13], and is reported as highly distressing. However, despite the adverse impact and the high prevalence, health care practitioners infrequently address it, and its impact on the quality of life of cancer patients is underestimated [16]. CRF is multifactorial; it is probably related to psychological and biochemical disorders [17], in addition to several negative health outcomes to be taken into account when managing it, including post-exertional malaise [18], physical pain, unrefreshing sleep [19], and poor general health status [20]. Furthermore, attention must be paid to the development of anxiety, depression, and other co-occurring physical symptoms as contributing factors [13]. A recent review demonstrated that CRF is reduced by exercise [21], but in the same way, reduced activity levels increase fatigue, and further reduce functional capacity and quality of life [22], even showing concerning effects on survivorship [23].
The COVID-19 pandemic has impacted lives around the world, causing high physical and psychological suffering, such that 40% of lung cancer patients have had their quality of life affected by home confinement [24]. The physical activity levels of cancer survivors have declined due to the lack of opportunities for physical exercise and the suffering mentioned above [25,26]. Moreover, the lockdown triggered by COVID-19 has led to an increase in distress among Spanish cancer patients [27], disrupting their psychological well-being and favoring the development of psychiatric disorders in these patients [28]. In line with all of the above, published studies have found that many people have an increased perception of fatigue during the COVID-19 era [29,30], highlighting possible psychologically-related symptoms of lockdown, quarantine, social distancing, and unprecedented pressure in daily life.
Considering the scientific background relating perceived fatigue and disability to voluntary isolation, and the lack of awareness of the impact of the COVID-19 lockdown on lung cancer patients, the purpose of this study was to investigate the impairments presented after COVID-19 voluntary isolation in lung cancer survivors who experienced radiotherapy-related fatigue.
We hypothesized that lung cancer patients who experienced severe radiotherapy-related fatigue present higher impairments during COVID-19 voluntary isolation than lung cancer patients who did not experience severe radiotherapy-related fatigue.
Participants and Study Design
A cross-sectional observational study was performed. Patients were recruited from the Oncological Radiotherapy Service of the "Hospital Universitario San Cecilio" (Granada, Spain) between June 2020 and May 2021. The included lung cancer survivors were aged 18-80 years and had been treated with radiotherapy, and all were informed and signed the informed consent form. Exclusion criteria were a diagnosis of fibromyalgia or a similar condition, diagnosis of any psychiatric disorder, diagnosis of COVID-19 in the previous year, currently undergoing chemotherapy, and any cognitive impairment affecting the possible completion of the evaluation protocol. We conducted this study in accordance with the Declaration of Helsinki of 1975, revised in 2013. The study protocol was reviewed and approved by the Biomedical Research Ethics Committee of Granada (Granada, Spain).
Group Assignment
Patients were divided into two groups - a group with severe fatigue and another group without severe fatigue - according to the cut-off point of the Fatigue Severity Scale (FSS), which was recorded after COVID-19 voluntary isolation. The FSS has been used in different chronic conditions [31][32][33], including advanced cancer [34]. This scale includes nine items that are scored on a seven-point scale, with totals ranging between 9 (minimum fatigue) and 63 (maximum fatigue). Cancer patients with a score of 42 or greater were included in the group with severe fatigue, and cancer patients with scores lower than 42 were included in the group without severe fatigue [35]. This cut-off point has been validated in previous studies of CRF [34].
The FSS has been used in cancer populations on more than 400 occasions [36]. It has shown good internal consistency in cancer subjects (Cronbach's α = 0.96) and in healthy subjects (Cronbach's α = 0.88). Additionally, it correlates well with the European Organization for Research and Treatment of Cancer (EORTC) fatigue scale (Rs = 0.83) and the bidimensional fatigue scale (Rs = 0.62), demonstrating its validity as a measure of fatigue.
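In code, the grouping rule amounts to summing the nine item scores and comparing against the cut-off; a minimal sketch (ours, for illustration only):

```python
def fss_group(item_scores):
    """item_scores: nine FSS responses, each 1-7; returns the group label."""
    assert len(item_scores) == 9 and all(1 <= s <= 7 for s in item_scores)
    total = sum(item_scores)                  # ranges from 9 to 63
    return "severe fatigue" if total >= 42 else "no severe fatigue"

print(fss_group([5, 5, 5, 4, 5, 5, 5, 4, 5]))  # total 43 -> "severe fatigue"
```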
Outcome Measures
Data were recorded after COVID-19 voluntary isolation. Anthropometric data, characteristics of the pathology, adjuvant treatment, and characteristics of radiotherapy treatment [37] were collected from medical history at admission.
The main study outcomes evaluated patient impairment, including anxiety and depression levels, disability, and self-perceived health status.
The Hospital Anxiety and Depression Scale (HADS) was used to assess anxiety and depression. The HADS is a self-reported measure that contains 14 statements scored from 0 to 3, which result in two subscales: anxiety (0-21) and depression (0-21). A score of 8 or more is an indicator of possible anxiety or depression [38]. This scale has presented good reliability and validity in Spanish populations [39], and it has been used in cancer patients [40].
Disability was measured by the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0). WHODAS 2.0 has good validity and high reliability [41,42]. This scale assesses how well patients are able to perform their activities. It contains six domains divided into 36 items, each scored from 1 (slight) to 5 (extreme/unable to do it); the total score ranges from 36 to 180, where greater scores mean greater disability [43].
The EuroQol-5D was used to assess self-perceived health status [44]. It has five items that evaluate five dimensions of health status: mobility, self-care, usual activities, pain, and anxiety and depression. Additionally, it has a visual analog scale ranging from 0 (the worst imaginable health) to 100 (the best imaginable health), where the patients indicate their self-perceived health status. The Spanish version of the EuroQol-5D has good validity and high reliability [44].
Statistical Analysis
IBM SPSS version 23.0 was used to perform the statistical analysis [45]. The Kolmogorov-Smirnov test was used to assess the normal distribution of the data, while Fisher's F-test determined the homogeneity of variances. Numerical variables were expressed as mean ± SD. When both conditions were met, a parametric test (Student's t test) was used; when either condition was not met, a nonparametric test (Mann-Whitney test) was used. In all cases, α = 5%. Prior to the between-group comparison, two groups were created according to fatigue status: having (FSS score ≥ 42) or not having (FSS score < 42) severe fatigue.
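A rough sketch of this decision rule with SciPy is shown below (our illustration, not the authors' SPSS procedure; note that, strictly, a Kolmogorov-Smirnov test against a normal with estimated parameters calls for a Lilliefors correction, which is omitted here for brevity):

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Student's t if both groups look normal with equal variances, else Mann-Whitney."""
    a, b = np.asarray(a, float), np.asarray(b, float)

    def looks_normal(x):
        return stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue > alpha

    # Fisher's F-test for equality of variances, computed directly.
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    p_f = 2 * min(stats.f.cdf(f, len(a) - 1, len(b) - 1),
                  stats.f.sf(f, len(a) - 1, len(b) - 1))

    if looks_normal(a) and looks_normal(b) and p_f > alpha:
        return "Student t", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney", stats.mannwhitneyu(a, b).pvalue
```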
Results
A sample of 130 lung cancer survivors treated with radiotherapy was screened for this study. From that sample, 10 patients were excluded due to difficulties communicating with the interviewers (n = 6) or not complying with the voluntary isolation (n = 4). Finally, 120 were included in the study. The 120 participants gave their consent to be evaluated and all completed the evaluation. When the presence of fatigue was evaluated, 80 patients did not present severe fatigue and 40 participants presented severe fatigue.
The characteristics of the participants are summarized in Table 1. Of the 120 lung cancer survivors enrolled, 80% were males and 20% were females. The study sample had average ages of 64.1 and 65.3 years in the two groups. The cancer entity of the patients without severe fatigue was non-small cell lung cancer. In the group with severe fatigue, 20 participants presented non-small cell lung cancer and the other 20 patients presented small cell lung cancer. Of all patients, 33.3% received surgery and 66.6% received chemotherapy. The number of radiotherapy sessions had a heterogeneous distribution that differed between the groups. The comparison of the main study outcomes between groups, using Student's t test and the Mann-Whitney test, is presented in Table 2. The HADS results present significant differences between groups (p < 0.001), with higher scores in the group with severe fatigue. All the domains of the disability questionnaire also showed significant differences (p < 0.001), with worse results in the group with severe fatigue for each domain and the total score.
The group with severe fatigue presented worse results on the EuroQol-5D scale, with significant differences in the mobility (p = 0.003) and self-care (p = 0.036) subscores, as well as in the activities of daily living, pain, and anxiety and depression subscores (all p < 0.001).
Discussion
The study aimed to identify impairments related to radiotherapy-related fatigue during COVID-19 isolation. As hypothesized, severe fatigue was associated with higher disability, worse anxiety and depression, and poorer perceived health status.
CRF is defined by the National Comprehensive Cancer Network (NCCN) as "a distressing persistent, subjective sense of physical, emotional and/or cognitive tiredness related to cancer or cancer treatment that is not proportional to recent activity and interferes with usual functioning" [46]. Although voluntary isolation is a necessary action to reduce the spread of the virus, it can trigger changes in living habits that represent a physiological challenge and can further interfere with usual functioning, implying significant health risks [47].
The high prevalence of radiotherapy-related severe fatigue found in this study is similar to that found in other studies, such as Tombal et al. [48]. Our study showed that lung cancer survivors who experienced severe fatigue presented higher levels of anxiety and depression than those who did not experience severe fatigue. Our results are in line with previous studies [13] that reported a significant correlation between mood disturbances and experiencing significant CRF, finding high levels of depression in 67% of participants who experienced significant CRF, compared to 14% of those who did not experience CRF. However, our study is the first to study radiotherapy-related fatigue of lung cancer survivors in relation to COVID-19 voluntary isolation.
Studies prior to the COVID-19 era [13] investigated other cancer entities, finding, as in our results, that cancer survivors with significant CRF also presented high levels of disability and depression. Disability levels have great importance during voluntary isolation, considering that high levels of disability can make it difficult to perform the activities of daily life at home. In this way, because of the constraints that it produces, disability is now considered as important as mortality from the public health point of view [43].
With respect to the EuroQol-5D results, our study found significant differences between groups. A recent study by Presley et al. [49], which examined lung cancer survivors with advanced-stage disease, found alterations in 37.6% of the patients for usual activities, 26.6% for mobility, and 5.2% for self-care. Additionally, they also concluded that these results were significantly associated with psychological symptoms.
Our study results explore a possible effect of COVID-19 isolation on lung cancer survivors. Studies in healthy populations [29,30] have highlighted the presence of fatigue during COVID lockdown and the relation between social distancing and high pressure in daily life. Nevertheless, this has not yet been studied in cancer survivors, even though it has been demonstrated that cancer patients have severe stress symptoms and psychological distress. Particularly, those with lung cancer are at higher risk and may need special attention [50].
Stressors (physical, mental, emotional, financial, etc.) are directly related to the development of symptoms, given the high comorbidity of mood disorders in patients with fatigue [51]. It has been concluded that the COVID-19 pandemic has disturbed the mood and, therefore, the fatigue of many patients [51]. In line with our results, Wang et al. [52] observed a population of 6213 cancer patients during the COVID-19 pandemic in which 23.4% presented depression, 17.7% anxiety, and 13.5% hostility; however, they did not examine the relation between these symptoms and disability, as we did in this investigation.
Future studies might focus on psychological disorders and the associated disability. A shared-decision care plan could improve recovery from these disorders, helping cancer patients to deal with impairments related to radiotherapy treatment, especially lung cancer survivors who present severe fatigue. Guidelines on the detection and treatment of CRF during active treatment, follow-up, and at end-of-life [53] have been developed by the American Society of Clinical Oncology and the Canadian Association of Psychosocial Oncology [54].
Limitations
This study has several strengths, such as the high significance of the results, a high response rate, and the use of a validated and widely used CRF tool with a cut-off point that shows good agreement with the current CRF diagnostic criteria [36]. However, the results must be interpreted taking into account the study limitations. A cross-sectional design was used, which provided a one-time estimate of CRF prevalence without follow-up over time, limiting our knowledge of the course of CRF. Additionally, the absence of a control group without isolation limits our ability to draw conclusions about the causality of associated factors.
Conclusions
Lung cancer survivors who experience severe radiotherapy-related fatigue present greater impairments after COVID-19 voluntary isolation than lung cancer patients who do not: they show higher levels of anxiety, depression and disability, and have a poorer self-perceived health status.
An important finding of this study is that severe fatigue and high psychological suffering in lung cancer patients are associated during COVID-19 voluntary isolation.
Clinicians can use the findings of this study to identify lung cancer survivors who are at higher risk of developing greater radiotherapy-related impairments during voluntary isolation, permitting the early initiation of management interventions. In this line, therapeutic approaches need to, first, routinely screen for fatigue in lung cancer patients and, second, propose therapeutic interventions including exercise, psychological assessment, and functional training.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare that there is no conflict of interest.
"year": 2022,
"sha1": "766f19850fd7746e566e0850d662ee454e968a9c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9032/10/3/448/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "668350749c45aa9697c3b13037342a6a470456c7",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Population structure-guided profiling of antibiotic resistance patterns in clinical Listeria monocytogenes isolates from Germany identifies pbpB3 alleles associated with low levels of cephalosporin resistance
ABSTRACT Numbers of listeriosis illnesses have been increasing in Germany and the European Union during the last decade. In addition, reports on the occurrence of antibiotic resistance in Listeria monocytogenes in clinical and environmental isolates are accumulating. The susceptibility towards 14 antibiotics was tested in a selection of clinical L. monocytogenes isolates to get a more precise picture of the development and manifestation of antibiotic resistance in the L. monocytogenes population. Based on the population structure determined by core genome multilocus sequence typing (cgMLST), 544 out of 1220 sequenced strains collected in Germany between 2009 and 2019 were selected to cover the phylogenetic diversity observed in the clinical L. monocytogenes population. All isolates tested were susceptible towards ampicillin, penicillin and co-trimoxazole – the most relevant antibiotics in the treatment of listeriosis. Resistance to daptomycin and ciprofloxacin was observed in 493 (91%) and in 71 (13%) of 544 isolates, respectively. While all tested strains showed resistance towards ceftriaxone, their resistance levels varied widely between 4 mg/L and >128 mg/L. An allelic variation of the penicillin binding protein gene pbpB3 was identified as the cause of this difference in ceftriaxone resistance levels. This study is the first population structure-guided analysis of antimicrobial resistance in recent clinical isolates and confirms the importance of penicillin binding protein B3 (PBP B3) for the high level of intrinsic cephalosporin resistance of L. monocytogenes on a population-wide scale.
Introduction
Listeria monocytogenes is an important foodborne pathogen and the causative agent of listeriosis, an illness with symptoms ranging from gastroenteritis to septicemia, meningoencephalitis and miscarriage in pregnant women. L. monocytogenes infections are mostly associated with ready-to-eat foods, as well as milk products, meat, fish and vegetables [1]. Case numbers of listeriosis have been increasing in recent years. While between 2001 and 2010, 372 ± 101 listeriosis cases were reported per year in Germany, the average listeriosis case number between 2011 and 2019 rose to 617 ± 135 cases, with 699 notified cases in 2018 [2]. The incidence of listeriosis is relatively low (0.3-0.6 per 100,000 persons in Europe and North America) compared to other gastrointestinal infections [3]. However, fatality rates range between 7% and 30% despite antibiotic treatment [4,5]; thus, even though L. monocytogenes is susceptible to a variety of antibiotics in vitro, it is one of the most fatal gastrointestinal foodborne bacterial pathogens.
The incubation period of listeriosis ranges from 1 to 67 days [6]. This rather long time frame complicates back-tracing of food vehicles through patient interviews and thus has often hampered the identification of outbreak sources. Whole genome sequencing (WGS)-based subtyping techniques, such as core genome multilocus sequence typing (cgMLST), have been implemented recently in many countries to improve disease cluster recognition and compare clinical and food isolates. This has enormously facilitated the identification of infection sources of listeriosis outbreaks [7][8][9][10][11][12].
The standard therapy for listeriosis is ampicillin or penicillin, frequently combined with gentamicin. While ampicillin or penicillin alone is reported to be only bacteriostatic, a bactericidal synergism of these antibiotics with gentamicin has been observed against L. monocytogenes in vitro [13,14]. However, the effectiveness of the combination therapy has been questioned in retrospective studies investigating the outcome of listeriosis treated either with the combination of both antibiotics or with penicillin monotherapy, which found no benefit of the combined treatment on the patient's outcome [15,16], as well as in a recent study in a listeriosis mouse model [17]. As an alternative, treatment with trimethoprim/sulfamethoxazole (hereinafter referred to as co-trimoxazole) has been applied successfully in patients allergic to β-lactam antibiotics [18]. Meropenem is occasionally applied in listeriosis treatment, but rates of therapy failure and mortality are higher under these conditions [19,20].
As previously reported, resistance to the clinically used antibiotics is rare in clinical isolates of L. monocytogenes [21][22][23]; however, recent studies report increasing numbers of antibiotic-resistant environmental isolates, including isolates from animals, food and food-processing plants [24][25][26]. This observation is alarming since there is evidence that increases in minimal inhibitory concentrations (MICs) observed in environmental strains later manifested in clinical strains [27]. Therefore, monitoring the development of antibiotic resistance in clinical isolates is of utmost importance to ensure appropriate antibiotic therapy of listeriosis in the future.
Besides the potential emergence of resistance to antibiotics used in standard therapy, L. monocytogenes is intrinsically resistant to third-generation cephalosporins such as ceftriaxone [14,28], which are often used to treat bacterial meningitis. Hence, as long as L. monocytogenes cannot be ruled out as the causative agent, co-administration of ceftriaxone or other cephalosporins with ampicillin is required [29]. Several factors, including the penicillin binding protein PBP B3 encoded by the lmo0441 gene, contribute to the intrinsic cephalosporin resistance of L. monocytogenes [30,31]. An L. monocytogenes mutant lacking lmo0441 shows strongly reduced cephalosporin resistance but no other obvious phenotypes [30,32], suggesting that PBP B3 has a function specifically required during cephalosporin exposure.
Based on genome sequence data, we designed a selection of 544 clinical L. monocytogenes strains. This strain selection covers the entire phylogenetic biodiversity observed among strains isolated from human infections in Germany between 2009 and 2019, as it includes representatives of listeriosis outbreak clusters as well as isolates obtained from all sporadic cases. This selection was screened for susceptibility against 14 clinically relevant antibiotics to describe the current antibiotic resistance levels of clinical L. monocytogenes strains on a population-wide scale, which led to the discovery of pbpB3 mutations associated with reduced levels of cephalosporin resistance.
Materials and methods
L. monocytogenes strains and growth conditions
All L. monocytogenes strains used within this study were originally received from different senders of the German health care system by the consultant laboratory for Listeria of the Robert Koch Institute. Identity and molecular PCR serogroups were determined by multiplex PCR as previously described [33][34][35] on arrival, and the received strains were archived in an in-house strain collection in 50% glycerol at −80°C. For antibiotic susceptibility testing, individual strains were grown in brain heart infusion (BHI) broth (# 211059, BD-BBL, Franklin Lakes, USA) or on BHI agar plates (# CM0375, Oxoid, Basingstoke, UK) at 37°C. The strains used in this study are summarized in the supplementary Table S1.
For complementation, pbpB3 fragments were cloned into plasmid pIMK3 [36] using NcoI/SalI (NEB, Ipswich, USA). The sequence of the cloned inserts was confirmed by Sanger sequencing, the corresponding plasmid was introduced into strain LMJR41 (ΔpbpB3), which was constructed in a previous study [32], by electroporation [36], and transformants were selected on BHI agar plates containing 50 mg/L kanamycin. Correct plasmid insertion at the attB site of the tRNA-Arg locus was confirmed by PCR. The sequences of the above-mentioned pbpB3 alleles were submitted to NCBI GenBank (MT383155-MT383119).
Genome sequencing
For genome sequencing, chromosomal DNA was extracted using the GenElute Bacterial Genomic DNA Kit (Sigma-Aldrich, St. Louis, USA). One ng of the obtained chromosomal DNA was used for library preparation with the Nextera XT library preparation kit (Illumina, San Diego, USA) according to the manufacturer's instructions. Sequencing was performed on Illumina MiSeq, NextSeq or HiSeq 1500 instruments, using either the MiSeq Reagent Kit v3 (600-cycle kit) or the HiSeq PE Rapid Cluster kit (version 2) in combination with a HiSeq Rapid SBS (version 2) sequencing kit (500-cycle PE or 150-cycle SE kit).
Population structure analysis
Genome sequencing reads were assembled using the Velvet assembler. MLST sequence types (STs) and cgMLST complex types (CTs) according to the seven housekeeping gene MLST scheme [37] and the 1701-locus cgMLST scheme [7], respectively, were extracted from the assembled contigs by automated allele submission to the L. monocytogenes cgMLST server (http://www.cgmlst.org/ncs/schema/690488/). Clusters were defined as groups of strains with ≤10 different alleles between neighbouring strains. Generation of the minimal spanning tree was performed in the "pairwise, ignore missing values" mode. All of the aforementioned steps were performed using the built-in functions of the Ridom® SeqSphere software package version 6.0.0 (2019/04, Ridom GmbH, Münster, Germany).
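The cluster rule above (complexes join strains separated by at most 10 allele differences between neighbouring strains) amounts to single-linkage clustering of pairwise allele distances with a fixed threshold. Below is a toy re-implementation of that logic in Python, with invented allele profiles; it illustrates the rule only and is not the SeqSphere code.

```python
# Single-linkage clustering with a distance threshold of 10 alleles,
# mimicking the cgMLST complex definition; allele profiles are invented.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

p0 = np.ones(15, dtype=int)           # 15 toy cgMLST loci
p1 = p0.copy(); p1[:2] = 2            # differs from p0 at 2 loci
p2 = np.full(15, 7)                   # differs from both at all 15 loci
profiles = np.vstack([p0, p1, p2])

# Distance = number of loci at which two strains carry different alleles
dist = pdist(profiles, metric=lambda a, b: np.sum(a != b))

# Strains 0 and 1 (2 allele differences) fall into one complex;
# strain 2 (15 differences) stays a singleton.
clusters = fcluster(linkage(dist, method="single"), t=10, criterion="distance")
print(clusters)   # e.g. [1 1 2]
```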
Antibiotic susceptibility testing
Antibiotic susceptibility testing was performed as a microdilution assay in accordance with the EUCAST guidelines in the January 2019 version [38]. Briefly, selected L. monocytogenes strains were streaked out on BHI agar plates and incubated at 37°C for 24 h. Three to five colonies from each plate were picked, pooled and further incubated in 3 mL BHI broth for 6 h. This culture was used to adjust a NaCl solution (0.9%, w/w) to an OD600 of 0.005, representing a concentration of approximately 5 × 10^6 colony forming units (CFU) per mL. Ten µL of this suspension were used to inoculate the individual wells of a 96-well microtiter plate containing 90 µL Mueller-Hinton fastidious (MH-F) broth with different concentrations of each individual tested antibiotic; 1 mM IPTG was added where necessary. The overall plate design was adopted from a study by Noll and colleagues [26], produced in house, and included ampicillin (AMP; MIC < 2 mg/L), benzylpenicillin (PEN; MIC < 2 mg/L), ceftriaxone (CRO; MIC < 4 mg/L), meropenem (MEP; MIC < 0.5 mg/L), daptomycin (DAP; MIC < 2 mg/L), ciprofloxacin (CIP; MIC < 2 mg/L), erythromycin (ERY; MIC < 2 mg/L), gentamicin (GEN; MIC < 2 mg/L), linezolid (LNZ; MIC < 8 mg/L), rifampicin (RAM; MIC < 1 mg/L), tetracycline (TET; MIC < 4 mg/L), tigecycline (TGC; MIC < 1 mg/L), vancomycin (VAN; MIC < 4 mg/L) and co-trimoxazole (SXT; MIC < 0.125 mg/L). Antibiotics were purchased from Sigma-Aldrich (St. Louis, USA), with the exception of LNZ and DAP, which were purchased from Molekula GmbH (Munich, Germany). Their concentrations were selected to cover the EUCAST-defined MIC breakpoints [38]. In cases where no breakpoint was defined for L. monocytogenes, the MIC breakpoints of Streptococcus pneumoniae or Staphylococcus aureus were used [38]. The plates were briefly mixed and incubated in a sealed polyethylene bag at 37°C for 20 ± 2 h. Results were read using a mirror for precise optical detection of growth. MICs were reported as the first concentration of the respective antibiotic at which no visible growth was detected after the defined incubation period. Besides the Listeria monocytogenes reference strain EGD-e, a set of reference strains recommended by the EUCAST guidelines (Escherichia coli ATCC 25922, Pseudomonas aeruginosa ATCC 27853, Staphylococcus aureus ATCC 29213 and Enterococcus faecalis ATCC 29212) with known antibiotic resistance profiles was used to assure effectivity of the antibiotics under the chosen testing conditions.
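The readout rule just described (MIC = first concentration without visible growth) can be expressed compactly. The Python sketch below uses an invented two-fold dilution series purely for illustration and assumes a clean series in which growth stops at a single concentration.

```python
# MIC readout from one row of a broth microdilution plate (toy example).
def mic_from_row(concentrations, growth):
    """concentrations in mg/L; growth[i] is True if well i shows visible growth."""
    for conc, grew in sorted(zip(concentrations, growth)):
        if not grew:
            return conc                       # first concentration without growth
    return f">{max(concentrations)}"          # growth at every tested concentration

concs = [4, 8, 16, 32, 64, 128]               # two-fold series, e.g. ceftriaxone
print(mic_from_row(concs, [True, True, True, False, False, False]))  # 32
print(mic_from_row(concs, [True] * 6))                               # >128
```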
Statistical analysis
The Kruskal-Wallis rank sum test was performed to determine whether the MICs of the tested antibiotics differed significantly among serogroups IIa, IIb and IVb as well as among different sequence types (where ≥4 strains were available). To further test which groups differed significantly (p < 0.05) from one another, pairwise Mann-Whitney U tests were performed. Adjusted p-values were obtained using a Bonferroni-Holm correction. All statistical analyses were performed using the stats package in R version 3.6.1 [39].
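A minimal sketch of this workflow translated from R into Python (scipy and statsmodels); the MIC values and serogroup labels are invented, and only the sequence of tests (global Kruskal-Wallis, then pairwise Mann-Whitney U with Bonferroni-Holm correction) follows the description above.

```python
import numpy as np
from itertools import combinations
from scipy import stats
from statsmodels.stats.multitest import multipletests

mics = {                                   # invented ceftriaxone MICs (mg/L)
    "IIa": [128, 128, 64, 128, 128],
    "IIb": [64, 128, 128, 64, 128],
    "IVb": [32, 16, 32, 64, 32],
}

h, p_global = stats.kruskal(*mics.values())        # global rank sum test
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_global:.4g}")

pairs = list(combinations(mics, 2))                # all serogroup pairs
raw_p = [stats.mannwhitneyu(mics[a], mics[b], alternative="two-sided").pvalue
         for a, b in pairs]
adj_p = multipletests(raw_p, method="holm")[1]     # Bonferroni-Holm adjustment
for (a, b), p in zip(pairs, adj_p):
    print(f"{a} vs {b}: adjusted p = {p:.4g}")
```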
Identification of alleles associated with reduced ceftriaxone resistance
Group-specific single nucleotide variations (SNVs) were sought using the SNV tool implemented in SeqSphere (Ridom GmbH, Münster, Germany). For this purpose, isolates with reduced ceftriaxone resistance belonging to a particular ST were defined as the target group and isolates outside this phylogenetic group as the non-target group. Moreover, isolates belonging to one of the other low-ceftriaxone-resistance STs were excluded from the non-target group to increase sensitivity. SNVs occurring in 100% of the target group and differing from 99% of the non-target group were accepted, and only SNVs leading to non-synonymous amino acid exchanges were considered for further analysis.
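Conceptually, the filter keeps a variant if it is fixed in the target group and essentially absent from the non-target group. A toy Python sketch of that selection follows; the genotype matrices are invented, and the downstream restriction to non-synonymous variants is omitted.

```python
# Group-specific SNV filter: present in 100% of targets, absent from >=99%
# of non-targets. True = isolate carries the alternate allele at a site.
import numpy as np

def group_specific_snvs(target, non_target, min_absent=0.99):
    in_all_targets = target.all(axis=0)
    absent_in_non_target = 1.0 - non_target.mean(axis=0)
    return np.where(in_all_targets & (absent_in_non_target >= min_absent))[0]

rng = np.random.default_rng(1)
target = np.ones((8, 5), dtype=bool)            # 8 target isolates, 5 sites
target[:, 3] = False                            # site 3 is not fixed in targets
non_target = rng.random((500, 5)) < 0.002       # rare background variation
print(group_specific_snvs(target, non_target))  # typically [0 1 2 4]
```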
Results
Population structure-guided isolate selection
The collection of clinical L. monocytogenes strains from the German consultant laboratory was used as the source of genetic diversity within the L. monocytogenes population. At the time this project was started, the collection contained 1220 genome-sequenced L. monocytogenes strains isolated from human infections in Germany between 2009 and 2019. Of these strains, 1004 had been isolated from blood or cerebrospinal fluid and the remaining strains from other sources. Therefore, the majority of the strains (82%) were associated with invasive disease. Most of the strains were collected in 2016 (n = 266), 2017 (n = 395) and 2018 (n = 453) (Figure S1).
The population structure of this strain collection was determined using MLST and cgMLST [7,12], allowing identification of disease clusters and sporadic cases. All strains belonged to phylogenetic lineage I (57%, n = 700) or lineage II (43%, n = 520); cgMLST grouped the 1220 isolates into 122 cgMLST complexes containing 798 isolates and 422 singletons. The 122 complexes varied in size from two up to 104 isolates, with a median size of 3 per complex (Figure S2). In order to comprehensively cover all L. monocytogenes subtypes with current clinical relevance, the following selection strategy was applied: at least one representative strain from each of the 122 identified complexes was selected. In cases where more than two genotypes belonged to a cluster, its most central isolate was chosen (see the sketch below). If strains with different CTs formed a joint complex, a representative strain belonging to the most abundant CT within this complex was selected. An observation further justifying the selection of cluster representatives was made in a previous study showing that isolates belonging to an outbreak cluster possess highly similar antibiotic resistance profiles [40]. In addition to the cluster representatives, all sporadic isolates (422 of 1220) were included to further increase the genetic diversity within the selection of L. monocytogenes isolates. This procedure led to a selection of 544 L. monocytogenes strains from 2009 to 2019, with the majority of strains from 2016 to 2019, including representatives of the molecular serogroups IIa (39.7%), IIb (10.8%), IIc (1.3%), IVa (0.2%), IVb (46.7%), IVb-v1 (0.7%) and IVc (0.2%) (Figure S1). Representatives of all 62 STs in the original strain collection were also present in this selection, with ST1, ST6 and ST2 representing the three most abundant STs (Figure S3). Of the 587 CTs identified in the original strain collection, 539 (92%) were also included. Thus, the strain selection for antibiotic profiling contained 544 L. monocytogenes isolates in total and represents a miniaturized model collection of the clinical L. monocytogenes population currently causing infections in Germany (Table S1, Figure S2).
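As one plausible reading of "most central isolate", the sketch below picks, within a complex, the isolate with the smallest summed allele distance to all other members; the allele matrix is invented, and the actual SeqSphere criterion may differ.

```python
# Pick the most central isolate of one cgMLST complex (toy example).
import numpy as np

def most_central(profiles):
    """profiles: (n_isolates x n_loci) allele matrix of a single complex."""
    # Pairwise distances: number of loci at which two isolates differ
    diff = (profiles[:, None, :] != profiles[None, :, :]).sum(axis=2)
    return int(np.argmin(diff.sum(axis=1)))     # index of the central isolate

complex_profiles = np.array([
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 2],   # one allele away from each neighbour -> most central
    [1, 1, 1, 2, 2],
])
print(most_central(complex_profiles))  # 1
```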
Antibiotic profiling of the miniaturized model population
Each strain of the model population was tested for resistance against 14 clinically relevant antibiotics. No resistance was observed against the antibiotics currently recommended for the treatment of listeriosis: ampicillin, penicillin and co-trimoxazole (Table 1, Figure 1(A)). Still, two of the tested strains were susceptible only to increased concentrations (formerly described as intermediate resistance) of ampicillin and penicillin, and three isolates were susceptible only to increased concentrations of co-trimoxazole. Among all strains tested, one showed resistance to gentamicin. No resistance was observed to erythromycin, linezolid, meropenem, rifampicin, tigecycline or vancomycin. Furthermore, all isolates tested (544/544, 100%) were resistant to ceftriaxone (Table 1). This observation is in full agreement with the intrinsic cephalosporin resistance of L. monocytogenes. Moreover, the majority of the screened strains (493/544, 91%) also showed resistance to daptomycin, a cyclic lipopeptide antibiotic. Around 13% of the isolates (71/544) showed resistance against the gyrase inhibitor ciprofloxacin. One strain was found to be resistant to tetracycline, an antibiotic to which most of the strains (518/544, 95%) showed intermediate resistance. Susceptibility only to increased concentrations was also observed for several further antibiotics (Table 1). Sixteen strains showed growth in the presence of 0.6125 mg/L rifampicin, the lowest tested concentration, and must thus be considered as susceptible to increased doses. The most common co-occurrence of antibiotic resistance was ceftriaxone resistance in addition to daptomycin resistance (493/544, 91%). Of these, 66 strains (12%) showed additional resistance to ciprofloxacin.
Only two isolates were found to be resistant to ceftriaxone and ciprofloxacin while being susceptible to daptomycin. Forty-five isolates (8%) were resistant to ceftriaxone but to none of the other antibiotics tested; thus, they showed only the intrinsic resistance against cephalosporins.
Despite this uniform resistance, we found that the MICs for ceftriaxone varied from 4 mg/L up to >128 mg/L, with a median MIC of >128 mg/L across all tested isolates (Table 1). While this classifies all strains as ceftriaxone-resistant, reduced median MIC values for ceftriaxone of ≤32 mg/L were found for certain STs (Figure 1(B)). The largest phylogenetic group with lowered ceftriaxone resistance was ST4 (n = 24 isolates), showing a reduced median MIC of 32 mg/L in contrast to >128 mg/L for the remaining population. Likewise, lowered ceftriaxone MICs were observed for ST29 (median MIC = 24 mg/L, n = 7), ST388 (median MIC = 24 mg/L, n = 4) and ST403 isolates (median MIC = 16 mg/L, n = 8, Figure 1(B)).
Identifying pbpB3 alleles linked to reduced ceftriaxone resistance
Single nucleotide variant analysis revealed that ST4, ST29, ST388 and ST403 isolates associated with lowered levels of ceftriaxone resistance carried group-specific non-synonymous mutations in various coding regions. However, the only gene carrying a mutation common to all isolates belonging to the STs with reduced ceftriaxone resistance was lmo0441, encoding PBP B3, which showed a mutation within the allelic version found in ST4 and ST388 (pbpB3 allele type 4, Ala172Val) and in ST403 and ST29 (pbpB3 allele type 20, Thr53Ser; Figure 2(A,B)). This suggests that certain pbpB3 alleles are associated with reduced resistance against ceftriaxone. Remarkably, all ST4 and ST388 isolates carried the pbpB3 Ala172Val substitution characteristic of pbpB3 allele no. 4 in the Ruppitsch cgMLST scheme, and this pbpB3 allele was not found in any other strain. Likewise, all our ST403 isolates carried the pbpB3 Thr53Ser variant (allele no. 20), which was also found in four out of six ST29 isolates tested with lowered ceftriaxone resistance levels. The two ST29 isolates tested with a ceftriaxone resistance above the median value observed in this group had a different pbpB3 allele. Despite its presence in these two subgroups, pbpB3 allele no. 20 was not found in any other of the 1220 strains of the original strain collection. We thus conclude that pbpB3 alleles 4 and 20 are associated with reduced ceftriaxone resistance.
Effect of novel pbpB3 mutations on ceftriaxone resistance
Even though the sequence alterations in the two pbpB3 alleles were rather conservative at the protein level, their contribution to ceftriaxone resistance was tested in a complementation assay. For this purpose, a ΔpbpB3 deletion mutant constructed in the background of L. monocytogenes EGD-e (strain LMJR41) [32] was complemented with different pbpB3 alleles, and ceftriaxone resistance of the resulting strains was determined. Ceftriaxone resistance was greatly reduced in the Δlmo0441 mutant (2 mg/L) compared to wild type strain EGD-e (64 mg/L, Table 2). Reintroduction of the wild type pbpB3 allele (allele type 1) from EGD-e restored this phenotype almost completely (32 mg/L). In contrast, expression of pbpB3 allele type 4, associated with reduced ceftriaxone resistance, in the ΔpbpB3 background led to a lower ceftriaxone resistance level of only 16 mg/L (Table 2). When pbpB3 allele type 49, originating from a closely related but fully ceftriaxone-resistant ST217 isolate (MIC >128 mg/L, n = 6), was expressed in the ΔpbpB3 background, ceftriaxone resistance increased to 32 mg/L. This level of ceftriaxone resistance further increased to 64 mg/L when pbpB3 allele type 13 from ST6 strain 18-04540, which showed the highest observed level of ceftriaxone resistance in this study, was used for complementation (Table 2). Complementation of the deletion mutant with pbpB3 allele type 56 increased the ceftriaxone MIC to 32 mg/L (Table 2). This allele type is identical to pbpB3 allele type 4 except for the single mutation at the aforementioned position 172, where it still carries the original alanine. These results further underline the apparent importance of this single amino acid for resistance against ceftriaxone. As for pbpB3 allele type 4, complementation mutants carrying pbpB3 allele type 20 showed higher ceftriaxone resistance compared to the deletion mutant but lower ceftriaxone resistance compared to the complementation mutants carrying the pbpB3 allele of the wild type strain or of the high-level resistance strain (Table 2). In conclusion, pbpB3 alleles from strains with low and high levels of ceftriaxone resistance confer low and high levels of ceftriaxone resistance, respectively, upon their heterologous expression in the ΔpbpB3 mutant. This confirms the association of certain pbpB3 alleles with ceftriaxone resistance and demonstrates the population-wide validity of the concept that PBP B3 is an important determinant of ceftriaxone resistance in L. monocytogenes.
To estimate the overall relevance of this observation for the entire L. monocytogenes population, the frequency of pbpB3 allele types 4 and 20 was calculated for the model population of 544 strains (55 unique pbpB3 allele types), for the initial clinical strain collection of 1220 strains (58 unique pbpB3 allele types), as well as for 27,118 L. monocytogenes genomes available in the National Center for Biotechnology Information (NCBI) pathogen detection pipeline at the time of this study (1033 unique pbpB3 allele types). Allele type 4 was detected in 28 strains of the model collection (expected: 10), in 39 strains of the clinical strain collection (expected: 21) and 340 times in the NCBI dataset (expected: 26). Allele type 20 was detected in 12 strains of the model collection, 62 strains of the clinical strain collection and 156 strains of the NCBI dataset. Therefore, the abundance of both allele types was above the theoretically expected values, and hence the presence of these pbpB3 allele types does not seem to confer an evolutionary disadvantage.
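The stated "expected" counts are consistent with a uniform-frequency null in which every unique pbpB3 allele type is equally common (expected = strains / unique allele types); this appears to be how they were derived, and a quick check reproduces the rounded values:

```python
# Expected strain counts per allele type under a uniform-frequency assumption.
for strains, allele_types, label in [
    (544, 55, "model population"),
    (1220, 58, "clinical strain collection"),
    (27118, 1033, "NCBI dataset"),
]:
    print(f"{label}: expected = {strains / allele_types:.0f}")
# model population: expected = 10
# clinical strain collection: expected = 21
# NCBI dataset: expected = 26
```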
Discussion
Our results represent the first comprehensive determination of antibiotic resistance patterns of clinical L. monocytogenes strains isolated in Germany. The complexity of this strain collection was reduced by generating a non-redundant model population using cgMLST subtyping data. This model population contains less than half of the original isolates but still maintains the large biodiversity observed in the original clinical strain collection; this approach greatly facilitated experimental determination of antibiotic resistance patterns without losing phylogenetic resolution.
An important finding of this study is the sustained effectiveness of the standard antibiotics recommended for the treatment of listeriosis. None of the L. monocytogenes strains tested here showed full resistance against ampicillin or penicillin, and only one was resistant towards gentamicin. However, gentamicin is not used as a stand-alone antibiotic in listeriosis therapy and is only administered in combination with ampicillin or penicillin. Moreover, none of the isolates tested showed full resistance against co-trimoxazole, which is used as an alternative in patients with β-lactam allergy. However, susceptibility only to increased concentrations of penicillin (2/544), ampicillin (2/544) and co-trimoxazole (3/544) was observed in a few cases. Therefore, our results are in accordance with observations made with other clinical strain collections from Europe, where intermediate resistance levels against these three antibiotics were also reported to occur with low frequency [23,27].
The highest levels of resistance within our model population were observed for ceftriaxone (100%), to which L. monocytogenes is intrinsically resistant [14,28], daptomycin (91%) and ciprofloxacin (13%). However, breakpoints have not been established for daptomycin and ciprofloxacin in L. monocytogenes (as neither of them is recommended to treat listeriosis), and applications of cephalosporins and ciprofloxacin have caused therapy failure in the past [42][43][44].
A large variation of ceftriaxone MICs ranging from 4 mg/L up to >128 mg/L was observed between isolates belonging to different STs and could be traced back to amino acid exchanges in pbpB3. Interestingly, a similar degree of variation in ceftriaxone resistance was observed within the ST1, ST155 and ST451 strains included here (Figure 1(B)), even though no association between ceftriaxone resistance and pbpB3 allele variation was found in these STs. Cephalosporin resistance is a multifactorial process in L. monocytogenes [31], and genetic variations in other cephalosporin resistance determinants, such as other PBPs, certain transporters or regulators [31,45], may account for the variability of ceftriaxone resistance in these phylogenetic groups.
PBP B3 of L. monocytogenes belongs to the same subclass of class B PBPs as Bacillus subtilis PBP3, Staphylococcus aureus PBP2a (encoded by mecA) and Enterococcus faecalis PBP5, all of which are low-affinity penicillin binding proteins and as such critical determinants of cephalosporin or methicillin resistance in these bacteria [46][47][48][49]. The two pbpB3 mutations lowering cephalosporin resistance described here affect the N-terminal domain and the allosteric domain (non-penicillin binding domain) of PBP B3 (Figure 2(B)). The function of these non-catalytic domains is not entirely clear, but amino acid exchanges in the allosteric domain of S. aureus PBP2a (such as N146K and E150K) are associated with increased resistance of S. aureus to ceftaroline, a fifth-generation cephalosporin [50][51][52][53]. Ceftaroline non-covalently interacts with this allosteric domain, inducing a conformational change that makes the active site in the transpeptidase domain accessible for acylation and thus for inhibition by a second ceftaroline molecule [54]. The N146K and E150K mutations of S. aureus PBP2a map to the same stretch at the beginning of the allosteric domain as the A172V exchange in PBP B3 of L. monocytogenes. Apparently, amino acid exchanges in this region of the allosteric domain improve or impair cephalosporin binding in low-affinity PBPs and thus the resistance of different Gram-positive pathogens to this important group of antibiotics.
While the low level of resistance towards currently clinically applied antibiotics is a relief, the situation in environmental and food isolates is more alarming. L. monocytogenes strains with multidrug resistance or resistance to ampicillin, penicillin or co-trimoxazole have repeatedly been isolated from the environment and from different food types [24,26,[55][56][57][58][59][60]. It can be expected that the antibiotic resistances observed in environmental and food strains today will later manifest in clinical strains. Therefore, surveillance of antimicrobial resistance development in clinical L. monocytogenes strains in the future is of great importance, especially since average resistance levels against several β-lactams have been continuously increasing since the 1920s in clinical L. monocytogenes isolates from France [27].
Disclosure statement
No potential conflict of interest was reported by the author(s).
"year": 2020,
"sha1": "44a0e1ef869d0cae0316d13710ff93a7091e6e99",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/22221751.2020.1799722?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2113ef5e0e272e19d7261d974e882ba6a9704eb8",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Odorant Receptor Desensitization in Insects
Insects and other arthropods transmit devastating human diseases, and these vectors use chemical senses to target humans. Understanding how these animals detect, respond, and adapt to volatile odorants may lead to novel ways to disrupt host localization or mate recognition in these pests. The past decade has seen remarkable progress in understanding odorant detection in arthropods. Insects use odorant-gated ion channels, first discovered in Drosophila melanogaster, to detect volatile chemicals. In flies, 60 "tuning" receptor subunits combine with a common subunit, Orco (odorant receptor coreceptor), to form ligand-gated ion channels. The mechanisms underlying odorant receptor desensitization in insects are largely unknown. Recent work reveals that dephosphorylation of serine 289 on the shared Orco subunit is responsible for slow, odor-induced receptor desensitization. Dephosphorylation has no effect on the localization of the receptor protein, and activation of the olfactory neurons in the absence of odor is sufficient to induce dephosphorylation and desensitization. These findings reveal a major component of receptor modulation in this important group of disease vectors, and implicate a second messenger feedback mechanism in this process.
Introduction
Desensitization to background odorants is essential to maintain responsiveness in a fluctuating odorant environment. This ability is important for the localization of food and mates for most species. The human olfactory system is well known to desensitize to odorants. Everyone has experienced entering a room with a foul odor, but within a few minutes, the perception of the odorant vanishes. Both peripheral and central mechanisms are thought to be responsible for this phenomenon. 1 Insects, such as mosquitoes and fruit flies, have odorant receptors that desensitize to the presence of background odorants, but the mechanisms underlying this phenomenon are a mystery. Now, we learn that changes in phosphorylation are involved in insect odorant receptor desensitization, stemming from the depolarization of these neurons. 2
Peripheral Olfactory Systems in Vertebrates and Insects
Mammalian odorant receptors are encoded by a large number of Or genes that belong to the canonical G protein-coupled receptor (GPCR) family. 3 The interaction between odorant ligands and the receptor protein leads to activation of a G protein, Golf, 4 that subsequently activates adenylyl cyclase type III (ACIII), an enzyme that catalyzes the production of cyclic adenosine monophosphate (cAMP). 5,6 The rise in intracellular cAMP triggers the opening of cyclic nucleotide-gated (CNG) ion channels and membrane depolarization. 7 Calcium entering the depolarized neuron triggers activation of calcium-activated chloride channels that augment depolarization. 8
Insects have far fewer odorant receptor genes compared with vertebrates. For example, the Drosophila genome encodes only 60 "tuning" Or genes. These genes are predicted to encode seven-transmembrane receptors, but they lack sequence similarity with GPCRs and, compared with GPCRs, are reversed in the membrane with their C termini outside of the cell. 9 Each tuning subunit is expressed in a small number of olfactory neurons together with the common subunit Orco (odorant receptor coreceptor) to form ligand-activated odorant receptors. These receptors function on the cilia of the olfactory neurons located in the antenna and maxillary palps. Consistent with species-specific niches, tuning receptors are divergent between species, whereas Orco is highly conserved. 10
Odorant Desensitization in Vertebrates and Insects
Desensitization (also called adaptation) can occur through multiple mechanisms and timescales. Some are intrinsic to the primary olfactory neurons, whereas others involve feedback from neurons downstream in the circuit. Desensitization can occur in milliseconds, modulating neuronal output during the stimulus, or can be slow, requiring prolonged odorant exposure over minutes to hours. Here, we focus on the slow adaptation mechanisms that occur within the olfactory neurons on a scale of minutes.
In mammals, slow desensitization of olfactory neurons has been attributed to several feedback mechanisms (Figure 1A). Odorant receptors are phosphorylated by G protein-coupled receptor kinase 3 (GRK3) 11 and protein kinase A, 12 leading to binding and deactivation by β-arrestin 2. 13 In addition, multiple feedback mechanisms are triggered by calcium entry into the activated olfactory neurons. These mechanisms include calmodulin activation of CaMKII, which phosphorylates and activates the phosphodiesterase that degrades cAMP 14 and phosphorylates and inhibits ACIII. 15 Additional feedback is provided by calcium-activated potassium channels 16 and calmodulin-mediated reduction in CNG channel sensitivity to cAMP. 17,18 Work from Anne Menini's group using caged cAMP and nonhydrolyzable 8-Bromo-cAMP suggests that the strongest adaptation mechanisms act downstream of cAMP, most likely at the CNG channel. 19
In insects, slow desensitization also occurs to odorants, pheromones, and repellants, [20][21][22] but the mechanisms mediating this process are poorly understood (Figure 1B). One mechanism important for insect olfactory neuron desensitization is feedback through downstream interneurons called local neurons (LNs) that release γ-aminobutyric acid (GABA) and inhibit olfactory neuron firing 23 (Figure 2). There is a high degree of overlap between glomerular odorant activation and inhibitory neuron activation in glomeruli. 24 Recurrent coupling through excitatory and inhibitory synapses within a glomerulus can synchronize action potentials from the output projection neurons, which may be important for odorant discrimination by activating neurons sensitive to temporal coincidence in the brain in both vertebrates and invertebrates. [25][26][27][28] Furthermore, the LNs synapse with multiple glomeruli and thus have the potential to process odorant information across glomeruli. 24 Additional complexity is introduced by the fact that GABA receptor expression differs among different classes of Drosophila olfactory neurons. 29 Ionotropic GABAA receptors are important for rapid inhibition after odorant onset, whereas metabotropic GABAB receptors are important for long-term adaptation 23 (Figure 3). Finally, olfactory neuron sensitivity can be inhibited by the Drosophila neuropeptide transmitter tachykinin (DTK), which is released by a subset of LNs in response to odors. The DTK receptors are expressed on olfactory neurons, mediate inhibitory feedback by DTK, and may operate independently of GABA. 30 One intriguing possibility is that internal state modulates DTK release, allowing coordination between internal state and chemosensory behavior, but more work remains to be done on the role of neuropeptides in olfactory sensitivity.
Figure 1. (A) In mammals, odorant ligand activates the odorant receptor (Or), which activates the G protein, Golf. Golf activates adenylyl cyclase type III (ACIII), which converts ATP to cAMP. cAMP opens cyclic nucleotide-gated (CNG) channels that allow sodium and calcium to enter the neuron. Calcium has a feedforward effect by opening chloride channels that augment depolarization. Feedback mechanisms include PKA and GRK3, which phosphorylate activated receptors and facilitate the binding of β-arrestin 2 to block receptor/Golf interactions. The calcium entering through the CNG channels binds calmodulin, which activates the Ca2+/calmodulin-dependent kinase II (CaMKII). CaMKII phosphorylates and inhibits the activity of ACIII and phosphorylates and activates the phosphodiesterase responsible for the hydrolysis of cAMP. (B) In insects, such as Drosophila, odorants activate the Or/Orco heterodimers, leading to calcium influx. The elevated intracellular Ca2+ activates a phosphatase (or inhibits a kinase), resulting in gradual dephosphorylation of Orco S289 and desensitization of the olfactory receptors. ATP indicates adenosine triphosphate; cAMP, cyclic adenosine monophosphate; Orco, odorant receptor coreceptor.
Figure 2. (A) Olfactory receptor neuron (ORN) axons project to the glomeruli, where they activate second-order PNs that project to higher brain centers. Olfactory neurons also activate LNs, which are local feedback neurons. (B) Normal signal transmission between ORNs and PNs in the glomeruli. ORN action potentials result in an influx of Ca2+ that triggers release of acetylcholine (ACh), which activates the downstream PNs. Activity in the ORNs and PNs is highly correlated. 24 The ORN axon also synapses with GABAergic and peptidergic Drosophila tachykinin (DTK) LNs. ORNs express both GABAA and GABAB receptors as well as tachykinin receptors (DTKRs), which provide negative feedback to the ORN. (C) Sustained activity in the olfactory receptor neurons suppresses ORN activity through GABA- and tachykinin-mediated presynaptic inhibition. Ionotropic GABAA receptors are chloride channels that hyperpolarize the neuron and mediate rapid adaptation, whereas metabotropic GABAB receptors mediate long-term adaptation. 23 TKRs also modulate the sensitivity of ORNs. 30 GABA, γ-aminobutyric acid; LNs, local neurons; ORN, olfactory receptor neuron; PNs, projection neurons; TKRs, tachykinin receptors.
Adaptation mechanisms that are intrinsic to the olfactory neurons and those triggered by feedback from inhibitory neurons downstream in the circuit are both important to match gain to stimulus intensity. 32 However, it is important to recognize that adaptation mechanisms that occur within the primary olfactory neurons act upstream of these trans-synaptic mechanisms.
Desensitization at the level of the insect olfactory receptor neuron is not well understood. The insect receptors are odor-gated ion channels; therefore, adaptation mechanisms are likely to operate directly on the receptors (see Figure 1B). Furthermore, Orco, being a common subunit of all receptors, is an appealing target for modulation of receptor sensitivity independent of the tuning receptor component. The intracellular domains of Orco contain a number of potential phosphorylation sites that are conserved across species. These sites were systematically mutated to alanine and expressed in the olfactory neurons of live flies lacking endogenous Orco. 2 Most of these mutants functioned indistinguishably from wild-type Orco. However, when S289 was mutated to alanine, there was a striking reduction in odorant sensitivity compared with wild-type controls. 2
Charges at Orco S289 Regulate Sensitivity
If S289 is an important site for regulating sensitivity, and mutating this serine residue to alanine reduces sensitivity, what does replacing serine with a charged (phosphomimetic) residue do? When the S289D mutant Orco was expressed in the Drosophila olfactory system, it resulted in a small but significant increase in odorant sensitivity compared with wild-type Orco controls. 2 Thus, negative charges at Orco amino acid 289 are a potential toggle to regulate odorant sensitivity through changes in phosphorylation. Mutants at S289 affect Drosophila desensitization-based behavior as well. Wild-type flies preexposed (desensitized) to apple cider vinegar are not attracted to vinegar traps, whereas flies expressing either Orco S289A or Orco S289D are unable to modulate receptor sensitivity normally and are still attracted to vinegar traps following vinegar preexposure. 2 These results are consistent with phosphorylation at this position regulating sensitivity of the olfactory receptors in response to background odorants. What is the mechanism of this sensitivity change?
S289 Phosphorylation Does Not Affect Receptor Trafficking
The simplest explanation for the findings described above is that a charge at S289 affects the trafficking of the receptors in the chemosensory cilia, a process known to be dependent on Orco. 33 Trafficking of receptors out of the cilia would reduce the receptor density and reduce sensitivity to odorants. Phosphorylation of vertebrate odorant receptors and subsequent trafficking out of the cilia have been proposed as a mechanism of adaptation. 12 However, when antiserum to Orco is used to quantify levels in the Orco S289 mutants, Orco protein levels in the chemosensory cilia are not different from wild type. 2 Therefore, receptor trafficking is unlikely to account for the changes in odorant sensitivity in insects and suggests that phosphorylation at this position affects the function of the receptor channel.
Is Orco S289 Phosphorylated In Vivo?
Phospho-specific antiserum was raised against a phosphorylated peptide corresponding to Orco S289 and used to assess the phosphorylation status of this site in living flies. In animals isolated in an odorant-free environment for 1 hour, there is a strong phospho-S289 signal in the chemosensory cilia that is absent in Orco mutants. 2 When flies are exposed to a mixture of odorants predicted to activate most olfactory neurons, the phospho-specific signal is strikingly reduced (Figure 3A to C). Therefore, there is an odorant-induced reduction in phosphorylation of Orco S289. Anti-Orco antiserum (which detects Orco regardless of its phosphorylation status) showed that Orco was still present but no longer phosphorylated, confirming that there is no link between S289 phosphorylation and trafficking. Time course experiments revealed a detectable drop in phosphorylation within 5 minutes that is close to maximal after 30 minutes of odorant exposure. These changes in phosphorylation are mirrored by desensitization of the receptor neurons. 2
Neuronal Activation, Not Activation of the Odorant Receptor, Triggers Adaptation
Mammalian GPCRs, including odorant receptors, are phosphorylated by receptor kinases when the receptors are in the activated (ligand-bound) conformation. 11,34 Phosphorylation by receptor kinase reduces interactions with the downstream signaling machinery by increasing the affinity for arrestins that compete with G proteins for activated receptors. 11,13,34 Is the odorant-activated receptor conformation important for dephosphorylation of the insect receptors? To explore this possibility, Drosophila olfactory receptor neurons were activated in the absence of odorants using the red-shifted channelrhodopsin (ReaChR). 31 The ReaChR-mediated activation of the olfactory neurons was as effective as odorant exposure for inducing Orco S289 dephosphorylation. 2 Red light also desensitizes the olfactory neurons to subsequent odorant exposures. Together, these data demonstrate that neuronal activation, and not activation of the odorant receptors per se, is important for dephosphorylation of Orco at S289 (Figure 3D to F), and dephosphorylation of S289 reduces odorant transduction efficiency.
Blocking Synaptic Transmission Has No Effect on Orco S289 Dephosphorylation
The GABA feedback from LNs activated downstream of olfactory receptor neurons is an important aspect of desensitization. 22,29,32,35 To establish that S289 dephosphorylation and desensitization are not a result of influence from downstream circuit components, these processes were measured in flies with olfactory neurons defective for synaptic transmission. Flies expressing tetanus toxin in the olfactory neurons fire action potentials normally, but synaptic transmission is blocked by tetanus toxin-mediated SNARE cleavage. 36 In the absence of synaptic transmission, Orco S289 dephosphorylation and subsequent desensitization are unaffected. 2 Therefore, this mechanism is intrinsic to the olfactory neurons and is not a result of LN feedback.
Additional Intrinsic Desensitization Components Remain Unidentified
Orco S289A and Orco S289D mutants have striking impairments in adaptation but still show residual desensitization. 2 Therefore, additional intrinsic desensitization mechanisms must be present in addition to Orco S289 dephosphorylation. One possibility is that conserved Orco phosphorylation sites that have little effect on sensitivity when mutated alone could combine to produce stronger effects when multiple residues are modified. This could be important to fine-tune receptor sensitivity under different conditions. Identification of the Orco S289 dephosphorylation mechanism is an important step but there is more to learn about sensitivity regulation of olfactory neurons.
Future Directions
The kinases and phosphatases responsible for phosphorylation changes at Orco S289 have not been identified. A major outstanding question is whether the odorant-induced dephosphorylation of Orco S289 described here results from activation of a phosphatase or inhibition of a kinase in the face of a constitutively active phosphatase. Because calcium influx occurs on olfactory neuron activation, it is tempting to speculate that activation triggers a calcineurin phosphatase that removes the phosphate from Orco S289 , but this prediction awaits further study.
Orco S289 is a consensus protein kinase C (PKC) phosphorylation site. There are 5 PKC genes in Drosophila, and previous work suggested that Drosophila PKC53E and PKC delta phosphorylate Orco to enhance sensitivity, 37 but it is not clear whether these kinases are localized to olfactory neurons or their cilia. A systematic knockdown of these proteins in olfactory neurons should reveal the relevant kinases for Orco S289. Finally, there is a second family of odorant receptors in insects, related to ionotropic glutamate receptors, called Ir receptors. 38 These receptors do not show the rapid desensitization observed in Or/Orco responses, 39 but whether they undergo long-term adaptation is unknown.
Concluding Remarks
Insects are the largest class of animals and colonize every corner of the planet due in part to fine-tuned chemosensory systems. However, molecular dissection of the mechanisms underlying the regulation of odorant receptor sensitivity has lagged behind our understanding of this process in mammals. The identification of Orco S289 dephosphorylation in adaptation provides an entry point for understanding how odorant sensitivity is regulated in these animals to accommodate an ever-changing environment.
"year": 2017,
"sha1": "42bc2c1569756542a4b82605ca8d82d41e9b1f12",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1179069517748600",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "42bc2c1569756542a4b82605ca8d82d41e9b1f12",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Clinical and epidemiological features of Heart-Hand Syndrome: a hospital-based study in China
Heart–hand syndrome (HHS) is a clinically and genetically heterogeneous disorder characterized by the co-occurrence of a congenital cardiac disease and an upper limb malformation. This study reveals the clinical and epidemiological features of HHS in China. The study was based on patients with congenital upper limb malformation treated in Beijing Ji Shui Tan Hospital from October 1st, 2013 to October 1st, 2016. We reviewed the patients' medical records and identified patients with abnormal ultrasonic cardiogram and/or electrocardiogram (ECG). A total of 1462 patients (910 male and 552 female) were identified as treated for congenital upper limb malformation. Among them, 172 (11.8%) had an abnormal ultrasonic cardiogram and/or ECG. Abnormal heart structure was discovered in 121 patients, and 51 patients had an abnormal ECG only. The most common type of abnormal heart structure was tricuspid regurgitation (53/121, 43.8%), while the most common ECG abnormality was an abnormal wave pattern (22/51, 43.1%). This hospital-based study suggests that the rate of congenital heart disease is high in patients treated for congenital upper extremity malformation in China. Surgeons and anesthetists should be aware of this comorbidity, and preoperative examination for congenital heart diseases is highly needed to avoid complications during operation.
The clinical diagnostic criteria of HHS are congenital limb malformation combined with abnormal results of ultrasonic cardiogram and/or electrocardiogram. We further stratified upper extremity malformation according to the Swanson classification of congenital upper limb malformation (Supplementary Table 1), and analyzed the rate of HHS for each type of upper extremity malformation, as well as the rate of the various heart diseases among HHS patients.
Statistics.
Descriptive data were presented as numbers and percentages for categorical variables. P values were examined using the chi-square test. All statistical analyses were performed using SAS version 9.2 (SAS Institute, Cary, NC, USA).
Data availability. The datasets generated and/or analysed during the current study are not publicly available due to the inclusion of private information but are available from the corresponding author on reasonable request.
Results
A total of 1462 patients (910 male and 552 female) with congenital upper extremity malformations were identified and treated in the Hand Surgery Department of Beijing Ji Shui Tan Hospital between October 1st, 2013 and October 1st, 2016. Among them, 172 (11.8%) patients had an abnormal ultrasonic cardiogram and/or ECG and met the clinical diagnostic criteria of HHS. The prevalence of HHS was not significantly different between males and females (Table 1). The most common malformations involved the right upper limb (47.3%), followed by the left upper limb (29.1%) and bilateral involvement (23.6%). The prevalence of congenital cardiac disease was largely similar irrespective of right, left or bilateral involvement (Table 1).
In Table 2 we present the prevalence of congenital cardiac disease stratified by the different types of upper limb malformations according to the Swanson classification of congenital upper limb malformation 6 . Type I (19.4%) and type II (11.5%) congenital upper limb malformations were more likely to present with congenital cardiac disease. Among the HHS patients, 121 (8%) showed an abnormal heart structure as discovered by ultrasonic cardiogram and 51 (3%) had only an abnormal ECG result. Patients who had both an abnormal ultrasonic cardiogram and an abnormal ECG were classified in the group of heart structural abnormalities; a total of 46 (3%) had both. The most common type of abnormal heart structure was tricuspid regurgitation (53/121, 43.8%). The most common abnormal ECG result was an abnormal wave pattern (22/51, 43.1%). There was a significant difference in the prevalence of congenital heart defects among the different types of upper limb malformation (p = 0.01).
The distribution of the various congenital cardiac diseases among the different types of upper limb malformations is shown in Table 3. There was a significant difference in the distribution of congenital cardiac disease among the three common types of upper limb malformations (p < 0.001).
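A minimal sketch of the chi-square comparison reported above, using Python's scipy rather than the SAS procedure actually employed; the contingency table of malformation type versus cardiac disease status is invented for illustration.

```python
# Chi-square test on a (malformation type x cardiac disease) table.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([          # rows: Swanson types I-III (toy counts)
    [60, 250],              # columns: [cardiac disease, no cardiac disease]
    [40, 308],
    [10, 180],
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```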
Discussion
In this hospital-based study in China, we reviewed the medical records of a total of 1462 patients with congenital upper extremity malformations and found that around 12% of them presented with congenital heart disease, meeting the clinical diagnostic criteria of HHS. The prevalence of HHS was largely similar irrespective of gender and of the side of the congenital limb malformation. However, the rate and the type of congenital heart disease varied by type of upper limb malformation. HHS has been well documented since it was first noted by Kato 7 in 1924 and has been reported by many others subsequently 8,9 . Several types of HHS have been identified, of which Holt-Oram Syndrome (HOS) is the best known; it was reported by Holt and Oram 10 in 1960 based on a four-generation familial study. The results of a European epidemiological study showed that HOS is a very rare condition with an average prevalence of 0.7 per 100,000 births and high regional variation (range between 0.3 and 2.4 per 100,000 births) 11,12 .
No epidemiological study of HHS has been done so far. HHS is a broad category of diseases; its classification is shown in Supplementary Table 2. In our study, only 46 (27%) of the HHS cases could be diagnosed as HOS according to the classification criteria listed in Supplementary Table 2. As suggested by McDermott 12 , a clinical diagnosis of HOS should include the presence of a preaxial radial ray malformation of at least one upper limb along with a personal or family history of septation defects (ASD, VSD) and/or atrioventricular conduction disease. Genetic testing for TBX5, 22q11.2 microdeletion and Fanconi anaemia might be needed for the final diagnosis 11 . Unfortunately, it was not possible in the current study to perform genetic testing of patients with congenital upper limb malformation or to retrieve their family history of septation defects (ASD, VSD) and/or atrioventricular conduction disease, as all of these patients had already been discharged from the hospital with no possibility of contacting them; this might explain part of the discrepancy between the relatively low rate of HOS in the current study and the rates reported in the literature. Most of the cases (73%) could not be classified according to the current classification.
There was a significant difference in the prevalence of congenital heart disease comorbid with different types of congenital upper limb malformations. According to the Swanson classification of congenital upper limb malformations, type I (failure of formation of parts) and type II (failure of differentiation) malformations were more likely to be comorbid with congenital heart disease, with prevalences of 19.4% and 11.5%, respectively. Clinicians should be aware of these two types of congenital upper limb malformations, and preoperative examination of the cardiovascular system should be performed to reduce the risk of complications during surgery. Thumb malformations, including radial polydactyly, radial ray deficiency, and thumb aplasia, were the malformations most commonly involved in HHS, accounting for 69.2% (119/172) of all HHS cases. This finding is in agreement with previous reports 11,13 . Among the patients with HOS, 14 had involvement of the left upper limb and 20 of the right upper limb. Bilateral involvement of the upper limbs was noted in 12 patients, accounting for 26% of all HOS cases, which is lower than in previous reports 13 , where bilateral involvement was reported in 84% of patients with HOS, suggesting a possible etiological difference between Chinese and Western populations. In addition, it should be noted that most patients with HOS may first be treated in cardiac centers and would therefore be missed in our study, as we recruited only patients with congenital upper limb malformations; this might explain the different rates and types of congenital heart diseases as compared with the literature.
The most common type of abnormal heart structure was tricuspid regurgitation (53/121, 44%), while the most common abnormal ECG result was abnormal wave patterns (22/51, 43%). For HOS, the most common abnormal heart structure was atrial septal defect (ASD) and/or ventricular septal defect (VSD) (13/46, 28%), which is lower than previously reported. In the study reported by Lindley B. Wall 13 , ASD was the most common cardiovascular anomaly, presenting in 53% of HOS patients, and VSD was noted in 48% of HOS patients. The involvement of different abnormal heart structures in HHS and HOS suggests possible etiological differences between Chinese and Western populations. Despite the similarities in the clinical presentations of various HHS, it remains unclear whether the heredity of HHS arises from common or distinct genetic defects. It has been suggested that congenital heart diseases are caused by a limited number of shared genetic defects [14][15][16][17] . However, some previous reports found that HHS are genetically heterogeneous, possibly arising from distinct disease genes [18][19][20] . We speculate that the genes underlying the development of various Chinese HHS might be different, as the morphological features and anatomic sites were inconsistent with previous reports. Further studies are needed to examine the underlying mechanisms and to explore whether the contribution of distinct disease genes is consistent between Chinese and Western populations. A few limitations should be kept in mind when interpreting the current findings. First, all the patients were identified from one hand clinic center, which might not be representative of the Chinese population. Second, this was a retrospective epidemiological study, and some individual information that might be important for studying HHS, such as family history, was not available.
In summary, the comorbidity of congenital upper extremity malformations and congenital heart disease is relatively high in China, and the rates and types of the various congenital heart diseases differ from previous reports, suggesting a possible etiological difference between Chinese and Western populations. Preoperative examination should cover congenital heart disease to avoid complications during surgery. | 2018-06-01T13:16:03.577Z | 2018-05-31T00:00:00.000 | {
"year": 2018,
"sha1": "99f3326ce204ead00066260dc78cb4cd87a70526",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-26727-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eccaddc7f24c617c101a6e88ba04b1a0c704d153",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221571282 | pes2o/s2orc | v3-fos-license | A 10-year ecological study of the methods of suicide used by Brazilian adolescents
Suicide among adolescents has become a major public health problem worldwide. Our study sought to describe the most commonly used methods of suicide among adolescents aged 10 to 19 years in Brazil between 2006 and 2015. Complete data were obtained from the Brazilian Health Informatics Department (DATASUS) and coded into seven categories of suicide methods. The following statistical analyses were performed: chi-square (χ2) tests to examine the association between the frequency of each suicide method and the year; odds ratios (OR) and 95% confidence intervals (95%CI) compared the relative chances of each suicide method occurring between boys and girls. In total, 8,026 suicides among Brazilian adolescents were registered over the analyzed period. The most commonly used method of suicide by both sexes was hanging
Introduction
Globally, more than 800,000 deaths from suicide are recorded every year, and suicide is the second leading cause of death among people 15-29 years old, making it a major public health concern worldwide 1 . An analysis by the World Health Organization (WHO) showed minor changes in the average suicide rate of children aged 10-14 years 2 . On the other hand, suicide rates among adolescents aged 15-19 years significantly increased in South America (boys: from 7.36 to 11.47 per 100,000; girls: from 5.59 to 7.98 per 100,000) 3 .
In Brazil, the WHO also indicated an increase in suicide rates among adolescents aged 15-19 years from 1990-2009 3 . Additionally, a study showed an increase in the age-adjusted suicide rate among adolescents of 9% between 2006 and 2015 4 . Moreover, another study showed that the suicide rate of adolescents increased by 24% in six large Brazilian cities and by 13% in the country between 2006 and 2015 5 .
One study of suicide methods in adolescents aged 10-19 years from the WHO Mortality Database (2000-2009), with 86,280 suicide cases recorded in 101 countries, showed that the most frequent suicide method for both genders and age groups was hanging; additionally, boys used hanging or firearms almost twice as often as girls, who chose poisoning (by pesticides or drugs) and jumping from heights slightly more often 6 .
Few studies of suicide methods among adolescents have been conducted in Brazil. They have used data from the Health Informatics Department (DATASUS) covering three decades; however, these studies used information collected before 2007. One study analyzed suicide among young Brazilians between 15 and 24 years of age in nine Metropolitan Regions, from 1979 to 1998, concluding that the main suicide methods used were hanging, strangulation and suffocation, mainly in Porto Alegre, and the use of arms and explosives in Belo Horizonte 7 .
Another study in Brazil examined the methods of suicide between 1980 and 2006. In most regions and in different age groups, the most frequent methods of suicide among adolescents aged 10-19 years were hanging, arms and poisoning 8 . Regional differences of suicide methods were observed: in the Northeast region, hanging (48.8%), poisoning (18.2%) and arms (16.9%) were the predominant suicide methods; in the Southeast, hanging (39.6%), other methods (24.2%) and arms (16.5%) prevailed 8 .
In another study carried out in Minas Gerais State, Brazil, between 1996 and 2002, the main method of suicide among adolescent boys was hanging/strangulation/suffocation (1.1 per 100,000 inhabitants), followed by use of arms (0.6 deaths per 100,000 inhabitants). Among women, the methods were hanging/strangulation/suffocation and self-intoxication with poisoning and pesticide (0.3 deaths per 100,000 inhabitants) 9 .
In Brazil, governmental efforts for suicide prevention have been relatively modest. More consistent strategies started in the last two decades, as indicated by the launching of the National Strategy for Suicide Prevention in 2006 10 . Increasing mental health assistance coverage was the most implemented strategy. Moreover, psychosocial care services have a fundamental role in the prevention of suicide, especially where there are Centers for Psychosocial Care (CAPS), an initiative of the Brazilian Unified National Health System (SUS) 11 .
The aims of this study are (i) to describe the methods of suicide most commonly used by adolescents aged 10-19 years in Brazil between 2006 and 2015, and (ii) to examine whether these methods differ by gender. This information could inform prevention strategies and public health policies that limit access to these methods.
Methods
This study has a 10-year ecological design, covering January 2006 to December 2015. Data on the suicide methods used by adolescents in Brazil were collected according to the chronological limits of adolescence (10-19 years) and gender.
In Brazil, suicide methods are registered using the 10th revision of the International Classification of Diseases (ICD-10) 13 : intentional self-harm (X60-X84), event of undetermined intent (Y10-Y19), and sequelae of intentional self-harm and events of undetermined intent (Y87.0). The last two categories were included in the analyses according to recommendations of the Brazilian Ministry of Health, due to possible failures in the codification of the cause of suicide in these groups 14 . Data on suicide methods in Brazil derive from information contained in death certificates compiled by the Mortality Information System (SIM) of the Ministry of Health 14 . In our study, suicide methods were categorized into seven groups: hanging/drowning (ICD-10 codes X70 and X71), self-intoxication (codes X60-X67, X69, Y11-Y17, and Y19), poisoning by pesticides (codes X68 and Y18), use of arms (codes X72-X79), jumping from heights (code X80), jumping/laying in front of a moving object/crashing of motor vehicles (codes X81 and X82), and "other methods", which includes intentional self-harm by other specified means (caustic substances, except poisoning; crashing of aircraft; and electrocution) and intentional self-harm by unspecified means (codes X83 and X84) 13 . All information was complete on the DATASUS database.
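The ICD-10 grouping above is mechanical enough to sketch in code. The following hypothetical Python snippet maps the codes listed in the text to the study's seven method categories; the function name and structure are ours, not DATASUS's.

```python
# Hypothetical sketch of the ICD-10 grouping described above. Codes are
# taken verbatim from the text; Y87.0 (sequelae) was included in the
# analyses but is not assigned to a method group.
METHOD_GROUPS = {
    "hanging/drowning": ["X70", "X71"],
    "self-intoxication": ["X60", "X61", "X62", "X63", "X64", "X65",
                          "X66", "X67", "X69", "Y11", "Y12", "Y13",
                          "Y14", "Y15", "Y16", "Y17", "Y19"],
    "poisoning by pesticides": ["X68", "Y18"],
    "use of arms": ["X72", "X73", "X74", "X75", "X76", "X77", "X78", "X79"],
    "jumping from heights": ["X80"],
    "moving object/motor vehicle": ["X81", "X82"],
    "other methods": ["X83", "X84"],
}
CODE_TO_GROUP = {code: group
                 for group, codes in METHOD_GROUPS.items()
                 for code in codes}

def classify(icd_code: str) -> str:
    """Map a death-certificate ICD-10 code to a method category."""
    return CODE_TO_GROUP.get(icd_code[:3].upper(), "unclassified")

print(classify("X70"))  # -> hanging/drowning
```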
Ethics aspects
This study was approved by the Ethics Research Committee of the Federal University of São Paulo (CEP-UNIFESP) and the São Paulo Hospital, via the Brazilian Platform, under protocol n. 3,006,523.
Statistical analysis
The data were obtained from the DATASUS and coded into the seven aforementioned categories of suicide methods, disaggregated by gender. Statistical analyses were performed using the MedCalc Software, version 18.11 (https://www.medcalc.org/). Descriptive statistics were reported in terms of frequency and proportion of each suicide method by year and gender. Chi-squared tests (χ 2 ) with a 0.05 significance level were performed to compare the frequency of each suicide method in 2006 and 2015. Odds ratios (OR) and 95% confidence intervals (95%CI), and z-tests with corresponding p-values (also with a 0.05 significance level) were estimated to compare the relative odds of the occurrence of each method of suicide among boys (category of reference) and girls.
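To make the odds-ratio step concrete, here is a minimal sketch of the standard 2x2 calculation (the study itself used MedCalc); the counts below are hypothetical, not the study's data.

```python
# Minimal sketch of the OR, 95%CI and z-test described above, with
# hypothetical counts (the study used MedCalc, not this code).
import math

# 2x2 table: rows = girls / boys (boys = reference category);
# columns = hanging / all other methods.
a, b = 1500, 900     # girls: hanging, other (hypothetical)
c, d = 3900, 1726    # boys: hanging, other (hypothetical)

or_ = (a * d) / (b * c)                   # odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # standard error of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se)  # lower 95% bound
hi = math.exp(math.log(or_) + 1.96 * se)  # upper 95% bound
z = math.log(or_) / se                    # z statistic (H0: OR = 1)
print(f"OR = {or_:.2f}, 95%CI = ({lo:.2f}, {hi:.2f}), z = {z:.2f}")
```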
Discussion
The use of suicide methods by boys and girls varied between January 1, 2006, and December 31, 2015, with hanging increasing by 15%. In the same period, the use of arms and self-intoxication by poisoning and pesticide reduced. The understanding of the possible factors underlying such changes could help to define future interventions.
The total proportion of adolescent suicides by the use of arms decreased from 14.2% to 9.1% between 2006 and 2015, possibly due to the Disarmament Statute (Law n. 10,826/2003) 15 , which updated the legislation on the registration, possession and trade of arms in Brazil, restricting access to arms.
However, recent U.S. legislation restricting access to arms was not associated with a reduction in the number of arms suicides among young men in the U.S. Restricted access to arms, in turn, was associated with a significant reduction in suicide among Canadian men aged 15-34 and young Australians, but with a simultaneous increase in hanging suicides in the latter group 16 . In Brazil, the total proportion of adolescent suicides by hanging increased from 54.9% to 70.3% between 2006 and 2015 1,17,18 .
We also found a relative decrease in poisoning over the same period, from 10.7% to 4.9%. Worldwide, poisoning is a major global health issue, especially in developing countries 8 . Deaths from pesticide ingestion are a major contributor to the global burden of suicide and premature mortality 19 . In Brazil, Law n. 6,670/2016 establishes the National Policy for the Reduction of Agrochemicals, with the aim of implementing actions that contribute to the progressive reduction of the use of agrochemicals in agricultural, livestock, extractive and natural resources management practices 15 .
This change may reflect the restriction of access to arms and pesticides in Brazil, leading to an increase in hanging, a highly lethal suicide method. No specific measures seem effective against this method, probably due to the difficulty of imposing barriers against it 20 . Another explanation could be the increased coverage of emergency medical care for poisonings and traumas caused by arms 21 .
In our study, the patterns of suicide methods in children and adolescents reflect the lethality 22 , availability 6 , and cultural acceptability of suicide methods 23 . Arms and hanging are the most lethal methods, with lethality of 60% and 47.5%, respectively 6 . Pesticides have also been reported as highly lethal, especially herbicides (42.7%). Overall lethality was higher in boys and increased with age 6 . In one study in rural India, suicide rates among women aged 15-24 were higher than those for men of the same age 19 . Female suicide rates in India are among the highest in the world 19,24 . These rates were believed to be higher in rural India because of the greater availability of pesticides combined with poorer access to emergency medical care 24 .
As the use of methods associated with increased lethality is more likely to result in death, suicide prevention has focused especially on restricting access to suicide methods, which is perhaps where the greatest success has been achieved 25,26,27 . Key prevention strategies can be population-based (for instance, mental health promotion, education, awareness by campaigns on mental resilience, responsible media coverage, limited access to suicide methods) as well as targeting high-risk subgroups (e.g., specific school-based programs, educating gatekeepers in different domains, providing crisis hotlines and online help, detecting and coaching families at increased risk) or even focusing on individuals identified as suicidal (e.g., improving mental health treatment, follow-up after suicide attempts and strategies for coping with stress and grief) 28,29 . Studies suggest that low-cost follow-up interventions in which physicians and health professionals contact patients who have attempted suicide (especially patients who do not undergo any treatment) using letters to express concern and support may help to reduce the suicide rate after a psychiatric or general hospital/intensive care discharge 30 .
The main limitation of our study is the possibility of underreporting of suicides through death certificates that indicate the cause of death as undetermined or as other causes, especially in relation to adolescents. There are many reasons for underreporting, such as the reluctance of doctors and family members to determine the cause as suicide, given the possible cognitive immaturity of adolescents, but also to avoid social stigma and shame for the family of the young person. Data collection can also be influenced by a local scarcity of health professionals. Another limitation concerns geographical coverage, since precision may vary across areas given the country's continental extension.
Preventing adolescent suicide is a global imperative 1,31 . Several studies have shown promising initial results regarding the efficacy of follow-up care and suicide prevention 30,32 .
We suggest further research on adolescent suicide rates and methods from 2015 onward, given the different environmental, social and family influences of modern times.
Contributors D. C. Jaen-Varas contributed to the conception and design, analysis and interpretation of data, and writing of the article. J. J. Mari contributed to the conception and design, analysis and interpretation of data, and critical review of the manuscript for important intellectual content. E. Asevedo, R. Borschmann, and E. Diniz contributed to the critical review of the manuscript for important intellectual content. C. Ziebold and A. Gadelha contributed to the conception and design, statistical design and interpretation of data, and critical review of the manuscript for important intellectual content. All authors approved the final version of the manuscript. | 2020-09-03T09:14:37.184Z | 2020-09-02T00:00:00.000 | {
"year": 2020,
"sha1": "bf82bebab631e5a0437b43ce641c6064445b06c6",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/j/csp/a/JjpYgFb4H6nW4RnRnQJCB4k/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "493323b19010cfe47706ad7392faaaa10cf556c8",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55461766 | pes2o/s2orc | v3-fos-license | Perceptions Towards Non-Value-Adding Activities During The Construction Process
Non-value-adding activities are pure waste during the construction process. However, most construction practitioners do not realise that many of the activities performed during the construction process add no value to their projects. A total of 375 questionnaires were distributed to developers, Jabatan Kerja Raya, consultants and contractors. The study found that awareness among construction participants in Malaysia of the need to take action against non-value-adding activities during the construction process is relatively low. Analysis using a Pareto chart found that defects and waiting time are the two categories of non-value-adding activities that need to be prioritised by the industry. It was also found that non-value-adding activities occur most frequently during structural and architectural work. This paper also reviews the causes of non-value-adding activities and discusses their effects on the time, cost, quality and productivity of construction projects. The paper also gives clarity and a broader understanding of this form of waste, beyond material waste.
Introduction
The term construction is used to define the activity of creating physical infrastructure, comprising work such as residential projects, non-residential projects, civil engineering and specialist projects. The construction industry plays a vital role in the economic, social and national growth of a country [1]. The construction field provides the socioeconomic infrastructure for economic growth and serves as an economic engine for developed economies. It produces wealth and quality of life and generates huge employment opportunities. The construction industry is very complicated in nature due to its various levels of management, complex designs, numerous processes and labour intensity.
Nevertheless, despite its importance, the construction industry faces many challenges that require immediate attention. Common problems in this industry include low productivity, poor safety, poor documentation, hostile work environments, inefficient costs and low quality of production [2]-[6]. Another major problem is the high level of waste, which is undesirable, adds no value to the product, and consumes time and cost to remove during the construction process.
Waste in the construction industry has gained attention all over the world. According to Rahman et al. (2012) [4], waste elimination is one of the main objectives in lean construction. Skoyles & Skoyles (1987) [7] regarded waste as one of the weaknesses of the construction industry. Alwi, Hampson, & Mohamed (2002) [8] suggested that a proper understanding of waste is needed. Most construction practitioners view waste as material waste, or debris. After a systematic review of previous research on waste, Viana, Formoso, & Kalsaas (2012) [9] divided waste into three (3) categories, namely, construction material waste (physical waste); non-value-adding activities (process waste); and specific waste (such as accidents and rework). This paper focuses only on the second type of waste, non-value-adding activities. It discusses the perceptions of construction practitioners towards non-value-adding activities during the construction process, as well as the causes of non-value-adding activities and their effects on the time, cost, quality and productivity of construction projects. By using a Pareto chart, the categories of non-value-adding activities that need to be prioritised and urgently addressed by the industry are identified at the end of the study.
Non-value-adding activities (NVAAs) as pure waste in the construction industry
A study by Felipe et al. (2012) [10] found that construction practitioners, including those at the executive level, refer to waste as scrap at the construction site. However, according to the new philosophy developed by Koskela (1992) [11], waste should be understood as any ineffective usage of equipment, materials, labour, or capital during production.
Macomber & Howell (2004) [12] emphasized that anything that does not produce value is waste. Furthermore, Formoso & Hirota (1999) [13] refer to waste as any activity that, from the point of view of the client, causes direct or indirect costs but does not add any value to the product.
The term "non-value-adding activities" (NVVAs) is used by Koskela & Sharpe (1994) [14] to explain waste in the lean production philosophy.According to these researchers, only activities that convert materials and/or information towards what is required by the costumer are value added.Han & Lee (2007) [15] claimed that NVAAs consume time and resources, and need to be avoided.Recently, Ralph & Iyagba (2012) [16] used the term non-value-adding activities to differentiate between physical waste and other waste that occurs during the construction process.
Horman & Kenley (2005) [17] found that, on average, 49.6% of time usage in construction is identified as wasteful activity. Unfortunately, most construction practitioners do not realise that many of the activities performed during the construction process add no value to their projects. Activities such as waiting, rework, unnecessary movement, overproduction, delay, defects, lack of quality, and inventories are waste because they add no value to the process. These non-value-adding activities occur throughout the entire construction process, and every participant in the project can contribute to them [6]. Josephson & Saukkoriipi (2007) [18] supported this observation and further suggested that there is probably a lack of knowledge about the nature and size of non-value-adding activities in the construction industry.
Non-value-adding activities directly affect the construction process and the project, but they can be avoided by executing work correctly and through close monitoring, controlling and planning. Everyone involved in the construction process has the potential to contribute to NVAAs and therefore to affect the process. Accordingly, NVAAs can be defined as activities that consume direct and indirect cost, time, resources, labour and space, but add no value for anyone involved in the process.
However, most of the types of waste above can be included in one of the eight categories of non-value-adding activities used in this research. For example, poor quality can be classified under the defect category, waiting for equipment under the waiting time category, and excess material on site under the inventory category. A few types of NVAAs may be causes of NVAAs rather than NVAAs themselves. For example, unreliable equipment [16], [25] can cause defects, waiting time and motion.
Causes of non-value-adding activities
Koskela & Sharpe (1994) [14] indicated that there are three causes of NVAAs: design in hierarchical organisations, ignorance, and the nature of production. These researchers suggested that traditional management, improper processes during design, errors, and machine breakdown also contribute to NVAAs. In addition, Koskela & Leikas (1997) [32] mentioned that NVAAs occur due to failure to recognize or measure waste, missing information, and complicated material flows.
Previous researchers found that a lack of skills among subcontractors and trades was among the causes of NVAAs. Furthermore, a great deal of NVAAs is contributed by changes in design, poor coordination, weather, poor planning and scheduling, poor supervision, slow decision making, lack of trades and subcontractor skills, incorrect construction methods, delay of materials, communication breakdown, and lack of trust among parties [10], [16], [21].
Research by Alwi et al. (2002) [6] and Alarcon (1997) [24] found that poor quality of site documentation, weather, unclear site drawings and supplies, poor design, design changes, slow drawing revision and distribution, unclear specifications, management, information and resources are among the vital factors behind NVAAs. All the causes above can be categorised into the 8M's: management, measurement, method, man, mother nature, material, machinery and money. For example, poor coordination, poor planning, poor scheduling, and poor supervision fall under the management category, whereas lack of trades and subcontractor skills falls under the man category.
Research Methodology
This study aims to identify perceptions of non-value-adding activities among construction practitioners in the Malaysian construction industry. The respondents in this study are clients (Jabatan Kerja Raya and private developers), consultants (architects, engineers, quantity surveyors and project managers) and contractors. Only contractors registered as G7 with CIDB were selected for this study, due to their expertise and experience in handling mega and complex projects and because they face no limit in tendering for projects.
Before the questionnaire was distributed to the respondents, it was first validated by academicians, field experts and statistical experts. A pilot test was also conducted with 30 respondents; however, only 23 questionnaires were returned to the researcher. The internal consistency and inter-item correlation (relationship between items) of the questionnaire were assessed by the Cronbach alpha method (a sketch of the alpha calculation follows the section list below). The Cronbach alpha value calculated for the pilot test was 0.912. The questionnaire consists of three (3) sections, as below:
Section (A) is structured to obtain general information and background about the respondents, such as type of organization, position and years of experience.
Section (B) is structured to identify the perceptions of construction participants towards non-value-adding activities during the construction process.
Section (C) is structured to identify the effects of non-value-adding activities.
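Since the Cronbach alpha method is central to the reliability checks reported here, the following minimal sketch shows the standard calculation; the response matrix is randomly generated and purely hypothetical, not the study's pilot data.

```python
# Illustrative sketch of Cronbach's alpha:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# The 23x8 response matrix is hypothetical, not the study's pilot data.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of Likert responses."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(23, 1))                     # 23 pilot respondents
items = np.clip(base + rng.integers(-1, 2, (23, 8)), 1, 5)  # 8 correlated items
print(f"alpha = {cronbach_alpha(items):.3f}")
```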
Results and Discussion
A total of 375 questionnaires were distributed randomly by postal mail around Malaysia (125 sets to clients, 125 sets to consultants and 125 sets to contractors). After several follow-ups by email, online application and telephone, a total of 106 questionnaires were returned to the researcher within 4 weeks, giving a 28% response rate. According to Akintoye (2000) [33], it is normal in the construction industry to get a response rate of between 20% and 30% for postal questionnaires. This is supported by Hoe (2005) [34], who indicated that a response rate of 28% is acceptable. Teo & Loosemore (2010) [35], who used a survey to collect data in their study, obtained a 29.1% response rate and believed that the data collected from those respondents were representative of the population as a whole. In addition, Love & Smith (2003) [36] suggested that a 30% response rate is considered satisfactory. Therefore, the 28% response rate for this study is considered acceptable and representative of the population.
The calculated Cronbach alpha value is 0.870, which exceeds the minimum acceptable value of 0.70 [37]. Findings from the data gathered through the questionnaires are as follows:
Respondent information
Of the 106 questionnaires returned, 34 were from clients, 38 from consultants and 34 from contractors. Client respondents consist of the private sector and the Government, while consultant respondents comprise architects, quantity surveyors and engineers. All contractor respondents were registered as G7 with CIDB. The percentages for these three (3) types of respondents are almost equal, each between 32% and 35%.
As shown in Table 2, 67.9% of the respondents have more than 10 years of experience in the construction industry. It can be seen that those with 26 or more years of experience contribute 22.6% of the total.
The perceptions of construction participants towards NVAAs
Section B is structured to identify the perceptions of construction participants towards NVAAs. In this section, respondents were asked four (4) questions. In the first question, respondents were asked to indicate how frequently their organization takes action against NVAAs during the construction process, using a 5-point Likert scale ranging from 1 (never) to 5 (always). This question was analyzed using the mean value. Following Ariola (2006) [38], the mean scores from Likert-scale questions can be interpreted as in Table 3.
From Table 4, it is notable that the mean scores of all seven items ranged between 2.51 and 3.50, indicating that, on average, respondents only sometimes take action against NVAAs. This finding supports the problem statement that awareness of NVAAs among construction participants is quite low. Investigating the root causes of NVAAs received the lowest mean score (3.06); more than half of the respondents (62%) answered that they never (12%), rarely (16%) or only sometimes (34%) investigated the root causes of NVAAs in their projects. The second question in Section B was structured to seek respondents' agreement as to whether NVAAs contribute to poor construction project performance, assessed using a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree) and calculating the mean. The interpretations of each statement are shown in Table 5 [38].
The findings tabulated in Table 6 show that the mean scores for the eight (8) categories of NVAAs range between 3.51 and 4.50. Therefore, by referring to Table 5, it can be deemed that, on average, the respondents agree that all the categories of NVAAs contribute to poor construction project performance. The third and fourth questions in this section were structured to identify the categories of NVAAs and the types of work in which NVAAs most frequently occur. The data were then analysed using Pareto analysis, which is used to find out which problems should be prioritised and to identify targets for improvement. Pareto analysis (80/20) uses the idea that a large majority of problems (80%) is produced by a few key causes (20%), also known as the vital few and the trivial many.
Four (4) respondents did not answer these two (2) questions. From Table 7 and Figure 1, it is notable that defects and waiting time make up 78% of the total NVAAs occurring during the construction process. Meanwhile, 81% of NVAAs occurred during structural and architectural works (see Table 8 and Figure 2). Therefore, to improve the construction process, an in-depth study focusing on root cause analysis will be carried out on these two categories of NVAAs during structural and architectural work.
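As a hedged illustration of the Pareto (80/20) procedure applied here, the sketch below sorts category frequencies and reports the cumulative share until it crosses 80%; the counts are hypothetical but chosen so that defects and waiting time reach the 78% reported above.

```python
# Hypothetical sketch of the Pareto analysis described above. Counts
# are illustrative, chosen so defects + waiting time = 78% as reported.
freq = {"Defects": 48, "Waiting time": 30, "Motion": 6, "Inventory": 5,
        "Extra processing": 4, "Transportation": 3, "Overproduction": 2,
        "Non-utilised talent": 2}

total = sum(freq.values())
cum = 0.0
for category, count in sorted(freq.items(), key=lambda kv: -kv[1]):
    cum += 100 * count / total
    print(f"{category:<20s} {count:3d}  cumulative {cum:5.1f}%")
    if cum >= 80:   # the "vital few" have been identified
        break
```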
Effect of NVAAs
Section C of the questionnaire is designed to determine the effects of non-value-adding activities. Respondents were asked to rate the effect of NVAAs on four (4) important parameters of construction projects, namely time (T), cost (C), quality (Q) and productivity (P), using a 5-point Likert scale from 1 (no effect) to 5 (major effect). Interpretations of the 5-point Likert scale for effect are described in Table 9. Data from respondents for this section were analysed using mean scores. From Table 10, it can be seen that only defects have a major effect on quality, with a mean value in the range 4.51-5.00; the mean scores of the other parameters for defects fall under moderate effect. For waiting time, our findings reveal a major effect on time, while the other three (3) parameters fall under moderate effect, with mean values of 4.14 for cost, 3.54 for quality and 4.05 for productivity. Transportation and overproduction obtained the lowest mean scores and are deemed to have a neutral effect on quality.
The average mean score was calculated to identify the overall effect of NVAAs on the project. The average mean scores for seven (7) categories of NVAAs (defect, overproduction, waiting time, non-utilised talent, transportation, motion and extra processing) across the four (4) parameters range between 3.51 and 4.50, with defect obtaining the highest mean score (4.25), followed by waiting time (4.06). Meanwhile, the lowest average mean score is for inventory (3.50).
Conclusion
Non-value-adding activities are defined as activities that consume direct and indirect cost, time, resources, labour and space, but add no value for anyone involved in the process. The eight (8) categories of NVAAs are defect, overproduction, waiting time, non-utilised talent, transportation, inventory, motion and extra processing, also known as DOWNTIME. The findings show that awareness of this issue among Malaysian construction practitioners is quite low. They reveal that construction practitioners do not take NVAAs seriously, even though they agree that NVAAs contribute to poor project performance. Using Pareto analysis and the idea that a large majority of problems (80%) is produced by a few key causes (20%), priority should be given to solving the problems of defects and waiting time: the study found that 78% of the NVAAs occurring during the construction process are defects and waiting time. Further study should also focus on structural and architectural works, because 81% of NVAAs occurred during these two types of work. Defects and waiting time have major effects on the quality and time of the project, respectively, and these two types of NVAAs also obtained the highest and second highest average mean scores for overall effect. Therefore, it can be concluded that eliminating defects and waiting time during structural and architectural work will have a huge impact on the time, cost, quality and productivity of the project.
Figure 1: Pareto analysis of the frequency of NVAAs occurring during the construction process.
Figure 2: Pareto analysis of the types of work during which NVAAs most frequently occurred in the construction process.
Table 1: Definition of non-value-adding activities.
Table 2: Respondent information.
Table 3: Interpretation of Likert scale for frequency.
Table 4: Actions taken against NVAAs.
Table 6: Contribution of NVAAs towards poor project performance.
Table 7: Frequency of NVAAs occurring during the construction process.
Table 8: Types of work during which NVAAs most frequently occurred in the construction process.
Table 9: Interpretation of Likert scale for effect. | 2018-12-07T03:47:55.412Z | 2016-07-13T00:00:00.000 | {
"year": 2016,
"sha1": "76fe752155895c94906f278b555e7569f6d17605",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2016/29/matecconf_ibcc2016_00015.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "76fe752155895c94906f278b555e7569f6d17605",
"s2fieldsofstudy": [
"Engineering",
"Business"
],
"extfieldsofstudy": [
"Engineering"
]
} |
6223563 | pes2o/s2orc | v3-fos-license | MetaStorm: A Public Resource for Customizable Metagenomics Annotation
Metagenomics is a trending research area, calling for the need to analyze large quantities of data generated from next generation DNA sequencing technologies. The need to store, retrieve, analyze, share, and visualize such data challenges current online computational systems. Interpretation and annotation of specific information is especially a challenge for metagenomic data sets derived from environmental samples, because current annotation systems only offer broad classification of microbial diversity and function. Moreover, existing resources are not configured to readily address common questions relevant to environmental systems. Here we developed a new online user-friendly metagenomic analysis server called MetaStorm (http://bench.cs.vt.edu/MetaStorm/), which facilitates customization of computational analysis for metagenomic data sets. Users can upload their own reference databases to tailor the metagenomics annotation to focus on various taxonomic and functional gene markers of interest. MetaStorm offers two major analysis pipelines: an assembly-based annotation pipeline and the standard read annotation pipeline used by existing web servers. These pipelines can be selected individually or together. Overall, MetaStorm provides enhanced interactive visualization to allow researchers to explore and manipulate taxonomy and functional annotation at various levels of resolution.
Introduction
The field of metagenomics has arisen following the advent of next-generation DNA sequencing. Through new technologies, such as Illumina and pyrosequencing, it is now possible to directly shotgun sequence DNA extracted from various environmental samples, without the need for cloning. Metagenomics is particularly promising for advancing the understanding of the structure and function of microbial communities residing in natural, human, and engineered environments. To date, metagenomic data sets have been obtained from different regions of the human body [1,2,3], seas and oceans [4,5,6], lakes and rivers [7,8,9], wastewater and drinking water treatment systems [10,11,12,13], soil [14,15], and air [16,17]. Unlike single-organism genomic characterization, metagenomic data sets contain DNA sequences derived from hundreds or even thousands of microbial species [18,19]. Thus, a major computational undertaking is to annotate metagenomic samples in terms of the kinds of microbes (taxonomy) and genes (functional annotation) present, particularly in complex environmental samples.
Various computational resources have been developed for taxonomic and functional annotation of metagenomics data sets. These resources can be classified into two main categories: 1) Web services organized as a collection of different computational resources that facilitate the storage, analysis, and retrieval of metagenomic data (e.g., MG-RAST [20] and EBI-Metagenomics [21]); 2) stand-alone programs for various aspects of metagenomic data annotation (e.g., MEGAN [22], MOCAT [23], QIIME [24], MetaPhlAn [25], MetaHIT [26], and MyTaxa [27]), which have been commonly incorporated into Web services. Generally, current services (MG-RAST and EBI-Metagenomics) annotate metagenomic samples by matching raw sequences against a fixed set of large reference sequence databases (e.g., UniProtKB [28] and Clusters of Orthologous Groups of proteins (COG) [29]). This practice has two major limitations. First, there is a lack of user customization, particularly the inability to select specific sets of genes. Thus, all annotations are made with respect to the same reference databases, which may not be the most suitable depending on the hypotheses driving the research. The ability to select and focus on desired sets or subsets of reference sequences enables testing of domain-specific hypotheses. For instance, conclusions of studies of antibiotic resistance gene occurrence in the environment (e.g., [30]) can vary depending on the database selected, i.e., CARD [31], a specialized antibiotic resistance gene database, versus the full GenBank database. Second, due to short sequence length, the ability to assemble reads can be critical to identifying genes of interest and avoiding loss of information. The assembly of raw reads into longer contigs/scaffolds has proved to be more effective for annotating sequence features such as operons, transcription factor binding sites, chromosome organization and taxonomy [19,32].
Here we introduce a new online metagenomic analysis server, MetaStorm, which improves upon available web resources, particularly for environmental samples, while maintaining a user-friendly interface. MetaStorm offers both read-matching and assembly-based annotation pipelines, while also enabling customization of reference databases. This allows users to upload databases containing curated genes of interest to facilitate functional and taxonomic annotation. MetaStorm also provides enhanced visualization of annotation results, allowing the user to explore and manipulate taxonomic and functional annotations at various levels of resolution and to compare annotations for similarities and differences across multiple samples using various graphs.
Materials and Methods
Raw data is submitted to the MetaStorm server via a user-friendly web interface. Submitted data can remain private or be made public depending on user preference. Users are required to create an account and a profile. This profile allows them to retrieve, submit, analyze, and compare not only their own samples but also other public projects. MetaStorm stores the metagenomics samples and results into user projects which describe the features of the metagenomic experiments. If a project is made public, the raw and any associated results are free for download.
accepted. Provision of detailed metadata associated with the samples from which the DNA sequences were derived is mandatory during the submission process. Provision of metadata is critical to help users identify similar studies that are already in the MetaStorm repository for additional sample comparisons. Data is organized in a manner that facilitates retrieval. A project may contain several samples and each sample may be nested with several associated studies within it (e.g., taxonomy annotation, antibiotic resistance, or any functional annotation using both assembly and read matching pipelines). All user, sample, and project information is stored in a relational database.
Reference database
Apart from a set of standard databases (e.g., CARD [31], UniProtKB [28], and GREENGENES [34]) (Table 1), MetaStorm also allows users to upload and use their own customized databases as reference databases. The customizability of reference databases is especially useful when researchers seek to test a hypothesis by comparison against a very specific set of sequences. Neither MG-RAST nor the EBI-metagenomics Web service allows for customized reference databases. In this way, MetaStorm enhances user control by allowing them to select reference sequences.
Web-based submission
Submission of metagenomic data is made via an interactive web interface (Fig 1). Users are first required to log in to the MetaStorm website, select (or create) the project they wish to analyze, and select the desired method (assembly/read matching). Once on the project profile page, users need to enter sample information (number of samples, sample names, conditions, environment, and library preparation), select reference databases, upload raw FASTQ files, and finally run the annotation pipeline. To simplify the process of data submission, MetaStorm does not require external files such as Excel spreadsheets for sample description and provision of metadata (although this functionality can easily be added in a future update if necessary). This interactive tool also allows users to remove samples and projects or re-run the samples with different pipelines, visualizing the results as needed.
Analysis pipeline
Once stored on the MetaStorm server, raw reads are queued for taxonomic and functional annotation. MetaStorm incorporates two pipelines, the assembly-based pipeline and the read-matching pipeline (Fig 2). Selecting the appropriate pipeline depends on several factors, including the design of the experiment, prior knowledge about the experiment, and the research hypothesis and goals. For instance, if the objective is to characterize the most abundant taxa in the community, the assembly pipeline may suffice [18]. Assembly pipeline. Through the assembly process, metagenomic reads are merged into large contiguous sequences varying in length from several hundred bases to nearly complete genomes, providing much richer information relative to the raw reads [18,19]. MetaStorm provides a fully automated assembly pipeline that allows the user to visualize, compare, and analyze the taxonomic and functional content of a sample or set of samples by matching sequences and computing abundance. The pipeline for assembly and gene finding is similar to the methods reported by the MetaHIT consortium [26] (mainly metagenome assembly and gene prediction through scaffolds). This pipeline consists of the following major procedures: 1. Quality control (QC): reads are trimmed and filtered by TRIMMOMATIC [35] to remove low-quality sequences from the data set.
2. Assembly: IDBA-UD [36] is a widely used metagenome assembler that has consistently demonstrated production of high-quality scaffolds [37,38,39]. IDBA-UD is used to assemble the QC-filtered reads, with MetaStorm using the default parameters. 3. Gene prediction: Once a set of scaffolds is assembled, PRODIGAL [40] (metagenomics version), a microbial gene-finding program, is deployed to predict genes within each scaffold.
4. Taxonomy annotation:
Predicted genes are matched to a reference database using two alignment tools (BLAST [41] and DIAMOND [42]). Currently included are the following databases: a. Two 16S rRNA databases (SILVA [43] and GREENGENES [34]). The 16S rRNA gene abundance is computed by first selecting the best hit (same definition as the MG-RAST representative hit [44]) of the scaffold genes against the reference database using BLASTN [41] and then computing the number of genes that each taxon contains (E-value<1e-10, identity>90%). Note that the taxonomy profile is computed based on the abundance of predicted genes, not the number of reads.
b. A set of marker genes processed by the MetaPhlAn2 [45] pipeline. This technique is included because whole genome sequencing samples typically contain very low 16S rRNA sequence content [26,27,45].
5. Functional annotation:
Predicted genes (translated proteins from PRODIGAL) are matched to the user selected reference databases using the DIAMOND BLASTP aligner [42]. We use the representative hit strategy with an E-value<1e-10, identity>60% over the entire length [46], and minimum length of 25aa. The reference sequence databases for functional annotation depend on the user criteria. For instance, a user interested in antibiotic resistance genes may prefer to run the analysis over the CARD database [31], whereas a project related to the degradation process may use the CAZy database [47].
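The representative-hit filtering used in both annotation steps can be sketched as follows; this is a hypothetical illustration using the thresholds stated above, not MetaStorm's actual implementation.

```python
# Hypothetical sketch of representative-hit filtering: keep the single
# best-scoring alignment per predicted gene that passes the stated
# thresholds (E-value < 1e-10, identity > 60%, length >= 25 aa).
def representative_hits(alignments):
    """alignments: iterable of (gene, subject, identity, aln_len, evalue, bitscore)."""
    best = {}
    for gene, subject, ident, length, evalue, score in alignments:
        if evalue >= 1e-10 or ident <= 60 or length < 25:
            continue  # fails the annotation thresholds
        if gene not in best or score > best[gene][-1]:
            best[gene] = (subject, ident, length, evalue, score)
    return best

hits = [("g1", "cat_A", 85.0, 120, 1e-30, 240.0),
        ("g1", "cat_B", 62.0, 110, 1e-12, 95.0),
        ("g2", "sul1",  55.0, 130, 1e-40, 300.0)]  # fails identity cutoff
print(representative_hits(hits))  # -> {'g1': ('cat_A', ...)}
```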
Read matching pipeline. The read-matching pipeline conducts taxonomic and functional annotation of metagenomic data by comparing the raw sequence reads to a reference database. This approach is also called marker gene analysis [18]. For taxonomy annotation, MetaStorm uses a matching scheme similar to MG-RAST and EBI-Metagenomics, where reads are first trimmed and quality filtered using TRIMMOMATIC [35] and then mapped to a 16S rRNA sequence database (SILVA/GREENGENES). To speed up the read-matching process, we use Bowtie2 [48], a fast and sensitive read-matching tool specialized for mapping short reads to reference genomes (-local-sensitive, identity>90%, best-hit-alignment). It has proven to be particularly efficient for matching marker gene databases; MetaPhlAn2 [45], using Bowtie2 for read matching, produced more accurate results than its earlier version MetaPhlAn1 [25], which uses BLAST. MetaPhlAn2 [45], which uses a set of clade-specific genes, is also offered by MetaStorm to estimate taxonomic abundance. Functional annotation is made by comparing the high-quality reads to the reference database using the DIAMOND BLASTX [42] aligner with the representative hit approach [44] (E-value<1e-10, identity>90%, and minimum length of 25aa).
Sample normalization and comparison. Sample comparison consists of the analysis of relative abundance across a set of samples, allowing the user to visualize similarities and differences among samples. One of the critical aspects of sample comparison is data normalization. MetaStorm implements three different normalization techniques, as follows (a sketch of the three options follows this list): 1. Scaling: Normalize the number of matches obtained per sample, with relative abundance between 0 and 100.
2. RPKM: Normalize the number of matches using the Reads Per Kilobase per Million mapped reads of each gene.
3. Relative to 16S rRNA: We use the normalization concept described in [30], which defines relative abundance as copies of a functional gene per copy of 16S rRNA genes.
Normalizations are calculated differently for the two pipelines: the assembly-based pipeline makes all computations in terms of the number of matched genes, whereas the read-matching pipeline normalizes the samples using the number of matched reads.
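A minimal sketch of the three normalization options follows, assuming hypothetical gene counts and lengths; the variable and function names are ours, not MetaStorm's internal API.

```python
# Minimal sketch of the three normalizations described above.
# Counts, gene lengths and names are hypothetical.
def scaling(counts):
    """Relative abundance scaled to 0-100 within a sample."""
    total = sum(counts.values())
    return {g: 100 * c / total for g, c in counts.items()}

def rpkm(counts, gene_len_bp, total_reads):
    """Reads Per Kilobase per Million mapped reads."""
    return {g: counts[g] / (gene_len_bp[g] / 1e3) / (total_reads / 1e6)
            for g in counts}

def per_16s(counts, n_16s):
    """Copies of a functional gene per copy of 16S rRNA genes [30]."""
    return {g: c / n_16s for g, c in counts.items()}

counts = {"sul1": 420, "tetW": 180}       # matched reads (hypothetical)
lengths = {"sul1": 840, "tetW": 1920}     # gene lengths in bp (hypothetical)
print(scaling(counts))
print(rpkm(counts, lengths, total_reads=2_000_000))
print(per_16s(counts, n_16s=5_000))
```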
Visualization of taxonomic abundance
MetaStorm offers interactive visualization, allowing users to see in detail the main features of the sequence make-up of each sample. A taxonomic tree encodes relative abundance information of different lineages in the sample. For example, in Fig 3, a user interested in the relative abundance of various kinds of Proteobacteria will find that the genus Achromobacter is the most abundant. Unlike other metagenomic tools, such as MG-RAST and EBI-metagenomics, we allow interactive visualization to improve the user experience. In particular, the tree allows users to keep track of various levels of the phylogenetic hierarchy. Also, when the user clicks on any specific node (taxa), all descendants from that node will be displayed as a pie chart. The overall abundance of a taxonomy level can also be displayed as a pie chart. Node colors represent relative abundance. All visualization formats are available for the taxonomic annotation methods.
Visualization of functional abundance
Functional relative abundance is described by a set of interactive pie charts and bar plots (Fig 4A) that relate functional categories to the genes involved in each category. Users can select the reference database to analyze, and all tables can be downloaded in text format. When analyzing individual samples, read/gene counts are normalized using a linear scale from 0 to 100.
Visualization of sample comparison
Visualization techniques employed by MetaStorm include heat maps, stacked bars, and interactive trees (for taxonomy annotation). As with single-sample visualization, the resulting tree shows relative abundance for each node (taxon) and for each taxonomic hierarchical level, allowing a high level of specificity. These interactive visualization features (Fig 4B and 4C) are not available in other visualization tools, such as MG-RAST or EBI-Metagenomics.
Data Access
Similar to MG-RAST and EBI-Metagenomics, all the information on a project tagged public, such as raw read files, processed files, description files, and visualization tables, is freely available through MetaStorm. From the home page, the user can access descriptions of all recently listed (public) projects and the reference databases that other users have submitted. A search tool is available for users to identify potential sets of reference sequences that can match their analysis. MetaStorm's reference-sharing capability aims to support 1) focused, reusable knowledge based on user runs and 2) shorter projected run times for reporting MetaStorm results: expectedly, small customized databases will report results faster than full reference databases. A novice user can use such a database for analysis and jump directly to the specific biological problem, thus saving computing time. Moreover, the search tool enables users to find similar existing metagenome samples in MetaStorm (public ones) and include them for more comprehensive comparison studies. Comparison across different samples is made feasible by the normalization criteria implemented in MetaStorm. Finally, all the raw and generated files for the metagenomic analysis can be downloaded in a variety of formats by clicking on the download button of each section of the visualization page.
Results and Discussion
Compared to other metagenomic resources, such as MG-RAST and EBI-Metagenomics, MetaStorm extends the analysis and visualization of metagenomic samples by: 1) adding a fully developed assembly-based annotation pipeline, in addition to the read-matching pipeline deployed by those Web servers; 2) offering customized analysis in which the user can select and upload reference databases, enabling a focus on specific genes of interest as well as inter-project comparison; and 3) providing interactive visualization capabilities, including an interactive taxonomic tree, which permit users to interrogate and compare specific aspects of the sequence data. MetaStorm includes a wide variety of databases used for metagenomic analysis (see the section on customizable reference databases); these databases have been used as defaults by several current metagenomics resources. While the assembly pipeline implemented by MetaStorm is similar to that of the MetaHIT pipeline [26], it incorporates a more meaningful relative abundance determination in which gene copies are normalized to 16S rRNA gene copies [30]. Normalization enables comparison across multiple metagenomic data sets, including those generated by external labs, empowering researchers to address broader questions. This last feature is particularly promising for the future applicability of the MetaStorm server.
Conclusion
MetaStorm is a free, public metagenomics resource that enables more specific user customization through various improvements in visualization, data management, and user interactivity. MetaStorm offers two main metagenomic analysis pipelines: the read-matching pipeline (similar to current web resources) and the assembly pipeline. Unlike other web resources, MetaStorm incorporates user reference customization, which will help to streamline the annotation process when a research hypothesis requires specific and customized databases. | 2017-03-18T22:30:50.587Z | 2016-09-15T00:00:00.000 | {
"year": 2016,
"sha1": "f8dd6b5b3aa0e370b8c15594ee1c4f5f4425d97b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0162442&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f8dd6b5b3aa0e370b8c15594ee1c4f5f4425d97b",
"s2fieldsofstudy": [
"Environmental Science",
"Computer Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Pictorial Modernity and the Armenian Women of Iran
Abstract The essay explores the entangled relationship between modernization and women's visibility and representation through three pictorial spheres most redolent of that relationship: photo studio culture (1880s–1930s), satirical cartoons (1920–58), and costume exhibition (1972–76). The study prioritizes minoritarian politics formulated by women through their organizations and public activities, whether charitable in the late nineteenth century, educational in the early twentieth century, or "civilizational" from the mid-twentieth century on. By examining pictorial and textual sources, it proposes that the Armenian woman as a discursive phenomenon was central to Iran's mainstream modernization and foregrounds the complex workings of a double marginality within the processes, strategies, and anxieties of late Qajar and Pahlavi modernization.
"I had dressed as a village girl," recounted Marina Guevrekian, whose mother, an accountant, was a committed member of one of the six major Armenian women's organizations in Iran: the Armenian Woman Union (Hay Kin Miut'iwn, AWU, est. 1939). 1 The 1972 event had been planned by the Women's Organization of Iran (sazeman-e zanan-e iran, WOI, est. 1966) under the presidency of the monarch's sister, Ashraf Pahlavi. The WOI had invited Iranian minority groups to present two girls in "traditional dress" to embody their distinct culture, thus denoting an Iran that under the Pahlavis in the 1970s celebrated an anesthetized yet inclusive nationalism. Almost a half a century later, COVID-19-style masked and socially distanced in a living room in Los Angeles, Guevrekian excitedly recalled the first of two occasions when, as a model in Armenian costume, she came face-to-face with Empress Farah, who took an interest in the young "peasant" and inquired about her life pursuits. When Guevrekian responded, "I just got my BS in chemistry," the empress rejoined, "Oh my, you are such an educated peasant!" This encounter between a young woman from an upper-middle class, ethno-religious minority family and the royal champion of Iranian art and women encapsulates layers of historical complexities and the many perils of a Third World brand of modernity. It also signals the complex working of a double marginality-a woman and an Armenian-contributing to the process of negotiating and shaping a secular nationhood as an ambivalent yet also ambitious project of cosmopolitanism.
The larger book project that this article anticipates aims to explore the history of Iran's Armenian women from the beginning of Naser al-Din Shah's reign in 1848 to the 1979 fall of the Pahlavi dynasty. As the first scholarly study of its kind, it analyzes the shifting relationship between Iran's central nodes of power (absolute monarchy and patriarchy) and its Armenian female subjects (ethnic minorities and women) in the larger matrix of Qajar and Pahlavi Iran and modernization processes. With few exceptions, little scholarship exists on the lives of minority women in modern Iran. The book prioritizes minoritarian politics formulated by women themselves through their organizations and public activities, be they charitable in nature in the latter part of the nineteenth century, educational in the early twentieth century, or "civilizational" from the mid-twentieth century on. It focuses on those organizations that self-identified as both Armenian and women's associations to understand how they positioned themselves vis-à-vis the patrilocal structures of their own communities (e.g., the Armenian Apostolic Church and Armenian political parties), the mainstream Iranian women's organizations (e.g., WOI), the Iranian state and monarchy, and the wider women's movement internationally. It further offers a critical look at the dynamics of double marginality, that is, an ethno-religious and gender inclusion-exclusion within the larger context of modernization. The narrative is of a subaltern positionality in relation to local or communal and broader institutional, state, ideological, and global bodies that concerned themselves with women as a modernist subjecthood.
Until now, the topic has been covered either in the pages of Armenian-language nonscholarly biographical and encyclopedic accounts, where abbreviated biographies of women have been published, often alongside the achievements of men, or in notable community-oriented anniversary collections by Armenian women's organizations, which document their activities. 2 Armenian women in Iran as a scholarly subject have been mostly absent, with the exception of Berberian's articles on early modern and modern Irano-Armenian women. Although we have witnessed important work emerging on Armenian women in the late Ottoman Empire, especially related to the Armenian Genocide, and in post-genocide Turkey, that is not the case for Iran. 3 Although rich and diverse, studies on women in Iran have bypassed Armenian women, and those on minorities have hardly addressed Armenian women or gender. In this collaborative study, we integrate women's, minority, and visual culture studies to explore the workings of Iranian modernization. In doing so, we do not suggest that collaboration on its own necessarily yields richer results, or that tapping into multiple sources produces a broader, more complex picture of Iranian mainstream histories. Rather, we urge a reframing of the approach to Iranian modernization that considers its margins central to its processes rather than exceptional, as conventionally portrayed. Thus, we aim to provide a corrective to existing narratives on Iran's histories of modernization.
We employ pictorial representations of Armenian women to demonstrate their impact on the processes, strategies, and anxieties of modernization in late Qajar and Pahlavi Iran. Like the European New Woman, herself a discursive category whose history is secured through texts and images, the reconstruction of Armenian women's history is equally dependent upon textual and visual mediations. 4 Our engagement with text and image is necessitated by the machinery of modernization, which was conditioned by a pictorial order central to its discursive and pragmatic workability. Furthermore, the question of inclusion and exclusion of minorities was a foundational issue that had to be reworked in the matrix of nationalism for the transition from the Qajar Shi'a empire to the secular Pahlavi nation-state. Because the visual representation of women has been key to the strategies of modernization, whether by the other (that is, the state, foreigners, or community male leadership) or by women themselves in representations of the self, the history of modernization in Iran cannot be fully explained without a minoritarian approach to modernism that is embedded in visual as well as textual discourses. In this article, we explore the entangled relationship between modernization and women's visibility and representation through three pictorial spheres most redolent of that relationship: photo studio culture (1880s-1930s), satirical cartoons (1920-58), and costume exhibition (1972-76). Through our analysis, we provide evidence for the proposition that the Armenian woman as a discursive phenomenon was central to Iran's mainstream modernization.
Silences in Photographs and Photography Studios, 1880s-1930s
The top-down women's movement of the 1930s and then again in the 1960s has been the focus of numerous excellent studies by scholars of both Iranian feminism and Iranian modernism precisely because these movements were seen as key instruments in rapid and state-imposed modernization agendas that relied heavily on shaping a new image of the modern woman. As several authors of women's history have argued, the dilemma for the women's movement was "what would they do with an education?" or, put differently, the dilemma of the disturbing modernist proposition that through secular education women would encroach on the male monopoly over the public domain. 5 Primary voices confirm this predicament. When asked during an oral history interview whether "diplomas or certificates" were presented to graduates and how these were "honored," the missionary and principal of the American Mission Iran Bethel Girls' School, Jane Doolittle, replied candidly, "There wasn't anybody to honor it; at that time, the girls never applied for any work or anything." 6 The exception was a fabulous photo opportunity at the fashionable Roussie-Khan photo studio (Photographie Russe or akas-khaneh-ye monsieur rusi-khan; Fig. 1) on Aalo ol-Doleh Street. Nevertheless, the diploma of these girls graduating from Iran Bethel (Nurbakhsh after 1940) paved the way for the physical presence of women-as-citizens in the public spheres of education, the workforce, and the modern city.
Among the forms of representation, photography played a pivotal role in Iran's modernizing processes, as Naser al-Din Shah adopted the camera to shape the image of his long reign. Iran's history of photography, and in particular the fact that photographic patronage was a key Qajar practice and ownership of a camera a signifier of progress, reveals much about Iran's modernity, particularly in exploring the depiction of its liminal subjects. 7 In Qajar Iran, "one of the easiest ways to become modern," as Layla Diba sums up, "was to become a photographer." 8 In this section, we foreground the engagement of Iran's Armenian minority with photography that enabled the visibility of Armenian women's ambitions. 9 Through photography, these women entered the domain of public representation as modern subjects and citizens. Be it in global commerce or visual culture, Iran's Armenians had a solid local history on which to lean. As one of the most successful mercantile communities of the seventeenth and eighteenth centuries, the Armenians of New Julfa in the Safavid capital of Isfahan traveled and traded widely; were fluent in multiple languages and manners; circulated people, objects, and information along their vast networks; commissioned and collected high art; and, like their European counterparts, curated cabinets of curiosities. 10 They also were "cultural mediators" who straddled several worldviews and lifestyles and acted as agents of visual exchange. 11 In the footsteps of their Safavid forerunners who had rapidly appropriated European navigation technology and expanded their trade globally, many nineteenth-century photographers embraced the modern promises of the camera and formed a new global photographic network. 12 This "influential subgroup" inherited several attributes that guaranteed its success with the modern symbol-crazed clientele. 13 Like the New Julfan merchants, Irano-Armenian photographers too straddled multiple domestic sociocultural spheres (the Qajar court and nobility, the Western missionary and artistic circles, the local urban populations regardless of ethnic and religious boundaries, and the Armenian global diasporas), with similar implications for the influence of their cultural capital on Iranian society at large. Their epistemic positionality afforded these photographers Homi Bhabha's concept of "mimicry," through which to perform a "complex strategy of reform, regulation, and discipline, which 'appropriates' the Other as it visualizes power." 14 Trailing Safavid trade routes, until as late as the 1940s New Julfans sent their boys to Bombay or Calcutta to get a British education. At the end of the nineteenth century, many returned home with a camera in hand. It was the latest New Julfan import even as early as 1849, the year that Jules Richard aborted his mission to photograph Persepolis under the order of Naser al-Din Shah. 15 Within several decades, from the 1880s to the 1930s, this Armenian photographic network stretched from Central Asia to Western Europe, often overlapping with the major urban centers and port cities where New Julfan merchants and their Ottoman Armenian counterparts had dominated commerce and artistic patronage from the reign of Shah Abbas I to their ruin by Nader Shah's policies in the 1740s. 16
The struggle for constitutional rule and the modernization processes that it implied were seen by nondominant religious communities as opportunities for upward social mobility, and Irano-Armenians "contributed more than any other community to Iran's material modernization." 17 In the mid-nineteenth century, "the earliest and most successful photography businesses belonged to Armenians," not only in Iran but in the Middle East and beyond. 18 With a significant influence on the history of photography, Armenian photographers with photo studios operated in a number of cities, including Ashgabat, Tehran, Isfahan and New Julfa, Tabriz, Shushi/Shusha, Baku, Yerevan, Alexandrapol/Gyumri, Tiflis/Tbilisi, Manglisi, Akhaltsikhe, Kutaisi, Batumi, Kars, Constantinople/Istanbul, Partizak/Izmit, Baghdad, Alexandria, Cairo, Damascus, Jerusalem, Athens, Burgas, Varna, Shumen, Paris, and London. In this extensive network, at least two studios were owned by women under their own names: Ashkhen Ivani Aristakova's studio, established in Baku in 1898, provided "all kinds of photographic services," including hand-colored photos and printing on silk and marble; Satenik Gibrianosyan's studio was active in the 1900s. 19
Within Qajar Iran, aside from Antoin Sevruguin, several Westerners helped shape the photographic discourse on the image of the Irano-Armenian woman. Among others, Assyrian missionary Isaac Malek Yonan, Russian photographer Dmitri Yermakov, German telegraph engineer Ernst Hoeltzer, Dutch collector and trader Albert Hotz, the "mysterious and elusive" W. Ordén, and Anglican bishop Charles Stileman in various degrees and styles depicted and described women in the ethnographic type of "the Armenian." 20 Parallel to them labored the lesser-known local Armenian photographers. In Tehran, photo studios opened by Armenians, whom we recognize based on their use of the Armenian script in their backstamps, included that of Joseph Papaziantz, who opened a studio in 1875, followed by Mikon Aghayiantz Armeni and Osip Iosiphianz (most likely Hovsep Yusefianz). 21 In numbers, they were outdone by New Julfan Armenian photographers. Tuni Johannes (Hovhanesian) is credited with opening the first public commercial studio in Isfahan in 1880. Martin Manuk was a Calcutta-educated Armenian from New Julfa who in 1924 became Agfa's representative in Isfahan. When Manuk decided to move his studio to Tehran, he transferred Agfa's agency to another New Julfan photographer, also educated in Calcutta, Minas (Mkrtchian) Patkerahanian (lit. image-maker). 22 Mateos Gharakanian and Trdat Tadevos Abgarian were among other New Julfans who worked as professional photographers in Isfahan.
In Tabriz, the commercial studio of Melik Voskanian (1901-49) provided a wide range of photographic services, including family and individual portraits and commercial advertisements; the studio also captured the royals, Reza Shah and Prince Mohammad Reza, on their visit to Tabriz. 23 Voskanian diversified his portfolio by creating customized advertisements for Western products in the early 1930s. His sitter for these works was his young daughter, Hasmik Voskanian (1928-93), seen here performing for her father's camera in an advertisement for the German Mimosa AG camera company, based in Dresden until WWII (Fig. 2). She is captured in two poses. In the first image, she holds a postcard in her right hand while pointing to it with her tiny left-hand index finger and a big smile. Six portrait postcards hover above her head, arranged like a hand fan. At the top, each card bears a letter: m i m o s a. In the second image, Hasmik is depicted throwing the same six cards in the air like a magician and fixing her gaze on the cards suspended over her head. The final product, that is, the advertisement, is both unique and remarkable. The two photos are by-products of a careful photographic assemblage, which involved the creation of the principal photographs with Hasmik as well as, separately, the production of the smaller portrait photographs, which were staged in the studio and then photographed with the child-sitter. Hasmik, who could not have been more than five years old, seems fully at home in the photo studio. Seeking better commercial opportunities, Voskanian moved his family to Tehran in 1938 and reopened the Photo Melik studio at the prime location at the crossing of Istanbul and Lalehzar Streets. Hasmik seems to have been raised in the studio; she merely moved from the front of the camera to the back of it in 1949 when, upon Melik's premature death, she undertook the running of the studio over her three younger brothers. At the tender age of twenty-one she was mentored by another member of the Armenian photographic network, Minas Hatamian, the founding owner of Photo Vida. By the 1960s, as what remains of her studio's collection attests, Voskanian's clientele had grown to include top Pahlavi generals and their spouses, as well as many members of the Iranian public and her own Armenian community. She closed the studio in 1975, making her one of the longest-practicing Iranian female studio owners. Nevertheless, her name does not appear in the historiography.
FIGURE 1. Front and back of the graduating class of the American Girls' School, formally known as Iran Bethel Girls' School. Photo studio of Photographie Russe Roussie-Khan, Tehran. The handwritten note states, "June 8th 1910. With the love of your teacher and friend, Cora Bartlett." Courtesy of Seda Darmanian Hovnanian Archives, Moneh and Greg Der Grigorian Private Collection, La Canada, California.
FIGURE 2. Melik Voskanian's advertisement for the German Mimosa camera company. The child is the photographer's daughter, Hasmik Voskanian. Melik Photo Studio, Tabriz, ca. 1932. Courtesy of Alek Zarifian from the Hasmik Voskanian Archives, Zarifian Family Private Collection, Glendale, California.
12 Herzig, "Terminology"; Schwerda, "Photography."
13 Schwerda, "Photography," 86.
14 Bhabha, "Of Mimicry."
15 Hovhannisian writes that in 1849, Ter Stepanos Baghramian imported a camera to New Julfa from India and that he has seen the back of a photograph in the Baghramian Collection marked "New Julfa 1849." See Hovhannisian, Nor Jughayi. See also Damandan, Portrait; and Tahmasbpour, "Photography."
16 Aslanian, From the Indian Ocean, 96, 159, 163.
17 Amanat, Iran.
18 Navab, "To Be." See also Graham-Brown, Images of Women; and Vorderstrasse, "To Be."
19 Galstyan, "Shooting," "Ivani Aristakova," and "Gibrianosyan."
20 Pérez González, Local Portraiture. Sent to Iran by the British Indo-European Telegraph Department, Hoeltzer arrived in 1863, married Maryam Haghnazar (1850-1920), an Armenian from Tehran, in 1870, and settled in New Julfa with a home and a commercial photo studio. Until his burial in the Armenian cemetery of New Julfa in 1911, he produced a vast photographic collection. He had special access to the Armenian community that adopted him, as well as to Armenian women as photographic subjects.
21 Vorderstrasse, "To Be," 71; Scheiwiller, "Sevruguin"; Pérez González, Local Portraiture, 36.
22 Hovhannisyan, Nor Jughayi; Damandan, Portrait, 21, 37. Several authors misspell Patkerahanian's name as "Patkerkhanian," misinterpreting the name as including the title of Khan. Patkerahanian is credited with introducing a photo enlarger to Isfahan. See Hovhannisian, Nor Jughayi, and Damandan, Portrait, 26.
23 Alek Zarifian (son of Hasmik Voskanian Zarifian) in an interview conducted by the authors, November 29, 2019, Glendale, CA. In Tabriz, a certain H. Hovsepiantz also is named; see Vorderstrasse, "To Be," 71.
From the seventeenth into the nineteenth century, most Western observers who commented on Armenian women in Iran portrayed them as either largely homebound or covered. 24 They interpreted the public limitations of women as reflecting a lack of any kind of autonomous activity. However, surviving archival documents point to women's active pursuit of property deals, inheritance claims, and other self-directed economic activities in the absence of traveling merchant male members of the household, particularly during the Safavid era, a legacy that survived into the following centuries. 25 Photographs round out this broader picture. Owing to the global photographic network, Armenian women gained easy and extensive access to cameras. That availability played a role in the politics of visuality. Armenian women, particularly in New Julfa (where photography spread rapidly), "lived a less segregated social life than their Persian sisters and could often be seen" in public. 26 In contrast, the majority of Iranian women faced socioreligious restrictions in their access to the photo studio. 27 Aside from the Qajar aristocratic women who had early access to court photography, Armenian women were among the first to step into commercial photo studios. 28 Indeed, it was in the studio of the best-examined and most prolific photographer of Iran's Armenian photographic network, Sevruguin, that women first posed as photographable subjects. As the century drew to a close, they took agency in their photographic reproducibility, often outside the patriarchal use of photo-culture in such conventional images as family, wedding, and funeral photographs. A growing number of photographs of Armenian women were taken in studio settings that captured solo portraits, educational milestones, female friendships, leisure snapshots, and later, women's organizational leadership and membership. In this wide-ranging genre of photographic documentation of women, stretching from the 1880s to the 1930s, the only man present (in his conspicuous absence from the actual photograph) was the one behind the camera, although the male gaze was negated by the unerotic posture, formal setting, or multitude of women present in a frame.
Indeed, in the industry of erotic photography through which "'famous' women, particularly prostitutes, motrebs (minstrels), and stolen images of elite women from the royal courts" circulated, no surviving photograph claims to depict an Armenian woman in an erotic pose, at least none that we could find. 29 Most Qajar photographers, Sevruguin in particular, successfully produced the ethnic type of the Armenian woman, depicted in an unerotic and dignified genre. Even when Armenian models were used in erotic photographs, they were not identified as such and were captioned with the more generic label of "harem women." 30 Furthermore, in at least one case, we have evidence of a sitter, described in another photograph as "Galin Mahdi-khani, a prostitute woman," who posed as an Armenian wearing the traditional headgear of the elite called the kot, again in a nonerotic posture and formal setting. 31 Sevruguin's most celebrated photograph, kept at the Smithsonian Institution and adorning the cover of Frederick Bohrer's 1999 edited volume, one of the earliest studies of Qajar photography, depicts precisely such a "noble" Armenian woman, wearing an elaborate kot. 32 The commercial and aesthetic decisions of this photographic collective produced a pictorial discourse about the modern image of Armenian women that in turn made a significant impact on their later activities and activism. A return to Roussie-Khan's photograph of the three girls graduating from Iran Bethel exposes a network of photographic speech that facilitated the slow encroachment of women into the public domain (see Fig. 1). After receiving their diplomas in June 1910, although the three graduates could not and did not do much with their education (as Doolittle astutely remarked), they did go to a photo studio, probably at the encouragement or invitation of their missionary teachers. On the two sides of Mehdi Russi Khan Ivanov's Cyrillic, French, and Persian backstamp, the black ink handwriting reads: "June 8th 1910. With the love of your teacher and friend, Cora Bartlett." 33 Cora Cecilia Bartlett (1860-1939) served as a Presbyterian missionary in Iran between 1882 and 1912 and became the principal of Iran Bethel during a period when the pedagogical emphasis was to "persuade" students "to believe." However, Bartlett, like Doolittle and Annie Stocking Boyce, often had a disparate impact, as the focus of her evangelism "became embedded in a project of training Iranian girls for modern womanhood." 34 In this photograph, the graduates are dressed in identical, pristinely embroidered white dresses and white shoes, complete with white gloves and elaborate boutonnieres that match the ribbons on the diplomas in hand. All three were Armenians: Filomena Boghossian, Natalie Argumnian, and Varvara Khachaturian (ca. 1892-1958), who went on to form a large family and, as its matriarch, ran it like a missionary school. 35 We have photographic evidence that when Khachaturian was a student at Iran Bethel, every June at the end of the academic year, whether graduating or not, the students took a class photograph at the Roussie-Khan Studio. 36 In this photographic space, in these early years, again we witness an overlap of female solidarity, Western-style education, and photographic visibility. Even as one of the most fervent missionary mentors, Bartlett, with her gift of a photograph to the graduating class of 1910, signaled her tactical emancipation of the photograph's performative power to generate meaning.
To be photographed in this way was to defy the patriarchal emptiness of the unheeded diploma. Photographic authority, in fact, sanctioned the legitimacy of the diploma. A century later, it is the photograph, not the diploma, that confirms the granting of the degree, which more often than not remained interred in the darkness of the storage box. In turn, that these three young women were allowed and willing to be photographed spoke to this special relation between minoritarian modernism and its visual schemes and devices. Urban Armenian women, despite their small numbers, not only were the photographic subjects of both Western and local photographers but at times took this apparatus of modernity into their own hands. 37 This increased visibility of Armenian women in the photographic space coincided, not so accidentally, with the founding of women's organizations to support girls' education. These patterns of modernity, from posing for a photograph in a studio to cultivating female solidarity across class boundaries and struggling for secular education, were interrelated.
Owing to the artistic decisions of various photographers, including Armenians and Muslim Iranians, as well as Western travelers and missionaries, by the turn of the century a pictorial discourse had emerged that depicted Iran's Armenian women as noble, urban, and progressive. In our larger project, the reading of nineteenth-century missionary accounts and European travelogues alongside photographic representations of Armenian women as a dignified ethnic type reveals an ideal image of the "Armenian woman," ready to be appropriated by patriotic tropes and challenged by the women themselves throughout the twentieth century. 38 This constructed image of Armenian women also had its nuanced implication for the women's movement in Iran in the decades to come. To the interviewer's question in 1984, for instance, as to "the freedom you enjoyed and did whatever you wanted to do," the Iranian women's rights advocate Safiyyeh Firuz noted in her response that "well, my husband was very contemporary (mo'aser) and he liked it . . . when I went out unveiled, rumor spread that 'he has an Armenian wife' . . . that 'he has an Iranian wife but an Armenian lover (rafigh-e armani).'" 39 This early pictorial discourse also had an impact on the caricatural images of Armenian women on the pages of Armenian-language satirical newspapers, but as we shall see, these papers operated under different assumptions or, more appropriately, anxieties. The strategies of photographic documentation were a pivotal aspect of Irano-Armenian women's organizational efforts to be visible, self-represented, and agents of knowledge production. 40 What was noted by others more than fifteen years ago, that the role of ethno-religious minorities has been sidelined both in feminist and modernist histories of Iranian women, is still the case today. 41 Women's emphasis on parental support of education is especially vital, as these parents were often Armenian women from affluent and well-educated families with exposure to progressive ideas through print and personal encounters with others from urban centers in the South Caucasus, Ottoman Empire, or Europe, or had themselves been raised on missionary education within Iran. New textual and visual evidence discloses that during the period between the 1870s and 1980 the Armenian women of late Qajar and Pahlavi Iran organized themselves in the charitable, educational, cultural, and intellectual realms, each represented by one or multiple organizational or institutional entities, the most active among them located in New Julfa, Tabriz, and Tehran. Like other ethno-religious minority communities in Iran, such as Jews, Zoroastrians, and Baha'is, they sought secular Western-style education and opened their own schools for girls to offset missionary influence. Through charitable work targeting women and girls, they oversaw a substantial increase in girls' education and vocational training. 42 Between the 1870s and 1910, missionary education, photographic networks, and women's charity intersected and penetrated Irano-Armenian women's lives as forms of modern agency and representation. This historical conjunction bestowed a unique social image and positionality on the Armenian women of Iran. The origins of these first informal and later formal groupings were tied to the anxiety of nondominant communities (Armenian as well as Assyrian or Nestorian, Jewish, Zoroastrian, and Baha'i) associated with the success of nineteenth-century Presbyterian missionary work in Iran, which began as early as 1834. American Presbyterian missionaries had established as many as 117 boys' and girls' schools in Urmia alone by 1895, "enrolling 2410 students, predominantly Nestorians and Armenians." 43 By the last quarter of the nineteenth century, they had extended their educational activities to Tabriz, Tehran, Rasht, and Hamadan. Their schools attracted Armenian students largely because of the free Western-style education they offered, along with Armenian language instruction for both boys and girls. 44 The Armenian women's struggle to establish an organization with the distinct aim of supporting secular, Western-style Armenian schools for girls faced many challenges, and several bodies were initiated but rapidly dissolved. 46 During the late nineteenth and early twentieth centuries, all major urban centers with an Armenian population saw the formation of women's charitable organizations. Tabriz, with its geopolitical and historical ties to the Caucasus, witnessed the launching of the first two Armenian women's benevolent societies in the early 1890s, the first in 1891, uniting with the second in 1901 to form Tabriz's Armenian Women's United Benevolent Society (Tavrizi Hayuheats' Baregortsakan Miats'eal Ěnkerut'iwn). 47 Their benevolent purpose was tied to the agenda of educating girls "a decade and a half before the first Muslim women's anjuman [society]." 48 They were closely followed by New Julfa, with its illustrious Safavid legacy, when in 1892 Hovsep Barseghian, a male teacher at the Saint Katarinian girls' school, founded a women's organization (New Julfa's Armenian Women's Benevolent Society/Nor Jughayi Hayuheats' Baregortsakan Ěnkerut'iwn) whose membership comprised Saint Katarinian's older students. 49 Tehran's Armenian Women's Benevolent Society (Tehrani Hay Kanants' Baregortsakan Ěnkerut'iwn, AWBS) began activities in 1905. Several other charitable and benevolent organizations followed throughout the twentieth century in places such as New Julfa, Hamadan, Abadan, and Tehran under varied titles that included a sundry combination of Union, Society, Benevolent, Charitable (aghk'atakhnam), or Compassion (gt'ut'iwn), but all bore the terms Armenian and Women, sometimes in the Armenian compound form. 50 These women's organizations not only became visible in the public space through their activities, but the women's leadership also performed visibility and represented themselves by sitting for photographs and publishing those photographs in dozens of anniversary publications commemorating organizational history and accomplishments and honoring leaders and donors.
29 Scheiwiller, Liminalities, 115.
30 Vorderstrasse, "What Can(Not)," 107.
31 Scheiwiller, Liminalities, 109-10.
32 Sevruguin, Studio Portrait; Bohrer and Sackler Gallery, Sevruguin, cover page.
33 Based on this photograph, Tahmasbpour's claim that in "1907 Rusi Khan follows the king into exile" must have happened later; see Tahmasbpour, "Photography," 12.
37 At the beginning of the twentieth century, in Tehran and its surrounding villages, there were 410 families and 2,200 Armenians; Frangian, Atrpatakan, 175. For 1927, the numbers for Tehran were 600 to 650 families and 3,000 according to Garagash, Parskahay Tarets'oyts', vol. 1, 109.
38 Malek Yonan, Persian Women.
39 Firuz interview, Foundation for Iranian Studies, 25-26.
40 For an example of such Irano-Armenian female self-representation, see the case of Heripsimeh Abrahamian (1884-1957) in Vorderstrasse, "What Can(Not)," 108-12.
41 McElrone, "Qajar Women," 307; Rostam-Kolayi, "Girls' Schools"; Zabihi-Moghaddam, "Advancement"; Chehabi, "Diversity."
42 For a full discussion, see Berberian, "Armenian Women."
43 Zirinsky, "Harbingers," 174.
44 Berberian, "Armenian Women," 78.
45 Doolittle interview, Foundation for Iranian Studies, vol. 1, 8-9.
46 On the prehistory of Tehran's Armenian Women's Benevolent Society, see Tehrani Hay Kanants', "Tehrani Hay Kanants'," 4-9. Also, Annett Der Grigorian Ayvazian (former president of AWBS) in an interview conducted by the authors, November 29, 2019, Tujunga, CA.
47 For a full discussion of these organizations, see Berberian, "Armenian Women," 84-85. See also Frangian, Atrpatakan, 137.
48 McElrone, "Qajar Women," 309.
49 Minasian, Nor Jughayi, 5. This seems to be the only case in which an Irano-Armenian women's organization was founded initially by a man.
Like Muslim women's early activism, most socio-communal engagement of the Armenian women's benevolent societies centered on conventionally gendered charitable and educational work. A shift, however, did begin to take shape at the turn of the twentieth century. Starting in the latter half of the nineteenth century, Armenian communities in Iran as well as the neighboring Ottoman and Russian Empires, where most Armenians lived, experienced increased access to education, a journalistic and literary revival, and a changing political landscape, which brought Caucasian Armenian teachers and political activists to Iran. The Azerbaijan province in northwestern Iran, in particular, served as a point of passage or layover for militants, arms, and print crossing imperial frontiers during the connected revolutions of the early twentieth century. 51 Armenian women's activism, first within charitable and educational spheres and later in the women's movement in an attempt to bring women's issues to the attention of women themselves and to raise their consciousness, occurred within this broader turn-of-the-century context, as women's organizations tried to educate women in politics and in Ottoman and Iranian constitutionalism, as well as in inheritance rights, hygiene, and so forth. During the Iranian Constitutional Revolution (1905-11), one of the benevolent organizations even spoke of changing its program to emphasize the woman's question. 52 However, the benevolent mission of most of these organizations, which focused on girls' education and the care of orphans and the elderly, took on an existential significance in the aftermath of the 1915 Armenian Genocide, when enfeebled survivors besieged major Iranian cities such as Tabriz and Tehran. They reengaged and channeled their energies and activism not toward "the woman question" but toward the chaotic and horrific consequences of genocide, which rippled across the southern border of the Ottoman Empire. The demand for women's volunteer "care" work was such that in that same catastrophic year, the dominant political party, the Armenian Revolutionary Federation (ARF), formed New Julfa's Armenian Women's Compassion Union (Nor Jughayi Hay Kanants' Gt'ut'ean Miut'iwn, est. 1915), initially modeled and named after the Red Cross. 53 Following an official appeal by the Catholicos of All Armenians in Ejmiatsin, the women of the Tehran AWBS took custody of the refugees and ministered to their needs. Although Armenian women's political activism waned with the end of the Constitutional Revolution, AWBS continued along its path and survives to this day, likely because it faithfully maintained its apolitical and entirely charitable objectives. In 2020, the society celebrated its one hundred fifteenth anniversary as the oldest functioning women's organization in Iran, despite attempts by the Armenian Church in Iran in 1921 to eliminate all independent women's organizations and create in their place the Armenian Church-Loving Women's Union (Hay Kanants' Yekeghets'asēr Miut'iwn, ACWU, est. 1928) in Tehran, with the express aim of keeping women's activities under its supervision. However, unable to run the community's affairs without the women's uncompensated labor, decades-old experience, organizational skills, and plain know-how, the church was forced to accept the return of women's organizations in the 1930s.
Financed by the church, ACWU amplified the ideal image of the Armenian woman as mother and wife and continued to function parallel to independent Armenian women's organizations. The closing by Reza Shah of foreign, ethnic, and non-Muslim schools in the 1930s, as well as his suppression of independent civil society entities, however, provoked intracommunal agitation among Armenians and contested that conservative image. 54 Shut down overnight by the nation's royal patriarch as well as their own church fathers, benevolence and charity began to ring hollow to some Irano-Armenian women. A new generation of Armenian girls, raised on a missionary brand of modern womanhood followed by the state-initiated Women's Awakening Project, birthed a new organization. They joined the other organizations, however, in the struggle to self-represent and produce images of themselves that contested the disseminated narrative. Whereas self-representation strove to honor, satirical representation attempted the reverse.
50 Georgian, Amēnun Taregirkʻe, 485, 495, 507, 509; Amirkhanian, Nayiri Taregirk', 459; Pahlevanyan, Iranahay hamaynkĕ, 132, 141, 143.
51 Berberian, Roving Revolutionaries.
52 Berberian, "Armenian Women," 91.
53 Georgian, Amēnun Taregirkʻě, 427. This is not to be confused with the branch of the Red Cross founded earlier in 1909; Berberian, "Armenian Women," 91.
Bogeymen and Birch Brooms Take on Women, 1920-1958
Unlike, for example, Hayganush Mark's Hay Gin/Kin (Armenian Woman, Istanbul, 1919-33) in the neighboring Ottoman territories or Mari Beylerian's shorter-lived Ardemis/Artemis (1902-4), which appeared much earlier in Cairo and Alexandria, Armenian-language periodicals devoted to women's issues never saw the light of day in Iran. 55 Contemporary newspapers of a political, social, cultural, and literary variety, including satirical ones, are instead instrumental in providing a glimpse into a community's views about women and women's issues, especially themes of patriarchy, modernization, and nationalism, exposing the gender dimensions of the politics around these issues.
Although we have some evidence of the stirrings of Armenian women, independently and with fellow Muslim women, around the woman's question during the Constitutional Revolution, it was not until 1939 that an Irano-Armenian feminist-leaning organization was established. Woman-centered activity took shape in the immediate aftermath of Reza Shah's exile in 1941, when many organizations were launched and became active players in shaping Iran's second phase of modernization, especially through the declaration of the 1963 White Revolution by Mohammad Reza Shah. This era of nagging modernity witnessed a public contestation between conservative patriarchal institutions, such as the dominant Armenian Apostolic Church and Armenian political parties (e.g., ARF and the Armenian Democratic Liberal Party), and the embryonic and independent women's movement. During this period, in the absence of women's journals, satirical journals such as Bobokh (Bogeyman, 1920-42) and Tsakhavel (Birch Broom, 1943-44, 1950-58) took the lead in attempting to shape the community's views on women vis-à-vis patriarchy, modernization, and nationalism. 56 Through their biweekly and monthly masculinist antagonism toward women and critique of Armenian women's lifestyles, male editors attempted to control the narrative about the proper modernization of women, often through "hyperbolic, oversimplified, and repetitive" representation, similar to European satirical journals like Punch. 57 Whereas Bobokh made women the brunt of offensive jokes, thus betraying the highly patriarchal community's gender bias, other satirical commentaries and illustrations critiqued multiple issues, from the evolution of fashion to community politics, through women's bodies. Bobokh was Iran's first Armenian-language biweekly satirical (yergitsakan) periodical. 58 On page after page, women appear in both rich text and black-and-white graphics as impressions of a changing society. Bobokh's longtime editor and later owner, Hayk Garagash (1893-1960), was born in Tabriz. 59 When he was around five, his family moved to Ghazvin and then to Tehran. He attended the French Catholic Saint Louis School through high school, where French and Persian were the languages of instruction and from which some of Iran's intellectual elite, such as contemporaries Nima Yooshij and Sadeq Hedayat, graduated. Soon after graduation, Garagash worked for the Royal Bank, the Ottoman Bank, and eventually the Anglo-Persian Oil Company (APOC) while simultaneously editing Bobokh. According to an online biography, APOC's ultimatum that he resign from his editorial post at Bobokh or lose his lucrative position led Garagash to leave APOC and commit himself to satire and theater. 60 He devoted his editorial time to Bobokh, and later to the cultural and literary weekly Veratsnund (Rebirth, 1930-53). Amid the chaos of Iran's Allied invasion, Veratsnund became a daily, essential in circulating information to Armenian-speaking communities during World War II. In the late 1920s, Garagash also authored three large volumes of the Perso-Armenian Yearbook (Parskahay Tarets'oyts', 1927, 1929, and 1930). Under his own biographical entry, he noted that "a good majority of the theatrical performances that he organized and staged were for [Muslim] Iranian women." 61 It may very well be that not only did he not see any contradiction in mocking women in print while staging performances for them but that he also believed, like many of his contemporaries, in theater's role in promoting self-examination, civilization, and progress.
Satire appeared in the pages of Bobokh via caricaturists such as Darvish, the pseudonym of Andre Sevruguin (1896-1997), son of Antoin Sevruguin. 62 Although not as well-known globally as his photographer father, Darvish was an influential artist in his own right and part of Tehran's avant-garde intellectual circles in the 1930s with Hedayat and Bozorg Alavi. 63 He must have known Garagash from their school days at Saint Louis. Darvish's first caricature appeared in the third issue of Bobokh (although the cover illustration of the bogeyman on the first issue also is likely his). 64 After his departure, the biweekly journal's principal caricaturist from August 1924 to at least 1936 was Margar Gharabegian (1901-76), who went by the pen name Dev (devil or demon). 65 Described by those who remember him as "khosh tip" (good looking), "well-groomed," and "a Don Juan," Dev had an artist's studio on Sevom Esfand Street and produced caricatures of Armenian women for Bobokh and later Tsakhavel that mirrored his public persona. Although the image of the woman was heavily deployed as a visual trope in the pages of Bobokh in the first decade of its publication, the 1920s, it completely disappears in the next (and last) seven years of the paper. It was instead largely replaced in spring 1930 by a fictive character: a provincial old woman whose drawn image in traditional dress adorns the front page and whose speeches in archaic Tehrani Armenian dialect, mixed with Persian and some Turkish, serve as editorial commentary. 66 Susan (Sūssân) Baji becomes Garagash's mouthpiece, superseding many of the caricatures and jokes and continuing the relentless censure of modernity's effect on social relations. This shift is accompanied by a gradual deterioration in the visual, textual, and thematic richness of the paper, perhaps because of a lack of financial resources, Reza Shah's censorship policies, or Garagash's dwindling interest in the paper, as he recast his focus on Veratsnund at this time.
Although national and global politics were largely untouched, Bobokh targeted two arenas with particular zeal: first, the Irano-Armenian community and intracommunal affairs and, second, women. For the paper, nothing in these two arenas seemed sacred; everything was fair game and open to criticism. Whatever form it took, whether doled out lightly or in outright mockery and ridicule, the criticism unreservedly reflected and reinforced readers' and society's sexism, its gendered fears and anxieties, and what it perceived to be the perils of modernity, which the satirical paper viewed as threatening to the Irano-Armenian community's traditional and patriarchal gender relations and culture. Especially in the 1920s, Bobokh's anxieties regarding modernity manifested in biting criticism, drawn in black-and-white illustrations and caricatures or written into jokes and short snippets that targeted women's fashion and lifestyle choices, such as shortened skirts and hair, décolleté tops, makeup, and modern dance, like the foxtrot, all clearly informed by or modeled upon practices in Europe or the US. 67 In the case of dance, men also were implicated in an illustration depicting a jumble of intertwined men's and women's legs covered in the center with a scream bubble: "Modernism!! Fox-trot, the latest fashion!" 68 In most cases, however, the targets were clearly women; one cartoon commenting on the inverse relationship between long tongues and short skirts appears twice, in 1921 and 1929. 69 Bobokh's projections for the future of women's fashion seemed grim if its readers were to judge by its line drawing of six women spanning the years 1875 to 1940 under the heading "Fashion's past, present and future." Typical of its exaggerated manner parodying women's vogue, the paper predicted that given the increasingly minimalist trend, women would barely be clothed by 1940. The figure representing 1940 wears an oversized hat, large earrings, and high heels; a sash runs between her bare breasts and covers her pelvic area; she carries a purse in her right hand and a fan in her left. The caption reads, "Very little aptitude is required to foretell the 'fashion' of 1950." 70 A popular graphic trope of Western caricature, the stripping modern woman also made appearances in Persian-language journals such as Tehran's Tofiq (1923-71). 71 Although beginning in the 1920s some circles of upper-middle class men and women began socializing, and the latest European and American styles in fashion and beauty made it to the pages of the pro-Reza Shah Alam-e Nesvan (Women's World, 1920-34), Bobokh's depiction of women's short skirts or décolleté tops had little to do with the reality of the public space, the streets of Tehran.
Although it may have been reacting to some degree to the introduction of the newest trends, whether in the real world or Women's World, the paper's caricatures rather accurately revealed male anxiety about the desire to maintain control of the female body with the onset of rapid modernization. 72 Bobokh also ran a five-year series in the 1920s called "Great Men and Women," featuring negative, critical comments by known European authors about women. But for the most part the paper did not rely on European men to speak for the Irano-Armenian community. It articulated its own misgivings and social critique when it questioned women's loyalty and trustworthiness in love and marriage, insinuated sexual transgressions, and even likened women to money that "changed hands," in this case illustrated starkly as a scantily clad short-haired woman being passed from one man's hand to another's. 73 Although the satirical paper also leveled some of its criticism at men's hypocrisy or fashion, its anxieties revolved largely around the impact of modernity on women precisely because they were perceived as preservers and carriers of culture and tradition. 74 Modernity was seen to simultaneously effeminize men and masculinize women, thus perhaps lessening gender differences and equalizing men and women. Echoing an article that had appeared a few years prior on short hair and a cartoon on short skirts, a 1929 illustration reflects growing anxieties about women's femininity as skirts and hair not only became shorter, but women in Tehran chose to go bald, mimicking Parisian fashion. 75 In his discussion of the Iranian press of the 1920s and 1930s, Camron Amin shows how Persian-language newspapers "dismissed cosmetics and 'fashion worship' (mod parasti) as corrupting threats to women's and the nation's progress." 76 As we shall see, in the Armenian case, the nation often takes the form of the community.
The emphasis Bobokh paid to dress is not surprising given its importance as "both an indicator and a producer of gender." 77 However, to Bobokh and its readers, the danger lay not only in fashion but also in what must have seemed even more menacing: sexuality. The trope of the scheming, conniving, untrustworthy woman was not new to the 1920s or 1930s; however, the anxiety of equalizing sexuality or sexual behavior brought on by modernity, so well represented in the illustration titled "Contemporary Understandings," certainly was (Fig. 3). The illustration shows two panes: on the left is a woman seated watching a man place a naked woman's upright body on a bookcase next to other similar women's bodies, much like one would with a book; on the right is a man seated watching a woman carrying out exactly the same action but, in her case, placing a naked man's upright body among a sea of other similar bodies. All the women in the illustration have short hair, yet the two who are dressed are modestly so, and almost all the men lack facial hair: all these are markers of the modern woman and man. The caption encapsulates the message: "Just as for certain men women, for certain women men, resemble books; after reading them, they arrange them in the library." 78 Whether a woman was likened to money changing hands or depicted arranging naked men on a bookshelf, the image of the woman, in particular her body, in the hands of Bobokh's male editors and caricaturists was malleable. For most of its existence, the paper oscillated between parody and derision in its representation and imagining of women. In only two cases do we come across a substantially different portrayal, one that seeks to evoke empathy or engagement rather than ridicule. Both illustrations appear in 1925, in almost consecutive issues. One is of a goddess-like woman with long hair and eyelashes in what resembles a toga, labeled with the words "Armenian public" (hay hasarakut'iwn), lying unconscious with arms outstretched as two large, seemingly fierce but grinning bulls representing different political currents in Tehran's Armenian community menacingly loom over her, ready to kill for the sake of interparty solidarity. 79 The other is a black-and-white reprint of a color original by caricaturist Aleksandr Sarukhan that appeared in Cairo's satirical journal Haykakan Sinema (Armenian Cinema, 1925-26). 80 Titled "The Force of Saving the Nation," the illustration depicts a struggle among contemporary Armenian political parties (the portly R.A., the tall H.H.D., and the tiny S.D.H.) over the "Armenian nation" (hay azg), which is branded on the chest of an emaciated woman whose bare arms seem on the verge of being wrenched by the forces of the parties "saving" her. The exception is the less powerful S.D.H., standing on a rock and clinging to her exposed legs, although like his counterparts he too proclaims himself the savior of the nation. 81 The caption reads, "In order to pluck the privilege of savior, they pull apart the poor nation, without reflecting on what the nation endures in their hands" (Fig. 4).
54 Abrahamian, Iran, 135-65.
60 Khanents, "Hayk Garagash."
61 Garagash, Parskahay Tarets'oyts', vol. 2, 405.
62 Ibid., vol. 2, 378-79.
63 Tajarian and Sevrugian, "Art," 69.
64 Bobokh 3 (February 1, 1920): 21.
65 His first illustration appears in Bobokh 20 (August 7, 1924): 155. We had no access to issues after Bobokh 268 (August 1, 1936). Rima Serebrakian and Baghdasar Der Grigorian, in a discussion with the authors, January 31, 2021, Pasadena, CA.
66 Because of missing issues, we are unable to ascertain exactly when Susan Baji appears in modern dress (but it is sometime between 1932 and 1936), or when she first appears in Bobokh (although it is certainly sometime between March and June 1930).
67 For a discussion on "the conflation of modernization and Westernization" and "health and beauty for Iranian women" in the Iranian press, see Amin, "Importing."
74 For an example of modernity as an effeminizing process, see Bobokh 34 (September 14, 1925): 266, where men are depicted in athletic fashion with tight-fitting, short-sleeved, low-cut tops, shorts, and heeled shoes.
75 [Sarukhan] also served as the paper's manager, with Yervant Odian (Yervand Otian, 1869-1926) as its editor. Odian is one of the most well-known Armenian satirists and most famously the author of Ĕnger Panjuni (Comrade Clueless), which mocks Armenian political parties through the character of Marxist Comrade Clueless, sent to propagandize among peasants in the Armenian provinces of the eastern Ottoman Empire.
How are we to interpret these contrasting renderings of the female body, whether as a frail and exposed figure or a healthy, beautiful goddess, as nation or public, or the Armenian nation or public? In a sense, it is in these two cases that Bobokh's two main targets, intracommunal politics and women, come front and center; but here the latter becomes the very tool by which the former receives the harshest rebuke. It seems Tehran's and Cairo's Armenian communities had much in common, judging by Bobokh's appropriation of "The Force of Saving the Nation," among other similar illustrations by Sarukhan. 82 The paper did not engage with or even pay much attention to women as part of the community politics it critiqued. In both illustrations, their bodies merely serve as a vehicle to drive home a point about party politics in a highly patriarchal society. Unlike its successor Tsakhavel, for example, Bobokh for the most part ignored women's organizations and simultaneously portrayed an image of Armenian women that starkly contrasted with the one represented by women's organizations themselves. We could attribute this shift of engagement largely to a new arrival on the scene, one that was distinctly dissimilar in every way from the benevolent, charitable organizations that had dominated the women's world.
The AWU, founded in 1939, was composed of young women with feminist leanings and intellectual interests who, although also involved with charitable work, sought self-enlightenment as a primary goal. Even Tsakhavel, however, must not have taken the AWU seriously in its early years, perhaps dismissing the youthfulness or inexperience of its female activists. This is evidenced by the paper's first issue in September 1943. An illustration of a judicial court adorns the cover. A birch broom (tsakhavel) at a podium of judgment, flanked by two other birch brooms, one on each side, presides over the "sinful," that is, community organizations, including women's groups (church-loving, benevolent, and charitable groups, and even women writers), but the AWU remains absent. The caption reads, "Woe to the sinners (meghavornerin). Glory to the sinless (innocents, anmeghnerin)." 83 Tsakhavel's unease with women's increased public visibility and activity intensified with the surge of Armenian women's undertakings but also, more broadly, with the growth of Iranian women's organizations in the 1950s. For example, Farangis Shahrokh Yeganegi, the daughter of the Zoroastrian representative to the parliament and later Assistant Secretary General of the WOI, founded the Zoroastrian Women's Organization (est. 1950); Mehrangiz Dowlatshahi founded the New Path (Rah-e now, est. 1955); and Safiyyeh Firuz launched the Women's League of Supporters of the Declaration of Human Rights (est. 1956).
Tsakhavel, which was the Irano-Armenian community's second satirical monthly, ran equally as long as Bobokh, although with a several-year hiatus. Owned and edited by Yervand Bazen (born Mirzaian, 1899-1966), it appeared from 1943 to 1944 and then again from 1950 to 1958. Bazen, like Garagash, was born in Tabriz and received his education in Armenian and French schools. He worked for a number of journals in and outside of Iran and published several books of poetry throughout his lifetime. 84 Tsakhavel's acerbic style and content shared much with Bobokh's, even if Tsakhavel relied less on visual representations than its predecessor. Similar themes were expressed in the paper's pages in the 1940s and 1950s, two decades after the appearance of the first satirical paper. Its critique of women, however, was often more aggressive, offensive, and disparaging; it even pursued women critics of the paper in its pages. 85 Tsakhavel also distinguished itself from Bobokh with didactic pieces and illustrations that juxtaposed good and bad archetypes of womanly conduct, promoting an ideal woman who combined attributes of both modernity and modesty. By the 1950s, although Tsakhavel had come to terms with at least some of modernity's encroachment and acknowledged the new, modern woman, it was still driven by an anxiety. This anxiety boomed within the context of a growing women's movement, with organized activity, public presence, and visibility, all of which directly and indirectly challenged patriarchal norms and gender relations. Thus, Tsakhavel sought to shape and contain the modern woman by advocating for her modesty, with all that implied for behavior, dress, character, and morality, and by reconfirming her place in the patriarchal order. The community's anxiety about Armenian women's increased public visibility and activism through independent women's organizations was again plainly expressed as Tsakhavel seemed to take sides with the most conservative of these groupings. Conspicuously eye-catching, the unsigned cover of the June 1, 1951 issue was entitled "Collective and Unanimous the Church-Loving Women's Union is Building the Prelacy" (Fig. 5). 86 It depicts the caricature of five middle-aged women, actual personalities and leaders of ACWU. Unlike the dominant Armenian women's organizations in Tabriz, Tehran, and Isfahan (that is, the various AWBSs and the AWU), from the outset the ACWU was subservient to the priorities of the Armenian prelacy, having been created under its authority. The third and fourth points of its regulations stated: "The Union is accountable morally and financially to Tehran's National Prelacy," and "The Union's honorary president is the Prelate of Tehran's Armenian Diocese." 87 Quite telling was the very language of its bylaws. The primary stated goal was "to assist the Armenian churches in their beautification"; women were to play an ornamental role for the structure that was the church while organizing lectures on religion and ethics, "inspir[ing] the worship of national . . . traditions among Tehran's community." Tending to needy students or the burial of the poor took on secondary importance. 88 Unlike other Armenian women's organizations, the independent self-representation of ACWU in the form of either text or image seems never to have been profuse, self-initiated, or scholarly. In its self-presentation since 1928, the women of ACWU seem dwarfed by the authority and image of the church fathers.
In effect, this cover of Tsakhavel is one of the few times ACWU was deployed on the communal stage to steer a conservative campaign on Armenian women. The five caricatures are depicted hard at work doing what was traditionally a man's job: physically erecting the new building of the prelacy. Longtime president of the ACWU, Satenik Petrosian Aserian (front left, 1902-85), in her pristinely white dress, stands in a pile of mud, proudly holding two mud balls in her hands; Hasmik Simonian Vartanian (front right) stands in the shared mud pile holding a shovel on which her right leg rests; the oldest among them, Gayaneh Melikian Yahinian (back right, 1897-1979), wears a bright red dress and moves out of the pictorial frame carrying the front end of a handbarrow, while Arax Petrossian Makarian (back center, 1907-2010) holds its back end, looking away. 89 Behind a half-constructed wall, a not-yet identified figure with gray hair lays the bricks. The long caption is as enigmatic as the image, simultaneously praising and belittling both men and women in a mix of lowbrow Tehrani Armenian and Persian.
85 See, for example, Tsakhavel 10 (May 1950): 7, for an illustration of a woman with a snake as her tongue; and Tsakhavel 2 (October 1943): 5, 14; Tsakhavel 4 (December 1943): 7 for the pursuit of critics. 86 Tsakhavel 4, no. 36 (July 1, 1951): cover. 87 Tehrani Hay Kanants' Yekeghets'asēr Miut'iwn, Kanonagrut'iwn, 3. Although this is a 1964 printing of the ACWU's regulations, there is no evidence that the regulations had gone through any substantive changes since their inception. From 1928 to the present, ACWU seems to have produced two small brochures on regulation and a brief history; see also Tehrani Hay Kanants' Yekeghets'asēr Miut'iwn, 80-ameak. 88 Tehrani Hay Kanants' Yekeghets'asēr Miut'iwn, Kanonagrut'iwn, 4. ACWU's establishment requires more research, as it may have been a concerted effort to bring women's activism into the fold and service of the Armenian Prelacy.
Brava women, in these days even man would not have this courage; even if in the past you slipped slightly, this work of yours wiped out all that. Money is a vile thing; it could bring calamity upon one's head, but the home will always remain.
Brava women, perhaps if Tehran's women's union sees your work and musters the courage, it too would build a theater and then a cultural house; evil tongues say that they too have money. 90 In their Sunday best, with diverse dress styles and colors of bloomy blue, fluoridated red and white, and austere black and white, the women are depicted as if in the church courtyard at an Easter celebration. Yet at least three of them are stripped from thigh to toe and stand barefoot. A highly nuanced but certainly uncomfortable tension is created by this contrast, and still another: women taking on construction, which otherwise they would not be allowed to do, and for which, if they dared it, they would be ostracized. What are Bazen and his caricaturist alluding to here?
Tsakhavel, like Bobokh, was a satirical paper that sought or at least claimed to place all of society under a microscope and mock it; therefore, although women were special targets, men were not spared. With this depiction, Tsakhavel is leveling its scorn not only at the church-loving women by portraying them barefoot and calling them knik, but also at men, implicitly questioning their masculinity by depicting women doing men's jobs. As in most of its caricatures, Tsakhavel was sending an intentionally mixed and complex intracommunal message. Although at first glance the cover image and its caption seem to praise ACWU's work for the prelacy, the article that follows foregrounds the paper's real intentions: an attack on women's activism and an explicit attempt to hold a monopoly over the narrative about the New Woman at large. As readers move from the front page to the related article on the second page, they discover that the initial praise of ACWU is, in effect, a narrative strategy for criticizing the efforts of these women as they propose to construct-with "sums managed and saved during centuries"-a building to serve church or national needs, as well as making a jab at the prelacy, which had not only accepted the proposal but had promised additional financial assistance. 91 Tsakhavel's disapproval of ACWU is then juxtaposed with a call to AWBS to rise to the occasion and divert the money from providing breakfast once a week to poor pupils, whom the paper charged with "shamelessness" and being "sinecure," "spoiled," and "demanding," to the construction of a house of "Armenian Culture"-a "laudable and historic and valuable" undertaking.
89 Ruzan Hovanessian (niece of the president of ACWU, Satenik Petrossian Aserian, and daughter of her sister and active member, Arax Petrossian Makarian) in an interview conducted by the authors, August 13, 2020, Glendale, CA; Ina and Alenush Aslanian (granddaughters of ACWU cofounder Gayaneh Melikian Yahinian) in an interview conducted by the authors, July 12, 2020, Paris, France, and July 14, 2020, Glendale, CA. See also Makarian, Mokhrats'ats. 90 The Armenian script of the Armenian and Persian original follows: Աֆարիմ կնիկներ, էս օրերում տղամարդը էդ ղէյրաթը չէր կարող ունենալ. եթէ անցեալներում մէ թալաքա սայթաքում էլ ունեցել էք, ձեր էս գործը ամէն բան բաթել արաւ։ Փողը անպիտան բան ա, մարդու գլխին բալա կը բերի, ամա տունը միշտ կը մնայ։ Բարաքալլա կնիկներ, բալքի ձեր էդ գործը Թեհրանի կանանց միութիւնը տեսնի ղէյրաթ գայ ինքն էլ մէ թատրոն եւ մշակոյթի տուն շինի. չար լեզուները ասում են ընդոնք էլ փող ունեն. See Tsakhavel 4, no. 36, July 1, 1951 (cover). Several Persian words appear. Two variants of "brava" are used: Աֆարիմ /afarim, slang for afarīn ( ﺍ ﻓ ﺮ ﻳ ﻦ ), and Բարաքալլա /barakalla ( ﺑ ﺎ ﺭ ﮎ ). Ղէյրաթ /gheyrat ( ﻏ ﯿ ﺮ ﺕ ) is used twice, once to mean courage and another time in combination with the Armenian verb "to come" (gal) to imply "to muster the courage." The Persian բալքի /balkeh ( ﺑ ﻠ ﮑ ﻪ ) appears for "perhaps," and թալաքա /talakeh in combination with the Armenian "to slip" (saytakel) indicates taking money or something else through trickery. The Armeno-Persian slang usage of the word in this particular sentence can be interpreted as "small" or "slight." The choice for "women" is the Armenian կնիկներ /knikner, the plural of knik, synonym for a married woman, although it also may be used pejoratively either to denote the low status of the woman or to betray the low status of the speaker. 91 Emphasis added as a reminder that the ACWU was founded in 1928. Tsakhavel 4, no. 36 (July 1, 1951): 2.
Here, Tsakhavel was toeing the middle line-much like the modern yet modest-by privileging the AWBS in this community-building effort. Yet again, what seems like a reasonable solution to larger communal concerns is, in fact, a transfer of responsibility that guarantees failure. AWBS, even if involved in "praiseworthy" activity for "twenty years," never committed to a mission for the intellectual and cultural betterment of either women or the community. From its inception in the late nineteenth century, AWBS had remained an exclusively and robustly charitable organization: to feed the poor, to care for the elderly, to school the dispossessed, and so forth. Erecting a house of theater or culture was neither a priority nor in its toolbox. Intriguingly, the one self-proclaimed intellectual and cultural women's organization, AWU, is absent from both the illustration and the article. Although active and highly visible since 1939, AWU is neither named nor represented. Tsakhavel's intentional silence, the very refusal to name the organization, is the real assault on AWU. In the end, Tsakhavel pokes fun at ACWU, calls on AWBS to chase failure, and summarily disregards AWU by denying it representation. Unlike the five heads of ACWU in their fifties shown in conventional outfits, the board of AWU in 1951 consisted of women with an average age of twenty-nine. Rebelling against their own mothers' membership in ACWU, the women of AWU had refused the role of "beautifiers" of patriarchy; theirs was a project of community and self-enlightenment. Despite the tensions of the image and the text, Tsakhavel picked sides; the New Woman was not Tsakhavel's choice, neither in her generational outlook and ideological priorities nor in her modernist appearance-but neither, necessarily, was the traditional church-loving or benevolent one. Tsakhavel preferred to conceive and sculpt its own new woman.
The cover page of Tsakhavel's May 15, 1950 issue is perhaps its most revealing and representative piece of such an effort (Fig. 6). The illustration, which promotes the proper fashion for the modern-yet-modest Armenian woman by contrasting it with the "unacceptable" fashion of another, lays bare the paper's attempt to monopolize the discourse of modernity about women and to decide their proper place. The artist's pen name, Khaytuni-"stinger" or "the one who has a sting" in Armenian-appears throughout the pages of the paper in the 1950s under several illustrations and verse (where his first name appears as Zambur; Armenian zambuṛ, or hornet; Persian, zanbur or bee). 92 Bobokh's commentaries on women's behavior and fashion are outdone here with the introduction of a rich backdrop: Reza Shah's modern and sanitized urban fabric. The radical urban reforms of the mid-1930s that affected spatial organization and urban life and the dress code changes coincided with and, at least in the case of urban modernization, continued under Mohammad Reza Shah. 93 By the late 1950s, Tehran had seen a complete urban makeover, with wide boulevards, streetlights, multistory buildings, and imported cars. In this illustration, we see the dress code and urban makeovers begun in the 1930s come together. The cartoon depicts a crossing of two streets, where on one corner stands the Armenian newspaper press house and on another a bar. The modern architecture with minimalist square windows, the Parisian streetlamp, and the clearly marked street and sidewalks are all incorporated into the caricature as indicators of the already fulfilled promise of Iranian modernity. 94 The long caption on the left appeals to the reader, "Here is a woman in front of you; paint (nerk) and make-up (shpar), latest fashion; there is no shame on her face; what else should I write about her?" The caption continues on the right side of the image where the modern-yet-modest woman is depicted: "In her natural exquisite appearance; with a virtuous (parkesht) posture and modest face; she is the embodiment of a model woman; respect and admiration to you, woman." 95 The binary structure of the double column of the caption is faithfully echoed in the compositional arrangement of the drawing itself. They join to convey the equally binary ethical message of the good and the bad, the graced and the demeaned, the cultured and the commercial. Above this caption, modern Tehran is framed by two women, one portrayed as inappropriately dressed and the other, on the right of the image, appropriately clothed. The latter is presented here as the prototype of the "modern yet modest"; she is the New Armenian Woman. 96 Her skirt covering her stockinged knees, her long-sleeved, buttoned-up collar shirt elegantly accessorized by a scarf, she is a far cry from Bobokh's projection of fashion's future. In her left hand she carries her purse, and in her right hand a rolled-up copy of Tsakhavel. The image hints at the fact that she has just left the press building, thus implying her intellectual worth. Whereas the proper woman is leaving the Tsakhavel headquarters, her moral-pictorial reverse is heading to a bar. Like the cinema, bars were modernist spaces opened to women with great contentions and implications. 97
92 See, for example, Khaytuni, "Orvay hratap pahanjneritsʿ," 2. 93 On Tehran's urbanism, see Adle and Hourcade, Teheran; Ehlers and Floor, "Urban Change"; and Mazumdar, "Autocratic Control." 94 On Pahlavi architecture, see Grigor, Building Iran.
The modest woman's judgmental gaze is directed toward the woman flanking the opposite side of the frame who, with her buttocks confronting the reader, protruding oversized breasts, short skirt with a long slit exposing stocking-less legs, and high heels on newly paved sidewalks, is meant to give the impression of, at the very least, impropriety, and perhaps even lasciviousness. And because there is "no shame on her face," it is she who seeks and meets the gaze of the inferred male audience as men behind her stop and stare.
In the back of the picture plane, a large Orwellian head-indeed, the head of Tsakhavel's chief editor and owner, Bazen-hovers over not just the "good" and the "bad" New Women, but the public space, where the Iranian women have now arrived. The introduction of women-as-citizens into the public domain, the street, was, as Najmabadi notes, "underwritten by policing of women's public presence through men's street actions," even the "regulatory harassment" by men. 98 Here, Foucault's notion of the modernist gaze originates on the press building, in the very eyes of the editor; then it traverses to the modest woman whose eyes point to the immodest woman and, through her, is returned to the viewer or reader. The gaze of the Orwellian Big Brother comes full circle. Despite this rigid binary, however, the depiction of the modern woman in both Armenian- and Persian-language periodicals was highly versatile in the discursive space of satire and caricature, ranging from the noble mother of the nation to the publicly available woman. 99 On this Tsakhavel cover, it is precisely this kind of discursive policing by the editor of the paper that we witness. His moral binary of the ideal Armenian woman versus the undignified is mirrored in the graphic binary of the illustration.
Although satire more likely pokes, prods, and provokes rather than proclaims, pronounces, and pontificates, and may often seem impenetrable or obscure, some satirists nevertheless had unmistakable aims that they pursued through the medium of graphic representation. 100 For example, the editors and illustrators of both our satirical papers, like other satirical and graphic periodicals globally, deployed the conventions of the medium itself to instrumentalize and recast the image of Armenian women as "modern" and, as a result, Euro-American-looking. They were working within the visual conventions of graphic caricature, with its long-established codes of representation. In a similar vein, in his discussion of the impact of the adoption of Euro-American images of femininity into Iranian culture during the "women's awakening," Amin draws attention to the way that images appearing in Persian-language satirical papers "became fodder for sensational and graphic political expressions in the 1940s." In his analysis of two independent weeklies, Mard-e Emruz (Today's Man) and Atash (Fire), he argues that both photographs and satire's medium of cartoons "can serve as a guide to gender relationships precisely because they 'standardize, exaggerate, simplify everyday life even more dramatically than everyday rituals.'" 101 An analogous development takes place with the illustrations sketched on the pages of Bobokh even earlier in the 1920s and Tsakhavel in the 1940s, but especially in the 1950s, as such depictions both reflect and direct views about women and gender. Both Bobokh and Tsakhavel had fairly long runs of about nineteen years, only to be outdone by the Tehran-published papers Veratsnund (1930-53), the much later Jahagir, and, of course, Alik', which began in 1931 and still exists today. This is quite telling, as most other Armenian-language papers-not counting those geared toward children-in Iran have had an average life of about two to three years, with some notable exceptions, including the literary journals Arp'i and Armenuhi (both Tehran, 1949-55) and the organ of the Social-Democratic Hnchak Party in Iran, Zang (Bell, Tabriz, 1910-22). Bobokh's and Tsakhavel's success may have had to do with a variety of factors, especially their ability to reach a broad reading and viewing public with varying levels of literacy and their appeal to the community's views on women and notions of gender. 102 Although the construct of the New Woman was being shaped by these two papers, increasingly Armenian women became active and sovereign agents of their self-representation. In the mid-1970s, this development exhibited its full manifestation.
95 Tsakhavel 11 (May 15, 1950). 96 Najmabadi, "Hazards." 97 On cinema as a "volatile public space," see Thompson, Colonial Citizens, ch. 12. On Pahlavi-era cinema, see Naficy, Social History. 98 Najmabadi, Women, 154. 99 Ibid. 100 Griffin, Satire, 5, 95. See also Grant, "Satire," 13.
Displaying Costumes, Exhibiting a Mission Civilisatrice, 1972-76
The depictions of Armenian women by Bobokh and Tsakhavel betray the gender dimensions of the politics around women's issues during the turbulent years buttressed by 1921 and 1953; by the same token, they hint at the fact that women themselves, through organizations, were doing much more than conventional charity work. The establishment of AWU in 1939 by nine 17-year-old Armenian girls from Iran Bethel School was a game changer. Witnesses to the closing of Armenian schools, Emma (1922-2013) and Marta Abrahamian, Seda Darmanian (1923-2010), Lili Espero, Hubi Khachaturian, Eleonore "Elo" Mazlumian (b. 1925), Hasmik Carapetian, Vrejik Saghatelian, and Tagush Ohanian were galvanized by youthful anti-Reza Shah sentiment as well as their missionary mentors, with whom they had direct contact through either multigenerational schooling or private tutoring. 103 Unlike their mothers in the AWBS and ACWU, their devotion to equality was not quiescent. During its first four years, between 1939 and 1943, AWU ran an underground network of classes geared toward teaching Armenian language and literature to elementary students who had been assigned to all-Persian-curriculum state schools. 104 During this early illicit stage, AWU also planned regular outings for its few members in the outskirts of Tehran, exercising their healthy bodies and minds in the fresh air of the Alborz mountains. During these excursions, the girls took numerous group photographs of themselves-often with a book or a notebook in hand-and later, on the back of the photographs, they carefully recorded: "1938 camp," "camp 1938 Emam Zade Ghasem," and "1940 camp Ap. 18-21." 105 AWU was among the many organizations that were legalized after Reza Shah's exile. Its first formal executive board served in the year 1943-44, with Seda Darmanian Hovnanian as president, Alice Goyumjian Martirosian as secretary, and Herminé Mkrtchian Hanesian as treasurer. 106 Its very first act was to secure photography's bureaucratic power by documenting this formation in a photo studio, a practice that was repeated with subsequent boards, captured as an official executive unit. 107 Rejecting authoritarianism imposed either by the state or the community, these young women appealed to independence, intellectualism, and high culture. Their first stated mission was clear-cut: "To elevate the intellectual level of the Armenian woman," followed by, "2. To keep alive the patriotic spirit in the Armenian woman; 3. To aid the new generation in its Armenian education and instruction;" and "4. To participate in national life and to contribute to the realization of its goals." 108 The linking of the woman question and nationalism so common in women's movements and activism across the Middle East is evident in AWU's mission and activities as well. 109 From the late 1940s through the 1960s, as AWU expanded and diversified its activities, it maintained the same goals and outlook and attracted wider membership and community status, especially as it rose to meet the next community crisis. Between 1946 and 1949, some 100,000 Irano-Armenians, many from the province of Isfahan, had answered Stalin's call to return "home." They were encouraged and assisted by local Irano-Armenians such as Garagash. 110 For customs papers, families were photographed by New Julfan Armenian photographers, including Patkerahanian, and left their villages en masse. 111 When the Iranian government halted "repatriation" in 1947, many found themselves stranded in Tehran. 112
The majority of those who arrived at their destination in Yerevan were men, whereas those left behind, the majority women, first lived in the slums of Behjatabad, awaiting permanent settlement in the neighborhoods of Narmak, Zarkesh, and Majidieh. 113 In the early 1950s, AWU, seeking "to render" these impoverished and displaced "Armenian women literate," began to offer classes, organize "useful lectures," and "tried to familiarize them with everyday issues." 114 While tending to the women of the shantytowns, AWU also organized weekly literary-cultural gatherings. 115 After this refugee crisis, AWU moved on to more intellectual and cultural programming. In 1960, it hosted Armenian feminist Ellen Buzand during her lecture tour of Iran and then helped publish her book, Nor Kinĕ (The New Woman, 1960). 116 This was followed by AWU's daring publication of Jean-Paul Sartre's existentialist play Huis clos (1944) in 1963, which featured two female protagonists and was translated into Armenian by one of the AWU's board members, Knarik Avagian (president 1952-53). 117 AWU also hosted the French armenologist and Armenian chair at l'École des Langues Orientales de Paris, Frédéric-Armand Feydit, in 1967. Through the decades, AWU subsidized students who went to study abroad and supported graduate students who enrolled in the Armenian studies program at Isfahan University. Despite its many undertakings, what most impressed its members, however, were AWU's weekly lectures on diverse topics, ranging from flower arranging to hygiene to women's rights, organized on Tuesday afternoons from three to five o'clock at the Armenian Club, located at the intersection of Hafez and Naderi Avenues. Feminism, neither explicitly articulated nor entirely absent, was nevertheless practiced and thus acted as an epistemic variable with which the Armenian women of Iran grappled in the 1940s and 1950s.
Rebellious from the 1940s to the 1960s, the leadership of AWU began to be perceived by many Armenian young women as ossified during the 1970s, a period of both significant reforms in family law and increased centralization and state control over all associations, including women's organizations, as exemplified by the creation of WOI. Given their long history with women's organizations, many Armenian women immediately joined WOI, as evidenced by the identification card of Huri Yahinian Aslanian (1925-2000; Fig. 7). Issued by WOI to her as a member of Tehran's AWBS (anjoman-e khayriye-ye aramane-ye tehran), it is dated 1 Aban 1345 (October 23, 1966). Most scholars attribute the origin of WOI to the general meeting of the High Council on 28 Aban 1345 (November 19, 1966). 118 Yet, this card was issued twenty-seven days earlier, between Princess Ashraf's "command on 29 Mordad 1345 (20 August 1966)" to review "the High Council and its shortcomings" and the formation of WOI in November. 119 WOI's 1976 annual report listed thirty-three "member organizations" in Tehran. 120 Among the religious minorities, the Armenian organizations of the ACWU, the AWU, and the Cairo-created Armenian General Benevolent Union were followed by the Iranian Jewish Women Organization (est. 1949) and the Zoroastrian Women's Organization. Individual Armenian women also were active on WOI's various committees, including Margarette Grigorian on the Social Welfare Committee (commission-e rafa-ye ejtemai) and Nvart Masumian on the Handicraft Committee (commission-e sanay-e dasti). 121 In the centralized context of Mohammad Reza Shah's reign in the 1970s, the AWU was at its economic and organizational zenith, which led to the success of its most publicly visible undertaking in 1974. Its original mission was now wholly aligned with late Pahlavi modernist plans and its brand of women's rights. During this period, Armenian women's organizations, especially the AWU, resolutely engaged the Iranian state, the WOI, and other cultural entities as sovereign modern citizens with a shared stake in the modernist agenda of late Pahlavism. The fully cultivated stages of aesthetics and athletics were their playground. They injected the "modern-yet-modest" Armenian women into larger Pahlavi discourses on cosmopolitanism and women's rights through athletic, cultural, intellectual, and artistic undertakings. The state, in turn, did not hesitate to appropriate the healthy and hygienic bodies of its Armenian minority women to project the image of a progressive king at the vanguard of both ethnic and women's rights. When in 1968, for instance, the women's basketball team of Tehran's Ararat Cultural Organization won the national championship, the leading daily Kayhan applauded "Armenian girls" on their accomplishment while flaunting the photographs of their agile bodies in action on its hefty sports pages. 122 In the final decade of Pahlavi rule, AWU's mission civilisatrice "of educating and raising the cultural level of Armenian women," as stated in its bylaws, became especially manifest as it began to engage with larger state plans, including Mohammad Reza Shah's march toward the "Great Civilization" (tamadon-e bozorg).
Tehran was abuzz in May 1974. At the invitation of the Red Lion and Sun Society, the delegates of the Thai Muslim Women's Foundation had arrived on a ten-day visit of Iran, while the American Women's Club was convening its meeting on May 14. 123 The Ice Palace was screening Monte Walsh (1970), the Cinema Goldis Fiddler on the Roof (1971), and the Iran American Society Richard C. Sarafian's Run Wild, Run Free (1969). 124 The government was gearing up to host the Seventh Asian Games the following September at the massive Aryamehr Sport Complex. On the other side of the city, between May 13 and 16, from six to nine o'clock in the evening, forty-four Irano-Armenian women also took part as models displaying the history of Armenian costumes for the public in the Armenian Club, now relocated to the north, from the corner of Naderi and Hafez Avenues to France Avenue. On opening night, they all posed in their heavy dresses-thoroughly researched and tailored to the last historical detail-for a special viewing by Empress Farah and her official entourage, which included Minister of Culture Mehrdad Pahlbod, the long-term Armenian representative to the Iranian Parliament, Sevak Saginian, and his wife, Nella Saginian (Fig. 8). Following royal protocol, the exhibition opening was secured by SAVAK and limited to the official dignitaries and special invitees, which included the family members of the models as well as the tailors and women artists who had created the costumes. 125 The four days of the exhibition were the conclusion of several years of research, planning, and production. From 1972 to 1976, AWU poured its expertise into completing the steps ordinarily undertaken by a museum: the art history research on women's costumes, selection of specific artifacts as originals to be reproduced, choosing of models, creation of costumes, assembly of costume accessories, construction of a stage set, choreographing of exhibit models, printing of the exhibition catalog, photographing of models in costume, composition and translations of the academic text, public relations for the costume book, and final publication of two exhibition catalogs. The entirety of the project, with its various moving parts, was driven by modernist systematization and visual primacy, now honed to perfection by the AWU.
Under the presidency of Armik Tumanian Nercessiantz (president 1972-76), an executive board was formed in fall 1972. It was led by Emma Abrahamian, one of the 17-year-old cofounders of AWU and a graduate of l'École des Beaux-Arts in Paris, among the twenty-seven . . . Guevrekian (secretary, 1955-56), and Lida Lianazof (auditor, 1966-67). 127 Whereas the initial art history research and the selection of images of specific artifacts (i.e., coins and seals; illustrated manuscripts; miniature paintings-including a few by the famed Toros Roslin, ca. 1210-70 CE; church mural paintings; oil paintings; architectural reliefs; and photographs) were done collectively, each of the five committee members was assigned the supervision of eight artifacts and, in due process, the transformation of each into a live model. This was followed by a period of purposeful recruitment of girls and women with specific facial and physical features, matching the carefully selected historical artifacts. Older models were primarily drawn from AWU's ranks. Younger models were sought out in the community. Alice Khatchikian recalls being approached by Nercessiantz at the Armenian Club. In a similar manner, while sipping a café glacé at the historical Café Naderi on Naderi Avenue, Marie Louise Grigorian was "discovered" by a stranger-who turned out to be Abrahamian-because of her "Isfahani face." 128 Each model's physique was adapted to precisely duplicate the historical evidence. Volunteer work dominated and fueled the project, including the arduous production of the dresses at the end of 1973 and during the first half of 1974. However, a great deal of flexibility in individual investment and involvement was built into the project. Young women were actively recruited into the organization, and lifelong members were given key roles, occasioning opportunities for camaraderie and mentorship. For some, it was a "life-changing" moment; others barely remember their contribution. 129 Both the labor and the material acquisition were planned flexibly. Many models purchased the fabric at their own expense and tailored their own dresses. Three AWU members volunteered as both model and dressmaker, and seven members, including several committee members, called upon daughters or nieces for whom they made the dresses. Twenty-two professional tailors offered their labor at no cost to produce a total of twenty-eight costumes, including one for a six-year-old girl. The crowns, hats, and embroideries were crafted separately by both professionals and amateurs. "All the costumes, hats, veils and aprons," the exhibition brochure explained, "have been cut, sewn and handcrafted by Armenian women in a labour in which skill and enthusiasm have gone hand in hand." 130 Those who could afford it acquired their own fabrics; for others, AWU supplied the materials. The dress representing the region of Karin/Erzurum, for instance, was made of an Indian fabric purchased in London, a regular leisure or education destination for well-to-do Irano-Armenians in the 1970s. 131 At least two of the costumes in their entirety were nineteenth-century dresses, including the thick purple velvet outfit from Konia borrowed from the Rusmanian family collection, which required insurance for its use in the exhibition. 132 Furthermore, the models whom we interviewed confirmed that the committee persistently sought out "antique" accessories to complement the costumes. 133
122 "Afarin bar dokhtaran-e Aramaneh." 123 "Armenian Costumes."
Abrahamian, the lead on the project, was intimately familiar with this practice-of scavenging the alleyways of southern Tehran's bazaar in search of Qajar-era belt buckles, keys, locks, jewelry, coins, and other riches. As a member of Tehran University's Faculty of Fine Arts-alongside well-known artists Parviz Tanavoli and Marco Grigorian-she knew how to seduce the past to birth the modern. The incorporation of rare, antique, and inexpensive secular and in-use religious artifacts with the dresses significantly contributed to the interjection of this Armenian women's narrative into the wider Pahlavi promotion of folklore as an expression of modernism, often sponsored by Empress Farah's office and the "folklorists" of the Iranian elite. 134 To effectively stage the exhibition, AWU enlisted professionals from within the Irano-Armenian community as members of the Coordinating Committee. The committee comprised Pistos Marugg (choreographer), Aramais Aghamalian (stage director), Arby Ovanessian (artistic advisor), Lida Berberian and Henri Yeganian (musical directors), Serj Avakian (graphic designer), and Razmik Arzooian (photographer). 135 Elongated but low wooden platforms were erected along the brick walls of the basement hall of the Armenian Club to render the exhibition as a chronologically evolving experience for the viewer; a "historical sequence," as remarked in the exhibition brochure, that "expresse[d] both continuity of traditions and cultural evolution," starting from Urartian and Parthian figures and extending to nineteenth-century Constantinople and Tbilisi urbanites. 136 The models were coached by Marugg-a professional ballerina and ballet teacher-in a certain set of arm and torso movements performed while standing in situ. An electrical wire system was devised to enable Berberian, who had composed and arranged the music program but also served as model 21, to access the on-and-off button under her foot. At certain intervals, the models performed their movements as the music came on; in the absence of music, they froze as if statues in a museum. The systematized details of the performance and display were matched by the modernist venue, the new hall of the Armenian Club, designed by Rostom Voskanian, a l'École des Beaux-Arts graduate and one of the heads of the architectural ateliers at Tehran University's Faculty of Fine Arts. 137 The trilingual brochure of the exhibition (1974; the brochure is visible in Empress Farah's hand in Fig. 8) opened with an homage to the cultural head of the Pahlavi state and her policies of pluralist inclusion through the valorizing of folklore: "Her Imperial Majesty, Shahbanu Farah Pahlavi's Gracious interest in the arts of various communities of this country is an immense source of inspiration towards further enrichment of cultural entities with mutual understanding and respect among people living in this land through the ages." 138 This fourteen-page brochure, printed in Armenian, Persian, and English, detailed the historical facts about each character in brief paragraphs under the model's number. The introduction draws the reader's attention to the academic underpinning of the exhibition by noting that illustrated manuscripts were consulted at "the British Museum, the Berlin Museum, the Vatican Museum, and the Yerevan Museum, as well as the Armenian library-museums in Paris, Vienna, Venice, Jerusalem and New Julfa, Isfahan." 139
It was noted that the work of such "distinguished scholars" as German Orientalist and Urartian expert Carl Ferdinand Friedrich Lehmann-Haupt, French historian of Cilicia Victor Langlois, Soviet archaeologist of the South Caucasus Boris Piotrovsky, and others buttressed the exhibition. The narrative of the text and its graphic tropes also displayed AWU's engagement with mainstream Pahlavi narratives about modernity and civilization. The once marginal in Iranian society was now helping shape the center.
Guiding the empress through the exhibition, Abrahamian explained the qualities and characters of each dress on live display: the goddess of Urartu from the ninth century BCE, Queens Satenik and Ashkhen from the Arsacid dynasty (12-428 CE), Queen Gurandukht of the Bagratid dynasty (ca. 885-1045 CE), Queen Keran and Princesses Keran and Zabel of the Armenian Kingdom of Cilicia (1080-1375 CE), and generic figures such as nuns, peasants, "gozals" (beauties), and "aristocrats" from various periods and regions, including the Safavid and Ottoman Empires, a prosperous New Julfan merchant's wife from the seventeenth century, two villagers from Chahar-Mahal and Feridan (dresses in use at the time of the exhibition), and two seventeenth- and nineteenth-century aristocrats from Tbilisi, among others. 140 While approaching and inspecting each, Farah engaged her ethnically Armenian subjects with interest in the exquisite artifacts. To two models, she teased, "Aren't you hot in that?" and "Aren't you getting tired?" At a third, she inquired about the golden lacework, whereas at another she observed that nylon stockings had not yet been invented in the sixth century. The empress also reassured another model that she had not taken offense when the nervous teenager addressed her as "his majesty" (a'lā hazrat) instead of "her majesty" ('olyā hazrat). 141 The linguistic slippage of this young woman as well as the sovereign's benevolence speaks volumes about minoritarian modernity and the solidarity of women. Despite the upward mobility and growing integration of Iran's religious minorities into mainstream society in the late Pahlavi era, the scene also reflected to some degree the enduring (self-)marginality of Armenians even in the performance of inclusion through the exhibition. Farah, who "always encouraged [her] office to sponsor many private cultural and social events," was visibly impressed by the exhibition, as the committee had hoped. 142 Yet, to prevent any royal conundrum, an executive decision had been made in advance by the hosting members to refrain from using the Persian expression pishkesh, in case the empress took up the offer. 143 After all, AWU intended to donate the costumes to the museum of the once powerful All Saviour's Cathedral (Vank, est. 1606, building 1655) in New Julfa, Isfahan, although most are now part of the collection of the Ardak Manoukian Museum, adjacent to Saint Mary Armenian Church in Tehran. The exhibition of "historical dresses" made a splash in the national and international mass media and led to the production of a film that was shown on national television and at the Shiraz Festival of Arts. 144 The day before the opening, Tehran's popular daily, Ayandegan, showcased four of the models, including the little girl, with an article entitled "Something More Than a Fashion Show." 145 On the evening of the opening, Kayhan International published a photograph of the empress viewing four of the models; two days later, so did The Tehran Journal and Alik'. 146
136 Hay Kin Miut'iwn, Ts'uts'ahandēs, 5; Hay Kin Miut'iwn et al., Hayuhin, models 1-3, 41, 42. 137 Rostom Voskanian (b. 1932) was the oldest son of Minas Voskanian, the Tabriz photographer whose studio was passed on to his daughter, Hasmik, freeing Rostom to pursue a career in architecture. On Voskanian, see Grigor, "Rostom Voskanian," 12-14. 138 Hay Kin Miut'iwn, Ts'uts'ahandēs, 1. On Shahbanu's promotion of cultural inclusion and preservation, often dubbed "folklorist," see Grigor, Building Iran.
Since the 1930s, the Pahlavi media had depicted the royals inspecting monuments, cutting ribbons, visiting exhibitions, and in this way narrating the secular nation. 147 Kayhan International's caption read, "Empress Farah inspects an exhibition of Iranian Armenian women's dresses at the Armenian Club yesterday," further adding, "the exhibition featured styles from various eras of Iranian history." This was an inaccurate representation of what was intended by AWU and displayed, as of the forty-four costumes only eight-from New Julfa (sixteenth and seventeenth centuries, models 12-15), Tabriz (eighteenth century, model 16), Karabakh (nineteenth century, model 20), and Syunik (eighteenth century, models 23-24)-could be deemed as part of an "Iranian" historical era. However, what is telling here is how the exhibition and its rituals of display, viewing, and media hype reflected the state's wider discourse on inclusive cultural plurality under one monarchy. The cumulative sum of these events and their reverberation in the public domain embodied in multiple ways the women's movement in the last decade of the Pahlavi era. Farah's visit to the exhibition was a performance of deep sociopolitical and ideological patterns in late Pahlavism. Photography's performative mandate to produce sociopolitical meaning was now reclaimed through an erudite exhibition that was performed on the stage of both art and diplomacy. It occasioned a moment in which a form of modernism and feminism met on the grounds of high art and cultural regionalism. The double marginality of being a woman and a Christian Armenian was diluted in the discourses of Pahlavi cosmopolitanism and civil society formation and at the same time emboldened by the visual strategies of museum culture, artistic display, and valorization of folklore.
Both national and international mass media reported on the exhibition, and WOI included it in its 1975 annual report. 148 Farah's endorsement of the event gave the AWU added clout to translate the exhibition into a richly illustrated exhibition catalog in the format of a book, entitled Hayuhin ew ir taraznerě [The Armenian Woman and Her Costumes, 1976]. In the immediate aftermath of the exhibition, a new Publication Committee was added to the Costumes Committee. In addition to Gevorgian, Bernardi, and Nercessiantz, two veteran AWU members joined: Adelina Petrosian Stepanian (board member, 1944-45) and Leontine Masumian (vice-president, 1958-59), who noted in the catalog that the "visit and attention of Iran's devoted and art-loving Empress . . . the encouragement of Armenian and other artists, and the urging of the very many people who attended the show, gave us the courage to publish an album of the costumes exhibited." 149 For the production of the catalog, as with the staging of the exhibition, three high-profile professionals were added to the Publication Committee. The London-based "award-winning" fashion photographer Peter Carapetian took a break from British Vogue and Brides Magazine and arrived in his native Tehran to photograph each of the models. 150 For the printing of the book, AWU approached Gregory Lima, a New York journalist who had come to Tehran in 1958 to head the launching of Kayhan International. His interest in AWU's proposition was manifold: his Armenian wife and two sons, his draw to writing, and perhaps that his mother had been a "seamstress and a shop steward" for the International Ladies Garment Workers Union, one of the earliest and largest majority-female labor unions in the United States. 151 The decisions surrounding the location and aesthetics of the photo sessions followed the same logic of "the authentic" implemented in the design, production, and display of the costumes. AWU organized several sessions at specific sites throughout Iran that would reinforce the authenticity of the costumes. Arrangements were made for Carapetian, his photography team, and groups of three to five models to travel as far north as the monasteries of Saint Thaddeus and Saint Stepanos (seventh to the seventeenth centuries) in Azerbaijan Province, and as far south as the All Saviour's Cathedral in New Julfa and the Armenian villages of Isfahan Province. In and around Tehran, the skirt of Ab`ali mountain, the gardens of Niavaran Palace, the interiors of Saint Sarkis Cathedral (1971) and Saint Mary Church (1945), as well as the interior and exterior of Galstian's neoclassic home served as diverse environments for the photo sessions. The high-quality photographs of the forty-four models appeared in color on full pages, alternating between verso and recto, facing the line-drawing of the historical artifact (described previously) based on which the costume had been produced. The attention paid to the quality and the authenticity of the final works was rendered mobile and permanent with the publication of the catalog book. The side-by-side, comparative reproduction of the historical evidence (the artifacts) and the copies (the dresses) created a modernist veracity. With a few years of delay due to color separation in London and printing in Hong Kong, the book was published and "sold out instantly"; the much-demanded reprint "never happened." 152
Following the media hype about the exhibition, several board members sought out organizational partners in Paris, Boston, Washington, DC, and Los Angeles to coordinate an international tour of the costumes. 153 Lack of financing followed by the onset of the Iranian Revolution put an abrupt end to AWU's aesthetic, cosmopolitan, and feminist ambitions. The century-long pictorial journey of Irano-Armenian women from the 1880s to 1976 traced here reveals the Irano-Armenian brand of the New Woman as she became idealized, satirized, belittled, and admired. She was first captured by male photographers in the modern space of the photographic studio as dignified and austere, as she struggled to secure proper education for girls or succor for refugees; she was then mocked by male editors and caricaturists while being sidelined by king, prelate, and party boss. From the outset, be it in photo studios, schoolyards, charity work, or historical writing, women insisted on their own textual and visual self-representation, itself a modernist discourse that came full circle in 1974, when they showcased their presence not only in the tropes of history, fine arts, and folklore but also in the rituals of nationhood and kingship. As herstory remained muted, these women struggled to be agents of visual and textual representations as women, Iranians, Armenians, Christians, artists, tailors, grassroots volunteers, and modern citizens of Iran and the world. As such, despite their double marginality, their activism came to help shape Iran's unique experience of modernity during the course of a turbulent twentieth century.
"year": 2022,
"sha1": "efc8b8cc336bfa76e7cb47bccc9fcd26a9eb15b1",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/A211EDF1CF3C9EAD863BD9D14FE18A1F/S0021086221000189a.pdf/div-class-title-pictorial-modernity-and-the-armenian-women-of-iran-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Cambridge",
"pdf_hash": "6655299c43cb95718044cd1735b9ab17d9e045d8",
"s2fieldsofstudy": [
"History",
"Art"
],
"extfieldsofstudy": []
} |
The Lichtenberg Keilmesser - it’s all about the angle
The presence of the ‘Keilmesser-concept’ in late Middle Paleolithic assemblages of Central and Eastern Europe defines the eponymous ‘Keilmessergruppen’. The site of Lichtenberg (Lower Saxony, Germany) was discovered in 1987 and yielded one of the most important Keilmessergruppen assemblages of the northwestern European Plain. At that time, researchers used the bifacial backed knives to define a new type, the ‘Lichtenberger Keilmesser’, which they characterized by an aesthetic form-function concept with a specific range of morphological variability on the one hand, and a standardized convex cutting edge on the other hand. Thereby, a shape continuum was observed between different form-function concepts in the Lichtenberg assemblage, from Keilmesser through to Faustkeilblätter and handaxes. In a contrasting view, it was recently suggested that the morphology of Keilmesser, including what is defined here as type Lichtenberg, is the result of solutions to establish and maintain edge angles during resharpening. With the intention to evaluate these contrasting hypotheses, I conducted a re-analysis of the Keilmesser from Lichtenberg and their relationship to central German late Middle Paleolithic knives, using 3D geometric morphometric analyses and an automatized approach to measure edge angles on 3D models. Despite a morphological overlap of the tools from both regions, I could show that the Lichtenberg Keilmesser concept refers to one solution to create a tool with specific functionalities, like potentially cutting, prehension, and reusability. To establish and maintain its functionality, certain angles were created by the knappers along the active edges. This behavior resulted in specific shapes and positions of the active parts and created what looks like a standardized or template morphology of this Keilmesser type.
Introduction
The bifacial backed knife, and more specifically the Keilmesser-concept, observed on bifacial and unifacially shaped tools [1], is the most prominent tool type of the central European Micoquian [2][3][4][5][6]. Furthermore, its presence in late Middle Paleolithic (LMP) assemblages defines the eponymous Keilmessergruppen [7,8]. Based on earlier definitions [3,[9][10][11], Jöris [12,13] defines the tool as a bifacial cutting tool with a working edge opposite an unworked or roughly worked back, a base in the proximal part adjacent to the back, as well as a second, sometimes also sharp edge in the distal part (distal posterior part) that converges with the cutting edge and forms an often pointed distal tip. When Veil et al. [8] discovered the site of Lichtenberg (Lower Saxony, Germany) in 1987 (Fig 1), they found one of the most important Keilmessergruppen assemblages of the northwestern European Plain. Veil [8] and Jöris [12,13] used the bifacial backed knives to define a new type, the Lichtenberg Keilmesser (Fig 2). Veil [8,14] describes the ideal tool as follows: an oval-shaped outline with a longitudinal symmetry, especially in the tip region, a convex lateral working edge that extends to a retouched and often rounded sharp tip at the distal part, and a natural or partly retouched back. The back opposite a sharp working edge results in a wedge-shaped cross section of the tool. Bifacial backed knives resembling Lichtenberg Keilmesser occur in several sites across the central and eastern European Plain between Marine Isotope Stage (MIS) 5a and MIS 3 (Fig 1). Examples are Salzgitter-Lebenstedt (Lower Saxony, Germany) [15,16], Königsaue layers A and C (Saxony-Anhalt, Germany) [17,18], Pouch (Saxony-Anhalt, Germany) [1,19], Piekary IIa layer 7c [20] and Piekary III [21] (Poland), Wrocław-Hallera Av. (Poland) [22], Pietraszyn 49a (Poland) [23], and Khotylevo (Russia) [24][25][26][27][28]. Potentially, tools comparable to the Lichtenberg Keilmesser occur as far east as southern Siberia [29].
From a morphological point of view, especially the handaxe-like oval outline shape, as well as the mostly rounded tip with a circumferential working edge extending to the distal posterior part, makes it different from other Keilmesser types (see e.g., figures in [12,13]). However, within the range of variability are also pointed tips, differing from the ideal case [8] (Fig 2:1). Alongside the Lichtenberg Keilmesser exist other types of bifacial backed knives within the Keilmessergruppen [3,12,13]. But Jöris [12,13] demonstrated the existence of a resharpening trajectory between them, and most of the types morphologically merge into one another. More importantly, there is a technological difference between Lichtenberg Keilmesser and Keilmesser resharpened with the tranchet blow (Keilmesser with tranchet blow, KMTB [30]) parallel along the working edge (see also [9,12,13,31,32] for further definition and explanation of the underlying concepts). Here, the distal end of the tool is prepared as a striking platform for longitudinal resharpening removals directly along the working edge. This creates a sharp lateral working edge with practically one strike. Due to the performance of resharpening blows from the distal end [32], the KMTBs are also morphologically different from the Lichtenberg Keilmesser: their distal posterior part resembles a highly convex bow without a circumferential working edge at the tip (which we see in the 'ideal' Lichtenberg Keilmesser), serving instead to create a striking platform perpendicular to the working edge for the longitudinal resharpening removals.
Within his type definition, Veil [8,14] interprets the morphology of the Lichtenberg Keilmesser as the result of a special aesthetic form-function concept that Neanderthals had in mind during manufacture. For him, the concept of a relatively long back and a sharp tip, together with a longitudinal symmetry and a convex cutting edge, makes the Keilmesser conceptually different from other tool categories within the Lichtenberg assemblage. However, according to Veil the most standardized element of Keilmesser is the convex cutting edge, which he also recognized on other (static) tool types, like "Faustkeilblätter", handaxes, and leaf-shaped scrapers. Faustkeilblätter (Fig 2:5 and 2:7) are related to Keilmesser, as they also have a working edge opposite a back, a base, and a tip with a circumferential working edge. In contrast to the latter, their distal posterior part is relatively long (i.e., longer than the back) and resembles a rather thin edge. Handaxes (Fig 2:8) are characterized by a thin symmetric tip, formed by two lateral working edges, and an unworked base. One of the lateral edges can be slightly shorter than the other and is connected to a back-like extension of the base. Leaf-shaped scrapers are bifacial tools with an oval outline shape and a transversal symmetry. Based on their morphological difference from the tools mentioned before as well as their opposite symmetry concept, they are not included in the further analyses. Beyond aesthetics, Veil interprets the overall shapes of the specific tool types as means to fulfill specific functional tasks. However, he admitted that morphological variability within and divergence from the ideal form-concepts exist in the assemblage. Veil [8] argues that this may be caused by resharpening or the pragmatic use of raw material features, like, e.g., a naturally Keilmesser-shaped piece which was transformed through marginal retouch into a bifacial backed knife (Veil et al. 1994: 34 [8]).
Jöris [13] likewise characterized the Lichtenberg Keilmesser as having a high overall morphological variability. He also highlights the standardized convex cutting edge, which appears to stand in contrast to the stated high morphological variability of the tool. Unlike Veil, Jöris [13] observed a shape continuum from Keilmesser through Faustkeilblätter to handaxes.
But what does this "high morphological variability" mean, and how is variability structured within this type? If the morphology is highly variable, what, in the end, constitutes the type or the form-function concept besides the retouched tip, the convex cutting edge and the lack of tranchet blows? Contrasting Veil's ideas of fixed form-function concepts, Iovita [33] hypothesized that angle reduction of the active edges is one of the main factors driving LMP tool morphology and technological features. He states that the overall morphology of LMP tools is designed as a technical solution to handle the problem of increasing edge angles during use and subsequent resharpening. Further, it was suggested [19,34,35] and has been shown [1,33,[36][37][38] that these concepts and related life histories apply to unifacial and bifacial LMP tools alike. Following Iovita, there are three different solutions in the LMP to the problem of increasing edge angles: (1) thinning the tool volume using the back as a striking platform (see also [11]), (2) reducing the edge angle through blows directly from the edge, and (3) the tranchet blow struck from the distal edge to thin the tool volume directly along the working edge [30,32]. The first concept includes the distal posterior part, as Iovita did not separate this edge from the back. According to him, non-KMTB Micoquian bifacial backed knives, including the Lichtenberg Keilmesser, were manufactured and maintained using the first and second solutions. If we follow the arguments made by Iovita [33], this would imply that the tool morphology is dictated by technological solution(s) for maintaining an acute angle of the working edge during subsequent use of a long-lived tool. On the other hand, we have to be cautious, as subsequent resharpening can also alter the shape of tools [31,33,36,37,[39][40][41][42][43][44][45][46][47]. Keilmesser in particular change overall shape and size during subsequent reduction [12,13,31,37,[47][48][49][50]. However, Iovita [37] was able to show that despite an allometric shape change during resharpening, the individual parts of Keilmesser from Buhlen (Hesse, Germany) change isometrically in relation to each other and the techno-functional and prehensile units stay constant on the tools (for a contrasting view on Keilmesser resharpening see e.g., Richter [49] and Uthmeier [50]). In sum, Iovita's view is that the Keilmesser is designed for edge angle maintenance during subsequent resharpening, while the functional units stay constant.
Following from the above, there are two main hypotheses to explain the variability and morphology of the Lichtenberg Keilmesser: (1) it was a solution for maintaining acute edge angles on the active parts of a tool, or (2) it was a form-function template and an aesthetic design type.
The present study is an attempt to gain new insights into the mechanisms and structure of variability within this tool type. I focus here on the analysis of Lichtenberg Keilmesser from the eponymous site (Fig 1). To increase the sample size and to analyze the tools within a broader context, I incorporated my recently published dataset [51] of late Middle Paleolithic Keilmesser from central Germany. These tools are likewise characterized by a convex cutting edge opposite a back, an often sharp distal posterior part, and a retouched and mostly rounded tip, and therefore match the definition of the Lichtenberg Keilmesser.
To evaluate the assumed standardization of the working edge, I used the following approach: if we split the tool concept into morpho-functional units [16,48,[56][57][58][59][60], it consists of a prehensile part (the base and the back) and two active edges (the working edge and the distal posterior part), including the distal tip formed by both edges. As the back and the base consist mostly of natural and/or roughly worked surfaces with an inherent natural variability, I assumed that the retouched active edges are the most important parts for tracing the mechanisms that structure tool variability. I therefore analyzed the 3D geometry of the distal posterior part together with the back on the one hand, and the working edge on the other, to see which parts are the most variable.
In the next step, I applied an automated approach to measure edge angles on 3D models [53]. With this method I was able to conduct a detailed edge angle analysis of the active edges to evaluate the ideas brought forward by Iovita [33] and to gain insights into edge function. Finally, I used the edge angle and 3DGM data to analyze the reduction and resharpening of the Keilmesser within my dataset.
The combined approach of technological observations, 3DGM, and automated edge angle analysis on 3D models aims to provide new insights into the structure of variability underlying a dataset of central European LMP knives and into the meaningfulness of their classification as a special type.
Artifacts
The sample of tools from Lichtenberg analyzed in the present study is stored in the Landesmuseum Hannover, Das Weltenmuseum, Willy-Brandt-Allee 5, 30169 Hannover, Germany. Permission to study the material was granted through a cooperation contract between the Max Planck Institute for Evolutionary Anthropology, Deutscher Platz 6, 04103 Leipzig, Germany, Dept. of Human Evolution, and the Landesmuseum Hannover. All necessary permits were obtained for the described study, which complied with all relevant regulations. The assemblage of Pouch and parts of the collection from Goitzsche are stored in the Landesamt für Denkmalpflege und Archäologie Sachsen-Anhalt-Landesmuseum für Vorgeschichte, Richard-Wagner-Straße 9, 06114 Halle (Saale), Germany. The second part of the Goitzsche assemblage, as well as the collection from Löbnitz, is stored in the Landesamt für Archäologie Sachsen, Zur Wetterwarte 7, 01109 Dresden, Germany. The datasets from Pouch, Löbnitz and Goitzsche have already been published by the author in four articles and his dissertation [1,19,51,61,62] and required no additional permits, in compliance with all relevant regulations. The numbers of the individual specimens are provided within the text, in the .RData file within the Supplementary Information (used to recreate the article with R Markdown), and in S1 Table. I incorporated 35 bifacial Keilmesser and 7 unifacially shaped Keilmesser (Table 1) from the sites Lichtenberg, Pouch, Löbnitz and Goitzsche (Fig 1) into my dataset. The sample sizes for Pouch (6) and Goitzsche (3) are low, but the focus of the present study lies mainly on the Lichtenberg Keilmesser type and not on a comparison of tools from different sites. In the few cases where I do compare sites, the results have to be regarded as tentative. I also included 4 handaxes (Fig 2:8) as a morphological outgroup to test the reliability of the 3D geometric morphometric analysis. Although the late Middle Paleolithic handaxes are interpreted as related to Keilmesser [63,64], their two working edges and their overall symmetrical shape result in a morphology distinct from the asymmetric Keilmesser [1,8,13]. Due to their symmetric shape and narrow range of morphological variability [1], they should form their own group within the multivariate analyses. Furthermore, at least two of the Keilmesser from Lichtenberg could be typed as Faustkeilblätter (Fig 2:5 and 2:7) according to Veil [8,14]. I included them because, regardless of their longer distal posterior part, they share the main techno-morphological elements with Keilmesser. Further, their inclusion may help to evaluate Jöris' hypothesis of a continuum between Keilmesser, Faustkeilblätter and handaxes.
Assemblages
The site of Lichtenberg was discovered in 1987 and subsequently excavated by the Landesmuseum Hannover until 1993 [8,14]. The assemblage contained 405 artifacts with recorded provenience, among them 76 retouched tools. The numerical age was measured using thermoluminescence; dating uncertainties place the assemblage between MIS 5a and early MIS 3, with thermoluminescence ages ranging from 66±14.6 ka to 52±6.8 ka [8]. I incorporated a sample of 19 bifacial Keilmesser, 3 unifacial Keilmesser and one handaxe (Table 1).
Volunteer archaeologists discovered the site of Pouch (Saxony-Anhalt) in 2002 [19], and the find layers were excavated thereafter by the Landesamt für Denkmalpflege und Archäologie Sachsen-Anhalt-Landesmuseum für Vorgeschichte. The sediments that contained the finds were silts and sands connected to a last glacial braided river terrace (Lower Terrace). Luminescence dating of the find layers yielded ages of 46.2±2.5 ka and 47.1±2.7 ka [19]. Unfortunately, the site was destroyed by a flood of the Mulde river: at the time of the excavation, the former mine was being refilled to create a lake, and the 2002 flood raised the water level to its present state within a few days. However, the excavators recovered 371 artifacts, including seven refit sequences pointing to the relatedness of the find material [19,51]. The 58 bifacial and unifacial tools are mostly characterized by a knife-like character, with sharp working edges often opposite a back, together with modified and unmodified pointed tips [1,19,51]. I included 3 bifacial and 3 unifacially shaped Keilmesser (Table 1) in the dataset, all of which are preserved in fresh condition. Goitzsche (Saxony-Anhalt and Saxony), or the Goitzsche Collection [19,51], is an assemblage of late Middle Paleolithic artifacts that volunteer archaeologists collected between 1991 and 2002 in the same former open-cast mine where the site of Pouch was located. All the finds stem from the basal layers of the last glacial river terrace deposits and were numerically dated to the onset of MIS 3, between 55 ka and 40 ka [1,19]. I recently analyzed 1008 complete artifacts [19,51]; the presence of prepared core blank production methods, together with the occurrence of Keilmesser, handaxes, bifacial and leaf-shaped scrapers, attributes the assemblage to the late Middle Paleolithic of central and eastern Europe [8,[66][67][68]. I included 2 bifacial and one unifacially shaped Keilmesser in the present dataset. Except for one piece affected by fluvial transport, the artifacts are preserved in good condition, which suggests that two of them were collected from primary contexts. However, due to these preservation issues, I excluded the finds from most of the edge angle analyses.
The still-active gravel pit of Löbnitz (Saxony) [1,51,69,70] is situated less than 1 km east of the former brown coal quarry Tagebau Goitzsche, quarry field Rösa-Sausedlitz. There, the gravels of the same Lower Terrace sequence as at Goitzsche and Pouch are exploited by a floating dredger. Directly following the mining, the gravel was separated into different size fractions and the coarse gravel was dumped on a separate pile. From the latter, volunteer archaeologists and geologists have collected more than 3000 stone artifacts since the 1990s [69]. The sample of 838 complete artifacts that I analyzed recently [1,51] includes Keilmesser, handaxes, leaf-shaped scrapers, and prepared core blank production methods. Therefore, and since there is no gravel accumulation in this area other than the last glacial Lower Terrace sequence, it can be inferred that the artifacts collected in the gravel pit of Löbnitz originate from roughly the same chronological and geological context as the stone artifacts from the Goitzsche Collection and from Pouch. I incorporated 11 bifacial Keilmesser and 3 handaxes from the Löbnitz assemblage into the dataset. Although the shape of the tools is well preserved, the edges of the artifacts are preserved in varying conditions due to post-depositional processes like fluvial transport or mining. Therefore, I excluded these specimens from most of the edge angle analyses.
Technological characterization
Following established procedures of technological lithic analysis [16,48,49], the technological characterization is based on five categories for the Keilmesser in the dataset: (1) the shaping of the surfaces, which also includes roughing out the pre-form, (2) modifications of the back and the base, (3) the final modification and regularization of the working edge, (4) the modification of the distal posterior part, and (5) the thinning of the distal part and/or the tip. These categories provide the most important information about the manufacture and maintenance of these late Middle Paleolithic tools; e.g., the edge configurations and the distal thinning bear information about strategies of edge angle maintenance [33]. I place my emphasis here on the analysis of the active edges, as they are also the main focus of the subsequent 3DGM and edge angle analyses. I especially recorded the state of the distal posterior part, as it was reported to play an important role as a striking platform for thinning the distal volume of central European Keilmesser [33].
3D geometric morphometric analysis and edge angles
I collected the landmarks on the 3D scans using the open-source software MeshLab. Thereafter, I conducted the entire data processing in R [54]. Landmarks were processed using the package geomorph [52]. For further 3DGM analyses, like Procrustes superimposition, I applied algorithms of the package Morpho [55]. To automatically measure the edge angles on the 3D scans, I used the package Lithics3D [53]. I created the diagrams of the present study with ggplot2 [71].
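The workflow just described is compact enough to sketch in R. The following is a minimal, hypothetical version of the processing chain: the file paths, the CSV layout, and the object names are my assumptions, not the original analysis scripts.

```r
## Minimal sketch of the processing chain; file paths and the CSV layout
## are assumptions, not the original scripts.
library(geomorph)  # landmark handling [52]
library(Morpho)    # Procrustes superimposition [55]

## Assume one CSV per specimen, exported from MeshLab: columns x, y, z;
## rows = the 5 fixed landmarks followed by the 67 semi-landmarks.
files <- list.files("landmarks", pattern = "\\.csv$", full.names = TRUE)
lm_list <- lapply(files, function(f) as.matrix(read.csv(f)))

## Stack into a p x 3 x n array (p landmarks, n specimens), the format
## expected by most geomorph/Morpho functions.
lm_array <- simplify2array(lm_list)

## Generalized Procrustes analysis removes location, scale and
## orientation; procSym() also returns PC scores in shape space.
gpa <- Morpho::procSym(lm_array)
pc_scores <- gpa$PCscores
```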
3D geometric morphometric analysis. 3DGM is now a widespread set of methods for the quantitative analysis of stone artifact shape variability [1,38,45,46,[72][73][74][75][76][77][78]. I applied it here with the intention of revealing patterns of variability within the Lichtenberg Keilmesser type and of further analyzing the variability of the active edges.
The artifacts from Lichtenberg were 3D scanned using an ARTEC structured light scanner. The 3D dataset of Pouch, Löbnitz and Goitzsche was generated with a BIR Actis 225/300 CT scanner at resolutions of 36 to 69 μm [1].
For the 3DGM analyses, I collected 5 fixed landmarks (Fig 3A) at the following positions: the tip, the proximal end of the working edge, the dorsal and ventral inflection points between the base and the back, and the inflection point between the back and the distal posterior part. These points are present on all specimens in the dataset. Together with 67 semi-landmarks, equally spaced with the geomorph [52] package in R, they define 8 curves: the working edge, the dorsal and ventral outline of the base, the inflection between the base and the back, the dorsal and ventral outline of the back, and the dorsal and ventral outline of the distal posterior part. The 3D outline shape is able to capture the morphology of the most important aspects (e.g., thickness, extension) of the tools' individual techno-functional parts as well as the asymmetric shape of Keilmesser. Additional surface landmarks [73,79] were not needed for the scope of the present analysis because it is not the surfaces but the edge configurations that define the Keilmesser-concept [12,13].
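As an illustration of the semi-landmark step, a hedged sketch of resampling one digitized curve into equidistant points with geomorph's digit.curves function follows; the curve data are toy values standing in for a densely digitized working edge.

```r
library(geomorph)

## Toy curve standing in for a densely digitized working edge (xyz points).
working_edge_raw <- cbind(x = seq(0, 100, by = 2),
                          y = sin(seq(0, pi, length.out = 51)) * 20,
                          z = 0)
tip_lm <- working_edge_raw[1, ]  # fixed landmark anchoring the curve

## 15 equidistant semi-landmarks along this curve (the study distributes
## 67 semi-landmarks in total over 8 curves).
working_edge_semi <- digit.curves(start = tip_lm, curve = working_edge_raw,
                                  nPoints = 15, closed = FALSE)
```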
I applied Procrustes superimposition to standardize the orientation, scale and location of the landmark dataset [80]. In the next step, I analyzed the morphological variability in shape space with a principal component analysis (PCA) on the translated dataset of individual landmark configurations. I further conducted an additional cluster analysis on the principal component scores using the kmeans function in R with 1000 iterations and 10 random starts. The parameters of the function were adjusted until kmeans was stable and revealed the same clusters in every run. The application of the cluster analysis does not alter the patterns of the PCA result; it serves here mainly as a quantitative aid to visualize clusters within the PCA result and to structure the interpretation. In a further step, I calculated the mean shapes of the individual clusters for comparison and to reveal patterns of morphological variability.
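The superimposition, PCA and cluster steps could look as follows in R; the seed is my addition for reproducibility, and k = 8 anticipates the cluster result reported below. The objects `pc_scores` and `gpa` come from the sketch above.

```r
## kmeans on the PC scores with the stated parameters:
## 1000 iterations, 10 random starts.
set.seed(123)  # hypothetical seed; any fixed seed makes runs reproducible
km <- kmeans(pc_scores, centers = 8, iter.max = 1000, nstart = 10)

## Mean shape per cluster, for comparing group morphologies:
## gpa$rotated holds the superimposed configurations (p x 3 x n).
mean_shapes <- lapply(split(seq_len(dim(gpa$rotated)[3]), km$cluster),
                      function(i) apply(gpa$rotated[, , i, drop = FALSE],
                                        c(1, 2), mean))
```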
To analyze which parts are the most variable, I conducted the 3DGM analysis on two parts of the Keilmesser individually: (1) the working edge, and (2) the distal posterior part together with the back. I focused here on the active parts of the tools, as their morphologies are most affected and altered by retouch. The distal posterior part is also interpreted as an active edge, as the distal part often carries a sharp working edge as well. Additionally, for the distal posterior part the curves of the back were included, as the morphology of the distal posterior part can only be understood in relation to the back. Important, for example, are the angle between the back and the distal posterior part and the length of the distal posterior part in relation to the extension of the back [51].
Edge angle measurements. The package Lithics3D by Pop [53] provides a function that automatically calculates edge angles from 3D models at equidistant fixed points along an edge. For my study, I chose 30 equidistant points and measured the angle at 5 mm from the edge (Fig 3B). As the back and the base are thick, often naturally prehensile parts by definition, I focused on the edge angles of the active parts, the distal posterior part and the working edge.
The edgeAngles algorithm computes the angles along a path defined by ordered surface coordinates, at a given distance (here: 5 mm) perpendicular to the path. The function works by first computing planes perpendicular to the edge using the curve.pp function. Once these planes have been obtained, mesh edges that intersect the planes are identified with the edgesOnPlane algorithm. In a subsequent step, edgeAngles uses the e2sIntersect function to compute the intersection points of these mesh edges with a sphere of the specified radius (in mm) to identify the locations where the mesh thickness should be measured. The intersections with the greatest distances between them are then used to measure mesh thickness, and the angles are then computed using simple trigonometry [53]. There was only one case from Lichtenberg (54/45-8-64) where the edge angles could not be measured. This artifact was refit from two fragments
(transversally broken), and because of a small gap on the working edge, the algorithm for the automatic measurement failed.
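A hedged sketch of the automated edge angle measurement follows. Lithics3D is distributed through the author's repository rather than CRAN, and since I am inferring the call from the description above, the argument names are assumptions and should be checked against the package documentation.

```r
## Hedged, illustrative sketch; argument names inferred from the text.
# remotes::install_github("cornelmpop/Lithics3D")  # repository assumed
library(Lithics3D)
library(Rvcg)  # for reading the 3D scan

mesh <- Rvcg::vcgPlyRead("keilmesser.ply")  # hypothetical scan file

## edge_path: ordered xyz coordinates of the 30 equidistant points along
## the working edge, assumed to be prepared beforehand (e.g., from the
## digitized curve). The angle is measured 5 mm from the edge.
angles <- edgeAngles(mesh, edge_path, m.d = 5)
summary(unlist(angles))
```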
In addition to the edge angle analysis, algorithms of the package Lithics3D were applied to automatically measure the maximum length, width and thickness of each artifact (Table 2).
Technological description
Surface shaping, which the knappers carried out both bifacially and unifacially [1], was done directly from the working edge, from the base, and from the back (Fig 4). The latter is common in LMP assemblages from central Europe, as suggested by Iovita [33] and demonstrated by refits from Pietraszyn 49a [22]. Shaping from the back, often carried out on the flat ventral side, thins the entire tool volume and is applied in the initial stages of tool manufacture but also as a resharpening solution when the piece gets proportionately thicker [33]. More detailed examples are given in the section on edge angles and Keilmesser reduction below. Prior to the final edge regularization (Fig 4), the surface along the lateral working edge was thinned more precisely with removals directly from the working edge, equivalent to the second non-KMTB (re)sharpening solution after Iovita [33].
The distal posterior part is often angled towards the working edge to form, together with the latter, an often rounded, more or less pointed tip. Only in a few cases (Table 2; Fig 2:3) is the distal posterior part rather steeply angled and/or very round with no pointed tip. The thinning of the distal volume, i.e., the tip, is a very common feature across all assemblages of the dataset (Table 2). This thinning was mostly realized from the distal posterior part: either perpendicular to the working edge from the middle and proximal part of the distal posterior part (Fig 4:1a, 4:2a, 4:3a, 4:4a and 4:6a) and/or parallel to the working edge and struck from the distal edge of the distal posterior part (Fig 4:1b, 4:2a, 4:3b, 4:4a and 4:6b). One Keilmesser from Pouch (Fig 4:1b) shows a removal directly along the working edge that may represent a former tranchet blow. However, this cannot be proven, as the removal is only partly preserved and therefore belongs to an earlier stage of distal thinning before the tool was potentially resharpened. The latter is also evidenced by a neighbouring, highly reduced shaping scar along the same edge (Fig 4:1b). Fig 4:3 illustrates that removals from the distal posterior part could also thin out the entire piece. Neanderthals designed the distal posterior part as a striking platform for these surface removals. This was achieved either by coarse or fine preparation, a thick edge, a thick natural surface, or an intentional break (Fig 4:6b). The latter was only observed in Lichtenberg.
The base and the back consist mostly of natural surfaces or bear some modifications by coarse retouch. The base is often unworked. However, in some specimens the base was retouched as a striking platform for shaping (Fig 4:2) or was modified by non-invasive retouch on the surfaces (Fig 4:4).
The lateral working edge is predominantly convex, although there is some variation. I will come back to this in the section about the 3DGM results below.
3DGM
Fig 5 displays the first two principal components in shape space of 46 bifacial and unifacially shaped tools. The center of the entire plotting area has the highest density of tools. Furthermore, the density graphs at the plot margins indicate that the tools from Lichtenberg scatter over the entire plot area and overlap with the tools from central Germany. Bearing in mind the limiting factor of low sample size for the distribution of artifacts from Pouch and Goitzsche, this overlap suggests a strong relatedness of the tool designs from both Lichtenberg and the central German assemblages Pouch, Löbnitz, and Goitzsche (Fig 1). However, there are some differences within the central German dataset. The artifacts from Goitzsche and Pouch scatter on the left half of the PC1 axis, whereas the Löbnitz tools are distributed more to the right. But regarding PC2 they are all, including Lichtenberg, centered on the axis. Plotting maximum length against PC1 for the complete dataset reveals a significant relationship. To evaluate if this is also the case for Keilmesser only, I excluded the handaxes from this analysis. We already saw that they form a separate morphological cluster at the upper extreme of PC1 (Fig 5), separating them from Keilmesser. Without the handaxes, the relationship is no longer significant (Fig 6A). In other words, the morphological variation of Keilmesser is independent of size and, as a final consequence, also of a decrease in size during reduction. This is reinforced by the result for length in relation to PC2, where there is likewise no significant relationship (Fig 6B). On the other hand, the separation of handaxes from Keilmesser is not only due to shape, but also depends on handaxes being different in size.
Due to the low sample size of handaxes, the robustness of the result (Fig 6A) for the relationship of size and PC1 needs to be inspected further. Therefore, I resampled the data 1000 times with replacement, excluding 10 specimens in each run. The results are listed in Table 3 and demonstrate that when handaxes are included, the relationship of size and PC1 is
significant in 75% of the cases. In contrast, assemblages without handaxes give a significant result for the relationship of size and PC1 in only 19% of the cases. These results confirm the initial observation that the shape variability of Keilmesser is mostly independent of size.
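The resampling check could be implemented along the following lines; the data frame and column names are stand-ins, and the exact resampling scheme of the original analysis may differ in detail.

```r
## Robustness check: in each of 1000 runs, 10 specimens are dropped at
## random, the size~PC1 regression is refit, and the share of
## significant runs is recorded (cf. Table 3).
run_check <- function(dat, n_runs = 1000, n_drop = 10, alpha = 0.05) {
  p_vals <- replicate(n_runs, {
    keep <- sample(seq_len(nrow(dat)), nrow(dat) - n_drop)
    fit <- lm(PC1 ~ maxLength, data = dat[keep, ])
    summary(fit)$coefficients["maxLength", "Pr(>|t|)"]
  })
  mean(p_vals < alpha)  # proportion of significant runs
}

## run_check(dat_with_handaxes)    # ~0.75 reported in Table 3
## run_check(dat_keilmesser_only)  # ~0.19 reported in Table 3
```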
The cluster analysis on the principal component scores revealed 8 different groups. All of the groups incorporate tools from at least two sites, suggesting that the variability within the dataset is not structured by site. Of course, this result is only tentative, as some sites have smaller sample sizes than others. Further, it has to be kept in mind that these groups mainly serve here as an aid for interpreting the patterns of the PCA result and for visually structuring the shape variability. The groups are not fixed "natural" clusters and depend strongly on the parameters of the kmeans function. In other words, the groups do not represent sub-types. Furthermore, the groups do not alter the result of the PCA, i.e., specimens that plot closer together nevertheless share more shape similarities than tools that plot further from each other. Additionally, it is important to keep in mind that PC1 and PC2 represent a shape continuum in which the tool morphologies rather form gradients. Therefore, an alternative way of inspecting the morphological relatedness is presented below. I interpret the variability in the clusters tentatively, based on their mean shapes, as follows: 1. Group 1 consists exclusively of handaxes. They differ from Keilmesser in having a symmetrical tip formed by two lateral working edges. One of the working edges is slightly shorter and connected to a short back, and is interpreted here as equivalent to the distal posterior part of Keilmesser. Similar to the results of my previous study [1], the handaxes form a rather tight group within the plot. This was an expected result and suggests that the 3DGM is in fact measuring the aspects of edge variability of interest here. The morphological separation of handaxes from Keilmesser is reinforced by the fact that they plot outside the highest density areas of the scatter plot (Fig 5).
2. Group 2 is represented by a single specimen that is an outlier within the dataset.
Although it consists of the morphological parts defined for Keilmesser, the back is elongated and concave, the distal posterior part is very short, and the working edge is extremely convex.
3. The Keilmesser of Group 3 are rather elongated, with a short and steeply angled distal posterior part and a straight back.
4. Group 4 forms a rather tight cluster. The mean shape points to an oval shape with a symmetric tip. This symmetrical tip is typical for handaxes, but the distal posterior part is rather short here compared to handaxes. The base and the back are relatively thin in Group 4. One of the Keilmesser that Veil [8,14] would classify as a Faustkeilblatt (Fig 2:7) falls into this group.
5. Group 5 comprises specimens with a tendency towards an overall elongated shape, a long, broad and convex back, and a short and steeply angled distal posterior part.
6. Group 6 forms the major part of the highest-density scatter in the plot. The mean shape of these Keilmesser tends to have a narrow base, a broad back and a longer distal posterior part than Groups 2, 3 and 5. The distal posterior part is straight instead of slightly convex, and together with the working edge it forms a more pointed tip.
7. The distal posterior part of Group 7 is similarly straight as in Group 6, also forming a pointed tip. In contrast, however, the back is narrower here and the base longer. Overall, Groups 6 and 7 are fairly similar.
8. The mean shape of Group 8 is characterized by an elongated distal posterior part, a rounded tip, and a relatively short back which is angled towards the proximal side of the Keilmesser. With these morphological features, the group is close to handaxes, reinforced by its position in the plot closest to the latter. Group 8 also includes one of the Keilmesser that Veil [8,14] would classify as a Faustkeilblatt (Fig 2:5).
In addition to these groups, I also calculated the mean shape for the highest-density area of the scatterplot. This serves here as an additional measure of morphological relatedness, independent of the cluster analysis. 13 out of 46 tools are concentrated in this area and represent the Keilmesser with the closest shape relation within the dataset. The highest-density area is represented by Group 6 and the lower left part of Group 7. Further, this group is dominated by Keilmesser from Lichtenberg and Pouch, but one specimen from Löbnitz and two out of three artifacts from Goitzsche are present as well. The mean shape of the highest-density area resembles the definition that Veil [8] gave for the "ideal" Lichtenberg Keilmesser: an oval outline shape with a longitudinal symmetry, especially in the tip region, a convex working edge that forms a retouched tip at the distal part, and a relatively long back. In light of the data presented so far, this may point to the presence of an underlying form-function template for these tools.
The 3DGM result does not clearly confirm the separation of the tools from Lichtenberg into Keilmesser and Faustkeilblätter as suggested by Veil [8]. Although handaxes form a tight cluster outside the highest-density areas and are generally larger, there seems to exist a shape continuum along PC2: from elongated pieces with a long back and short distal posterior part on the left, to broader pieces with longer distal posterior parts and more symmetrical tips (Faustkeilblätter) at the center right, through to handaxes with symmetrical tips, oval shapes, long distal posterior parts and short backs on the right. Faustkeilblätter also plot together in groups with slightly different Keilmesser variants. This rather confirms Jöris' [13] similar interpretation of a shape continuum between these tools.
Despite the morphological differences, the convex working edge is common to the mean shapes of all groups. The most variable parts instead seem to be the distal posterior part, the base, and the back. To inspect this observation further, I conducted the 3DGM analysis individually on the back and the distal posterior part on the one hand, and on the working edge on the other (Fig 7).
The working edge shows low variability, as most of the tools with a convex working edge shape are concentrated in the center area of the plot (Fig 7A). More straight working edges tend to plot towards the left part of the PC1 axis, whereas straight-convex edges are situated more in the lower part of the PC2 axis. One specimen from Löbnitz has an irregular edge shape and is separated at the upper extreme of PC2. Two tools with extremely convex working edges plot outside the main cluster at the upper extreme of PC1; one of them is again the specimen of the outlier Group 2 within the main PCA (Fig 5). Fig 7B shows the result for the first two principal components in shape space of the distal posterior part and the back. In contrast to the result for the working edge, the shapes of the distal posterior parts and the backs form no clusters and scatter over the entire plotting area. Only the handaxes, with their elongated distal posterior parts, are separated to the right along PC1, confirming that their distal posterior part morphologies differ from those of Keilmesser. The result suggests that the highest variability of the Lichtenberg Keilmesser is indeed concentrated in the shape of the distal posterior part and its relation (length, angle) to the natural morphology of the back. The latter, of course, also contributes a high degree of variability.
I could show that the working edge is relatively constant in its convex shape, confirming the observations made by Veil [8,14] and Jöris [13] for the Lichtenberg Keilmesser. In contrast, the second active edge, the distal posterior part, is a highly variable part of the tool in its morphology. According to the hypothesis by Iovita [33], this part (and parts of the back) is used as a striking platform for thinning the distal volume of the tools, an observation that is indeed common in the dataset (Fig 4, Table 2). Subsequent thinning may, in my opinion, alter the length and shape of the distal posterior part, causing variability. In contrast, Veil [8] defines the distal posterior part as a fixed extension of the working edge around the tip that forms a second sharp edge. To evaluate these two contrasting views, I present the results for the edge angles of the distal posterior part and the working edge in the next section.
Edge angles
The boxplots in Fig 8 compare the edge angles of the distal posterior part and the working edge for the tools from Lichtenberg and from central Germany. The edge angles of Keilmesser are generally larger on the distal posterior parts than on the working edges, whereas for handaxes the angle ranges mostly overlap. The latter was expected, as handaxes are defined as having two working edges. An exception among the Keilmesser is Löbnitz, where the edge angles are generally higher compared to the other assemblages. Furthermore, the ranges of the angles for the distal posterior parts and the working edges overlap more often in Löbnitz than in the other assemblages. This confirms the observation stated earlier that the edge angles of Löbnitz have to be regarded with caution, as there is a high potential for post-depositional edge damage in this assemblage. Similar observations hold for the Goitzsche specimens. Therefore, both assemblages are excluded from the following edge angle analyses, and I work only with the excavated assemblages Lichtenberg and Pouch. The remaining handaxe from Lichtenberg is excluded because of its sample size of one. From now on, I will especially focus on the Keilmesser as the main subject of the study.
The angles of the distal posterior part for the Keilmesser of Lichtenberg and Pouch are centered between 57.5˚ and 78.5˚. The larger edge angles of the distal posterior part suggest a different function for this active edge compared to the working edge with its relatively lower angles. The larger angles point towards a function such as a striking platform and/or an extension of the prehensile part of the tool. However, Veil [8] and Jöris [13] observed a second sharp edge in the distal portion of the distal posterior part on the Lichtenberg tools. To evaluate this observation, we need to look at the distribution of the edge angles along the active edges of Keilmesser.
The mean edge angle graphs for Keilmesser from Lichtenberg and Pouch in Fig 9 can be viewed as edge morphology translated into angles. Plotted here in actual edge direction, the edge angle values visually resemble the distal morphology of Keilmesser almost perfectly. In other words, there is a high potential that edge angles and edge angle management on the active edges influence tool morphology. This pattern is not only visible in Lichtenberg, but also in the assemblage of Pouch (note, however, that in Pouch the distal posterior part can also be thinner than the distal part of the working edge). This means that this morpho-technological principle was applied in different regions of the northern central European Plain.
The edge angles are distributed differently on the two active edges (Fig 9). To draw inferences about differing edge functionality, a morpho-functional threshold is set here at 60˚, because acute edge angles <60˚ are interpreted as sharp and suitable for cutting tasks [81]. The working edge has a relatively even distribution of angles, suggesting constant edge functionality along the entire edge. The mean angles are distributed around 49˚, i.e., a sharp edge suitable for cutting tasks. In contrast, the angles of the distal posterior part decrease towards the tip. With the mentioned threshold of 60˚, I can divide the distal posterior part into two morpho-functional parts: (1) approximately two thirds of the edge have large angles, extending the back as a prehensile part on the one hand, and serving as a striking platform (natural and/or retouched, see Fig 4 and Table 2) for distal thinning on the other; (2) the distal third of the distal posterior part is below 60˚ and constitutes a sharp tip together with the distal part of the working edge. From a functional point of view, this sharp tip seems to have been important for certain cutting tasks. A slight difference between the assemblages is that the sharp part is shorter in Lichtenberg than in Pouch. Together, both morpho-functional parts characterize the distal posterior part as a multifunctional edge for prehension, distal thinning, and potentially cutting.
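The threshold logic can be illustrated with a small sketch; the angle vector is a toy stand-in for the 30 measurements along a distal posterior part, ordered from proximal to distal.

```r
## Classify each angle measurement along an edge by the 60-degree
## morpho-functional threshold discussed above.
classify_edge <- function(angles, threshold = 60) {
  ifelse(angles < threshold, "sharp (cutting)", "steep (platform/prehension)")
}

## Toy values mimicking the decreasing-angle pattern of the distal
## posterior part: steep proximal two thirds, sharp distal third.
dpp_angles <- c(seq(80, 65, length.out = 20), seq(62, 45, length.out = 10))
table(classify_edge(dpp_angles))
```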
Edge angles and Keilmesser reduction
In the final part of this study, I combine the results from the technological, 3DGM and edge angle analyses. I focus here on the working edge as the main active edge, assuming that high edge angles are an indication of decreased edge functionality. My first aim is to inspect whether the grouped morphologies of Keilmesser are related to subsequent reduction/resharpening, or, in other words, whether the mean shapes of the groups are influenced by resharpening. I already showed above (Fig 6) that the shape variation of Keilmesser is independent of size, and I now want to analyze this aspect further by looking at the edge angles. Because of issues with edge angle preservation in the collected assemblages, I include here only the excavated artifacts. Fig 10 shows the groups from the 3DGM results together with the median working edge angle for each tool and each group. The median group angles, with values between 47˚ and 51.5˚ (for groups with more than two artifacts), indicate that no group of more than two artifacts consists exclusively of more or less reduced pieces. The tools with their differing median angles are evidently distributed evenly among the groups. This is further underpinned by the result of a Kruskal-Wallis rank sum test (a one-way ANOVA on ranks) of the median working edge angle by group: at a 0.05 significance level, I conclude that the median working edge angles per specimen of the groups with more than two artifacts stem from identical populations. In other words, there is no relation between shape change and working edge angles of Keilmesser in the dataset. The angles of the working edge are relatively constant compared to the overall shape. Conversely, this also implies that the specific morphological characteristics of Keilmesser can be found on tools with differing working edge angles.
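For transparency, the group comparison above corresponds to the following call in R; the data frame is a toy stand-in for the per-specimen median working edge angles and kmeans group assignments.

```r
## Toy data standing in for the real per-specimen values.
set.seed(1)
dat <- data.frame(
  Median_Angle_WE = rnorm(30, mean = 49, sd = 3),
  Group = factor(sample(paste0("G", 3:8), 30, replace = TRUE))
)

## Kruskal-Wallis rank sum test of median working edge angle by group.
kruskal.test(Median_Angle_WE ~ Group, data = dat)
```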
But does that mean that there was no heavy reduction or resharpening in either assemblage? To inspect this, let's take a look at the reduction pattern in the dataset. As a measure of reduction, I use the relative thickness index (RTI) [61,82], calculated as: RTI = sqrt(maxLength × maxWidth) / maxThickness. This index represents the thickness of each artifact in relation to its surface area. If the value of this measure decreases, the artifact gets thicker in relation to its surface area, which may indicate an increase in overall reduction. However, some caution is needed in the interpretation of the RTI, as artifacts manufactured on blanks with a naturally thick back can also have high maximum thickness values. The summary statistics of the RTI for the Keilmesser from Lichtenberg and Pouch show that the data are not normally distributed. This is related to an outlier, a Keilmesser or Faustkeilblatt (Fig 2:7) which was manufactured on a very thin blank. If this outlier is removed, the values are more normally distributed. The RTIs of the Keilmesser fall within a relatively narrow range; most of the specimens have values between 3.46 and 4.3. But what does this mean for our dataset? Let's inspect the relationship between the measure of reduction and the median edge angle on the working edges. If the outlier is included, we see a significant relationship between the two variables at a significance level of 0.05 (p = 0.02). However, the low adjusted R-squared (0.154) indicates high variation in the data and suggests a rather weak relationship. If the outlier of the exceptionally thin Keilmesser is removed (Fig 11), the relationship loses its significance. In other words, as the artifacts got thicker, whether through edge modification during initial manufacture or through resharpening, the edge angles were kept rather constant. Based on the data presented here, I infer that Neanderthals had ways to keep the edge angle low and preserve the functionality of the working edge during manufacture, use and subsequent reduction. This is illustrated by the values for the angles of the working edges. As we already saw in Figs 8 and 10, the angles of the working edges of Keilmesser from Lichtenberg and Pouch are mainly below 60˚, with a median of 49˚. There are only two artifacts with a median angle of 60˚ (example 53/56-6-39 below) and 61˚ (Fig 2:6), respectively, and even those pieces only just exceed the defined threshold of <60˚ for cutting purposes. In the following, I present two examples from Lichtenberg to illustrate strategies of edge angle maintenance, one successful (Fig 12:1) and one not (Fig 12:2).
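Before turning to the examples, a minimal sketch of the RTI computation and the regression just described; the column names are hypothetical stand-ins for the automatically measured dimensions.

```r
## Relative thickness index: thickness relative to surface area.
rti <- function(max_length, max_width, max_thickness) {
  sqrt(max_length * max_width) / max_thickness
}
rti(110, 60, 21)  # e.g., a 110 x 60 x 21 mm piece gives ~3.9, the dataset mean

## Regression of median working edge angle on RTI, as reported above
## (p ~ 0.02 and adjusted R^2 ~ 0.15 with the thin outlier included,
## not significant without it):
## fit <- lm(Median_Angle_WE ~ RTI, data = dat)
## summary(fit)
```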
Keilmesser 56/47-14-42 (Fig 12:1) looks technologically heavily reduced compared to the other artifacts: it is relatively small and narrow, and the flake scars on the dorsal face suggest that the use-life of the piece started at a larger size. However, the median working edge angle of 53˚ would not suggest a heavily resharpened tool. A closer look at the artifact reveals that Neanderthals repeatedly took care to maintain the angle of the working edge to preserve its functionality. As observed by Iovita [33], they thinned the volume from the distal posterior part, but here alternating on the dorsal and ventral side. As the Keilmesser got thicker and narrower, they reduced the volume of the entire ventral face, also using the back as a striking platform. This increased the RTI to a value of 3.9 (the mean of the dataset), and the angle of the working edge could be kept low. In the distal half of the tool, the removals from the back and the distal posterior part were extensive enough to thin the entire surface including the working edge; the latter was subsequently regularized by dorsal retouch. In contrast, on the proximal half of the tool, thinning removals from the back did not reach the working edge. Here, the knappers changed their strategy: through ventral edge retouch, they created a striking platform for subsequent dorsal thinning removals perpendicular to the working edge.
In contrast, the resharpening of the unifacially shaped Keilmesser 53/56-6-39 (Fig 12:2) seems to have been given up when the working edge angle became too steep (median 60˚). It also has one of the lowest RTIs, with a value of 2.8. The knappers tried to thin out the distal volume from the distal posterior part with two large removals, but the two removals hinged in the thick center of the piece. Further dorsal thinning from the distal posterior part was therefore no longer possible. The striking platform of the distal posterior part was oriented to remove flakes from the dorsal face, which also precluded thinning of the ventral face as an alternative solution. The thick, cortical and irregular back likewise provided no suitable angles for surface removals.
These two examples and the results of the reduction, edge angle and 3DGM analyses show three things: (1) Lichtenberg Keilmesser seem to have had a long use life and their functionality was maintained as long as possible, (2) the tools may have been ultimately discarded when the angle increased above 60˚ and the edge lost its functionality (cutting), and (3) Neanderthals used flexible thinning strategies, employing the back and the distal posterior part as striking platforms, to keep the working edge angle low.
Discussion
I could show in the present analysis that: (1) Lichtenberg Keilmesser form morphological sub-groups in the PCA result that are, keeping sample size in mind, not structured by the sites within the dataset, (2) Keilmesser shape and shape change are independent of size, (3) there is a shape continuum at least between Keilmesser and Faustkeilblätter, (4) handaxes form a tight cluster outside the density areas for Keilmesser due to their symmetric morphology and difference in size, (5) the convex lateral working edges are rather standardized in angle and shape, (6) the distal posterior part in relation to the back is the most variable active part in its morphology and angle, and represents a multi-functional edge, (7) the edge angle distribution along the active edges influences edge shape, (8) the establishment and maintenance of a sharp and thin distal tip seems to be an important feature of these tools, (9) edge angle maintenance is not related to overall shape alteration, and (10) tools may have been ultimately discarded when the functionality of the edges (i.e., the angles) could not be maintained. In the following, I will discuss some of these aspects further.
Form-function and angles
The 3DGM analysis has shown that the Lichtenberg Keilmesser has an inherent morphological variability across assemblages, which resulted in specific, although artificial, sub-groups. This confirms similar earlier observations by Jöris [13] and Veil [8]. At the same time, the presence and positions of the individual techno-functional units stay constant. Furthermore, these units are independent of size, which was already suggested by Iovita [37] and confirmed by Frick et al. [30,32]. The overall shape varies between more elongated and broader specimens, from short and angled distal posterior parts to elongated distal posterior parts with a symmetric tip area. The latter is represented by Faustkeilblätter, which lie within the shape variability of Keilmesser. In contrast to the distal posterior part, and as also noted earlier [8,13,14], I could confirm that the working edge is less variable in its convex shape.
In favor of Veil's idea of a specific form-function concept, I demonstrated that the mean shape of the highest density within the scatterplot of the first two principal components in shape space (Fig 5) resembles his ideal template for the Lichtenberg Keilmesser. In other words, the fixed morphological parts and the relatively similar shapes, independent of size, may suggest that Neanderthals had a specific concept in mind of how such a tool should look.
However, the analyses of the edge angles paint another picture. Firstly, working edge angles and working edge shape are independent of the varying Keilmesser morphology, and Neanderthals tried to keep the specific working edge characteristics constant during resharpening. In other words, the convex cutting edge was the most important active part of this tool, and the other tool units served to keep this functionality alive. I showed that increasing relative thickness of the tools did not commonly result in increased working edge angles. The back and the proximal part of the distal posterior part seem to fulfill primarily prehensile functions [56][57][58]. But both were often also modified, mostly by coarse retouch, into striking platforms to initially shape and later thin the surfaces of the tools (Figs 4 and 12). This design made it possible to create multifunctional edges with angles that enable (1) prehension but also (2) the thinning of the surfaces during initial shaping and for subsequent working edge angle maintenance [33].
Secondly, the distal posterior part and the back are less standardized in shape and angle than the working edge. For the latter, this might be mainly caused by natural variability, as the back often incorporates thick natural and only marginally modified surfaces into the tool design. The distal posterior part was used as a striking platform for thinning (natural or prepared, see Table 2), and Neanderthals took special care in the establishment and maintenance of a pointed or slightly rounded, thin and sharp tip. The distal posterior part therefore required three morphological criteria to enable this twofold functionality: (1) large angles at the proximal portion, (2) decreasing angles towards the distal portion, and (3) an orientation angled towards the working edge to create a tip and to enable thinning of the largest possible volume of the distal tip area. This morphology made possible removals varying in direction between parallel and perpendicular to the working edge. These morphological requirements and the related knapping behavior imply that angle configurations influence the distal tool morphology.
My results do not generally deny that the manufacture of prehensile parts, striking platforms, a convex cutting edge, and a sharp tip on a tool [12,13] is related to the idea of a specific form-function concept [8]. But the fracture mechanics and the specific angle configurations that were needed to create these functional units predetermined, in my opinion, its morphological realization. In other words, my data suggest that there may have existed a general template of how a Lichtenberg Keilmesser should look, but "it seems to be all about the angle": for Neanderthals, the goal of these tools was to establish an acute angle along the working edge and the tip, and to maintain these angles during use. In contrast, Neanderthals designed the distal posterior part to have a large edge angle. They realized this with a thick edge, a flat natural surface, intentional breaks, or a finely prepared striking platform. The purpose of the knappers was to establish an edge that they could use as a striking platform to thin the distal volume of the tool. This behavior resulted in tools with a rather constant morphology of the working edge and the tip, but a variable distal posterior part. In the course of intensive thinning and (re-)preparation of the edge, the distal posterior part potentially became longer and sharper, causing morphological variability and eventually leading to typologically different but strongly related [8,13] tool classifications, like Faustkeilblätter.
However, my data also suggests that Keilmesser stayed morphologically Keilmesser: they were ultimately discarded when the main working edge did not fulfill its primary functionality, i.e. when the edge angle increased over 60˚. With the data presented here, I could find no evidence that these specific tools were transformed into other tools, e.g., bifacial scrapers [49,50]. But a future study will analyze the relation between tools further, as I plan to incorporate more tool classes found in the Lichtenberg assemblage, like bifacial and leaf-shaped scrapers.
The Lichtenberg Keilmesser
Following the results presented here, the Lichtenberg Keilmesser should be understood as a dynamic tool concept rather than a static type. However, the Lichtenberg tool as a type is meaningful insofar as it refers to one solution for creating a tool with specific functionalities: cutting, prehension, and reusability. As I explained above, to establish and maintain its functionality, certain angles were created by the knappers along the active edges. This behavior resulted in specific shapes and positions of the active parts and created the standardized or template morphology of this Keilmesser concept.
Reducing tool shape to edge angle creation and maintenance does not necessarily invalidate archaeological groups or named stone tool industries (NASTIES) [83] that are based on the occurrence of specific bifacial tool concepts, like the LMP Keilmessergruppen [7,8,12,13,48,[66][67][68]84] or the contemporaneous Mousterian of Acheulian Tradition (MTA) of western Europe [60]. Recently, Uthmeier [50] argued that the finished tool itself may not have served as a social marker for group identity (in the sense of Weißmüller's [85] finished tools as symbolic markers). He assumes that as tool manufacture and maintenance seem to be learned by social interaction, "[…] identical or similar manufacture of lithics is another way to confirm that all group members share the same worldview." (Uthmeier 2016:67). Recently, Frick and Herkert [32] made similar observations for the conceptually uniform but highly dynamic production of Keilmesser with tranchet blow in their research area Saône-et-Loire, France. In the present case study, the production, resharpening and edge angle maintenance strategy of the Lichtenberg Keilmesser is conceptually different from that of Keilmesser resharpened by a tranchet blow struck parallel to the distal part of the working edge [30,32,48]. So far, I have not observed any clear evidence for the frequent application of the tranchet blow in the assemblages analyzed here, or in neighboring LMP assemblages like Königsaue or Salzgitter-Lebenstedt, both of which I have analyzed [61,62]. For the latter, Pastoors [16] reports the occurrence of the technology, but only on three bifacial scrapers and not on Keilmesser; additionally, he found six flakes resulting from tranchet blows within the assemblage. Furthermore, there are two questionable artifacts from Pouch that may be related to the tranchet blow technique: the Keilmesser described above and displayed in Fig 4:1, and a flake [19]. However, these examples are single occurrences and can therefore not serve as evidence for the frequent application of the tranchet blow as a resharpening strategy. In other words, the presence of the Lichtenberg Keilmesser-concept seems mostly to exclude the conceptually different solution of tranchet blow edge modification and resharpening. This suggests the existence of shared ideas and concepts within a specific Neanderthal life-world or tool manufacture domain. Of course, as the tool represents only a single aspect of Neanderthal material culture and daily life [86], the Lichtenberg Keilmesser concept is not necessarily the main marker of identity within a specific Neanderthal realm. But its presence and manufacture strategies might have been shared by a late Middle Paleolithic Neanderthal community of yet unknown size, within an estimated geographical range across the northern European Plain from Germany to western Russia [15-18, 20, 22-27] and a time depth potentially between MIS 5a and MIS 3 [28,84], or MIS 3 only [87].
Conclusion
Here I have re-analyzed the Lichtenberg Keilmesser, evaluating the idea of the Keilmesser as a specific form-function concept versus a pragmatic solution for maintaining edge angles on a tool. Using a combined approach of 3DGM and edge angle analysis, I could draw inferences about shape variability, edge morphology variability, edge angle distribution, and reduction/resharpening and their influence on shape. The tool consists of two prehensile units, the proximal base and the lateral back, and two active edges, the distal posterior part and the working edge opposite the distal posterior part and the back. The two active edges together form a sharp distal tip. In my analysis, I focused on the two active edges, as the base and the back incorporate natural surfaces to a large extent and are driven by natural variability. However, the distal posterior part was analyzed together with the back, as its morphology and length can only be understood in relation to the latter.
My results show that the morphology of the Lichtenberg Keilmesser is predominantly driven by edge angle configurations that enable the functionality of the prehensile and active units of the tool. I could identify two morpho-functionally fixed edges, the convex working edge and the sharp distal tip. Especially the working edge has a low variability in shape and angles. This implies that during reduction, resharpening, and reuse, Neanderthals tried to keep these characteristics constant to preserve functionality. An edge angle of 60˚ is understood as the upper threshold for cutting functionality. As the median angles of Keilmesser from the well-preserved excavated assemblages of Lichtenberg and Pouch exceed this threshold only once, by about 1˚, I infer that tools might have been ultimately discarded when the primary functionality of the working edge could not be maintained. This is further reinforced by the results of the reduction analysis, where I illustrated, with two examples, successful (the angle was kept low) and unsuccessful (the working edge angle could not be maintained) resharpening strategies.
Contrasting the results for the working edge, I found that the second active edge, the distal posterior part, represents a multifunctional edge. Together with the back, it has prehensile functions and serves as a striking platform for surface thinning. Additionally, its distal part was designed as a sharp edge to create a sharp tip together with the working edge. The twofold functionality of the edge was realized by the knappers with specific edge angle configurations: large angles at the proximal portion of the distal posterior part and decreasing angles towards its distal end. The former, i.e., the striking platform, was achieved by coarse or fine preparation, a thick edge, a thick natural surface, or an intentional break (Fig 4, Table 2). Subsequent shaping and resharpening of the distal end of Lichtenberg Keilmesser led to variations in the length and shape of the distal posterior part. Therefore, the distal posterior part has a higher range of variability than the working edge and is responsible for a large share of the morphological Keilmesser variability. In conclusion, the distal posterior part can be understood as a unit for maintaining the functionality of the two fixed active edges, the working edge and the tip.
Although it seems to be "all about the angle", my analysis does not necessarily argue against a definition of this tool as a type. However, this type should not be seen as static, but rather as a dynamic concept. I understand the Lichtenberg Keilmesser as a conceptual solution for creating and maintaining certain functional purposes with the help of specific edge angle configurations. The morphological requirements for these configurations result in shape characteristics that we today identify and recognize as a type or a form-function concept. Nevertheless, I cannot rule out that Neanderthals had a template in mind for the positioning of prehensile and active parts when they manufactured a unifacially shaped or bifacial Lichtenberg Keilmesser. | 2020-10-08T13:05:51.440Z | 2020-10-06T00:00:00.000 | {
"year": 2020,
"sha1": "e2affc332af480b932596ed0782a664153fe0154",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0239718&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "395232572adbbee588d490d022bd34320f564ea1",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
259520051 | pes2o/s2orc | v3-fos-license | Knowledge assessment for nurses and medical staff regarding incidence of cystic fibrosis
This study, conducted in Basrah city, aimed to assess nurses' knowledge regarding cystic fibrosis. An assessment questionnaire was used to reach this aim, covering demographic and scientific information. Data were statistically analysed for percentage and mean of score. The results showed that female nurses (67%) outnumbered males; regarding educational level, most participating nurses held a BSc degree (51.9%), and 52.36% of them had less than one year of experience. Regarding the participants' knowledge, a significant mean of score was found overall, with some insignificant items concerning knowledge of clinical signs. Half (50%) of the participants' answers were significant regarding knowledge of the disease in terms of symptoms and clinical signs, while the same percentage was not significant regarding knowledge of the history of the disease. Based on these results, it was recommended to increase nurses' knowledge of the disease through curricula and to publicise the importance of the disease through publications.
Introduction
Cystic fibrosis is a life-limiting, recessive disease caused by mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) gene [1]. Cystic fibrosis (usually called CF) is an inherited disease. It causes certain glands in the body not to work properly. These glands are called the exocrine (outward-secreting) glands. Exocrine glands normally make thin, slippery secretions including sweat, mucus, tears, saliva and digestive juices. The typical measure of lung function is forced expiratory volume in 1 second (FEV1). FEV1 is a key predictor of life expectancy in people with cystic fibrosis, and optimising lung function is a major goal of care. Symptoms often appear in infancy and childhood, such as bowel obstruction due to meconium ileus in newborn babies [2]. CF is caused by a mutation in the CFTR gene [3,4]. Mutations may also lead to fewer copies of the CFTR protein being produced [5]. One mutation accounts for two-thirds (66-70% [20]) of CF cases worldwide and 90% of cases in the United States; however, over 1500 other mutations can produce CF [6]. About one in 46 Hispanic Americans, one in 65 African Americans, and one in 90 Asian Americans carry a mutation of the CF gene [7]. There is no known cure for cystic fibrosis [8].
Cystic fibrosis is the most common life-limiting autosomal recessive disease among people of European heritage [9]. In the United States, about 30,000 individuals have CF; most are diagnosed by six months of age. In Canada, about 4,000 people have CF [10]. Around one in 25 people of European descent, and one in 30 white Americans, is a carrier of a CF mutation [11]. Although CF is less common in other groups, roughly one in 46 Hispanics, one in 65 Africans, and one in 90 Asians carry at least one abnormal CFTR gene. Ireland has the world's highest prevalence of CF, at one in 1353 [12]. School teachers' role complements that of parents: during school hours, teachers are effectively the first responders in cases of disasters or emergencies, and they must be able to deal properly with health emergencies both in normal children and in children with special health care needs [20].
Material and methods
Fifty nurses (male and female) from Basra hospitals participated. An assessment questionnaire was designed to assess nurses' knowledge about cystic fibrosis, comprising written questions. Before administration, the items were distributed to college teachers for review. The questionnaire was divided into two main parts: socio-demographic characteristics and questions concerning cystic fibrosis. Data were collected and statistically analysed to obtain frequency, percentage and mean of score.
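To make the analysis step concrete, the descriptive statistics named above could be computed as in the following Python sketch. The response table, its column names and its 1/0 coding are invented placeholders, not data from the study.

```python
import pandas as pd

# Hypothetical responses: one row per nurse, one column per knowledge item,
# coded 1 = correct answer, 0 = incorrect answer.
responses = pd.DataFrame({
    "q1_definition":     [1, 0, 1, 1, 0],
    "q2_inheritance":    [1, 1, 0, 1, 1],
    "q3_clinical_signs": [0, 0, 1, 0, 1],
})

frequency = responses.sum()             # number of correct answers per item
percentage = 100 * responses.mean()     # percentage of correct answers per item
mean_of_score = responses.mean(axis=1)  # mean score per participant

print(frequency)
print(percentage.round(1))
print(mean_of_score.round(2))
```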
Results and discussion
Cystic fibrosis is associated with a shortened life span and impaired quality of life, and requires lifelong medical care as well as extensive support from relatives and friends, which may interfere with the normal daily life of both affected individuals and their relatives.
We investigated the nurses' knowledge concerning cystic fibrosis after introducing them to the disease through a publication that included brief scientific information, with results obtained from participation in a paper and electronic questionnaire. Regarding the demographic axis (Table 1), female nurses (67%) outnumbered males; regarding educational level, most participating nurses held a BSc degree (51.9%), and 52.36% of them had less than one year of experience. Cystic fibrosis (usually called CF) is inherited [13]. Regarding symptoms, questions 6, 8, 9, 11 and 12 in Table 1 were not statistically significant; these items cover clinical diagnostic indicators such as poor growth and poor weight gain despite normal food intake [14], and accumulation of thick, sticky mucus [15]. Mental health among people with any chronic illness, including cystic fibrosis, remains an important part of maintaining long-term health and quality of life [16,19].
Conclusions
The study concluded that the mean score for 50% of the answered questions was significant with respect to knowledge of the incidence of cystic fibrosis.
Compliance with ethical standards
Acknowledgments

Keywords: Cystic fibrosis; Nurses; Assessment; Medical staff
Disclosure of conflict of interest
There are no conflicts of interest
Statement of informed consent
The study did not involve individual human subjects; it was conducted within the scope of a university study.
"year": 2023,
"sha1": "327966343621491b3da7a0aa40ce50e58f0596d0",
"oa_license": "CCBYNCSA",
"oa_url": "https://wjarr.com/sites/default/files/WJARR-2023-1188.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "336eea1c5d9346ccd7da47702f8ef4033c6f0086",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
15584427 | pes2o/s2orc | v3-fos-license | Attaining the canopy in dry and moist tropical forests: strong differences in tree growth trajectories reflect variation in growing conditions
Availability of light and water differs between tropical moist and dry forests, with typically higher understorey light levels and lower water availability in the latter. Therefore, growth trajectories of juvenile trees—those that have not attained the canopy—are likely governed by temporal fluctuations in light availability in moist forests (suppressions and releases), and by spatial heterogeneity in water availability in dry forests. In this study, we compared juvenile growth trajectories of Cedrela odorata in a dry (Mexico) and a moist forest (Bolivia) using tree rings. We tested the following specific hypotheses: (1) moist forest juveniles show more and longer suppressions, and more and stronger releases; (2) moist forest juveniles exhibit wider variation in canopy accession pattern, i.e. the typical growth trajectory to the canopy; (3) growth variation among dry forest juveniles persists over longer time due to spatial heterogeneity in water availability. As expected, the proportion of suppressed juveniles was higher in moist than in dry forest (72 vs. 17%). Moist forest suppressions also lasted longer (9 vs. 5 years). The proportion of juveniles that experienced releases in moist forest (76%) was higher than in dry forest (41%), and releases in moist forests were much stronger. Trees in the moist forest also had a wider variation in canopy accession patterns compared to the dry forest. Our results also showed that growth variation among juvenile trees persisted over substantially longer periods of time in dry forest (>64 years) compared to moist forest (12 years), most probably because of larger persistent spatial variation in water availability. Our results suggest that periodic increases in light availability are more important for attaining the canopy in moist forests, and that spatial heterogeneity in water availability governs long-term tree growth in dry forests. Electronic supplementary material The online version of this article (doi:10.1007/s00442-009-1540-5) contains supplementary material, which is available to authorized users.
As a result, saplings of many species require periods of increased light (gaps) in order to reach the forest canopy (Denslow 1980; Brokaw 1985). Such high-light periods are probably responsible for the fast growth intervals ("releases") that are typically observed in diameter growth trajectories (Baker and Bunyavejchewin 2006). In tropical dry forests, on the other hand, understorey light levels are relatively high, due to a more open and lower canopy compared to moist forests (Holbrook et al. 1995; Murphy and Lugo 1986; Coomes and Grubb 2000). As a consequence, sapling growth is expected to be less limited by light, resulting in less severe growth suppressions, of shorter duration. Also, releases from suppressed growth conditions that result from gap formation will probably occur at a lower frequency in dry forests and be less strong than in moist forests. Thus, temporal variation in light levels does not strongly affect tree growth in dry forests (Fowler 1986; Nath et al. 2006). This is different for water availability, which may vary strongly among years (Bullock 1997), with important consequences for tree growth. For instance, the occurrence of periodic droughts causes strong drops in diameter growth during one to several years (Bullock 1997; Fichtler et al. 2004; Nath et al. 2006).
Thus, the main drivers of tree growth (light, water and nutrient availability) exhibit different temporal patterns in moist and dry forests (Fowler 1986; Terborgh 1992; Mooney et al. 1995; Nath et al. 2006). As a result, tree growth trajectories in dry forests are expected to show strong spiked year-to-year variation, while those in moist forest should reveal block-like patterns of extended periods of low growth and shorter periods of high growth. Apart from the differences in temporal pattern, the spatial variation in these drivers also varies between forest types. In moist forest, understorey light availability at one spot may strongly change over time due to canopy dynamics. By contrast, the spatial distribution of water (and nutrient) availability in dry forests remains relatively stable over long periods of time, as this is often related to spatial variation in topography and soil properties (Murphy and Lugo 1986; Ceccon et al. 2006; Nath et al. 2006). As a result, favourable conditions for sapling growth in moist forests (i.e. high light) are likely to change after a number of years or decades, while those in dry forests (i.e. high water and nutrient availability) are likely to persist over longer periods of time, perhaps even throughout a tree's life. This may have important implications for tree growth trajectories. Previous studies in moist forests revealed that growth differences among juveniles are maintained for some time (up to 20 years), but gradually disappear as temporal variation in light availability differs among individuals (Baker and Bunyavejchewin 2006). In dry forests, growth differences among trees may persist over longer time intervals if water (and nutrient) availability strongly governs tree growth, and if spatial variation in these drivers persists over time. But few studies have analysed variation in tree growth patterns in dry forests so far (cf. Nath 2007).
The hypothesis that light is less important for tree growth in dry forests than in moist forests has often been proposed (Fowler 1986; Smith and Huston 1989; Terborgh 1992; Mooney et al. 1995; Baker et al. 2003a), but has not been empirically tested so far. Testing this hypothesis is difficult as it requires information on tree growth and light levels from the same species in different forest types (e.g. dry and moist forests). In addition, tree growth data should cover long time spans as suppression-release cycles typically last for decades (van der Meer and Bongers 1996). Here we test this hypothesis by comparing long-term tree ring data from the Neotropical canopy tree Cedrela odorata in dry and moist forest. C. odorata is highly suitable for such a comparison as it has a wide geographical distribution across multiple tropical forest types and possesses clear annual rings (Brienen and Zuidema 2005). Our specific hypotheses are:
1. That saplings experience more and longer periods of low growth (suppression) and more and stronger growth increases (releases) in the moist forest than in the dry forest.
2. That variation in growth patterns prior to attaining the canopy is larger in the moist forest than in the dry forest.
3. That growth differences between juvenile trees persist for longer time periods in the dry forest due to strong and lasting spatial variation in water availability.
Study sites and species
This study was conducted in a tropical dry forest on the Peninsula of Yucatan, Mexico and a tropical moist forest in the Bolivian Amazon. Classification of forests into "dry forest" and "moist forest" follows Chave et al. (2005), and will be used from here on. The dry forest site is located in the state of Campeche, in the communal forest area of the Ejido Pich (19°03′N, 90°00′W). The moist forest is located about 50 km south of Cobija in the Pando Department (11°24′S, 68°43′W). The studied forests differ in the total amount of precipitation, length of dry season, soil type, and forest structure (Table 1; Fig. S1). In the dry forest, the terrain consists of low karstic hills with very good drainage and flat valley areas with clay and loam soils that occasionally inundate during the rainy season. Forest stature and deciduousness differ between hills and valleys, with higher stature and fewer deciduous trees in the latter. Terrain in the moist forest site is slightly undulating. Our study species, Cedrela odorata L. (Meliaceae; Spanish cedar), is a relatively light-demanding canopy tree, although it does tolerate shade. It occurs from northern Argentina (28°S) up to Mexico (26°N), in areas which vary in rainfall and soil type (Pennington and Styles 1975). Cedrela trees that reach the forest canopy differ in height, diameter and age, with higher values for the moist forest (Table 1). In the dry forest, it is most abundant on karstic hill sites, whereas no habitat preference is apparent in the moist forest.
C. odorata forms distinct and strictly annual rings, as proven by cambial markings (Worbes 1999;Dünisch et al. 2002) and correlations between ring-widths and climate (Brienen and Zuidema 2005). The formation of these annual rings is induced by the annual occurrence of a dry season of 3-6 months (cf. Table 1; Fig. S1) during which the species is completely deciduous, enters cambial dormancy and forms a terminal parenchyma band (Worbes 1999;Dünisch et al. 2002;Brienen and Zuidema 2005). It is one of many tropical tree species that produces recognizable and annual growth rings (e.g. Worbes 1999;Brienen et al. 2009).
Sample collection and ring measurements
Tree ring samples in the dry forest (Mexico) were collected in 2007 from an area of ≈32 ha of mainly undisturbed forest. Some patches within this area had been selectively logged <20 years ago at a low density (<5 trees/ha), and without a noticeable effect on forest structure. We collected ten stem discs from trees harvested in 2004 (>40 cm diameter at breast height; DBH) and increment cores from 70 trees >5 cm DBH in two to three directions.
Tree ring samples from the moist forest (Bolivia) were collected from an area of 850 ha in 2002, just after selective logging took place. The area had not been disturbed before. We collected entire stem discs from the bases of 60 felled trees of >60 cm DBH and increment cores in two to three directions from 94 trees >5 cm DBH. Samples were distributed evenly over diameter categories in both areas.
In both forests samples were taken at 25- to 130-cm height. Core and disc samples were air-dried and sanded until rings were visible. We measured ring widths using a ring-measurement device (VELMEX; Bloomfield, N.Y.) and stereomicroscope, in one to three directions depending on regularity of growth around the trunk. Ring-width measurements of different radii or cores were averaged and multiplied by 2 for conversion to diameter increments. We corrected for irregular growth forms that may lead to over- or underestimations of growth by calculating the difference between the summed diameter increments and actual tree diameter at sample height. A correction factor was applied to each increment value that was weighted by the maximum difference between the increments along the radii for each particular year [for details on applied methods see Brienen and Zuidema (2005)].
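As a rough illustration of this correction step, the sketch below rescales all increments proportionally so that their sum matches the measured diameter at sample height. This is a simplification: the published method additionally weights the correction per year by between-radii differences, and all numbers here are invented.

```python
import numpy as np

def correct_increments(increments_cm, measured_diameter_cm):
    """Rescale annual diameter increments so their sum equals the
    measured stem diameter at sample height (proportional version)."""
    increments = np.asarray(increments_cm, dtype=float)
    return increments * (measured_diameter_cm / increments.sum())

# Toy series: summed ring widths (2.2 cm) overestimate the measured diameter.
rings = [0.4, 0.6, 0.5, 0.7]                      # diameter increments (cm)
corrected = correct_increments(rings, measured_diameter_cm=2.1)
print(corrected.sum())                            # 2.1
```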
Samples were crossdated using COFECHA (Holmes 1982). Successful crossdating was achieved for a substantial subsample of adult trees with regular bole form and known year of formation of the outer rings (dry forest 80%, moist forest 70%; Brienen and Zuidema 2005). Trees with irregular bole form, lacking outer rings, and dead trees were excluded from crossdating analysis. Crossdating proved difficult for juvenile rings due to long periods of slow growth concealing common growth variation due to climate. Mean interseries correlation and sensitivity were 0.68 and 0.64 for the dry forest, and 0.57 and 0.49 for the moist forest (1950-2000; cf. Brienen and Zuidema 2005), respectively. These high interseries correlations are clear proof of the annual nature of the rings (cf. Stahle 1999). In both sites, ring width also correlated significantly with rainfall, confirming that rings were formed strictly annually [dry forest (unpublished data R. J. W. Brienen); moist forest (Brienen and Zuidema 2005)].
Allometry, light availability and growth-light relations
We measured DBH (i.e. at 1.3 m) or diameter above buttresses, and diameter at sample height. Total tree height was visually estimated for standing trees (estimated precision ± 10%) and measured for felled trees. We related DBH and height using logarithmic regressions (Table 1), and applied these to determine DBH on reaching the canopy (i.e. 2/3 of mean canopy height; Table 1). This yielded DBH on reaching the canopy of 16 and 29 cm for dry and moist forest, respectively, which we rounded to 15 and 30 cm DBH (cf. Clark and Clark 1999). In both forests, these sizes correspond with the DBH at which an average tree receives substantial direct overhead light [i.e. crown exposure index (CE) of 3b, see below]. For each individual of C. odorata that was alive when sampled, we assessed light availability using the modified CE of Dawkins (Clark and Clark 1992). We used the following CEs: 1 = no direct lateral or overhead light; 2a = little direct lateral light, no overhead light; 2b = some direct lateral light, no overhead light; 2c = substantial direct lateral light, no overhead light; 3a = little direct overhead light; 3b = substantial direct overhead light; 4 = more than 90% of crown receives full overhead direct light; and 5 = full overhead and lateral, direct light. CE values of all trees at both sites were estimated by the same researcher (R. J. W. B.). Note that these estimates are only gross indications of differences in light levels between sites, and that actual light levels for trees in the dry site are possibly higher within the same CE class than in the moist site, as crown structure in the dry site is more open (cf. Holbrook et al. 1995).
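The allometric step can be illustrated with the short sketch below; the regression coefficients and canopy height are invented for demonstration, as the fitted site-specific values are given in Table 1 of the paper.

```python
import numpy as np

# Illustrative logarithmic allometry: height (m) = a + b * ln(DBH in cm).
# Coefficients are invented; the paper fits them per site (Table 1).
a, b = -5.0, 9.0

mean_canopy_height = 21.0                    # hypothetical canopy height (m)
target_height = (2.0 / 3.0) * mean_canopy_height

# Invert the fitted regression to get the DBH at which the canopy is reached.
dbh_at_canopy = np.exp((target_height - a) / b)
print(round(dbh_at_canopy, 1))               # ~8.3 cm with these toy numbers
```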
We related crown exposure to DBH using multinomial logistic regressions (Poorter et al. 2005) for each site separately and tested for differences between sites in these relations. We then tested for differences in DBH growth (over the past 5 years) among CE classes, using a multiple regression with log-transformed growth data in which CE classes were entered as dummy variables and average DBH as a covariate. All statistical analyses were performed in SPSS 15.0.
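The analyses were run in SPSS, but the multinomial regression of crown exposure on DBH can be sketched in Python as follows; the simulated data and the collapse of CE subclasses into integer codes are assumptions made here for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated stand-in data: DBH (cm) and crown exposure coded as integers 1-7
# (CE1, 2a, 2b, 2c, 3a, 3b, 4/5 collapsed) - invented for illustration.
rng = np.random.default_rng(0)
dbh = rng.uniform(5, 60, 200)
ce = np.clip((dbh / 10 + rng.normal(0, 1, 200)).round().astype(int), 1, 7)

# With the default lbfgs solver, multiclass fits are multinomial.
model = LogisticRegression(max_iter=1000)
model.fit(np.log(dbh).reshape(-1, 1), ce)

# Predicted probability of each CE class for a 20-cm DBH tree.
print(model.predict_proba(np.log([[20.0]])).round(2))
```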
Detecting periods of suppressed growth
We investigated whether juveniles (i.e. pre-canopy trees smaller than 15 or 30 cm in diameter) in the two forest types differed in frequency and duration of suppressions until reaching the canopy (hypothesis 1). As we were interested in light-related suppressions, we established a threshold below which growth is likely suppressed based on light-growth relationships. We considered that individuals with a CE of 2 (some direct lateral light, but no overhead direct light) were suppressed, while those with a CE of 3 or higher were not. We set a growth threshold to the value halfway between the median growth at CE2 and that at CE3 (cf. Canham 1985; Landis and Peart 2005), using 5-year average growth rates to minimize the effect of year-to-year climatic variation.
We identified periods of suppressed growth when growth was below the threshold for at least 5 years (cf. Canham 1985; Landis and Peart 2005). We used site-specific thresholds to account for differences between sites in light responses. Clearly, when site-specific thresholds strongly differ between sites, this may generate variation in calculated suppressions between sites. We therefore checked the robustness of these results by repeating the suppression calculations using a threshold that is independent of light conditions or light-diameter relations. This alternative threshold is the 25th percentile of all growth rates of juvenile trees (cf. Baker and Bunyavejchewin 2006).
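A minimal sketch of this suppression-detection rule, assuming annual diameter increments and using the moist-forest threshold value reported in the Results (0.156 cm/year); the ring series itself is invented.

```python
import numpy as np

def find_suppressions(growth, threshold, min_years=5):
    """Return (start, end) index pairs of periods in which annual growth
    stays below `threshold` for at least `min_years` consecutive years."""
    below = np.asarray(growth) < threshold
    periods, start = [], None
    for i, flag in enumerate(below):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_years:
                periods.append((start, i - 1))
            start = None
    if start is not None and len(below) - start >= min_years:
        periods.append((start, len(below) - 1))
    return periods

growth = [0.30, 0.10, 0.08, 0.09, 0.07, 0.11, 0.40, 0.45]  # cm/year, invented
print(find_suppressions(growth, threshold=0.156))           # [(1, 5)]

# Light-independent alternative: threshold = np.percentile(juvenile_growth, 25)
```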
Detecting release events
We investigated whether the number and strength of growth releases experienced by juvenile trees differed between forest types (hypothesis 1). To detect growth releases we used relative growth increases, which have proven to be reliable indicators of the occurrence of canopy openings in temperate and tropical trees (Nowacki and Abrams 1997; Rubino and McCarty 2004; Baker and Bunyavejchewin 2006). We used moving averages to remove long-term size-growth relationships and year-to-year variation in growth rates caused by weather variation. A window of 10 years was applied to calculate the percentage growth change between subsequent 10-year growth periods using the formula of Nowacki and Abrams (1997): %GC_i = [(M2 − M1)/M1] × 100, where %GC_i = percentage growth change for year i, M1 = the preceding 10-year mean diameter growth (including the year of change), and M2 = the subsequent 10-year mean diameter growth. We regarded a growth increase of more than 100% as a growth release, under the condition that growth rates were above the suppression threshold. Such a relative growth threshold may lead to biases as it depends on prior growth rates and thus may discriminate against fast-growing trees (cf. Black and Abrams 2003).
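The formula and the 100% release criterion translate directly into code; the following sketch uses an invented ring series that shows a suppression followed by a release.

```python
import numpy as np

def percent_growth_change(rings, window=10):
    """%GC_i = ((M2 - M1) / M1) * 100 (Nowacki and Abrams 1997), where M1 is
    the mean growth of the `window` years up to and including year i and M2
    the mean of the following `window` years."""
    rings = np.asarray(rings, dtype=float)
    gc = np.full(len(rings), np.nan)
    for i in range(window - 1, len(rings) - window):
        m1 = rings[i - window + 1 : i + 1].mean()
        m2 = rings[i + 1 : i + 1 + window].mean()
        gc[i] = (m2 - m1) / m1 * 100
    return gc

rings = [0.1] * 12 + [0.5] * 12      # invented: slow growth, then fast growth
gc = percent_growth_change(rings)
# Release years: growth change above 100% (suppression check done separately).
print(np.where(gc > 100)[0])          # [ 9 10 11 12 13]
```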
We checked for synchronously occurring suppressions and releases by calculating the proportion of trees with suppressions or releases that started in the same year (±1). We checked whether synchronous suppressions and releases could be related to rainfall or temperature. To this end, we correlated average growth rates to rainfall and temperature.
Canopy accession patterns
To compare the growth trajectories by which trees reach the canopy between forest types (hypothesis 2), we classified trees into "canopy accession patterns" based on the occurrence and frequency of release events and suppressed growth periods (cf. Baker and Bunyavejchewin 2006). Such patterns are usually interpreted in terms of temporal variation in light availability, as releases and suppressions are generally not caused by climatic fluctuations. Still, changes in climatic conditions that last for multiple years, or lag effects of climatic fluctuations on growth, may occur. Such "climate-induced" releases and suppressions were excluded from the analysis of canopy accession patterns (cf. Results section). Each tree was classified into one of the following three canopy accession patterns:
1. "Direct growth". This pattern corresponds to trees that did not experience periods of suppressed growth that were followed by a release event. These trees were probably not strongly released from dark understorey conditions, and may have been in favourable light conditions early in their life.
2. "One suppression-release pattern". This corresponds to trees that experienced one period of suppression directly followed by a release event. Trees with such a pattern probably did experience strong release from a dark understorey position through canopy dynamics.
3. "Multiple suppression-release pattern". This corresponds to trees that went through several suppression-release cycles. These trees required several high-light periods before attaining the canopy.
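The classification itself is a simple decision rule once suppressions and releases have been identified. The sketch below is one possible reading of it; in particular, the requirement that a release follow a suppression within two years is an assumption made here for illustration.

```python
def canopy_accession_pattern(suppressions, releases):
    """Classify a pre-canopy trajectory from its suppression periods
    (list of (start_year, end_year)) and release years (list of int)."""
    # A suppression "counts" if a release follows it closely (assumed: <= 2 yr).
    cycles = sum(
        any(end < r <= end + 2 for r in releases)
        for _, end in suppressions
    )
    if cycles == 0:
        return "direct growth"
    if cycles == 1:
        return "one suppression-release"
    return "multiple suppression-release"

print(canopy_accession_pattern([], []))                        # direct growth
print(canopy_accession_pattern([(3, 9)], [10]))                # one suppression-release
print(canopy_accession_pattern([(3, 9), (15, 22)], [10, 23]))  # multiple suppression-release
```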
Persistent growth variation and ages on reaching the canopy

We evaluated to what extent growth variation among trees persists over time (hypothesis 3). To this end, we calculated Spearman rank correlations between mean growth rates in pre-canopy size classes and those in successive size classes of 5-cm width (cf. "among-tree autocorrelation"). To facilitate comparison between sites, we relate correlation coefficients at a similar position in the canopy, i.e. at a similar CE index (cf. Fig. 1). Finally, we reconstructed lifetime growth (size-age) trajectories for all trees and evaluated age variation (coefficient of variation; CV) on reaching the canopy in both sites.
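The among-tree autocorrelation can be sketched as follows with invented growth rates; rows are trees and columns are successive 5-cm size classes.

```python
import numpy as np
from scipy.stats import spearmanr

# Invented per-tree mean growth (cm/year) in size classes 0-5, 5-10, 10-15 cm.
growth_by_class = np.array([
    [0.10, 0.12, 0.11],
    [0.25, 0.30, 0.28],
    [0.18, 0.15, 0.20],
    [0.40, 0.38, 0.35],
    [0.08, 0.20, 0.30],
])

# Does the growth ranking of the smallest class persist in later classes?
for k in range(1, growth_by_class.shape[1]):
    rho, p = spearmanr(growth_by_class[:, 0], growth_by_class[:, k])
    print(f"class 0 vs class {k}: rho = {rho:.2f} (p = {p:.2f})")
```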
Light profiles
Crown exposure increased with increasing DBH in both forests in a similar fashion (Fig. 1). The multinomial logistic regressions closely followed the relation between DBH and CE in both forests, with DBH explaining more of the variation in CE in the moist forest than in the dry forest (Nagelkerke R² 0.61 vs. 0.50). Variation in crown exposure was lower in the dry forest, ranging from CE2b (some direct light) to CE5 (full overhead and lateral, direct light), compared to a range of CE1 (no direct light) to CE5 in the moist forest. Size variation of trees with the same crown exposure was also lower in the dry forest. At the same DBH, trees in the dry forest had a higher crown exposure than those in the moist forest, as indicated by a significant site effect when CE and DBH data were combined in the multinomial regression (n = 282; Nagelkerke R² = 0.65; P < 0.001).
Influence of light on growth, and suppressed growth threshold

Diameter growth rates were significantly related to CE in both forest sites (multiple regression, P < 0.05), while diameter did not enter the regression equations (P > 0.15). Partial correlations confirmed this pattern: while diameter growth was significantly correlated to CE when correcting for diameter (P < 0.001), growth was not correlated to diameter after taking its relation to CE into account (P > 0.05). As expected, the proportion of variation in tree growth explained by canopy exposure was higher in the moist forest (dry, R² = 0.21; moist, R² = 0.35). Growth rates increased with CE in both forests (Fig. 2). In the dry forest, growth rates in CE2 differed significantly from those in CE4 and CE5, and growth in CE3 from that in CE5 (P < 0.05). Growth in CE2 and CE3 differed marginally (P = 0.071). In the moist forest, growth differed significantly among all CE classes (P < 0.05). Based on these results, we set threshold values for detecting suppressed growth periods in our ring series to 0.381 cm year−1 for the dry forest and 0.156 cm year−1 for the moist forest. These thresholds correspond to the midpoint between the median growth rates for CE2 and CE3 (Fig. 2). Note that we did not apply diameter-dependent thresholds as diameter did not influence growth after accounting for the effect of light (i.e. crown exposure). The alternative threshold values-the 25th percentile of pre-canopy growth data-were almost equal for both forests: 0.190 and 0.184 cm year−1, for dry and moist forest, respectively.
Suppressed growth, release events and canopy accession patterns
Examples of lifetime growth trajectories of two trees from the dry and two from the moist forest are shown in Fig. 3. The trajectories show high temporal variation in growth rates over the lifetime of individual trees and a large variation among different individuals. Cedrela trees in the two forests differed strongly in the number and intensity of suppressions they experienced before reaching the canopy (Table 2; Fig. 4). In the dry forest a much lower proportion of trees experienced suppressions compared to the moist forest. The median number of suppressions per tree until reaching canopy size and the frequency of suppressions were also much lower in the dry forest. Large differences were also found in the duration of suppressed growth periods (Fig. 4), being nearly half as long in the dry forest. In the dry site suppressions did not exceed 10 years, whereas in the moist site suppressions of up to 58 years were found. We found similarly large differences in the percentage of time that trees were suppressed until reaching the canopy (Fig. 4). Differences between forests in the duration of suppressions and the percentage of time in suppression were highly significant (Mann-Whitney tests, P < 0.001).
Observed differences in suppressed growth between forests were robust to changes in threshold values. Using the alternative threshold value (25th percentile of pre-canopy growth), differences in suppressions between sites were maintained (Table 2).
Our test of synchronously occurring suppressions did not reveal clear evidence of climate-induced suppressions. The maximum proportion of synchronous suppressions was low in both sites (dry, 12%; moist, 16%). Releases in the moist site also did not occur simultaneously, and a maximum of 15% of releases occurred in the same year (±1). However, we did find a high proportion of concurrent releases: in the dry site 79% of the trees exhibited releases between 1972 and 1976. These releases represented 48% of all observed growth spurts. They concurred with a sharp decrease in maximum temperatures between 1969 and 1979 (Fig. S2). A negative correlation between ring width and maximum temperature (Pearson r = −0.48, P < 0.001) over 1950-2004 explains why growth rates increased in the 1970s. Ring width also correlated with rainfall during the rainy season (May-July, r = 0.35, P < 0.05), but there were no particular increases in rainfall during the 1970s that could explain increased growth in this period. The simultaneous occurrence of releases and the negative temperature-growth correlation suggest that these releases were climate-induced and not due to canopy dynamics.
After excluding the climate-induced releases between 1972 and 1976, the portion of trees showing releases in the dry forest (41%) was nearly half of that in the moist forest (Table 3). Also, dry forest juveniles experienced fewer releases until reaching the canopy, they showed a lower frequency of releases per decade, and the release strength was much weaker (Table 3).
The typical growth trajectory by which trees reach the canopy-the canopy accession pattern-also differed strongly between forest types. The proportion of trees assigned to each of the canopy accession patterns differed significantly between sites (Table 4). In the dry forest, the majority of trees (89%) reached the canopy through direct growth while only a few (11%) showed suppression followed by release. No multiple suppression and release patterns were observed in the dry forest. Including climate-induced releases in the dry forest only slightly changed these results (84% direct growth, 15% suppression-release). By contrast, in the moist forest, the majority of trees showed one (41%) or more (23%) suppression-release events. The time to reach the canopy differed significantly between sites and canopy accession patterns (two-way ANOVA, F_site = 30.7, F_patterns = 43.8, P < 0.001). On average, trees that showed direct growth reached the canopy faster (dry, 31 years; moist, 43 years) compared to those that underwent one suppression-release sequence (dry, 46 years; moist, 60 years), while trees experiencing multiple suppression-release cycles took longest to reach the canopy (moist, 84 years).

(Fig. 3c: one suppression-release, dry forest.)

Table 2 Suppressions of C. odorata trees in a dry and a moist forest site, applying two growth thresholds ("low light": growth suppressed at crown exposure index 2, cf. Fig. 2; "25th percentile": growth below the 25th percentile of all juvenile trees). Fisher exact tests were used to test for differences between sites in the percentage of trees with one or more suppressions, and Mann-Whitney tests for all other cases; * P < 0.05, ** P < 0.005, *** P < 0.001.
Persistence of growth variation
The degree to which growth variation among trees was sustained over size classes differed considerably between sites (Fig. 5). In the dry forest, ranking of individuals according to their growth rates in the smallest size class (0-5 cm) was maintained up to the last size class of 35-40 cm in diameter (i.e. over >64 years), whereas in the moist forest ranking generally disappeared in the subsequent size classes (i.e. after 12 years). Also for larger size classes, we found that growth differences in the dry forest site were maintained longer and over more size classes. Hence, in the dry forest, a relatively fast-growing sapling was also likely to grow fast as an adult, which was much less likely to be the case for saplings in the moist forest site, where juvenile trees change more frequently in growth rank. These differences were not caused by between-site differences in passage time from one size class to another, as these were comparable.
Variation in age on reaching the canopy

The age-size variation for both sites is shown in Fig. 6. Both maximum age and mean age on reaching the canopy were higher in the moist forest compared to the dry forest (Table 2). However, mean age at a given diameter was remarkably similar for both forest types, and so was the degree of age variation among individuals (CV = 33% in dry and 34% in moist forest).
Attaining the canopy in dry and moist forests
Our results support hypothesis 1 that trees in the moist forest experience more and longer suppressions and more and stronger releases during their growth into the canopy than trees in the dry forest. The observed differences were large, with trees in the moist forest experiencing 4 times more suppressions and 2 times more releases. These differences were robust to the application of different growth thresholds. It is important to note that they were not due to the longer trajectory towards the canopy in the moist forest, as they were maintained when expressed as the proportion of time suppressed, or the frequency of suppression or release events. Canopy accession patterns were also very distinct.
In the moist forest a high percentage of trees (64%) required one or several releases from suppressions, whereas only a small portion of the dry forest trees (11%) showed such patterns. Taken together, these results suggest that light is a stronger limiting factor in the moist forest and that releases from low-light conditions are more important for trees in the moist forest compared to the dry forest.
The fact that 40% of juvenile trees still showed releases suggests a role of gap formation in dry forests. One characteristic of dry-forest gaps may be relevant in this respect: for a given gap width, the amount of direct light received close to the forest floor is probably higher in dry forests (Segura et al. 2003), where canopies are shorter (Holbrook et al. 1995). In comparison, moist forest trees often die uprooted or snapped, thereby forming large (>300 m²) gaps (Martínez-Ramos et al. 1988a; Brokaw 1996). Such large gaps take a long time (>10 years) to close (Martínez-Ramos et al. 1988b; van der Meer and Bongers 1996). Also, gap closure in moist forests may take longer as the forest is taller. In all, we argue that in moist forests, differences in light levels between gaps and understorey are larger and that gaps close more slowly. These contrasting canopy dynamics may explain why Cedrela juvenile trees experienced longer suppressions and stronger releases in the moist forest. Combining these results with the light-demanding nature of our study species, one may suggest that juvenile Cedrela trees in the dry forest establish in gaps and are then able to reach the canopy before the gap closes. By contrast, juvenile trees that establish in gaps in moist forests take more time to reach the canopy and may thus experience closure of the canopy and suppressed growth resulting from this. This comparison needs to be made cautiously, though, as the (generally) larger canopy gaps in moist forests probably close more slowly.
Interestingly, half of the growth releases in the dry forest occurred synchronously and coincided with a considerable drop in maximum temperatures in the 1970s. The negative correlation between temperature and growth rate suggests that these releases were indeed induced by periodic change in climatic conditions. Several studies have shown lower diameter growth rates of tropical trees at higher temperature, as a result of lower photosynthesis rates, increased respiration or drought stress (Clark et al. 2003; Fichtler et al. 2004; Feeley et al. 2007). The climate-induced releases that we observed may have been caused by any of these causes, or their combined effect. It is interesting to note that climate-induced releases were not observed in the moist forest, as indicated by the low portion of simultaneous releases. Whether this is due to lower drought stress in the moist forest, or differences in the degree of climatic variation between sites, remains to be studied. The finding of climate-induced releases shows that climate variability may also influence long-term growth patterns, and this effect needs to be taken into account in release-suppression studies (cf. Rubino and McCarty 2004). Can differences in growth trajectories between forest types be expected for more tree species? The differences in understorey light levels between dry and moist forests (Holbrook et al. 1995; Coomes and Grubb 2000) certainly may give rise to distinct growth patterns for other tree species. It is likely that species with light requirements similar to those of Cedrela will also show fewer and shorter periods of suppression in dry forests compared to moist forests. We expect that for such species, the importance of suppression-release cycles decreases towards drier forests.
The observed difference in canopy accession across forest types may have important demographic consequences (cf. Smith and Huston 1989; Zuidema et al. 2009). Cedrela trees in the moist forest clearly have less favourable gap regeneration opportunities and take longer to reach the canopy. Thus, assuming similar juvenile mortality, cumulative survival to reproductive size (which is typically attained on reaching the canopy) may be low compared to that in the dry forest. This limitation may be offset by a longer life span of trees in the moist forest (cf. Table 1) resulting in higher reproductive output (e.g. Zuidema and Boot 2002).
Thus, moist-forest populations are likely maintained in a different way than dry-forest populations, and this may give rise to variation in life history strategies of the same species in contrasting forest types. Clearly, a thorough analysis of the life history consequences of our results requires a demographic study that includes all vital rates of all life stages.
Determinants of long-term growth variation
In accordance with our expectations, growth differences among saplings persisted over longer time spans in the dry forest than in the moist forest. Sites were strikingly distinct in this respect: in the dry forest, growth rate differences between saplings persisted until they reached adult size (i.e. over 64 years), whereas growth rate differences in the moist forest disappeared relatively rapidly (after 12 years). The long persistence of growth differences in the dry forest is consistent with the expected consequences of strong spatial heterogeneity in water availability on growth variation among trees. Differences in water availability likely have a life-long impact on tree growth, while the effect of temporal variation in light availability due to canopy dynamics is typically limited to periods of one or two decades (van der Meer and Bongers 1996; Brienen and Zuidema 2005). Although heterogeneity in other resources like nutrients may also cause part of the observed persistent growth differences among trees in dry forests, the effect of soil water availability on growth is probably much larger (Medina 1995; Mooney et al. 1995; Oliveira-Filho et al. 1998). In moist forests, effects of soil type and water availability on tree growth rate (Ashton et al. 1995; Gunatilleke et al. 1998; Baker et al. 2003b) are probably less influential than temporal variation in light, particularly for light-demanding canopy tree species. In the case of Cedrela, the very distinct growth patterns in different forest types yielded the same degree of variation in ages. This is probably a coincidence, as site-specific growth patterns likely cause differences in age variation among forest types.

Canopy accession and maintenance of tree diversity in dry and moist forests

Gap dynamics play a central role in theories on niche differentiation and maintenance of biodiversity (Grubb 1977; Ricklefs 1977). These theories have been developed mostly based on processes in wet tropical forests (e.g. Denslow 1980; Brokaw 1985; Turner 2001) and are thought not to apply to dry forests. However, this assertion had not been tested so far. By comparing canopy accession of trees in dry and moist forests, our study represents a first step by showing that growth trajectories of trees of the same species vary strongly between forest types. If these differences are indeed caused by differences in the importance of gap formation, it implies that gap dynamics impose less selection pressure on trees in the dry forest. The main environmental gradient in dry forests seems to be spatial heterogeneity in soil water and nutrients (Murphy and Lugo 1986; Mooney et al. 1995; Ceccon et al. 2006). However, even in wet forests spatial heterogeneity in water plays a role (Ashton et al. 1995; Baker et al. 2003b), and even in dry forests light availability determines species distribution to a small degree (Oliveira-Filho et al. 1998). Hence, we expect a gradual shift, with light variation becoming less important towards drier forests and spatial heterogeneity in water availability becoming less important in wetter forests (cf. Engelbrecht et al. 2007). Differences in the relative importance of these basic resources for tree growth (light and water) suggest that mechanisms of species diversity maintenance probably vary across tropical forest types. The approach used here-to study the same species in contrasting forest types and apply a combination of long-term tree ring data and field measurements of light conditions-proved to be successful.
Similar analyses need to be undertaken for more species and at more sites to determine how climate and forest structure interact to shape growth trajectories of juvenile trees. Results of such studies will improve the understanding of tree demography, forest dynamics and diversity patterns across tropical forest types. | 2014-10-01T00:00:00.000Z | 2009-12-24T00:00:00.000 | {
"year": 2009,
"sha1": "669041b4ce47fcc3af7da004d88c9b21af9a37c8",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00442-009-1540-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "4d79820675c83715df487f395421f436a61b91b6",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
11380974 | pes2o/s2orc | v3-fos-license | On Paraphrase and Coreference
By providing a better understanding of paraphrase and coreference in terms of similarities and differences in their linguistic nature, this article delimits what the focus of paraphrase extraction and coreference resolution tasks should be, and to what extent they can help each other. We argue for the relevance of this discussion to Natural Language Processing.
Introduction
Paraphrase extraction and coreference resolution have applications in Question Answering, Information Extraction, Machine Translation, and so forth. Paraphrase pairs might be coreferential, and coreference relations are sometimes paraphrases. The two overlap considerably (Hirst 1981), but their definitions make them significantly different in essence: Paraphrasing concerns meaning, whereas coreference is about discourse referents. Thus, they do not always coincide. In the following example, b and d are both coreferent and paraphrastic, whereas a, c, e, f, and h are coreferent but not paraphrastic, and g and i are paraphrastic but not coreferent. (1) [Tony]a went to see [the ophthalmologist]b and got [his]c eyes checked. [The eye doctor]d told [him]e that [his]f [cataracts]g were getting worse. [His]h mother also suffered from [cloudy vision]i. The discourse model built for Example (1) contains six entities (i.e., Tony, the eye doctor, Tony's eyes, Tony's cataracts, Tony's mother, cataracts). Because a, c, e, f, and h all point to Tony, we say that they are coreferent. In contrast, in paraphrasing, we do not need to build a discourse entity to state that g and i are paraphrase pairs; we restrict ourselves to semantic content and this is why we check for sameness of meaning between cataracts and cloudy vision alone, regardless of whether they are a referential unit in a discourse. Despite the differences, it is possible for paraphrasing and coreference to co-occur, as in the case of b and d. NLP components dealing with paraphrasing and coreference seem to have great potential to improve understanding and generation systems. As a result, they have been the focus of a large amount of work in the past couple of decades (see the surveys by Androutsopoulos and Malakasiotis [2010], Madnani and Dorr [2010], Ng [2010], and Poesio and Versley [2009]). Before computational linguistics, coreference had not been studied on its own from a purely linguistic perspective but was indirectly mentioned in the study of pronouns. Although there have been some linguistic works that consider paraphrasing, they do not fully respond to the needs of paraphrasing from a computational perspective.
This article discusses the similarities between paraphrase and coreference in order to point out the distinguishing factors that make paraphrase extraction and coreference resolution two separate yet related tasks. This is illustrated with examples extracted/adapted from different sources (Dras 1999;Doddington et al. 2004;Dolan, Brockett, and Quirk 2005;Recasens and Martí 2010;Vila et al. 2010) and our own. Apart from providing a better understanding of these tasks, we point out ways in which they can mutually benefit, which can shed light on future research.
Converging and Diverging Points
This section explores the overlapping relationship between paraphrase and coreference, highlighting the most relevant aspects that they have in common as well as those that distinguish them. They are both sameness relations (Section 2.2), but one is between meanings and the other between referents (Section 2.1). In terms of linguistic units, coreference is mainly restricted to noun phrases (NPs), whereas paraphrasing goes beyond and includes word-, phrase-and sentence-level expressions (Section 2.3). One final diverging point is the role they (might) play in discourse (Section 2.4).
Meaning and Reference
The two dimensions that are the focus of paraphrasing and coreference are meaning and reference, respectively. Traditionally, paraphrase is defined as the relation between two expressions that have the same meaning (i.e., they evoke the same mental concept), whereas coreference is defined as the relation between two expressions that have the same referent in the discourse (i.e., they point to the same entity). We follow Karttunen (1976) and talk of "discourse referents" instead of "real-world referents." In Table 1, the italicized pairs in cells (1,1) and (2,1) are both paraphrastic but they only corefer in (1,1). We cannot decide on (non-)coreference in (2,1) as we need a discourse to first assign a referent. In contrast, we can make paraphrasing judgments without taking discourse into consideration. Pairs like the one in cell (1,2) are only coreferent but not paraphrases because the proper noun Tony and the pronoun his have reference but no meaning. Lastly, neither phenomenon is observed in cell (2,2).

Table 1: Paraphrase-coreference matrix (rows: coreference; columns: paraphrase).
(1,1) [+coreference, +paraphrase] Tony went to see the ophthalmologist and got his eyes checked. The eye doctor told him . . .
(1,2) [+coreference, -paraphrase] Tony went to see the ophthalmologist and got his eyes checked.
(2,1) [±coreference, +paraphrase] ophthalmologist / eye doctor
(2,2) [-coreference, -paraphrase] His cataracts were getting worse. His mother also suffered from cloudy vision.
Sameness
Paraphrasing and coreference are usually defined as sameness relations: Two expressions that have the same meaning are paraphrastic, and two expressions that refer to the same entity in a discourse are coreferent. The concept of sameness is usually taken for granted and left unexplained, but establishing sameness is not straightforward. A strict interpretation of the concept makes sameness relations only possible in logic and mathematics, whereas a sloppy interpretation makes the definition too vague. In paraphrasing, if the loss of "at the city" in Example (2b) is not considered to be relevant, Examples (2a) and (2b) are paraphrases; but if it is considered to be relevant, then they are not. It depends on where we draw the boundaries of what is accepted as the "same" meaning.
(2) a. The waterlogged conditions that ruled out play yesterday still prevailed at the city this morning.
b. The waterlogged conditions that ruled out play yesterday still prevailed this morning.
(3) On homecoming night Postville feels like Hometown, USA . . . For those who prefer the old Postville, Mayor John Hyman has a simple answer.
Similarly, with respect to coreference, whether Postville and the old Postville in Example (3) are or are not the same entity depends on the granularity of the discourse. On a sloppy reading, one can assume that because Postville refers to the same spatial coordinates, it is the same town. On a strict reading, in contrast, drawing a distinction between the town as it was at two different moments in time results in two different entities: the old Postville versus the present-day Postville. They are not the same in that features have changed from the former to the latter. The concept of sameness in paraphrasing has been questioned on many occasions. If we understood "same meaning" in the strictest sense, a large number of paraphrases would be ruled out. Thus, some authors argue for a looser definition of paraphrasing. Bhagat (2009), for instance, talks about "quasi-paraphrases" as "sentences or phrases that convey approximately the same meaning." Milićević (2007) draws a distinction between "exact" and "approximate" paraphrases. Finally, Fuchs (1994) prefers the notion of "equivalence" to that of "identity" on the grounds that the former allows for the existence of some semantic differences between the paraphrase pairs. The concept of identity in coreference, however, has hardly been questioned, as prototypical examples appear to be straightforward (e.g., Barack Obama and Obama and he). Only recently have Recasens, Hovy, and Martí (2010) pointed out the need for talking about "near-identity" relations in order to account for cases such as Example (3), proposing a typology of such relations.
Linguistic Units
Another axis of comparison between paraphrase and coreference concerns the types of linguistic units involved in each relation. Paraphrase can hold between different linguistic units, from morphemes to full texts, although the most attention has been paid to word-level paraphrase (kid and child in Example (4)), phrase-level paraphrase (cried and burst into tears in Example (4)), and sentence-level paraphrase (the two sentences in Example (4)).
(4) a. The kid cried.
b. The child burst into tears.
In contrast, coreference is more restricted in that the majority of relations occur at the phrasal level, especially between NPs. This explains why this has been the largest focus so far, although prepositional and adverbial phrases are also possible yet less frequent, as well as clauses or sentences. Coreference relations occur indistinctively between pronouns, proper nouns, and full NPs that are referential, namely, that have discourse referents. For this reason, pleonastic pronouns, nominal predicates, and appositives cannot enter into coreference relations. The first do not refer to any entity but are syntactically required; the last two express properties of an entity rather than introduce a new one. But this is an issue ignored by the corpora annotated for the MUC and ACE programs (Hirschman and Chinchor 1997;Doddington et al. 2004), hence the criticism by van Deemter and Kibble (2000).
In the case of paraphrasing, it is linguistic expressions that lack meaning (i.e., pronouns and proper nouns) that should not be treated as members of a paraphrase pair on their own (Example (5a)) because paraphrase is only possible between meaningful units. This issue, however, takes on another dimension when seen at the sentence level. The sentences in Example (5b) can be said to be paraphrases because they themselves contain the antecedent of the pronouns I and he. In Example (5b), A. Jiménez and I/he continue not being paraphrastic. Polysemic, underspecified, and metaphoric words show a slightly different behavior. It is not possible to establish paraphrase between them when they are deprived of context (Callison-Burch 2007, Chapter 4). In Example (6a), police officers could be patrol police officers, and investigators could be university researchers. However, once they are embedded in a disambiguating context that fills them semantically, as in Example (6b), then paraphrase can be established between police officers and investigators. As a final remark, and in accordance with the approach by Fuchs (1994), we consider Example (7)-like paraphrases that Fujita (2005) and Milićević (2007) call, respectively, "referential" and "cognitive" to be best treated as coreference rather than paraphrase, because they only rely on referential identity in a discourse.
(7) a. They got married last year.
b. They got married in 2004.
Discourse Function
A further difference between paraphrasing and coreference concerns their degree of dependency on discourse. Given that coreference establishes sameness relations between the entities that populate a discourse (i.e., discourse referents), it is a linguistic phenomenon whose dependency on discourse is much stronger than paraphrasing. Thus, the latter can be approached from a discursive or a non-discursive perspective, which in turn allows for a distinction between reformulative paraphrasing (Example (8)) and non-reformulative paraphrasing (Example (9)).
b. X is the author of Y.
Reformulative paraphrasing occurs in a reformulation context when a rewording of a previously expressed content is added for discursive reasons, such as emphasis, correction, or clarification. Non-reformulative paraphrasing does not consider the role that paraphrasing plays in discourse. Reformulative paraphrase pairs have to be extracted from a single piece of discourse; non-reformulative paraphrase pairs can be extracted-each member of the pair on its own-from different discourse pieces. The reformulation in the third utterance in Example (8) gives an explanation in a language less technical than that in the first utterance; whereas Examples (9a) and (9b) are simply two alternative ways of expressing an authorship relation. The strong discourse dependency of coreference explains the major role it plays in terms of cohesion. Being such a cohesive device, it follows that intra-document coreference, which takes place within a single discourse unit (or across a collection of documents linked by topic), is the most primary. Cross-document coreference, on the other hand, constitutes a task on its own in NLP but falls beyond the scope of linguistic coreference due to the lack of a common universe of discourse. The assumption behind cross-document coreference is that there is an underlying global discourse that enables various documents to be treated as a single macro-document.
Despite the differences, the discourse function of reformulative paraphrasing brings it close to coreference in the sense that they both contribute to the cohesion and development of discourse.
Mutual Benefits
Both paraphrase extraction and coreference resolution are complex tasks far from being solved at present, and we believe that performance could improve if researchers on each side paid attention to the other. The similarities (i.e., relations of sameness, relations between NPs) allow for mutual collaboration, whereas the differences (i.e., focus on either meaning or reference) make it possible to resort to either paraphrase or coreference to solve the other. In general, the greatest benefits come in cases in which either paraphrase or coreference is especially difficult to detect automatically. More specifically, we see direct mutual benefits when both phenomena occur either in the same expression or in neighboring expressions.
For pairs of linguistic expressions that show both relations, we can hypothesize paraphrasing relationships between NPs for which coreference is easier to detect. For instance, coreference between the two NPs in Example (10) is very likely given that they have the same head, head match being one of the most successful features in coreference resolution (Haghighi and Klein 2009). In contrast, deciding on paraphrase would be hard due to the difficulty of matching the modifiers of the two NPs.
(10) a. The director of a multinational with huge profits.
b. The director of a solvent company with headquarters in many countries.
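As a toy illustration of this asymmetry, the following Python sketch uses a naive head-match test to license a coreference hypothesis between the two NPs of Example (10), from which a paraphrase candidate can then be derived. The head finder and the stopword list are our own simplistic assumptions; a real system would use a parser.

```python
# Hypothetical sketch: head match -- one of the strongest coreference
# features (Haghighi and Klein 2009) -- used to propose paraphrase
# candidates between full NPs, as in Example (10).

STOPWORDS = {"of", "a", "an", "the", "with", "in", "many"}

def naive_head(np_tokens):
    """Approximate the syntactic head of an NP as its first content word
    (a crude stand-in for a proper parse)."""
    for tok in np_tokens:
        if tok.lower() not in STOPWORDS:
            return tok.lower()
    return np_tokens[-1].lower()

def propose_paraphrase_pairs(nps):
    """If two NPs share a head, hypothesize coreference between them and,
    by extension, a paraphrase relation between their full forms."""
    pairs = []
    for i in range(len(nps)):
        for j in range(i + 1, len(nps)):
            a, b = nps[i].split(), nps[j].split()
            if naive_head(a) == naive_head(b):
                pairs.append((nps[i], nps[j]))
    return pairs

nps = [
    "the director of a multinational with huge profits",
    "the director of a solvent company with headquarters in many countries",
]
print(propose_paraphrase_pairs(nps))
# Both NPs share the head "director", so they are proposed as a
# coreferent -- and hence candidate paraphrase -- pair.
```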
In the opposite direction, we can hypothesize coreference links between NPs for which paraphrasing can be recognized with considerable ease (Example (11)). Light elements (e.g., fact), for instance, are normally taken into account in paraphrasing, but not in coreference resolution, as their addition or deletion does not involve a significant change in meaning.
(11) a. The creation of a company.
b. The fact of creating a company.
By neighboring expressions, we mean two parallel structures each containing a coreferent mention of the same entity next to a member of the same paraphrase pair. Note that the coreferent expressions in the following examples are printed in italics and the paraphrase units are printed in bold. If a resolution module identifies the coreferent pairs in Example (12), then these can function as two anchor points, X and Y, to infer that the text between them is paraphrastic: X complained today before Y, and X is formulating the corresponding complaint to Y.
(12) a. Argentina X complained today before the British Government Y about the violation of the air space of this South American country.
b. This Chancellorship X is formulating the corresponding complaint to the British Government Y for this violation of the Argentinian air space.
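A minimal sketch of this anchor-point inference follows, assuming the coreferent anchor pairs of Example (12) have already been supplied by a resolution module. The helper function, the plain string matching, and the omission of the X/Y markers are our own illustrative simplifications.

```python
# Hypothetical sketch of the anchor-point idea behind Example (12): once
# a coreference module has linked the anchors X and Y across two sentences,
# the text between the anchors is extracted as a paraphrase candidate.

def between_anchors(sentence, anchor_x, anchor_y):
    """Return the text span lying between two anchor expressions,
    or None if the anchors do not occur in the expected order."""
    i = sentence.find(anchor_x)
    j = sentence.find(anchor_y, i + len(anchor_x))
    if i == -1 or j == -1:
        return None
    return sentence[i + len(anchor_x):j].strip()

s_a = ("Argentina complained today before the British Government about "
       "the violation of the air space of this South American country.")
s_b = ("This Chancellorship is formulating the corresponding complaint to "
       "the British Government for this violation of the Argentinian air space.")

# Anchor pairs assumed to come from a (hypothetical) coreference module:
# X = {Argentina, This Chancellorship}, Y = {the British Government}.
cand_a = between_anchors(s_a, "Argentina", "the British Government")
cand_b = between_anchors(s_b, "This Chancellorship", "the British Government")
print((cand_a, cand_b))
# -> ('complained today before', 'is formulating the corresponding complaint to')
```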
Some authors have already used coreference resolution in their paraphrasing systems in a way similar to the examples herein. Shinyama and Sekine (2003) benefit from the fact that a single event can be reported in more than one newspaper article in different ways, keeping certain kinds of NPs such as names, dates, and numbers unchanged. Thus, these can behave as anchor points for paraphrase extraction, and their system uses coreference resolution to find anchors that refer to the same entity. Conversely, knowing that a stretch of text next to an NP paraphrases another stretch of text next to another NP helps to identify a coreference link between the two NPs, as shown by Example (13), where two speech verbs are easily detected as a paraphrase and thus their subjects can be hypothesized to corefer. If the paraphrase system identifies the mapping between the indirect speech in Example (13a) and the direct speech in Example (13b), the coreference relation between the subjects is corroborated. Another difficult coreference link that can be detected with the help of paraphrasing is Example (14): if the predicates are recognized as paraphrases, then the subjects are likely to corefer.
(13) a. The trainer of the Cuban athlete Sotomayor said that the world record holder is in a fit state to win the Games in Sydney.
b. "The record holder is in a fit state to win the Olympic Games," explained De la Torre.
(14) a. The police searched 11 stores in the center of Barcelona.
b. The investigators carried out 11 searches in stores in the center of Barcelona.
Taking this idea one step further, new coreference resolution strategies can be developed with the aid of shallow paraphrasing techniques. A two-step process for coreference resolution might consist of, first, hypothesizing sentence-level paraphrases via n-gram or named-entity overlap, aligning the phrases that are (possible) paraphrases, and hypothesizing that they corefer; second, letting a coreference module act as a filter that provides a second classification (see the sketch after Example (17)). Such a procedure could be successful for the cases exemplified in Examples (12) to (14). This strategy reverses the tacit assumption that coreference is solved before sentence-level paraphrasing: meaning alone does not make it possible to state that the two pairs in Example (5b), repeated in Example (15), are paraphrases, since the coreference links to the antecedent have to be resolved first. However, cooperative work between paraphrasing and coreference is not always possible, and it is harder if neither of the two can be detected by means of widely used strategies. In other cases, cooperation can even be misleading. In Example (17), the two bold phrases are paraphrases, but their subjects do not corefer. The detection of words like another (Example (17b)) gives a key to help prevent this kind of error.
(17) a. A total of 26 Cuban citizens remain in the police station of the airport of Barajas after requesting political asylum.
b. Another three Cubans requested political asylum.
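The following sketch illustrates the two-step strategy and the alterity filter on Example (17). The n-gram overlap measure, the threshold, and the marker list are illustrative assumptions, not a tested configuration.

```python
# Hypothetical sketch of the two-step strategy: (1) use n-gram overlap to
# hypothesize sentence-level paraphrases and treat aligned subjects as
# coreference candidates; (2) veto links flagged by alterity markers such
# as "another" (Example (17)).

def ngrams(tokens, n=2):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_overlap(s1, s2, n=2):
    """Jaccard overlap of word n-grams as a cheap paraphrase signal."""
    a, b = ngrams(s1.lower().split(), n), ngrams(s2.lower().split(), n)
    return len(a & b) / len(a | b) if a | b else 0.0

ALTERITY = {"another", "other", "others"}

def corefer_candidates(subj1, s1, subj2, s2, threshold=0.05):
    """Step 1: hypothesize coreference between the subjects of two
    (near-)paraphrastic sentences. Step 2: block the link if either
    subject signals a distinct entity."""
    if ngram_overlap(s1, s2) < threshold:
        return False
    if ALTERITY & set(subj1.lower().split()) or ALTERITY & set(subj2.lower().split()):
        return False
    return True

s1 = ("A total of 26 Cuban citizens remain in the police station "
      "after requesting political asylum")
s2 = "Another three Cubans requested political asylum"
print(corefer_candidates("A total of 26 Cuban citizens", s1,
                         "Another three Cubans", s2))
# -> False: the sentences look paraphrastic, but "another" blocks coreference
```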
On the basis of these various examples, we claim that a full understanding of both the similarities and the disparities will enable fruitful collaboration between researchers working on paraphrasing and those working on coreference. Even more importantly, our main claim is that such an understanding of the fundamental linguistic issues is a prerequisite for building paraphrase and coreference systems that do not lack linguistic rigor. In brief, we call for the return of linguistics to automatic paraphrasing and coreference applications, as well as to NLP in general, adhering to the call by Wintner (2009, page 643), who cites examples that demonstrate "what computational linguistics can achieve when it is backed up and informed by linguistic theory".
"year": 2010,
"sha1": "d65bb9853f40ae646b687097d86948e1bb7a7630",
"oa_license": "CC0",
"oa_url": "http://diposit.ub.edu/dspace/bitstream/2445/21392/1/578465.pdf",
"oa_status": "GREEN",
"pdf_src": "ACL",
"pdf_hash": "3d1e47847fdda8e7374a0a2d89f7155a63883969",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Risk Factors for Repeat Abdominal Surgery in Korean Patients with Crohn's Disease: A Multi-Center Study of a Korean Inflammatory Bowel Disease Study Group
Purpose: The purpose of this study was to assess the risk factors for repeated abdominal surgery in Crohn's disease (CD) patients after the first abdominal surgery. Prior studies have tried to identify the risk factors for postoperative recurrence in CD patients, but their results have been inconsistent. Furthermore, few data on the risk factors for repeated abdominal surgery are available.
Methods: Clinical data on CD patients who underwent abdominal surgery from January 2000 to December 2009 were collected from seventeen university hospitals and one colorectal clinic. Data from a total of 708 patients were analyzed to find the risk factors for repeated abdominal surgery in CD patients. The mean follow-up period was 72 months.
Results: The risk of repeated abdominal surgery was 3 times higher in young patients (below 16 years old) than in older patients (odds ratio [OR], 3.056; 95% confidence interval [CI], 1.021 to 9.150; P = 0.046). Stricturing behavior at diagnosis was also a risk factor for repeated abdominal surgery (OR, 2.438; 95% CI, 1.144 to 5.196; P = 0.021). Among operative indications, only intra-abdominal abscess was associated with repeated abdominal surgery (OR, 2.393; 95% CI, 1.098 to 5.216; P = 0.028). Concerning the type of operation, an ileostomy might be a risk factor for repeated abdominal surgery (OR, 11.437; 95% CI, 1.451 to 90.124; P = 0.021). Emergency surgery (OR, 4.994; 95% CI, 2.123 to 11.745; P < 0.001) and delayed diagnosis after surgery (OR, 2.339; 95% CI, 1.147 to 4.771; P = 0.019) also increased the risk of repeated abdominal surgery.
Conclusion: Young age (below 16 years), stricturing behavior, intra-abdominal abscess, emergency surgery, and delayed diagnosis after surgery were identified as possible risk factors for repeated abdominal surgery in CD patients.
INTRODUCTION
Crohn's disease (CD) is heterogeneous in nature; consequently, few consistent risk factors for its postoperative recurrence, except smoking, have been identified [7,8]. Furthermore, few studies have evaluated the risk factors for reoperation after the primary surgery in CD patients. Therefore, the aim of the present study was to evaluate the probability of reoperation and to assess the risk factors for reoperation after the primary surgery for CD.
Data management
The data for biopsy-proven CD patients who underwent abdominal surgery from January 2000 to December 2009 were collected retrospectively. The operations were performed at 18 different hospitals (17 university hospitals and one colorectal clinic). The factors included in the study protocol were decided at a meeting held before data collection. The data of 754 patients were collected initially, and 46 cases were excluded because of data duplication or unmet criteria (Fig. 1). Data from a total of 708 CD patients were analyzed to determine the risk factors for repeated abdominal surgery. The mean follow-up period was 72 months. The variables that were analyzed were gender, family history, age at diagnosis, disease behavior at diagnosis, and disease location at diagnosis according to the Montreal classification. Other variables were the indications for and type of the first abdominal surgery and the time interval between diagnosis and surgery.
Statistical analysis
For the continuous variables, normality was tested first. When the normality assumption was satisfied, a one-way analysis of variance was used; otherwise, a Kruskal-Wallis test was utilized to test whether a mean difference existed between the numbers of operations (1, 2, ≥3). In addition, for the categorical variables, a Pearson chi-square test or a Fisher's exact test was used as appropriate to evaluate the association with the number of operations (1, 2, ≥3). In order to evaluate the relationships between the number of operations and the risk factors, we used univariate and multivariate logistic regression modeling. Since the percentage of patients who had three or more operations was only 6.2%, the number of operations was divided into two categories (1, ≥2). Additionally, within the independent variables of surgery type, indication for surgery, and main symptoms, a few categories had a low percentage of patients; thus, for the stability of the analysis, those categories were eliminated or merged. For the goodness of fit, Hosmer-Lemeshow statistics were utilized, and the area under the receiver operating characteristic curve was used to assess the model discrimination. All statistical analyses were performed using SPSS ver. 15.0 (SPSS Inc., Chicago, IL, USA).
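For illustration, the same modeling steps can be reproduced outside SPSS. The Python sketch below fits a binary logistic regression (outcome: 1 vs. ≥2 operations) on simulated data, derives odds ratios with 95% confidence intervals, computes a manual decile-based Hosmer-Lemeshow statistic, and assesses discrimination by the area under the receiver operating characteristic curve. All variable names and data are hypothetical; this is not the study's actual dataset or code.

```python
# Hypothetical re-implementation of the described analysis on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age_under_16": rng.integers(0, 2, 300),
    "stricturing": rng.integers(0, 2, 300),
    "emergency_surgery": rng.integers(0, 2, 300),
})
# Simulate the binary outcome from an assumed logistic model
logits = -1.5 + 1.1 * df["age_under_16"] + 0.9 * df["stricturing"]
df["repeat_surgery"] = rng.random(300) < 1 / (1 + np.exp(-logits))

X = sm.add_constant(df[["age_under_16", "stricturing", "emergency_surgery"]])
res = sm.Logit(df["repeat_surgery"].astype(int), X).fit(disp=0)

# Odds ratios with 95% confidence intervals, the form reported in the paper
ors = pd.concat([np.exp(res.params), np.exp(res.conf_int())], axis=1)
ors.columns = ["OR", "2.5%", "97.5%"]
print(ors)

# Hosmer-Lemeshow goodness of fit (manual decile version) and ROC AUC
p = res.predict(X)
groups = pd.qcut(p, 10, duplicates="drop")
obs = df["repeat_surgery"].groupby(groups).sum()
exp = p.groupby(groups).sum()
n = p.groupby(groups).size()
hl = (((obs - exp) ** 2) / (exp * (1 - exp / n))).sum()
print("HL chi2 p-value:", chi2.sf(hl, df=len(obs) - 2))
print("AUC:", roc_auc_score(df["repeat_surgery"], p))
```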
RESULTS
Twenty-four percent of the CD patients with primary abdominal surgery experienced repeated abdominal surgery, and 25% of that 24% underwent more than one additional abdominal surgery. The male-to-female ratio of the CD patients with abdominal surgery was 2.27:1, and the mean ages of male and female patients were 34.0 and 34.16 years, respectively (Table 1). Family history was confirmed in only 2.2% of the patients. Gender and family history were not associated with repeated abdominal surgery in CD patients. All patients were classified according to the Montreal classification (Table 2). Patients between 17 and 40 years of age (A2), ileal location (L1), and stricturing behavior (B2) were the most common subgroups among the CD patients who underwent primary abdominal surgery (Table 3). Medical intractability (21.5%), intestinal obstruction (24.2%), intra-abdominal abscess (23.4%), and enterocutaneous fistula (18.5%) were common causes of primary abdominal surgery. Intestinal obstruction (31.5%) and intra-abdominal abscess (33.1%) were also common causes of repeated abdominal surgery (Table 4). Small bowel segmental resection (28.2%) was the most common procedure performed in the first abdominal surgery. Of the total number of first abdominal surgeries for CD patients, a right colectomy and an ileocecectomy were performed in 26.7% and 25.0% of the patients, respectively (Table 5). Table 6 summarizes the results of the univariate and the multivariate analyses. The risk of repeated abdominal surgery was approximately three times higher in young patients (under the age of 16 years) than in older patients (odds ratio [OR], 3.056; 95% confidence interval [CI], 1.021 to 9.150; P = 0.046). Stricturing behavior was a greater risk factor for repeated abdominal surgery (OR, 2.438; 95% CI, 1.144 to 5.196; P = 0.021) than was penetrating behavior or non-stricturing and non-penetrating behavior. However, the number of surgeries was not influenced by the location of the disease at diagnosis. Intractability, obstruction, abscess, and fistula were the main causes of surgery, and only abscess was associated with repeat surgery (OR, 2.393; 95% CI, 1.098 to 5.216; P = 0.028). Concerning the type of surgery, an ileostomy may be a risk factor for repeat abdominal surgery (OR, 11.437; 95% CI, 1.451 to 90.124; P = 0.021). Emergency surgery (OR, 4.994; 95% CI, 2.123 to 11.745; P < 0.001) and delayed diagnosis of CD after surgery (OR, 2.339; 95% CI, 1.147 to 4.771; P = 0.019) also increased the risk of repeated abdominal surgery.
DISCUSSION
According to the European consensus on definitions and diagnosis of CD, recurrence is primarily used to define the reappearance of lesions after surgical resection while relapse refers to the reappearance of symptoms [9]. Recurrence is the main problem during the treatment of CD. Knowing the risk factors for recurrence would be helpful in managing CD not only for physicians but also for patients. However, due to the heterogeneous nature of CD, the reported risk factors for recurrence after abdominal surgery for CD patients have been inconsistent, so clarifying the risk factors for repeated abdominal surgery in CD patients is difficult. The recurrence rate varies according to the diagnostic method, follow-up duration and ethnicity. Despite the obvious limitation of using age at diagnosis as a surrogate marker of disease onset, it is nevertheless an attractive, readily available, and stable criterion for distinguishing different disease patterns at diagnosis [10]. With respect to age of onset, the Montreal classification allows for early onset of disease to be categorized separately as a new A1 category for patients with a diagnosis age of 16 years or younger whereas A2 and A3 account for diagnosis ages of 17 to 40 years and of over 40 years, respectively [11]. This change reflects many studies that have reported that specific serotypes or genotypes are more frequently found in early onset CD [12][13][14][15]. Early onset CD is usually more severe than A2 or A3 onset. Thia et al. [10] observed that young patients had a tendency toward recurrent clinical flares, indicating a more active disease, and were more likely to receive immunosuppressive therapy than older patients. Polito et al. [16] observed that compared to an older age at diagnosis, defined as individuals older than 40 years of age, a younger age at diagnosis, defined as individuals under 20 years of age, was associated with greater small bowel involvement, more severe stricturing disease, and a higher surgical rate. These results are consistent with the results from the present study. A young age, less than 16 years, was a risk factor for repeated abdominal surgery in CD patients.
Several studies have reviewed early disease onset or diagnosis among CD patients with a family history [16,17]. Polito et al. [16] postulated that the role of genetic anticipation with greater contribution of maladaptive genes led to an earlier onset of disease manifestation and greater severity in patients with a positive family history. Given that only 2% of the patients in the present study had a positive family history, other reasons may account for young patients experiencing a more severe clinical course than older patients in the present study cohort [10].
Non-stricturing, non-penetrating, stricturing, and penetrating disease behaviors are the main categories in the classification of CD patients. The present study revealed that stricturing disease behavior was associated with repeated abdominal surgery in CD patients. This result differs markedly from other reports, in which penetrating behavior was a risk factor for postsurgical recurrence [8,[18][19][20]. However, Khoury et al. [21] reported that stricturing behavior was a risk factor for early reoperation. Concerning disease location, several studies demonstrated that the risk of recurrence was highest in ileocolonic CD and lowest in colonic CD [22][23][24]. In the present study, disease location was not a risk factor for repeated abdominal surgery in CD patients. The influences of disease location and disease behavior on the postoperative recurrence of CD are unclear because these clinical characteristics have been observed to change over time [25]. Among the surgical procedures performed in the first abdominal surgery, only the ileostomy increased the risk for repeated abdominal surgery in CD patients. Because the diseased bowel was not resected in that procedure, an increased reoperation rate was inevitable. However, contrary to other reports [26][27][28][29], strictureplasty was not associated with repeated abdominal surgery. Likewise, unlike in other reports [30][31][32], in the present study, segmental colon resection was not a risk factor for repeated abdominal surgery in CD patients. Emergency surgery and delayed diagnosis of CD after surgery could increase the risk for repeated abdominal surgery in CD patients, although Hellberg et al. [33] demonstrated that no difference in recurrence existed between emergency and elective surgery. A limited resection may not be achievable in an emergency situation because determining the extent of resection during emergency surgery is difficult; understandably, repeated abdominal surgery could then become necessary. Similarly, in the presence of an intra-abdominal abscess, surgeons usually tend to perform less extensive surgery. Therefore, the probability of recurrence might be higher after the first abdominal surgery if the indication for the first operation had been an intra-abdominal abscess.
The present study had several limitations. First, some data were missing because of the retrospective collection technique, which could affect the results. Second, the efficacy of the medical treatment after surgery was not analyzed. Medical treatment, such as the use of immunomodulators or anti-tumor necrosis factor-α, after abdominal surgery for CD could decrease the recurrence rate and affect the results of the present study.
In conclusion, young age (under 16 years of age), stricturing behavior, intra-abdominal abscess, emergency surgery, and delayed diagnosis after surgery may be risk factors for repeated abdominal surgery in Korean patients with CD.
"year": 2012,
"sha1": "7b597a02da9aa4ff1fbc82d6b56cd36ee240e185",
"oa_license": "CCBYNC",
"oa_url": "http://coloproctol.org/upload/pdf/jksc-28-188.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7b597a02da9aa4ff1fbc82d6b56cd36ee240e185",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Functional Selectivity of Coumarin Derivates Acting via GPR55 in Neuroinflammation
Anti-neuroinflammatory treatment has gained importance in the search for pharmacological treatments of different neurological and psychiatric diseases, such as depression, schizophrenia, Parkinson’s disease, and Alzheimer’s disease. Clinical studies demonstrate a reduction of the mentioned diseases’ symptoms after the administration of anti-inflammatory drugs. Novel coumarin derivates have been shown to elicit anti-neuroinflammatory effects via G-protein coupled receptor GPR55, with possibly reduced side-effects compared to the known anti-inflammatory drugs. In this study, we, therefore, evaluated the anti-inflammatory capacities of the two novel coumarin-based compounds, KIT C and KIT H, in human neuroblastoma cells and primary murine microglia. Both compounds reduced PGE2-concentrations likely via the inhibition of COX-2 synthesis in SK-N-SH cells but only KIT C decreased PGE2-levels in primary microglia. The examination of other pro- and anti-inflammatory parameters showed varying effects of both compounds. Therefore, the differences in the effects of KIT C and KIT H might be explained by functional selectivity as well as tissue- or cell-dependent expression and signal pathways coupled to GPR55. Understanding the role of chemical residues in functional selectivity and specific cell- and tissue-targeting might open new therapeutic options in pharmacological drug development and might improve the treatment of the mentioned diseases by intervening in an early step of their pathogenesis.
Introduction
A growing body of research demonstrates the substantial role of neuroinflammation in neurological and psychiatric diseases, such as Alzheimer's Disease (AD), Parkinson's Disease (PD), schizophrenia, and depression [1][2][3][4]. Pharmacological mechanisms targeting neuroinflammation might therefore open new options in the treatment of the mentioned diseases. Agonism and overexpression of GPR55 are associated with cancer proliferation [22], metabolic diseases, such as obesity and diabetes [23], and decreased osteoclast formation [24]. Therefore, GPR55 antagonists might reverse negative GPR55-mediated effects and open new therapeutic options in the treatment of several diseases. Various in vivo and in vitro studies with central nervous cells or tissues and with model organisms have focused on the effects of GPR55 expression, agonists, and antagonists in different conditions and diseases. A mouse model for AD, 5xFAD mice, showed a higher expression of GPR55 in the hippocampus compared to heterozygotic and wildtype mice, together with impairments in novel object recognition [25]. In a chemically induced murine PD model, chronic abnormal cannabidiol (a GPR55 agonist) treatment improved motor functions and acted neuroprotectively [26]. GPR55 agonists as well as antagonists had beneficial effects on motor coordination and sensorimotor deficits in 6-hydroxydopamine-induced PD symptoms in rats [27], suggesting a more complex role of GPR55 in PD. Furthermore, intrahippocampal administration of the GPR55 agonist O-1602 protected neural stem cells against LPS-induced inflammatory insults [28]. In another study, intracerebroventricular injection of O-1602 induced anxiolytic effects in an elevated plus-maze test in rats, whereas ML 193 led to increased anxiety-like behavior [29]. In corticosterone-treated rats, O-1602 reversed depressive-like behavior and normalized increased levels of interleukin (IL)-1β and tumor necrosis factor (TNF)α [30]. GPR55-knockout mice failed to develop hyperalgesia to mechanical stimuli, suggesting GPR55 to be a promising target for treating inflammatory and neuropathic pain [31]. The featured studies indicate a complex role of GPR55 in neurological and psychiatric diseases, with agonism as well as antagonism being beneficial depending on the concrete situation. An association of GPR55 alterations with psychiatric diseases has been shown in human studies as well. In suicide victims without any diagnosed mental illness, decreased GPR55 and CB2 gene expression together with increased GPR55-CB2 heteromers were found in the dorsolateral prefrontal cortex (DLPFC), suggesting a potential involvement of GPR55 in impulsivity and decision-making in suicide [32]. The single nucleotide polymorphism Gly195Val of GPR55 is associated with an increased risk of Anorexia nervosa (AN) in a study comparing Japanese AN patients with an age-unmatched control group [33].
GPR55 transduces extracellular signals via Gα 12/13 [34] and Gα q [35], resulting in the phosphorylation and activation of phospholipase C, protein kinase C (PKC), and mitogen-activated protein kinases (MAPK) such as p38 MAPK and extracellular signal-regulated kinase (ERK), followed by the activation of transcription factors [19]. The activation of the different pathways is complexly regulated and might differ between ligands [36]. These phenomena might be explained by different primary active states of one receptor in response to different ligands, resulting in distinct conformations responsible for selective pathway activation, also referred to as functional selectivity [37,38]. As shown in a previous study [13] comparing the three coumarin-based compounds KIT 3, KIT 17, and KIT 21, the PGE 2 -reducing effects of such compounds differ enormously, probably depending on their chemical residues, which might be explained by functional selectivity of the antagonists. In contrast to GPR55 agonists, GPR55 antagonists are characterized by a head region whose most electronegative region lies near the end of the central portion, whereas agonists carry the electronegativity in the head region. Furthermore, GPR55 antagonists show an aromatic or heterocyclic ring that protrudes out of the binding pocket of GPR55, potentially preventing any conformational change [19]. Therefore, the different residues of the tested coumarin-based compounds [13,14] might determine how deeply the compounds fit into the binding pocket and, consequently, how potently they change the receptor's state, which in turn determines the extent of the biological effects.
For the current study, two newly synthesized coumarin derivates named KIT C and KIT H were investigated in human neuroblastoma cells and in primary microglial cell cultures of mice. The effects of KIT C and KIT H on the COX-2/PGE 2 pathway and on pro- and anti-inflammatory mediators were evaluated in comparison to the commercial GPR55 agonist O-1602 and antagonist ML 193.
Effects of the Compounds on Cell Viability
Results of the performed MTT cell viability assay for the used compounds are presented in Figure 1. Neither KIT C (light grey bars) nor KIT H (dark grey bars), as shown before [17], nor O-1602 (light blue bar) or ML 193 (blue bar) showed cytotoxic effects in IL-1β-stimulated SK-N-SH cells compared to untreated cells. KIT C in concentrations of 5 and 10 µM, 1 µM KIT H, and 25 µM ML 193, on the contrary, significantly increased cell viability or metabolism. Ethanol, used as a positive control, strongly induced cell death as expected. Since none of the compounds elicited cytotoxic effects in the concentrations tested, we proceeded with further experiments.
Figure 1. Effects of KIT C (light grey bars), KIT H (dark grey bars), O-1602 (light blue bar), and ML 193 (blue bar) on cell viability in IL-1β-stimulated SK-N-SH cells (24 h treatment). Cell viability was measured by the change in color due to MTT-oxidation, and absorbance was measured at 595 nm using an ELISA-reader. Values are presented as the mean ± SEM of at least three independent experiments. Statistical analysis was performed using one-way ANOVA with Dunnett's post hoc tests with * p < 0.05, *** p < 0.001, **** p < 0.0001 compared to untreated cells. The figure is derived from our previous publication [17].
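As a minimal sketch of the readout and statistics behind Figure 1, the following Python example normalizes MTT absorbance to the untreated control and applies one-way ANOVA with Dunnett's post hoc test against that control. All absorbance values and replicate counts are invented, and scipy.stats.dunnett requires SciPy 1.11 or newer.

```python
# Hypothetical MTT analysis: percent viability vs. untreated control,
# then ANOVA and Dunnett's test, mirroring the caption of Figure 1.
import numpy as np
from scipy.stats import f_oneway, dunnett

untreated = np.array([0.82, 0.85, 0.80, 0.84])   # invented absorbances (595 nm)
kit_c_5um = np.array([0.95, 0.98, 0.93, 0.97])   # invented replicates
ethanol   = np.array([0.11, 0.09, 0.12, 0.10])   # positive control for cell death

def pct_viability(group, control):
    """Express each replicate as percent of the mean untreated signal."""
    return 100.0 * group / control.mean()

print("KIT C 5 uM:", pct_viability(kit_c_5um, untreated).mean(), "%")
print("ANOVA p:", f_oneway(untreated, kit_c_5um, ethanol).pvalue)

# Dunnett compares each treatment against the single untreated control,
# matching the "compared to untreated cells" significance marks.
res = dunnett(kit_c_5um, ethanol, control=untreated)
print("Dunnett p-values:", res.pvalue)
```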
Effects of the Compounds on IL-1β-Induced PGE 2 -Release
Since PGE 2 is the central molecule in the AA/COX-2/PGE 2 pathway and acts pro-inflammatory, we next investigated the effects of KIT C, KIT H, O-1602, and ML 193 on PGE 2 -release in IL-1β-stimulated SK-N-SH cells. KIT C (light grey bars), as well as KIT H (dark grey bars), showed a significant and concentration-dependent reduction of IL-1β-induced PGE 2 -levels (Figure 2) starting at concentrations of 5 µM. KIT H elicited a more potent PGE 2 -reduction than KIT C, reaching basal PGE 2 -concentrations of untreated cells at the concentration of 25 µM. ML 193 (blue bar), a GPR55 antagonist, also showed significant inhibition of IL-1β-mediated PGE 2 -release, with an effect size between KIT C and KIT H. O-1602 (light blue bar), a potent GPR55 agonist, did not significantly inhibit IL-1β-induced PGE 2 -synthesis.
Figure 2. Cells were stimulated as described under material and methods. After 24 h of stimulation, supernatants were collected and the release of PGE 2 was measured by EIA. Values are presented as the mean ± SEM of at least three independent experiments. Statistical analysis was performed using one-way ANOVA with Dunnett's post hoc tests with * p < 0.05, *** p < 0.001, **** p < 0.0001 compared to IL-1β.
GPR55 Activity of KIT C and KIT H
To prove whether the observed anti-inflammatory effects of KIT C and KIT H are mediated via GPR55, a GPR55 activation assay was performed (Figure 3). AM251 (1 µM), a GPR55 agonist with additional activities at CB1- and CB2-receptors, and LPI (10 µM), the physiological agonist of GPR55, were used as positive controls. KIT C in concentrations of 5 and 10 µM showed an about 4-fold higher GPR55 activation than 1 µM AM251 without reaching significance but showing a clear trend. KIT H revealed an about 2-fold but not significantly higher GPR55 activation than 1 µM AM251 at all tested concentrations, comparable to the GPR55 activation capacity of 10 µM LPI.
Figure 3. GPR55 activation by AM251 (black bar), LPI (white bars), KIT C (light grey bars), and KIT H (dark grey bars) in HEK293T-GPR55 cells. Cells were treated as described under material and methods. After 6 h of stimulation, cells were lysed, and the luciferase activity was measured. Values are presented as the mean ± SEM of at least three independent experiments. Statistical analysis was performed using one-way ANOVA with Dunnett's post hoc tests compared to 1 µM AM251.
Effects of KIT C and KIT H on COX-2 mRNA and Protein Levels
To investigate the underlying mechanisms of the strong PGE 2 -reduction, COX-2 synthesis and expression were evaluated using Western Blot (Figure 4A) and qPCR (Figure 4B). COX-2 protein synthesis was potently increased by IL-1β compared to the untreated control. Of the KIT C pre-treatments (light grey bars), only 5 µM reduced IL-1β-stimulated COX-2 levels in SK-N-SH cells. KIT H (dark grey bars) significantly reduced IL-1β-mediated COX-2 synthesis starting with the concentration of 1 µM. As shown in Figure 4B, COX-2 mRNA expression was potently induced by IL-1β-stimulation for 4 h. Whereas KIT H (dark grey bars) did not affect IL-1β-induced COX-2 expression, KIT C (light grey bars) significantly enhanced IL-1β-induced COX-2 expression in concentrations of 0.1, 5, and 10 µM, which contrasts with the Western Blot results. Treatment with KIT C for different time points (2, 4, 8, 12, and 24 h) followed by the analysis of COX-2 protein synthesis and mRNA expression did not explain the observed diverging effects on COX-2 synthesis and expression (Supplementary Figure S1), so we can exclude effects based on differences in the IL-1β-stimulation time course. COX-2 mRNA expression was higher than or at least comparable to the IL-1β positive control at all time points, whereas COX-2 protein levels were first detectable after 8 h, at higher levels than in cells treated with IL-1β alone, and started to decrease after 12 h of stimulation. After 24 h, COX-2 expression remained comparable to or higher than in IL-1β-treated cells, while protein levels were significantly reduced as shown in Figure 4.
Figure 4. Values are presented as the mean ± SEM of at least three independent experiments. Statistical analysis was performed using one-way ANOVA with Dunnett's post hoc tests with * p < 0.05, ** p < 0.01, *** p < 0.001 and **** p < 0.0001 compared to IL-1β.
Effects of KIT C and KIT H on COX-Activity
Besides COX synthesis and expression, the enzyme activities of COX-1 (Figure 5A) and COX-2 (Figure 5B) were examined as another possible mechanism of PGE 2 -reduction independent of COX synthesis and expression. Neither KIT C (light grey bars) and KIT H (dark grey bars) nor the GPR55 agonist O-1602 and the antagonist ML 193 affected COX-1 or COX-2 activities in concentrations between 0.1 and 10 µM. Both COX inhibitor controls potently decreased COX activities. The selective COX-1 inhibitor SC-560 decreased COX-1 activity by about 70% but did not reach significance. The COX-1 and COX-2 inhibitor diclofenac in concentrations of 0.1 and 1 µM significantly reduced COX-2 activity.
Effects of KIT C and KIT H on COX-1 and mPGES-1 Expression
Next, we studied the effects of KIT C (light grey bars) and KIT H (dark grey bars) on two other important enzymes involved in the AA/PGE 2 pathway, COX-1 and mPGES-1. The expression of both enzymes was evaluated using qPCR. The expression of mPGES-1 (Figure 6A) was strongly induced by IL-1β-treatment for 4 h, and 10 µM KIT C slightly but significantly increased mPGES-1 expression compared to the IL-1β positive control. KIT H did not affect IL-1β-stimulated mPGES-1 expression. COX-1 expression (Figure 6B) was decreased by stimulation with IL-1β, and KIT C, as well as KIT H, partially ameliorated the IL-1β-induced reduction of COX-1 expression.
Figure 6. Effects of KIT C (light grey bars) and KIT H (dark grey bars) on mPGES-1 (A) and COX-1 mRNA expression (B) in IL-1β-stimulated SK-N-SH cells. Cells were stimulated as described under material and methods. After 4 h of stimulation, RNA was isolated and mRNA levels of the shown target genes were measured using qPCR. Values are presented as the mean ± SEM of at least three independent experiments. Statistical analysis was performed using one-way ANOVA with Dunnett's post hoc tests with * p < 0.05 and ### p < 0.001 compared to IL-1β (A) or to untreated cells (B).
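The relative mRNA levels reported here are typically derived from qPCR Ct values; one common scheme is the 2^(-ΔΔCt) method. The sketch below illustrates that calculation only; the paper does not state its exact quantification procedure, and the Ct values and the reference gene are invented.

```python
# Hypothetical illustration of relative qPCR quantification by the
# 2^(-ddCt) (Livak) method; all Ct values below are invented.
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Return fold change of target mRNA relative to the control condition,
    normalized to a reference (housekeeping) gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# e.g. a target gene after IL-1beta + 10 uM compound vs. IL-1beta alone:
print(fold_change(ct_target_treated=24.1, ct_ref_treated=17.9,
                  ct_target_control=24.9, ct_ref_control=18.0))
# A value > 1 indicates increased expression relative to the IL-1beta control.
```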
Effects of KIT C and KIT H on IL-1β-Induced Cytokine Release
Besides the AA/PGE 2 pathway, the effects of KIT C (light grey bars) and KIT H (dark grey bars) on IL-1β-induced IL-6 as a pro-inflammatory and IL-10 as an anti-inflammatory cytokine were investigated (Figure 7). Stimulation with IL-1β for 24 h potently induced IL-6 release in SK-N-SH cells, but neither KIT C nor KIT H nor O-1602 affected IL-6 production (Figure 7A). The GPR55 antagonist ML 193, however, significantly reduced the IL-1β-induced IL-6 release by about 60% in SK-N-SH cells.
IL-1β reliably induced IL-10 mRNA expression in SK-N-SH cells as shown by qPCR, and KIT C as well as KIT H enhanced IL-1β-stimulated IL-10 mRNA levels at concentrations of 10 µM compared to the IL-1β positive control (Figure 7B).
Figure 7. After 24 h of stimulation, supernatants were collected and the release of IL-6 was measured by ELISA (A). After 4 h of stimulation, RNA was isolated and the mRNA levels of the shown target genes were measured using qPCR. Values are presented as the mean ± SEM of at least three independent experiments. Statistical analysis was performed using one-way ANOVA with Dunnett's post hoc tests with * p < 0.05, *** p < 0.001 and **** p < 0.0001 compared to IL-1β.
Effects of KIT C and KIT H on PGE 2 -and IL-6 Release in LPS-Stimulated Primary Mouse Microglia
The promising results of KIT C and KIT H in SK-N-SH cells were re-evaluated in primary mouse microglia as preliminary data for follow-up studies. Only KIT C (light grey bars), but not KIT H (dark grey bars), significantly reduced PGE 2 -levels after LPS-induction in primary microglia (Figure 8A). At 10 µM, KIT C reduced PGE 2 -levels to concentrations comparable to those of untreated primary microglia.
LPS-stimulation for 24 h potently induced IL-6 production in primary mouse microglia as well (Figure 8B). KIT C (light grey bars) significantly decreased LPS-induced IL-6 synthesis by about 50%, and KIT H (dark grey bars) showed a non-significant trend toward reduced IL-6 release in primary mouse microglial cells.
Discussion
The current study investigates the anti-neuroinflammatory effects of two novel coumarin derivates, KIT C and KIT H, in SK-N-SH cells as well as in primary microglial cell cultures. Selected results for KIT C and KIT H were compared to the commercial GPR55 agonist O-1602 and the GPR55 antagonist ML 193. None of the tested compounds showed toxic effects in SK-N-SH cells. KIT H demonstrated the most potent reduction of IL-1β-induced PGE 2 -levels compared to KIT C and ML 193, whereas O-1602 did not affect PGE 2 -levels. In primary mouse microglia, KIT C decreased LPS-induced PGE 2 -release, whereas KIT H did not affect PGE 2 -levels significantly. Although KIT C increased IL-1β-induced COX-2 expression in SK-N-SH cells, both coumarin derivates decreased COX-2 protein synthesis. Neither COX-1 nor COX-2 enzymatic activity was affected by the tested compounds; thus, the PGE 2 -decreasing effects are not due to direct inhibition of those enzymes as shown for classical NSAIDs. Furthermore, the IL-1β-mediated reduction of COX-1 expression was partially reversed by KIT C and KIT H. In SK-N-SH cells, ML 193 significantly decreased IL-1β-induced IL-6 levels, but neither KIT C nor KIT H nor O-1602 affected IL-6 release. In primary microglia, however, KIT C but not KIT H significantly inhibited LPS-induced IL-6 release. IL-1β-mediated IL-10 expression was significantly increased by KIT C and KIT H in SK-N-SH cells.
The anti-neuroinflammatory potential of related coumarin derivates (KITs) has been shown before in primary rat microglia, where they potently inhibited LPS-induced PGE 2 -release [13,14]. Our group showed a strong reduction of PGE 2 -synthesis, COX-2 gene expression, and protein synthesis as well as mPGES-1 protein synthesis after treatment with KIT 17 [13]. KIT 10 also reduced COX-2 and mPGES-1 protein synthesis [14]. Neither KIT 17 nor KIT 10 affected COX-2 enzymatic activity, whereas KIT 17, but not KIT 10, increased COX-1 activity [13,14]. The reduction of COX-2 protein synthesis demonstrated for KIT 17 was replicated in the current study for KIT C and KIT H. Since COX-2 catalyzes a key step in the enzymatic transformation of arachidonic acid to PGH 2 [39], the reduction of COX-2 protein levels might be at least partially responsible for the observed PGE 2 -reduction. mPGES-1, the enzyme responsible for the final step in the synthesis of PGE 2 out of PGH 2 [40], could not be investigated on the protein level in the current study, since SK-N-SH cells showed a high basal mPGES-1 signal (data not shown). This might be explained by cross-reaction of the available antibodies, which potentially bind to cPGES or mPGES-2 as well and therefore mimic high basal mPGES-1 protein levels. Thus, the observed reduction of PGE 2 in SK-N-SH cells is likely, and at least in part, mediated by decreased COX-2 protein levels. The role of mPGES-1 in the observed decrease of PGE 2 after KIT C and KIT H pretreatment, however, needs to be evaluated in future studies. Since COX-1 expression was not significantly affected by KIT C or KIT H compared to the untreated control, gastrointestinal ulcers as a side effect of COX-1 inhibition in vivo may be prevented. However, COX-1 protein levels were not examined in the current studies. The observed effects of KIT C and KIT H in LPS-stimulated primary mouse microglia are likely to be mediated by mechanisms similar to those in IL-1β-stimulated SK-N-SH cells. Therefore, further studies are necessary to investigate the effects of coumarin derivates on COX-1 protein synthesis and gastrointestinal effects as well as the responsible pathways and mechanisms in primary microglia. The reduction of the pro-inflammatory IL-6 in LPS-stimulated primary microglia and the enhanced expression of the anti-inflammatory IL-10 in IL-1β-stimulated SK-N-SH cells after treatment with KIT C and KIT H support the anti-inflammatory potential of these coumarin compounds. Since the results of KIT C and KIT H in SK-N-SH cells as well as the preliminary data in primary mouse microglia are promising, future animal experiments with both compounds are ethically justifiable.
The signal transduction induced by GPR55 activation and targeted by KIT C and KIT H, which results in the shown COX-2 protein reduction, might involve different downstream pathways associated with the GPR55. We previously analyzed different downstream pathways of the GPR55 using KIT 10 in primary microglia, demonstrating no effects on the mitogen-activated protein kinase (MAPK) pathway but a reduced LPS-induced IκB-α phosphorylation [14]. As shown in previous publications, PGE 2 -suppression as well as decreased COX-2 protein levels can be achieved by inhibiting the NF-κB and MAPK pathways [41]. For the current study, different pathways, such as the MAP kinases (phospho-Erk1/2, p38 MAPK, SAPK/JNK), IκB-α, NF-κB, protein kinase C (PKC), nuclear factor of activated T-cells (NFAT), nerve growth factor (NGF), and brain-derived neurotrophic factor (BDNF), were examined, but neither KIT C nor KIT H significantly affected the investigated pathways (data not shown).
The inhibition of oxidative stress/isoprostane pathways independent of COX-2 might also be responsible for the reduction of PGE 2 by KIT C and KIT H, as discussed in our previous study [42], given that COX enzymatic activity is enhanced by oxidative stress [43]. In this and a previous study, KIT C as well as KIT H decreased oxidative stress and 8-iso-PGF 2α synthesis via GPR55, which is coupled to the inhibition of PGE 2 and might therefore affect the AA pathway through reduced COX activity [17]. However, COX enzymatic activity was not significantly affected by treatment with KIT C or KIT H in the current study. Inhibition of phospholipase A2 (PLA2) reduces the concentration of available AA in the cells, and PGE 2 -levels are reduced as a consequence [44]. In the current study, we did not assess PLA2 expression, protein levels, or activity, so we are not able to exclude a PLA2-dependent mechanism of the PGE 2 -suppression. For this reason, the molecular targets of KIT C and KIT H leading to the PGE 2 /COX-2 reduction need to be further investigated in future studies.
In contrast to the decrease of COX-2 protein levels in SK-N-SH cells stimulated with IL-1β for 24 h, mRNA concentrations were significantly increased by the treatment with KIT C, and as a tendency after treatment with KIT H, in cells stimulated for 4 h. We therefore conducted a time-course experiment with 10 µM of both compounds, investigating protein as well as mRNA levels after 2, 4, 8, 12, and 24 h of IL-1β-stimulation. After 2 and 4 h of stimulation, we were not able to detect COX-2 by Western Blot. Protein levels in KIT C- and KIT H-pretreated cells were increased compared to cells treated with IL-1β only at 8 h (KIT C and KIT H) and 12 h (KIT C only). KIT H-pretreated cells showed less COX-2 protein after 12 h of stimulation. COX-2 mRNA levels were significantly increased by both KITs after only 4 h of stimulation but were higher than or at least comparable to the IL-1β positive control at all time points. Therefore, the difference between mRNA levels and protein levels of COX-2 cannot be explained by the dynamics over time. It has been shown that COX-2 mRNA stability is mediated by Erk1/2 and can be affected by G-protein coupled receptors (GPCRs). The Kaposi sarcoma virus oncogenic protein (vGPCR), for example, is associated with COX-2 overexpression and enhances the mRNA stability of COX-2 [45]. Reduced mRNA stability, in contrast, leads to an earlier degradation of COX-2 mRNA, so even high levels of mRNA might not result in high levels of COX-2 protein, because the mRNA is available for translation for a shorter time. Since the inhibition of pathways such as Erk1/2 coupled to the GPR55 has been shown to affect mRNA stability, this could be a possible explanation for increased COX-2 mRNA levels not leading to increased protein levels. However, phosphorylation of Erk1/2 was not affected by treatment with KIT C or KIT H in SK-N-SH cells (not shown). Another mechanism leading to reduced COX-2 protein levels despite increased mRNA concentrations might be a ubiquitin-mediated degradation of the COX-2 protein. It has been shown that Centromere Protein U (CENPU) reduces the ubiquitination as well as the degradation of COX-2 in breast cancer [46]. Even if a CENPU-dependent modulation of COX-2 degradation is not likely to be responsible for the observed effects of KIT C and KIT H on COX-2 protein and mRNA levels, enhanced COX-2 degradation might be triggered by other pathways such as protein kinase C. Further studies are needed to elucidate the underlying mechanisms of the observed effects of KIT C and KIT H on the AA pathway.
The observed reduction of COX-2 protein levels as well as of PGE 2 after 24 h of IL-1β-stimulation was not accompanied by a reduction of COX-1 or COX-2 enzymatic activity. The enzymatic activity was measured after 30 min of treatment with KIT C and KIT H, so it is independent of changes in the protein levels of COX-1 or COX-2, as demonstrated in the time course (Supplementary Figure S1), which shows that changes in COX-2 levels can be found at the earliest after 8 h. These results indicate that KIT C and KIT H do not specifically bind to and inhibit the COX-1 or COX-2 enzymes but rather affect enzyme synthesis via pathways coupled to the GPR55.
It has been shown and suggested that coumarin derivates (KITs) exert their biological effects by GPR55 antagonism with inverse agonistic activity at the GPR55 [13,14,16]. However, neither for KIT C nor for KIT H have radioligand assays been performed. Instead, a GPR55 activation assay was performed, demonstrating a biological activity of KIT C and KIT H at the receptor comparable to the known agonists LPI and AM251. The GPR55 activation data support the hypothesis that KIT C and KIT H exert their effects by acting directly at GPR55. Therefore, we further studied the effects of the commercially available GPR55 agonist O-1602 as well as the GPR55 antagonist ML 193. Interestingly, the GPR55 antagonist ML 193, but not the agonist O-1602, showed biological effects comparable to KIT C and KIT H in most experiments, suggesting an antagonistic activity with additional inverse agonistic activity at the GPR55 for KIT C and KIT H, as shown for the previously studied coumarin derivates [16,21]. In line with this hypothesis, the decrease of IL-1β-induced PGE 2 -levels after ML 193 treatment lies between the effect sizes of the PGE 2 -inhibition achieved by KIT C and KIT H. Our previous study, investigating the anti-oxidative effects of KIT C and KIT H in SK-N-SH cells, furthermore demonstrated the GPR55-dependency of the effects of both compounds, since their anti-oxidative effects were abolished after GPR55 knockout in SK-N-SH cells [17]. The molecular structure of a ligand determines either agonistic or antagonistic (respectively, inverse agonistic) effects at the receptor [19]. All KITs share the coumarin scaffold with different chemical residues, which might substantially change the biological effects via GPR55. This might explain the observed differences between the effect sizes of KIT C and KIT H in different experiments. In a previous study, the efficacy of KIT 3, KIT 17, and KIT 21 showed enormous differences in the magnitude of inhibition of LPS-induced PGE 2 -release [13], underlining the hypothesized importance of the chemical residues. The chemical residues and the distribution of electronegativity in the molecules determine the position of the molecule and the depth of binding in the GPR55 binding pocket and, therefore, the triggered effects [19]. Furthermore, heterocyclic or aromatic residues are described as a common characteristic of GPR55 antagonists, protruding out of the binding pocket and stabilizing the receptor's "off" conformation [19], a feature that at least KIT H exhibits. Since we observe effects that suggest an inverse agonistic activity of the substances, the lack of a heterocyclic or aromatic ring in KIT C does not per se question its GPR55 affinity as an antagonist with inverse agonistic activity. On a biomolecular level, functional selectivity and differences in the conformational change [37,38] after binding of the compounds to GPR55, arising from the differences in the chemical residues, might explain the different effects of the tested coumarin derivates. However, the underlying mechanisms of the observed differences between KIT C and KIT H cannot be resolved by the present study. Therefore, future research is necessary to identify the exact role of the chemical residues in the biological effectiveness, which might open new options in pharmacological drug design based on the coumarin scaffold.
Interestingly, the effects of KIT C and KIT H also vary depending on the cell type used. For example, KIT C but not KIT H showed a significant reduction of LPS-induced PGE2 in primary mouse microglia. In SK-N-SH cells, however, KIT H showed a stronger inhibition of PGE2 than KIT C. Since we compared two different cell types, namely neuroblastoma (neuronal) and microglial cells, the observed diverging effects might be explained by differences in GPR55 density on the cell surface, coupled Gα proteins, and associated downstream pathways, as shown for other GPCRs. For example, for muscarinic agonists such as carbachol, weak efficacy in combination with differing levels of receptor affinity leads to cell- or tissue-dependent selectivity at the muscarinic GPCRs [37]. Furthermore, differences in receptor density and G-protein coupling depending on the cell cycle phase have been shown before for calcitonin [47]. Since the investigated cells serve different functions in the CNS, with microglia fulfilling a key function in immune defense and neuroinflammation [48], the different effects might also be explained by the different physiological cell functions. Neurons are primarily affected by the microglial inflammatory response [48] and physiologically build neuronal networks for signal transduction and processing [49].
Another possible explanation for the observed cell-dependent effects of KIT C and KIT H might be the cells' donor species. In this study, we compared human neuroblastoma cells with primary microglia of C57BL/6 WT mice. The cloning of GPR55 showed 4.3, 7, and 10 kilobase (kb) mRNA transcripts in human brain tissue, whereas the examination of the rat spleen revealed a 3.5 kb mRNA transcript [18]. A search in the NCBI Protein Database comparing GPR55 sequences revealed small differences in the number of amino acids: in humans, GPR55 consists of 319 amino acids, whereas in mice the receptor comprises 327 amino acids. Therefore, the receptor most likely differs between humans and mice, and for this reason the same compound might elicit different effects depending on the species investigated. Species-selective effects in the endocannabinoid system have been shown for rats and mice with respect to the cannabinoid receptor (CB)2: after treatment with intranasal JWH-133, a selective CB2 agonist, mice but not rats showed a reduction in the self-administration of intravenous cocaine [50]. A third possible mechanism explaining the observed species differences might be that the expression and protein synthesis of GPR55, as well as the coupled Gα proteins and pathways, vary between species, as mentioned earlier for cell-type-selective effects [37]. Further studies are necessary to distinguish the proposed mechanisms of the cell- and species-selective effects of KIT C and KIT H. Since studies in model organisms are often required before clinical studies can be implemented, differences between species are important to draw the right conclusions about doses and expected effects in humans.
The used concentrations of KIT C and KIT H as well as of O-1602 and ML 193 were chosen based on previous experience with similar coumarin derivatives [13,14,17] as well as on publications investigating the commercial GPR55 agonist and antagonist [51-54]. O-1602 is a potent agonist at GPR55 with an EC50 of 13 nM [55], and ML 193 is a potent antagonist of GPR55 with a potency of 221 nM [29]. The IC50 of other related coumarin derivatives such as KIT 10 [14] and KIT 17 [13] was between approximately 5 µM and 10 µM in radioligand assays. The IC50 data of KIT 10 and KIT 17 were determined using a cellular system and were therefore in the µM range, in contrast to the value for ML 193, which was determined using membrane preparations. Unfortunately, no IC50 values are available for KIT C and KIT H, and no Bmax or KD values are available for GPR55 and the investigated compounds. However, the used concentrations of O-1602 and ML 193 are much higher than the reported EC50 and IC50 values. Therefore, the observed effects of at least those two compounds might not be solely GPR55-dependent but also mediated by non-specific intermolecular interactions. Specific interactions are defined as ligand-receptor binding, whereas non-specific interactions can occur between ligands and other enzymes or proteins of the cells that are not the targeted receptor [56,57]. The used concentrations of KIT C and KIT H, in contrast, are at plausible levels compared to the IC50 values of other coumarin-based compounds. Moreover, for KIT C and KIT H, knockout of GPR55 in SK-N-SH cells, as described in a previously published paper, fully abolished the observed effects of both coumarin compounds on oxidative stress-induced cell death [17]. This, and the data obtained from our GPR55 activity assay, support the hypothesis that the effects of KIT C and KIT H are GPR55-dependent and not non-specific, receptor-independent interactions, even if the compounds have lower affinity to GPR55 than the commercially available ligands used.
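To put these concentrations into perspective, a back-of-the-envelope occupancy calculation can help. The sketch below is not the authors' analysis; it assumes a simple one-site binding model in which the reported EC50/IC50 approximates the half-occupancy constant, which is a deliberate simplification.

# Fractional receptor occupancy under a one-site binding model (sketch).
# Assumption: reported EC50/IC50 approximates the half-occupancy constant K.
def occupancy(ligand_um, k_um):
    return ligand_um / (ligand_um + k_um)

print(occupancy(5.0, 0.013))   # O-1602 at 5 uM vs EC50 = 13 nM   -> ~0.997
print(occupancy(25.0, 0.221))  # ML 193 at 25 uM vs IC50 = 221 nM -> ~0.991

Under this simplified model, both commercial ligands would already be close to receptor saturation at the concentrations used, which is consistent with the concern that additional, receptor-independent effects cannot be excluded.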
A growing body of research focuses on anti-inflammatory treatment in numerous neurological and psychiatric diseases. For depression, positive effects of NSAIDs and glucocorticoids were observed [5], but NSAIDs are associated with gastrointestinal ulcers, and long-term glucocorticoid treatment may cause numerous side effects, such as osteoporosis and an increased rate of infections [58]. Coumarin derivatives with anti-inflammatory characteristics and possibly milder side effects might be an alternative to the already known anti-inflammatory drugs. Some authors support the hypothesis of an inflammatory genesis of depression with the fact that autoimmune diseases, as well as depression, are observed at about twice the prevalence in women compared to men [2]. Beyond depression, anti-inflammatory drugs such as COX-2 inhibitors ameliorated psychiatric symptoms in schizophrenia, borderline personality disorder, and obsessive-compulsive disorders, whereas NSAID treatment reduced the antidepressant effects of selective serotonin reuptake inhibitors (SSRI) [4]. Furthermore, neuroinflammation is associated with neurological diseases such as PD and AD, possibly disturbing the balanced interaction of different CNS cells and leading to neurodegeneration [3].
Anti-neuroinflammatory capacities have been shown for coumarin derivatives in previous studies [13,14] as well as for KIT C and KIT H in the current study. Therefore, coumarin derivatives such as KIT C and KIT H might be promising novel compounds for the treatment of diseases with (neuro)inflammatory etiologies and pathomechanisms. Further understanding of functional selectivity [37,38] at GPR55 based on different chemical residues, as well as of species differences, might improve pharmaceutical drug design for these diseases. The evaluation of KIT C and KIT H in animal disease models might improve the understanding of the potential of both compounds in the future treatment of neurological and psychiatric diseases and reveal possible side effects of coumarin derivatives.
Chemicals
KIT C and KIT H were synthesized as described previously [17]; the commercial reference compounds O-1602 and ML 193 were obtained from Cayman Chemicals (Ann Arbor, MI, USA). Human interleukin (IL)-1β (100,000 U/mL in phosphate buffered saline (PBS)) was purchased from Roche Diagnostics (Mannheim, Germany) and was used at a final concentration of 10 U/mL for the experiments. Lipopolysaccharide (LPS) from Salmonella typhimurium (Sigma-Aldrich GmbH, Taufkirchen, Germany) was dissolved in PBS as a 5 mg/mL stock and diluted with distilled water to a final concentration of 10 ng/mL in primary microglia cultures. Figure 9 shows the chemical structures of KIT C and KIT H, which were already introduced in a previous paper [17], as well as the structures of O-1602 and ML 193 as provided by the Cayman Chemicals website (https://www.caymanchem.com (accessed on 6 January 2022)).
Ethics Statement
Animals were obtained from the Center for Experimental Models and Transgenic Services-Freiburg (CEMT-FR). All the experiments were approved and conducted according to the guidelines of the ethics committee of the University of Freiburg Medical School under protocol No. X-19/06R and the study was carefully planned to minimize the number of animals used and their suffering.
Primary Mouse Microglia Cultures
Primary mouse mixed glia cultures were prepared from 2- to 3-day-old C57BL/6 WT mice as described before [44,59-61]. Briefly, brains were carefully removed under sterile conditions and the meninges were removed. The cortices were dissociated and filtered through a 70 µm nylon cell strainer (BD Biosciences, Heidelberg, Germany). After centrifugation at 1000 rpm for 10 min, the cells were resuspended in LPS-free Dulbecco's modified Eagle's medium (DMEM) with 10% fetal calf serum (FCS; Bio & SELL GmbH, Nürnberg/Feucht, Germany) and antibiotics (DMEM and anti-anti obtained from Gibco, Thermo Fisher Scientific, Bonn, Germany) and cultured in 10 cm cell culture dishes at a density of 5 × 10^5 cells/plate (Falcon, Heidelberg, Germany) in a humidified atmosphere at 10% CO2 and 37 °C. After 12 days in vitro, floating microglia were harvested by gently shaking on an orbital shaker and re-seeded into 6-well plates at approximately 3 × 10^5 cells per well with an 80% survival rate to obtain pure microglial cell cultures. The protocol for preparing pure primary microglial cultures was established by Gebicke-Härter et al. in our laboratory and published in 1989. This publication demonstrated a purity of >98% for microglia using morphological features, immunofluorescence (monocyte/macrophage marker anti-CD68 = ED1), and cytochemical analysis [60]. On the next day, the medium was changed to remove non-adherent cells, and after 1 h, cells were stimulated for the experiments.
Cell Viability Assay
MTT assay (Sigma-Aldrich GmbH, Taufkirchen, Germany) was used for measuring the viability of SK-N-SH neuronal cells after treatment with KIT C (1, 5, and 10 µM), KIT H (1, 5, and 10 µM), O-1602 (5 µM), or ML 193 (25 µM). This assay determines the number of metabolically active cells and allows conclusions about viable cells in the culture based on the reduction of a yellow tetrazolium salt (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide, MTT) to purple formazan in the cells. Briefly, cells were cultured in 96-well plates at a density of 25 × 10^3 cells/well for 24 h. The medium was changed and, after at least 1 h, cells were pre-treated with different concentrations of the compounds for 30 min. Cells were then incubated with or without IL-1β for the next 20 h. 20 µL ethanol (approximately 20% end concentration) was used to induce cell death as a positive control. Next, 20 µL of MTT solution (working concentration 5 mg/mL) were added to all wells and incubated for another 4 h at 37 °C. Afterwards, the medium was removed and replaced with 200 µL of DMSO. The colorimetric reaction was measured using the MRX e Microplate reader (Dynex Technologies, Denkerdorf, Germany) at 595 nm.
Determination of PGE2 Release
For the determination of PGE2 release, cells were incubated for 24 h with or without IL-1β (10 U/mL) and supernatants were collected. The levels of PGE2 were measured using a commercially available enzyme immunoassay (EIA) kit (from Cayman Chemicals, Ann Arbor, MI, USA, distributed by BioMol, Hamburg, Germany) following the manufacturer's protocol. The results were normalized to IL-1β and presented as a percentage of change in PGE2 levels of at least three independent experiments. PGE2 release in primary microglia cultures was measured as described for SK-N-SH cells. Only KIT C (5 and 10 µM) and KIT H (5 and 10 µM) were evaluated in primary microglia, and LPS (10 ng/mL) was used for stimulation instead of IL-1β. Supernatants were collected after 24 h and used in the EIA following the protocol. The results were normalized to LPS and presented as a percentage of change in prostaglandin levels of at least three independent experiments.
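The percent-of-stimulus normalization described above is simple arithmetic; the sketch below illustrates it with made-up replicate values (the variable names and numbers are hypothetical, not taken from the study).

# Minimal sketch of percent-of-control normalization over hypothetical replicates.
import statistics

il1b_control = [412.0, 398.5, 405.2]   # PGE2 (pg/mL) of IL-1b-stimulated wells (hypothetical)
treated = [201.3, 188.7, 210.9]        # PGE2 of IL-1b + compound wells (hypothetical)

baseline = statistics.mean(il1b_control)  # the stimulated control defines 100%
percent_of_stimulus = [100.0 * v / baseline for v in treated]

print(percent_of_stimulus)  # each replicate expressed as % of the IL-1b control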
Determination of GPR55 Agonistic Activity
The determination of GPR55 agonistic activity was carried out using HEK293T-GPR55 cells that overexpress the human GPR55 [62]. Briefly, HEK293T-GPR55 cells were cultured in 24-well plates (10^5 cells/well) and transiently transfected with 0.2 µg of the reporter plasmid CRE-Luc, which contains six consensus cAMP responsive elements (CRE) linked to a firefly luciferase reporter gene, using Roti®-Fect (Carl Roth, Karlsruhe, Germany). Transfected cells were treated with increasing concentrations of either the test compounds KIT C and KIT H or the positive controls AM251 and LPI for 6 h. Then, the cells were washed twice with 1× PBS and lysed in 100 µL lysis buffer containing 25 mM Tris-phosphate (pH 7.8), 8 mM MgCl2, 1 mM DTT, 1% Triton X-100, and 7% glycerol for 15 min at room temperature on a horizontal shaker. Luciferase activity was measured using a TriStar2 Berthold/LB942 multimode reader (Berthold Technologies, Bad Wildbad, Germany) following the instructions of the luciferase assay kit (Promega, Madison, WI, USA). The relative light units (RLUs) were calculated, and the results were expressed as the percentage of activation over the control. The experiment was performed three times.
Cyclooxygenase Activity Assay
COX enzymatic activity was investigated using the arachidonic acid assay, as described previously [63]. For COX-1 activity, neuroblastoma cells were plated in 24-well plates and, after 24 h, the medium was removed and replaced with serum-free medium. KIT C or KIT H (0.1-10 µM) or the selective COX-1 inhibitor SC560 [(1 and 10 µM); Sigma-Aldrich GmbH, Taufkirchen, Germany] was added and left for 15 min. Then, arachidonic acid (15 µM; Sigma-Aldrich GmbH, Taufkirchen, Germany) was applied for another 15 min. Finally, supernatants were collected and used for the determination of PGE2 as described above.
Determination of IL-6 Release
Effects of KIT C and KIT H (0.1-25 µM) on IL-6 release in IL-1β-stimulated SK-N-SH cells and (at concentrations of 5 and 10 µM) in LPS-stimulated primary microglia were evaluated using ELISA. Commercially available Invitrogen™ eBioscience™ human or mouse IL-6 ELISA Ready-SET-Go!™ kits (Thermo Fisher Scientific, Bonn, Germany) were used following the manufacturer's protocol. Briefly, after pre-treatment with KIT C or KIT H, cells were stimulated for 24 h with IL-1β (SK-N-SH cells) or LPS (primary microglial cells). Supernatants were collected and stored at −80 °C for further experiments. ELISA plates (Nunc MaxiSorp™; Thermo Fisher Scientific, Bonn, Germany) were coated with IL-6 capture antibody overnight. The next day, samples were added, followed by the addition of an IL-6 detection antibody after removing the supernatants and washing the plate. The amount of bound IL-6 detection antibody was quantified using an HRP-dependent colorimetric reaction. The absorbance of the wells was read at 450 nm using the MRX e Microplate reader and calculated as % of IL-1β or LPS after blank subtraction.
Statistical Analysis
Raw values were converted to percentages, with IL-1β (10 U/mL), LPS (10 ng/mL), or the appropriate negative control (such as untreated cells for the MTT assay) set as 100%. Data are represented as mean ± SEM of at least three independent experiments. Statistical comparisons were performed using one-way ANOVA with Dunnett's post hoc test (Prism 8 software, GraphPad Software Inc., San Diego, CA, USA). The level of significance was set at * p < 0.05, ** p < 0.01, *** p < 0.001, and **** p < 0.0001 and is indicated in the figures.
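For readers who want to reproduce this kind of comparison outside Prism, the sketch below runs a one-way ANOVA followed by Dunnett's test against the stimulated control in Python. The group values are hypothetical, and scipy.stats.dunnett requires SciPy >= 1.11.

# One-way ANOVA with Dunnett's post hoc test, analogous to the Prism workflow.
from scipy import stats

control = [100.0, 103.2, 97.5]   # IL-1b alone, normalized to 100% (hypothetical)
kit_c = [62.1, 58.4, 65.0]       # IL-1b + KIT C (hypothetical)
kit_h = [71.9, 68.3, 74.2]       # IL-1b + KIT H (hypothetical)

f_stat, p_anova = stats.f_oneway(control, kit_c, kit_h)
dunnett = stats.dunnett(kit_c, kit_h, control=control)  # SciPy >= 1.11

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")
print("Dunnett p-values vs control:", dunnett.pvalue)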
Conclusions
Anti-inflammatory treatment of neurological and psychiatric diseases has been shown to alleviate disease symptoms in murine models and clinical studies. However, many available anti-inflammatory drugs are associated with considerable side effects. Therefore, the development of new pharmacological therapeutics might improve the treatment of these CNS diseases. KIT C and KIT H, likely exerting their effects via GPR55, might be promising novel anti-inflammatory strategies for future interventions by decreasing pro-inflammatory mediators such as PGE2 and IL-6. | 2022-01-19T16:24:56.021Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "a047e991471b6ec35072cbd09f4a3e68857bbd4f",
"oa_license": "CCBY",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8779649",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "41a044a22744af8857300a2d76555ce60dc12de1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260269987 | pes2o/s2orc | v3-fos-license | Vitamin D—An Effective Antioxidant in an Animal Model of Progressive Multiple Sclerosis
Vitamin D (VD) is the most discussed antioxidant supplement for multiple sclerosis (MS) patients, and many studies suggest correlations between a low VD serum level and the onset and progression of the disease. While many studies in animals as well as clinical studies focused on the role of VD in relapsing-remitting MS, knowledge is rather sparse for the progressive phase of the disease and the development of cortical pathology. In this study, we used our established rat model of cortical inflammatory demyelination, resembling features seen in late progressive MS, to address the question of whether VD could have positive effects on reducing cortical pathology, oxidative stress, and neurofilament light chain (NfL) serum levels. For this purpose, we used male Dark Agouti (DA) rats, with one group being supplemented with VD (400 IE per week; VD+) from weaning at age three weeks; the other group received standard rodent food. The rat brains were assessed using immunohistochemical markers against demyelination, microglial activation, apoptosis, neurons, neurofilament, and reactive astrocytes. To evaluate the effect of VD on oxidative stress and the antioxidant capacity, we used two different oxidized lipid markers (anti-Cu++ and anti-HOCl oxidized LDL antibodies) along with colorimetric methods for protective polyphenols (PP) and total antioxidative capacity (TAC). NfL serum levels of VD+ and VD− animals were analyzed by fourth-generation single-molecule array (SIMOA) analysis. We found significant differences between the VD+ and VD− animals both in histopathology and in all serum markers. Myelin loss and microglial activation are lower in VD+ animals, and the number of apoptotic cells is significantly reduced with a higher neuronal survival. VD+ animals show significantly lower NfL serum levels, a higher TAC, and more PP. Additionally, there is a significant reduction of oxidized lipid markers in animals under VD supplementation. Our data thus show a positive effect of VD on cellular features of cortical pathology in our animal model, presumably due to protection against reactive oxygen species. In this study, VD enhanced remyelination and protected against neuroaxonal and oxidative damage, including demyelination and neurodegeneration. However, more studies on VD dose relations are required to establish an optimal response while avoiding overdosing.
Introduction
Multiple sclerosis (MS) is a chronic disease of the central nervous system (CNS) that mainly affects young adults. It is caused by an autoimmune response to central nervous system structures, including both white and grey matter. After decades of research, many pathomechanisms behind this disease are still not fully understood. Despite the availability of immunomodulatory therapeutics, which effectively reduce inflammatory processes directed against the CNS, the symptoms still might worsen, which heavily impairs the quality of life of patients. This underlines the importance of identifying novel treatment targets, especially for the progressive disease phase (PMS), which is associated with the development of severe and irreversible disability. Vitamin D (1,25-dihydroxyvitamin D; VD) is the most discussed antioxidant supplement for several autoimmune diseases, including MS. Its functions comprise the regulation of calcium homeostasis as well as effects on the immune response. VD is not only assigned to the environmental risk factors for developing MS but is also part of a genetic factor and an epigenetic factor [1,2]. The association between VD receptor single-nucleotide polymorphisms and MS risk has been reported by many authors, with a few studies producing opposite results [3]. Oxidative stress is discussed as a mediator of demyelination and axonal damage in both MS and respective animal models [4], and VD seems to act as a regulating factor on oxidative stress [5]. Many studies, especially in relapsing-remitting MS, suggest a positive effect of higher VD serum levels on the onset, disease activity, and progression of the disease, leading to the idea of using VD as a supplementary therapy for MS patients. Several clinical trials and research in experimental animal models of MS addressed this question, resulting in mixed and sometimes even contradictory data [6-8]. The design of clinical trials can be especially problematic due to the possibility of overdose. Thus, some studies focused on safe VD supplementation, and in clinical trials, add-on therapy for short-term periods with doses up to 40,000 IU per day was considered safe. Due to the still existing knowledge gap for long-term and high-dose therapy, the supplementation of VD in MS is still a sensitive task that needs to be supervised by physicians [9,10]. Nevertheless, when applied correctly, VD could have high potential in MS therapy. Especially in PMS, establishing novel treatment approaches is an unmet need, as most available MS therapeutics are only effective in the relapsing-remitting disease phase. Similarly, much more research on the safe use of VD supplementation is available for relapsing-remitting MS compared to the progressive disease phase. Thus, the potential and possible effect of VD on this disease form or on preventing progression remains widely unclear. In the literature, there are only a few associations between low VD status and early conversion to the secondary progressive MS phase, pointing towards the role of VD in regulating immune responses against the CNS [11]. Other studies only compared a few outcomes (associations between VD serum level and imaging data) in PMS patients, without comparing to a corresponding VD-supplemented patient group, and concluded that there is no association between VD levels and visual function or brain volume in PMS [12,13].
Despite the great importance of further clinical trials to evaluate the potential of VD as an MS treatment, one big issue is the heterogeneity of the disease, genetic factors, and, of course, differences in sunlight exposure. Thus, experimental animal models are a valuable and important tool for the basic research of cellular mechanisms to achieve standardized conditions.
Since cortical pathology is sparse or absent in common MS animal models, such as experimental autoimmune encephalomyelitis, our research team developed a rat model that resembles cellular features of PMS very well [14]. In the present study, we used this model to gain more insight into the effect of VD on cortical pathology. We focus on VD as an antioxidant through analysis of the total antioxidant capacity (TAC), a parameter that is modulated either by radical overload or by supplementation of antioxidants [15]. Additionally, we measured the polyphenol (PP) concentration in the serum of our animals; these are compounds that are potentially valuable for modulating local and systemic inflammatory environments [16,17]. Histologically, we comprehensively investigated two different oxidized lipids (via anti-Cu++ and anti-HOCl oxidized LDL antibodies).
Furthermore, we assessed the effect of VD on neurofilament light chain (NfL) serum levels as a marker of axonal damage in the rats' sera through high-sensitivity single-molecule array (SIMOA) quantitation [18].
All analyses were correlated to thorough histopathological evaluation to assess differences between VD-supplemented (VD+) and non-supplemented (VD−) animals (controls).
Animals
In total, 45 male Dark Agouti (DA) rats aged 10-12 weeks, obtained from Janvier, France, underwent the protocol described in detail in Ücal et al. 2017 [14]. Animals were divided into VD-supplemented (n = 22) and not supplemented (n = 23) groups. Table 1 shows a detailed list of the animals used and their groups. An overview with corresponding schemata is given in supplementary Figure S1. Rats with implanted catheter only and no further treatments were used as healthy appearing control animals (HAC). All animal experiments were carried out under approval of the local authorities (Federal Ministry of Science and Research; 66.010/0072-WF/V/3b/2017). Table 1. Animal groups and experimental setup. In total, 45 DA rats were used during this experiment, divided into the VD + (n = 22) and VD − (n = 23) groups (indicated by the two color boxes). Blood was taken (indicated by "blood" on the timeline) before the catheter implantation (HAC), after MOG immunization ("MOG") and on days 1 to 45 after cytokine injection. The shortcut "d" stands for the day; the number of animals sacrificed in the different groups on the three days is written in the respective box, and the associated group notation is below in bold. The asterisk in group d45* indicates the second cytokine injection on d30.
Experimental Setup
All animals underwent the experimental protocol described in detail in Ücal et al. 2017 to elicit cortical pathology. The VD animal group orally received one drop of VD solution (=400 IE Vitamin D; Fresenius-Kabi, Graz, Austria) per week, starting from weaning (at the age of 3 weeks) until the end of the experiment (=VD+ group; n = 22). All other groups received standard rodent food only (=VD− group; n = 23). Briefly, the animal model starts with the catheter implantation. After a healing period of two weeks, all rats are immunized with myelin oligodendrocyte glycoprotein (MOG) in incomplete Freund's adjuvant. The MOG antibody titer is validated via ELISA after four weeks. Once the titer is sufficient, animals receive 2 µL of a cytokine mixture of interferon gamma (IFN-γ) and tumor necrosis factor alpha (TNF-α) via a programmable syringe pump through the catheter in order to open the blood-brain barrier. The peak of cortical pathology can be observed on day (d) 15, and on d30 the first traces of remyelination are detectable; a "second relapse" can even be generated by an additional cytokine injection on d30, indicated as d45* [14].
Blood Sampling, Euthanasia, and Tissue Extraction
Blood sampling was performed before catheter implantation (HAC), after MOG immunization, and on d1, d3, d15, d30, and d45* after cytokine injection. NfL was additionally measured in the sera of animals one day after catheter implantation. Serum was harvested one hour after blood sampling by centrifuging twice at 4600 rpm and was stored at −70 °C until use according to guidelines [19]. Animals were sacrificed on d15, d30, and d45* according to the protocol described in detail in Ücal et al. 2017 [14]. Briefly, anesthesia was induced with 4% isoflurane followed by cardiac injection of 25 mg Thiopental (Sandoz, Kundl, Austria). After reaching deep anesthesia, animals were transcardially perfused with 4% paraformaldehyde (PFA; Merck, Darmstadt, Germany) in phosphate-buffered saline (PBS, pH = 7.4). Brains and spinal cords were dissected and post-fixed in 4% PFA for 24 h. Only brains were used for further detailed histopathological analysis, as spinal cords are unaffected in this animal model and are only routinely checked [14].
Neuropathology and Immunohistochemistry
After embedding in paraffin, brain tissue was cut into 1.5 µm sections. For immunohistochemical (IHC) staining, sections were dewaxed in xylene (Fisher Thermo Scientific, Schwerte, Germany), rehydrated, and steamed for 1 h in citric acid (Merck) [1,14]. After incubation with 2.5% normal horse serum (Vector Laboratories Burlingame, Newark, CA, USA) for 20 min at room temperature, sections were covered with primary antibodies and incubated overnight at 4 °C. A detailed list of the used antibodies and respective dilutions is given in supplementary Table S1. The ImmPRESS System (Vector Lab., secondary antibodies) was used, visualized with 3,3′-diaminobenzidine tetrahydrochloride (DAB, Sigma-Aldrich, Buchs, Switzerland), and counterstained with hematoxylin. After dehydration, slides were covered with Shandon Consul-Mount (Fisher Thermo Scientific) and a coverslip.
Quantitative Histopathological Evaluation
One investigator blinded to the experimental groups quantified demyelination, microglial activation, apoptotic cells, neuronal cell loss, and histological oxidative stress markers with an optical grid at a magnification of 20× on the microscope (Zeiss AXIO Imager.M2). Demyelination was assessed by counting the loss of proteolipid protein (PLP) immunoreactivity, and values were then transformed to PLP loss/mm^2. Activated microglia (Iba1), apoptotic cells (Caspase-3), neurons (NeuN), and oxidative stress markers (Cu++ oxidized LDL and HOCl oxidized LDL) were assessed in three full optical grids in the cortex per hemisphere with the 20× objective. Average values were converted to cells/mm^2.
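Converting grid counts to a density is straightforward arithmetic; the sketch below assumes a hypothetical grid field area, since the actual field size of the optical grid is not stated here.

# Convert average counts per optical grid field to cells/mm^2.
# GRID_AREA_MM2 is a hypothetical value; the real field area depends on the grid/objective used.
GRID_AREA_MM2 = 0.25

counts_per_grid = [34, 41, 29]  # three full grids in one cortical hemisphere (hypothetical)
mean_count = sum(counts_per_grid) / len(counts_per_grid)
cells_per_mm2 = mean_count / GRID_AREA_MM2

print(f"{cells_per_mm2:.1f} cells/mm^2")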
Colorimetric Methods for Antioxidative Capacity and Polyphenols
To evaluate antioxidative effects, we used two different test systems, i.e., the total antioxidative capacity (TAC®; Omnignostica Ltd., Höflein an der Donau, Austria) and polyphenols (PPm®; Omnignostica Ltd., Austria), as described previously [20,21]. All serum samples were analyzed in duplicate. The TAC assay was performed according to the manufacturer's instructions with distinct modifications for measuring small sample volumes. In brief, 10 µL of standards, controls, and samples (undiluted) were pipetted into a 96-well microtiter plate. After that, 40 µL of reagent A was added to all wells within 1 min. Subsequently, 20 µL of reagent B was added to all wells within another minute. After an incubation time of precisely 20 min at 4 °C, the reaction was stopped by adding 20 µL of stop solution, and the absorbance was measured at 450 nm using a microplate reader. The PP test was performed per the manufacturer's instructions; no modifications were required.
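Absorbance readings from such a plate are typically mapped to concentrations via a standard curve. The sketch below fits a simple linear curve to hypothetical standards, assuming the kit's response is approximately linear in this range; the real kit may prescribe a different curve model.

# Fit a linear standard curve and interpolate sample concentrations.
import numpy as np

std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # mmol/L, hypothetical standards
std_abs = np.array([0.05, 0.21, 0.39, 0.74, 1.42])   # absorbance at 450 nm (hypothetical)

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # absorbance = slope*conc + intercept

sample_abs = np.array([0.33, 0.58])                  # duplicate readings of one serum sample
sample_conc = (sample_abs - intercept) / slope

print(sample_conc.mean())  # mean TAC of the duplicate, in mmol/L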
NfL Measurement Via Single-Molecule Array (SIMOA)
NfL was measured with a commercial ultrasensitive SIMOA NF-light assay on an SR-X analyzer (Quanterix, Billerica, MA, USA), based on single-molecule arrays and simultaneous counting of single captured microscopic beads carrying antibody complexes. The analytical sensitivity of this technique is manifold higher than that obtained with conventional photometric test systems, thus enabling a reliable measurement of low NfL concentrations in blood samples [18]. Advanced SIMOA NfL kits (Quanterix, MA, USA) are used for clinical questions. Fortunately, there is a high cross-reactivity for rat serum, and we could use those advanced kits in this study according to the manufacturer's instructions.
Statistical Analysis
Statistical analysis was performed using SPSS Statistics (v23, IBM, Armonk, NY, USA), and graphs were illustrated in Microsoft Excel 2010 as box-plots. The data were checked for normal distribution via the Kolmogorov-Smirnov test; we used non-parametric tests in all cases. The text body provides median values and interquartile ranges (IQR). We used the Kruskal-Wallis test followed by the Mann-Whitney U test for statistical significance testing. A difference of p < 0.05 was considered statistically significant. All statistical tests performed for each of the figures are summarized in supplementary Table S2.
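The same non-parametric workflow can be reproduced outside SPSS; the sketch below applies a Kruskal-Wallis test followed by a pairwise Mann-Whitney U comparison in Python, with hypothetical group values standing in for the real measurements.

# Kruskal-Wallis omnibus test followed by a Mann-Whitney U pairwise comparison.
from scipy import stats

hac = [8.1, 9.0, 10.2, 9.5]         # NfL pg/mL, hypothetical
vd_neg = [25.4, 31.2, 28.9, 27.1]   # VD- animals, hypothetical
vd_pos = [15.3, 18.8, 16.9, 17.5]   # VD+ animals, hypothetical

h_stat, p_kw = stats.kruskal(hac, vd_neg, vd_pos)
u_stat, p_mw = stats.mannwhitneyu(vd_neg, vd_pos, alternative="two-sided")

print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_kw:.4f}")
print(f"Mann-Whitney U (VD- vs VD+): U={u_stat:.1f}, p={p_mw:.4f}")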
VD + Animals Show a Better Preservation of Cortical Cellular Structures and Less Microglia Activation
The quantification of PLP loss is displayed in Figure 1 (a: ipsilateral side; b: contralateral side). Overall, the PLP loss is lower in VD+ animals, although reaching statistical significance only on d30 on the ipsilateral side (p = 0.05). There is also a significant difference in microglial activation detectable on the ipsilateral side on d30 (p = 0.004) and d45* (p = 0.018) (Figure 1c), and on the contralateral side (Figure 1d) on d45* (p = 0.038). Furthermore, there is no significant difference between HAC and VD+ d30 animals on either the ipsilateral or the contralateral side. Apoptotic cells are significantly reduced on the ipsilateral side on d15 (p = 0.002), d30 (p = 0.004), and d45* (p = 0.023) (Figure 1e). There is also a significantly better neuronal preservation between VD+ and VD− animals on d15 (p = 0.019) and d30 (p = 0.019), as displayed in Figure 1f.
Representative microscopic pictures of histological results are displayed in Figure 2, and all scale bars represent 100 µm. In Figure 2a, the PLP loss of a representative d15 animal is shown, with hardly any PLP fibers remaining. In comparison, Figure 2b shows a VD-supplemented d15 animal, where the PLP structures are still clearly visible. The microglial activation is much higher in d15 animals (Figure 2c) than in VD+ animals (Figure 2d). Apoptotic cells are also reduced on d15 in VD+ animals (Figure 2e, VD−; Figure 2h, VD+). Neurofilament structures are better preserved in VD-supplemented animals in comparison to normal d15 animals (Figure 2f,i). Astrocytic reactions are more pronounced in VD+ animals (Figure 2g,j).
Figure 1. Microglial activation is lower overall in VD+ animals with a significant difference on the ipsilateral side on d30 (p = 0.004) and d45* (p = 0.018), as well as on the contralateral side on d45* (p = 0.038). Quantification of apoptotic cells is shown in (e) on the ipsilateral side, with a significant reduction on d15 (p = 0.002), d30 (p = 0.004), and d45* (p = 0.023) in VD+ animals. The preservation of neurons in VD+ animals is shown in (f) on the ipsilateral side, with a significant result on d15 (p = 0.019) and d30 (p = 0.019). In diagram (g), there is a significant increase of NfL detectable after MOG immunization compared with HAC (p < 0.001). In comparison to HAC, all VD− animals showed a significant increase in NfL serum levels (d1, p < 0.001; d3, p < 0.001; d15, p = 0.020). This pattern changes in VD+ animals, where only d3 animals differ significantly from HAC (p = 0.008). Although all groups show a similar trend over d3 to d15 after cytokine injection, VD+ animals have significantly lower NfL serum levels on all days (d1, p = 0.027; d3, p = 0.033; d15, p = 0.006). Asterisks indicate significant differences.
NfL Serum Level Is Lower in VD + Animals, Representing Less Axonal Loss
As expected, the catheter implantation itself pushes the NfL serum levels to an immense increase of 356 IQR 216 pg/mL (not shown in the diagram) compared to the HAC with 9.0 IQR 3.3 pg/mL. This value, caused by the mechanical trauma, subsides during the healing period and reaches 14.4 IQR 5.5 pg/mL after MOG immunization. There is a significantly lower NfL concentration detectable in VD+ animals on d1 (p = 0.027), d3 (p = 0.033), and d15 (p = 0.006) after cytokine injection (Figure 1g). VD+ animals thereby follow the same pattern as shown for the VD− animals, with the highest NfL serum levels on d3, but in a significantly lower range.
Histological Oxidative Stress Markers Are Lower in VD + Animals
Both markers for oxidative stress show a similar trend across the experimental groups (Figure 3a,b), with a peak in d15 and d45* animals. Overall, a significant reduction in VD+ animals is detectable for both markers on d15 (Cu++-oxLDL p = 0.002; HOCl-oxLDL p < 0.001), d30 (Cu++-oxLDL p = 0.004; HOCl-oxLDL p = 0.042), and d45* (Cu++-oxLDL p < 0.001; HOCl-oxLDL p < 0.001).
Figure 2. In (a,b), the PLP staining is shown with much more PLP (brown fibers) remaining in VD+ animals. In (c,d), Iba1-positive cells (small brown dots) represent the microglial activation, which is higher in VD− animals (c). D15 VD− animals are shown in (e-g) compared to VD+ d15 animals (h-j). Caspase-3 staining (apoptotic cells) appears more frequently in VD− animals (e) than in VD+ ones (h). Red arrows indicate apoptotic cells. Neurofilament structures are better preserved in VD+ animals (i) than in VD− animals (f). More reactive astrocytes are detectable underneath the catheter puncture in VD+ animals (j) compared to VD− ones (g). Scale bars represent 100 µm.
Total Antioxidant Capacity and Polyphenols Are Increased in the Sera of VD + Animals
Figure 3c shows the results of protective PP in serum. The immunization with MOG causes no change in the animals' baseline serum (HAC) PP level. On d1, there is a significant difference detectable between VD+ and VD− (p < 0.001), which remains on d3 (p < 0.001) and d15 (p < 0.001) after cytokine injection. On d30, there is no significant difference detectable anymore. This changes again on d45*, where the difference between VD+ and VD− becomes significant again (p = 0.025). In Figure 3d, the results of TAC are displayed. In contrast to PP, a significant difference is detectable between HAC and MOG-immunized serum (p < 0.001). On d1 and d3, there is no significant difference detectable in TAC, unlike d15, where the groups differ significantly from each other (p < 0.05). On d30, there is also a significant difference detectable (p = 0.040), which is no longer detectable on d45*.
Figure 3. Both oxidative stress markers show significant differences between VD+ and VD− rats. In (a), the quantification of IHC-positive stained Cu++-oxLDL cells is shown, and in (b), the quantification of IHC-positive stained HOCl-oxLDL cells is displayed. There is a conspicuous reduction of both markers in VD+ animals detectable on d15 (Cu++-oxLDL p = 0.002; HOCl-oxLDL p < 0.001), d30 (Cu++-oxLDL p = 0.004; HOCl-oxLDL p = 0.042), and on d45* (Cu++-oxLDL p < 0.001; HOCl-oxLDL p < 0.001). In (c), the differences in protective PP in the serum between VD+ and VD− rats are shown. There are no significant differences between HAC and MOG-immunized animals. There is always a significant difference detectable between VD+ and VD−, except for d30 (d1, d3, and d15 p < 0.001; d45* p = 0.025). In (d), the TAC results are shown. There is a significant difference between HAC sera and sera after MOG immunization detectable (p < 0.001). Overall, VD+ animals have a higher TAC. There is no significant difference detectable between the groups on d3 and d45*, but the groups differ significantly on d15 (p < 0.05) and d30 (p = 0.040) from each other. Circles indicate minimum and maximum outliers. Asterisks indicate significant differences.
Discussion
Overall, our data indicate a positive effect of VD supplementation in preserving cortical structures and regulating oxidative stress in our animal model of PMS. At least in our experiment, the most significant effects of VD supplementation in preventing cortical pathology are less pronounced microglial activation, fewer apoptotic cells, and increased preservation of neurons. Moreover, a tendency towards better preservation of PLP and neurofilament structures was observed in association with VD supplementation. On d30, at the start of remyelination in our animal model [14], all investigated histological markers show a significant difference between the VD+ and the VD− group. Our finding of significantly more PLP preservation in VD+ animals on d30, when remyelination starts in our model, correlates with the conclusion of other studies that VD has a positive effect on remyelination [22].
Another marker investigated during this study is serum NfL, the most promising neurofilament subunit for tracking neuroaxonal damage [18]. Our data show a strong increase of NfL serum levels one day after catheter implantation, before realigning to baseline levels after the healing period, which is consistent with the acute surgical trauma [23]. After cytokine injection, right at the opening of the blood-brain barrier and at the start of acute cortical demyelination, there is a significant increase of NfL on d1 and d3 again. On d15, when maximum cortical demyelination has been reached, NfL decreases again to levels comparable to HAC. Histological data show a peak of cortical pathology with pronounced demyelination on d15 [14], while NfL peaks much earlier, already on d3. This leads to the conclusion that the NfL increase reflects the actively ongoing tissue damage, not the extent of completed cortical demyelination. In our animal cohort, a similar pattern was detectable in the VD+ animals but at a significantly lower level on all investigated days. Since NfL rises upon neuroaxonal damage, we conclude that VD supplementation preserved neuroaxonal cell structures in our animal model, although it did not fully suppress the pathology.
Precise mechanisms that drive the disease in patients with progressive MS are currently unknown, but demyelination may be triggered by mitochondrial injury from oxidative stress. In MS, this seems to be mainly driven by oxidative bursts in microglia [4,24]. Reactive oxygen species (ROS), if produced in excess and thereby leading to oxidative stress, are suggested to be mediators of demyelination and axonal damage in both MS and associated animal models [4]. The possible links between MS and an imbalance of oxidant/antioxidant cell function may be supported by increased lipid peroxidation products in blood and CSF and an abnormal expression of heat shock proteins in oligodendrocytes. Under normal circumstances, the potentially damaging effects of ROS are limited by the endogenous antioxidant defenses in the body. This theory is supported by decreased glutathione and tocopherol concentrations and increased uric acid levels observed in demyelinating plaques of MS patients [15]. Consequently, oxidative stress also plays a role in our animal model. Most studies addressing the relevance of oxidative stress for MS progression have focused on brain-intrinsic cells generating ROS. Further results from autopsy studies showed that in lesions of white matter and cerebral cortex, demyelination and neurodegeneration are associated with the presence of oxidized lipids [4]. We investigated two different oxidized lipids (Cu++-oxLDL and HOCl-oxLDL), and both were found in the tissue of our animals, with a much higher occurrence in VD− animals. Especially on d30, when the first remyelination events appear, a highly significant difference is detectable between the VD+ and VD− groups. This again leads to the conclusion that there is a connection between oxidative stress and remyelination, and VD supplementation facilitates the latter.
VD supplementation also seems to have positive effects on the preservation of neuroprotective PP. Since PP have immunomodulatory properties, they are more concentrated in the animals' serum on the days right after the cytokine injection in both groups. In VD+ animals, polyphenols are significantly lower on d30 compared to VD+ animals on d1, d3, and d15, but increase again on d45* in the serum of animals that received a second cytokine injection on d30. The second cytokine injection on d30 could be a possible reason for the lowered polyphenols on this day.
In addition, the TAC is increased by VD supplementation. In contrast to the PP data, the MOG immunization is detectable via the TAC, with a decreasing effect. In VD− animals, the TAC stays approximately the same on d1 and starts to rise again during the experiment, with one decrease on d30. This drop can be prevented by supplementation with VD. Furthermore, at peak disease (d15), the TAC is significantly higher in VD-supplemented animals, suggesting an overall preservation of TAC by VD. ROS-mediated effects depend on the fine-tuning of a multicellular and multi-cascade network that is not yet fully understood. ROS-mediated pathways and cellular effects are involved in immune cell priming in the peripheral lymphoid organs. Since the oxidative brain environment is altered in MS patients [4], research on specific pathways and cell interactions in suitable models could help understand the mechanisms that regulate the formation and progression of lesions.
Furthermore, measurements of antioxidative capacity and oxidative stress markers, as well as direct ROS measurement post-treatment, could provide interesting additional information. Therefore, we used a biomarker induced by ROS in our study, i.e., we measured peroxides in the serum of our rodents. Although the method measures sensitively in the micromolar range, we did not obtain measurable concentrations, mainly because rodents synthesize their vitamin C autogenously. Furthermore, radicals are very short-lived and not easy to track during a long-term experimental setup. Also, the handling during animal experiments could artificially increase ROS. Since we could not detect measurable peroxide levels in serum, we assessed oxidative stress ex vivo using antibodies against oxidatively modified molecules and measured the antioxidative capacity.
Surprisingly, VD+ animals appeared to have more pronounced astrocytic reactions than VD− animals. Since astrocytes have manifold functions and can be either protective or detrimental, we hypothesize that different phenotypes of astrocytes are involved [25]. A detrimental astrocytic reaction would not fit with the other results obtained in this study.
Although we cannot, of course, propose a definitive mechanism behind our findings, we can form hypotheses. Ample literature is available showing that VD protects the CNS from inflammation through modulation at different levels, including cytokines, growth factors, cell signaling, the response to oxidative stress, BBB integrity, and cellular trafficking (reviewed in detail in Galoppin et al., 2022) [26]. Modified immune responses induced by VD in the periphery may also protect the CNS from the inflammatory insult by local protection of the blood-brain barrier (BBB). BBB endothelial cells express the VDR, and VD has been suggested to provide beneficial effects in MS by protecting the BBB's integrity [27,28]. Even though in our model the BBB is opened artificially via injection of pro-inflammatory cytokines, VD supplementation may result in a faster and more thorough restoration of the BBB's function, thus alleviating cortical damage. This hypothesis correlates quite well with the findings shown in the work of Galoppin et al. [26].
During development, astrocytes play a major role in the maturation of the BBB and contribute to establishing the immune-privileged state of the CNS by forming a second barrier, called the glia limitans. This structure encapsulates the entire CNS parenchyma and-along with the BBB-prevents the unrestricted entrance of immune cells. As early responders, astrocytes can react promptly to inflammatory stimuli, such as cytokines. Since many studies have shown that astrocytes express VDR, they are most likely able to respond to VD in an autocrine or paracrine manner, and VD supplementation potentially reduces the pro-inflammatory response of astrocytes in the CNS, leading to the better preservation of the cellular architecture in our study [29,30].
Another possible mechanism may involve microglia, abundant immune cells present in active MS lesions that play a critical role in antigen presentation, recruitment of T cells, ROS production, and release of pro-inflammatory cytokines. Through all these actions, they may further amplify neuroinflammation. It has been shown that microglia can synthesize calcitriol in vitro and that they express the VDR on their surface [24,31]. In our study, microglial activation was significantly reduced in VD-supplemented animals. Given the evidence of VD working as a potent antioxidant, cellular preservation might also be supported by overall ROS reduction.
In summary, the possibilities of VD actions are manifold, acting on multiple sites of the inflammatory pathway, and most likely it is a combination of all these modes of action that leads to the overall beneficial effect we find in our animals. Our effect, however, may even be enhanced by the fact that the animals were supplemented with VD starting from when they were three weeks old and grew up with sufficient VD levels. It appears likely that the net effect may be less pronounced when VD supplementation is initiated at a later point in life.
Even though a complete translation of the results obtained in animal studies to the pathophysiologic processes seen in humans is not possible, animal models can help overcome the limitations of clinical studies, thereby elucidating cellular mechanisms. Especially for VD studies, this is a crucial aspect due to the difficulties in clinical study design both for relapsing-remitting MS and for the progressive disease phase, with much more literature being available for the former. Currently, only a few studies are available regarding VD and PMS with a long disease duration, with no detected connections between VD serum levels and clinical outcomes [12,13]. One major limitation of such mechanistic studies, however, is the onset of VD supplementation, which must start very early [1], also considering the comparably short disease duration. This is, of course, a limitation for the translation of results to the human PMS situation.
To conclude, VD seems to have potential as a supplement in progressive MS, which is why much more research on VD in progressive MS and associated animal models is required.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nu15153309/s1, Figure S1: Schematic overview of the experimental setup and used samples; Table S1: Primary and secondary antibodies used during this study with detailed information; Table S2: Summary of all the statistical tests performed for each of the figures. | 2023-07-29T15:07:30.051Z | 2023-07-26T00:00:00.000 | {
"year": 2023,
"sha1": "b1fc21fde14b875b45597d53edd555429e612af6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/15/15/3309/pdf?version=1690355820",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1b04b28d12f099ab82824a51626a3bf75c204862",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251989208 | pes2o/s2orc | v3-fos-license | The effect of dance movement therapy on improving psychological health: A systematic literature review
Nowadays, psychological or mental health problems are among the problems that require psychological intervention. One of the interventions used to treat psychological health problems is Dance Movement Therapy (DMT), a variant of Art Therapy. This study aimed to determine the effectiveness of DMT with respect to case variations and age categories. The systematic literature review was carried out using the PRISMA protocol on ten selected journal articles based on predetermined exclusion and inclusion criteria. The results of the review showed that DMT can be implemented in the age range from adolescents to the elderly. DMT can be effective for psychological problems such as depression, stress, autism, and quality of life. In its implementation, DMT can be combined with other interventions to provide more optimal results.
INTRODUCTION
Dance Movement Therapy (DMT) is considered one of the Art Therapies. As its name suggests, in this therapy each individual uses dance to develop cognitive, emotional, physical, and social abilities (Wahyu et al., 2019). According to the Canadian Art Therapy Association (CATA), Art Therapy combines the creative process and psychotherapy, facilitating self-exploration and self-understanding. The use of DMT has a long history; in western countries, DMT has been used since the early 1950s (Akandere, 2011).
DMT is an experiential therapy that emphasizes four things, namely (1) an emphasis on the present experience in order to gain insight by focusing on the present; (2) working directly with the body; (3) facilitating nonverbal expression; and (4) a "backdoor" to the unconscious. The physical sensations generated provide affective access, which allows verbal and nonverbal sensations to be explored. The body, in this case, is the main instrument, and motion is related to the body. Motion in DMT is a concrete form of emotion that is felt and expressed through dance.
The concrete form of this emotion, when expressed with DMT, reduces the perceived stress. From a physical point of view, movement can increase muscle strength and mobility and reduce muscle tension (Payne, 2003).
A study revealed that DMT could boost recovery from the psychophysical and psychosocial effects caused by physical trauma such as cancer, heart disease, and neurological disorders (Akandere, 2011). In art education, dance positively impacts children, helping them develop their fantasies, imaginations, and creations freely (Triyanto in Rahmawati et al., 2018). Other benefits found in dance classes in education are good motoric development, social development, and development of thinking and language in children. This intervention was chosen because it is inexpensive and can be applied to various age groups. The increasing number of studies on DMT interventions, with differing subject characteristics and research contexts, has produced differing levels of effectiveness.
DMT program therapy is carried out as a structured procedure consisting of 6-26 sessions conducted over a period of weeks to months. In several studies, each session divided the DMT process into warming up, a dyadic movement section, the Baum circle, and a verbal processing section (Mastrominico et al., 2018). Research conducted by Rahmawati et al. (2018) explains that the process of applying DMT involves the following psychological concepts: (1) helping to overcome stress, (2) coping methods, (3) increasing self-efficacy, (4) social support, (5) helping to overcome emotional and mood problems, (6) helping to maintain the cognition system, (7) stimulating imagination, and (8) helping the transformation process.
DMT is an intervention that has been used for more than 80 years (Levine & Land, 2016). A systematic review and meta-analysis has examined the effectiveness of DMT as an intervention to treat depression, specifically in adults. In addition, there are various other literature reviews on the effectiveness of DMT, specifically for adults with dementia (Lyons et al., 2018), for individuals with autism spectrum disorder (Chen et al., 2022; Takahashi et al., 2019), for patients with mental health disorders (Jiménez et al., 2019; Millman et al., 2021), and for breast cancer patients (Fatkulina et al., 2021). These reviews are supported by recent studies with other methods on the effectiveness of DMT in people with dementia (Ho et al., 2020); in people with autism spectrum disorder (Morris et al., 2021; Scharoun et al., 2014); in depressed adolescents and early adults (Kella et al., 2022); and in dealing with identity development issues in therapeutic settings (Erickson, 2021).
Based on the description above, it can be seen that DMT is a psychological intervention for overcoming various psychological problems. However, no studies have specifically examined these problems and age categories together. The findings of this literature review, which synthesizes various relevant research results related to the three objectives above, will therefore be beneficial. These findings can be used by therapists, psychologists, and people experiencing psychological problems when considering the selection and application of DMT as a form of psychological intervention. In addition, this research can serve as a reference for the use of DMT as an intervention for psychological problems from childhood to old age.
METHOD
This study is a systematic literature review that uses the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) protocol (Shamseer et al., 2015). Data sources were obtained from several journal databases covering 2012-2021. The sources are Google Scholar, Elsevier, Springer, NCBI, and SAGE. The keywords used during the search were "Dance movement therapy," "dance therapy," or "DMT." The selected journals had to meet the following inclusion criteria: (1) experimental research, (2) using a DMT intervention, (3) having a comparison group, (4) written in English, and (5) published in peer-reviewed, full-text journals. In addition, the exclusion criteria removed studies that were literature reviews or meta-analyses rather than primary quantitative research.
The search using these keywords yielded 80 studies. After reviewing the abstracts, about 50 journals remained; after further full-text reading, ten journals could be used in this review. Applying the inclusion and exclusion criteria thus yielded ten journals, which together comprised 742 participants. The samples' ages varied from adolescents to the elderly, from 12 to 90 years old. The journals discuss various psychological problems such as autism spectrum disorder, schizophrenia, quality of life, eating disorders, and depression.
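The screening funnel above (80 records identified, about 50 retained after abstract screening, ten included) can be tallied programmatically when reporting a PRISMA flow. The following is a minimal sketch, with stage names and counts taken from the text above; the helper function itself is hypothetical and not part of any official PRISMA tooling:

```python
# Minimal PRISMA-style screening tally. Stage names and counts
# come from the text above; the helper is illustrative only.
stages = [
    ("identified via database search", 80),
    ("retained after abstract screening", 50),
    ("included after full-text review", 10),
]

def report_flow(stages):
    prev = None
    for name, count in stages:
        excluded = "" if prev is None else f" ({prev - count} excluded)"
        print(f"{count:>3} records {name}{excluded}")
        prev = count

report_flow(stages)
# 80 records identified via database search
# 50 records retained after abstract screening (30 excluded)
# 10 records included after full-text review (40 excluded)
```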
RESULTS AND DISCUSSION
This literature review study was conducted based on the ten selected journals, with a total of 742 participants (N = 742), summarized in Table 1.
However, one journal reported that DMT did not have a significant effect on increasing empathy in adults with autism (Mastrominico et al., 2018). The non-significance of that study was attributed to several subjects not completing the questionnaire given, resulting in a lack of statistical data and information.
Research with the meta-analysis method conducted by Koch et al. (2014) revealed that DMT is effective for anxiety disorders, autism in children or adults, breast cancer, cystic fibrosis, depression, dementia, eating disorders (emotional eating and obesity), schizophrenia and stress.
DMT itself can be applied to various groups ranging from children to the elderly, and DMT sessions can be customized depending on the needs. DMT can be used not only as a single intervention but also in combination as a supportive intervention, for example combined with relaxation, exercise, or medication for psychological disorders that require treatment, such as schizophrenia.
In line with the objectives of this study, it can be seen that various psychological problems can be addressed with DMT. For this reason, practitioners in the field of psychological intervention can consider DMT as one of the interventions available to overcome the psychological problems faced by clients. Of course, the use of DMT needs to be adjusted to the conditions and needs of the client. Furthermore, this literature review shows that DMT interventions can be used independently or combined with other interventions; thus, the choice of DMT, whether alone or in combination, must be adjusted to the client's conditions and needs to obtain the most optimal results. This literature review also found that only one intervention was ineffective, because the clients did not complete the assessment thoroughly. In this case, practitioners must ensure that assessment evaluation is carried out as an integral part of every intervention, including DMT.
One of the reviewed studies, for example, used DMT as a place-based approach for adults with ID to provide support and improve well-being, and found a significant increase in well-being in the experimental group (p = 0.007) compared to the control group (p = 0.560). | 2022-09-02T15:20:49.764Z | 2022-04-11T00:00:00.000 | {
"year": 2022,
"sha1": "ef009cc3af230c5029538b82a868866ec00f2f68",
"oa_license": "CCBYSA",
"oa_url": "http://ejurnal.mercubuana-yogya.ac.id/index.php/psikologi/article/download/1913/1064",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0c07793058f3b243cd057e8c54a197b3a9186161",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
31972986 | pes2o/s2orc | v3-fos-license | Themed Volumes: a Blessing or a Curse?
Research in Transportation Business and Management (RTBM) was launched in 2009, with the first volume appearing in August 2011. The format of the journal is somewhat different from that of conventional journals in that each volume is themed, focusing on a particular aspect of transportation from a business and management perspective. This paper focuses on the format of the journal and the decisions taken at the time of launch, eventually drawing conclusions about the chosen format and whether it has been effective for the competitive space into which the journal was launched. With four years of production data available, the authors conclude that the format has offered both positive and negative aspects, but that overall the launch format chosen was right for the competitive environment faced.
Introduction
Transport is a complex sector influencing all aspects of human life, not least in terms of the transfer of passengers and freight from one location to another. Transport is central to economic activity, Gross Domestic Product (GDP) and growth, and improvements to transport result in a nation's economy becoming more competitive. In addition, it is multifaceted, with infrastructure, information and communications technologies, environmental, human factors, and safety implications. Furthermore, there are a number of transport modes-maritime, road (both public and private transport), rail, aviation, walking and cycling-all of which interact with each other and with growth drivers differently. Given this complexity, there are a large number of scholars in the field and a multitude of perspectives on how to address the challenges faced by public policy planners, managers and citizens.
The Organisation for Economic Cooperation and Development's International Transport Forum (2015) [1] states that global road and rail passenger travel will grow by between 120 and 230 percent to 2050; the figures depend on what happens to fuel prices and transport policy in an urban context. In terms of the growth in world freight by road and rail, the figures range from 230 to 420 percent by 2050, depending on the future growth in GDP and freight intensity. As for air passenger traffic volumes, growth of 5 percent per annum is projected in the medium term. According to the World Bank (2014) [2], 15 percent of global greenhouse gas emissions are transport related, and 1.2 million people are killed and 50 million injured each year on roads, with 90 percent of deaths accounted for by low- and middle-income countries. Given the plethora of issues to be found in the transport arena, it is not too surprising to find that there is a range of journals devoted to the subject area. One such journal is Research in Transportation Business and Management (RTBM), launched in 2009 with the first volume appearing in August 2011. This paper focuses on the format of a journal and the decisions taken at the time of its launch.
The competitive space the journal was entering already had a number of journals serving several market segments, but the editors and publisher of RTBM firmly believed that there was an underserved portion of the market, both in terms of content and format. This paper examines the motivation for starting such a journal, the competitive environment it faced at the time of the launch, and the journal format decisions, including the decision to launch in a "themed volume only" format. The paper then discusses various original decisions made and whether we believe they worked to enhance the contribution of the journal in the field of transportation research. This is then followed by a discussion of what we see as our key challenges in common with other journals and what specific challenges a themed-volume-only format faces. Where relevant we use production and distribution data to evaluate our success. Unlike many other journals, this journal has veered from the standard processes, and we frequently discuss our decisions in the context of a shared philosophy in our monthly editorial Skype call. This article is based on our experiences to date and as such is more a case study and less empirical than usually found in a scientific journal, but we hope it will prove instructive for those publishers and editors considering this type of format. There was, however, no journal devoted specifically to transport in a business and management context. This was deemed to be an important but underserved area that RTBM set out to address. In fact, at the time of the launch Elsevier published (among others) the transport journals listed in Table 1, which also states their primary focus. In addition, there were a number of titles that were seen by the Executive Publisher (for Transport titles at Elsevier) as competitors to the Elsevier stable of transport journals, and these are listed in Table 2. A factor of importance to the development of a business and management oriented transport journal is deregulation. In many jurisdictions, once all modes and selected infrastructure had been deregulated, the focus of research became a business and management one and not a regulatory one. Privatisation furthered this trend. Both deregulation and privatisation were therefore significant motivators for the journal, which sought to address these areas from an academic business transportation perspective, thinking strategically rather than operationally about these issues. As can be seen in Table 1, overall there was a lack of a business and management focused transport journal, with the exception of the journal focused specifically on air transport management. The Executive Publisher had been aware of this for some time, and had indeed made an unsuccessful attempt at addressing this with the launch of the International Journal of Transport Management, which only managed to publish two volumes in its three-year life (2002-2004). It is not possible to state for sure why the International Journal of Transport Management did not continue, other than to say any new journal needs a degree of luck and no small measure of persistence in gaining awareness in the early stages. RTBM was fortunate in being able to be launched at the World Conference on Transport Research in Lisbon in 2010 and in finding very competent, senior academics willing to be Volume Editors from the outset; they were able to elicit good quality contributions and set a high bar from the outset. As such, the launch of RTBM provided a focus for articles specifically devoted to business and management
within the transport sector, offering a forum for the exchange of new and innovative ideas.
The Motivation for the Journal
Rather than replicate exactly the same format as that of the International Journal of Transport Management, and indeed the majority of journals, RTBM modeled itself on Research in Transportation Economics (RTE) in terms of themed-only volumes. RTE comprised themed volumes including Shipping Economics, Port Economics, Bus Transport-Economics, Policy and Planning, Railroad Economics and Transit Economics. This essentially meant each volume was of a "special issue" type format, a format that proved popular with RTE. Clearly such a format has advantages and disadvantages and as such is both a "blessing" and a "curse"-and the conclusions will focus on this assessment.
The transportation field has a large number of journals of both single-mode and thematic types, each with a particular focus, be it economics, geography, safety, modeling or the like. For those undertaking research in the field of business and management, there is little to relate to, and the submission of an article based on the strategic management literature would often get the feedback: "Please rewrite; we suggest a political economy theoretical base would be more appropriate for your work." In other words, the research conducted in business and management academic programs was hitting a hurdle against acceptance by journals serving another discipline. RTBM focuses on this underserved group of authors and, for Elsevier, also provided a relief valve for RTE, as too many proposals submitted to RTE for themed volumes did not fit its economics focus.
In saying this, we agree with an anonymous reviewer that the transportation literature is continually changing and there has been a blurring of the boundaries between the disciplines. It is no longer the case that engineers publish in engineering journals, or transport economists publish in economic journals, and so on. In other words, there is no longer a set of discipline silos but rather an abundance of interdisciplinary research interests and influences. An illustration of this is Transportation Research E: Logistics and Transportation Review, which publishes papers across the spectrum of disciplines and covers all modes along with broad policy and infrastructure issues.
In addition, there was an opportunity for a journal that provided for longer, more complex articles and also provided a publishing outlet for practitioners. The majority of original research articles in the transportation field have a word limit imposed, often from 6000 to 7500 words. While concise papers in accurate and targeted language are appreciated by most readers, it is often not possible for complex topics to be discussed in sufficient depth within tight word limits. From the outset, the Editors of RTBM took the view that a 7500-word limit, as seen for example with the Transportation Research Record, is grounded in the days of counting words to conform with a strict page limit so as to save paper in typeset articles; the recognition that electronic publishing may free the author to rightsize the article for the topic, and that restrictions on page length do not always serve the reader best, became guiding principles endorsed by the Executive Publisher. As a case in point, one of the articles in the themed volume Railroad privatization and deregulation: Lessons from three decades of experience worldwide [3] exceeded the usual word-length restrictions of most journals by about a factor of two, but it is an effective illustration of what can be accomplished if word limits are relaxed. The author had investigated the regulation of open access provisions for rail operations in Australia, the UK and North America and assessed these countries' regulation of access provisions in privatization and deregulation. The absence of word-limit restrictions allowed the author the space needed to explore the topic in adequate depth and served both rail managers and regulators in North America well.
From the outset, the vision for RTBM was one where authors are challenged to make their research relevant to practitioners as well as to further scholarly investigation. In addition, practitioners are encouraged to submit papers, subject to the normal peer review process, and a number have indeed done that.
The aim and scope of RTBM is to publish research on international aspects of transport management such as business strategy, communication, sustainability, finance, human resource management, law, logistics, marketing, franchising, privatisation and commercialisation. RTBM welcomes proposals for themed volumes from scholars in management, in relation to all modes of transport. Issues should be cross-disciplinary for one mode or single-disciplinary for all modes. It is keen to receive proposals that combine and integrate theories and concepts that are taken from, or can be traced to, origins in different disciplines, or lessons learned from different modes and approaches to the topic. By facilitating the development of interdisciplinary or intermodal concepts, theories and ideas, and by synthesizing these for the journal's audience, we seek to contribute to both scholarly advancement of knowledge and the state of managerial practice.
To support our target audience of practitioner authors and to explain our vision, we added a downloadable Author Information Pack to the journal's web site and made the following request of all authors submitting papers to the journal. Implications for Managerial Practice: identify the implications of your findings and conclusions for future managerial practice; do not simply restate the results, but tell the practicing reader how your results may be applied, for example, to future transport projects and the management of them. Contribution to Scholarly Knowledge: identify the contribution that your research has made that other scholars may build on; consider providing a research agenda for you/others to execute through future research. This section may stand alone or, when coupled with the previous one, form a subsection of a section called Research Implications.
It is our belief that practitioners are actively seeking knowledge that will help them manage their businesses better, and we hope they will think of RTBM when seeking input to their decision-making. There has been a move in recent years, at least in the UK, to require academic research to reveal its relevance to the business and management community. RTBM provides this opportunity.
Other Original Decisions Made
The majority of scholarly journals have quite large editorial boards, as seen in Table 1. The reasons for this are many and varied. Boards may be designed to spread the reviewing load (although experience is that loads are concentrated), to solicit papers (although in reality this is very rare), to reflect a wide geographic diversity, or to be built around key names in the area. In reality, the demand by the journal on each individual is actually quite small.
For RTBM the reverse is true, with six Editorial Board members. Because so few of the papers in each volume are unsolicited, the individual volume editor is responsible for assigning appropriate reviewers from their own networks. As the topic of each volume is quite different in terms of scope, modes and approach, Editorial Board members may be asked to serve as reviewers and then not be used again for a number of years. Given these differences, we sought geographic coverage and modal diversity with as few individuals as possible on our Editorial Board. Therefore, unlike most journals, we set the ideal number of advisors at six, limited how many could come from each major geographic region, and sought senior colleagues to assist us rather than looking to grow the board by adding mid-career individuals. The primary role of the Editorial Board member is to encourage proposals for volumes from their network of colleagues, and we consider their contribution to be successful if that is what they do for us. It is not a usual approach, but it serves the volume-proposal generation purpose effectively.
One of the issues with a journal of this type is maintaining momentum, and we are keen to ensure that we have a steady stream of proposals. This is where we find our Editorial Board to be most helpful. In addition, we encourage members of the Editorial Board to consider acting as a Guest Editor for a volume based on the aim and scope of the journal. Three of our Editorial Board members have acted as Volume Editors so far, and we see this as something we would like to continue to encourage. In addition, one of the Board members is currently editing a second volume of the journal.
The World as Seen by Authors
For authors, RTBM is another publishing avenue they would not have had five years ago. It is our opinion that authors seek: (1) journals that are marketed well and read; (2) journals that treat them well with timely feedback; and (3) journals that respect their hard work. In addition, some authors feel they must publish in a journal with an impact factor and one that is on a respected or prescribed journal ranking list. Potential contributors do contact us asking about the ranking of the journal, its impact factor and the peer reviewing process, and we discuss the first two in the next section.
Like many journals, we seek to provide timely feedback and respect. However, as with the majority of journals, we are subject to the timely responsiveness of reviewers, who are volunteers and are in the main not recognized for a quality review. The authors of this paper have faced this issue repeatedly with many of the special issues they have edited for journals such as Transportation Planning and Technology, Transport Policy, Maritime Policy and Management, International Journal of Shipping Transport and Logistics, and Case Studies on Transport Policy. As authors are "handled" directly by our guest Volume Editors, we can only achieve author satisfaction by making sure that Volume Editors are aware of our keen enthusiasm for a timely response to authors, and we have incorporated this philosophy into a special document created for each Volume Editor on how we like our volumes (and our authors) to be managed. Regular contact between the Handling Editor (one of two Journal Editors) and the Volume Editor(s) serves to reinforce this. A core philosophy about the treatment of authors is examined by Day (2011) [4], and we subscribe to the belief that authors should be treated well, even though the review process may result in rejection. We believe in treating authors (also reviewers and guest editors) as we would wish to be treated ourselves, and providing timely and adequate feedback in a diplomatic way is how this should be done. We also firmly believe in dealing with rejection by providing as high a quality of feedback as is possible, in order that authors may improve their papers for publication elsewhere.
We are also very pleased that the journal offers electronic publishing of articles via ScienceDirect as they are accepted, providing authors very timely access to their work by others; publication is not dependent upon all papers in a volume being ready.
Finally, there is the question of how we handle paper(s) submitted by the guest Volume Editor(s). Volume Editors are often also interested in submitting a paper to the themed volume as a demonstration of their expertise in the topic area. This creates a conflict of interest, but we believe that a themed volume without the editor as a key expert is also undesirable. As a result, the Journal Editor handling the volume for production is assigned to manage the review process for papers submitted by Volume Editors, and the manuscript management system firewall maintains the integrity of the blind review process, so the Volume Editor is treated like any other author and the reviewers and processes are not visible to the Volume Editor for that paper.
The World as Seen by Reviewers
Reviewing for journals is often seen by reviewers as a thankless task. Because a review is double-blind, the reviewer does not get recognition when they take the time to do a quality review (multiple hours of effort) versus a simple review (completed too quickly to provide author advice). In many cases, the reviewer may provide sufficient effort that they are almost a silent co-author. The philosophy espoused by Taylor (2003) is central to our approach [5].
Thus, in the current time-pressured world where too few are asked to review too frequently, it is becoming harder and harder to get quality reviewing completed in a timely way that serves the needs of authors and of editors. As the number of journals proliferates, not least in the area of transportation, there is also a tendency for native English speakers to be asked too frequently to serve as reviewers, as there tends to be a bias in favour of those with native language skills. As non-English-speaking authors tend to be asked to review less frequently, this results in a more onerous workload for English-language reviewers. This is exacerbated by the challenge of providing adequate review of specialist topics when many of the authors for a themed volume may already know those specialists in their field. To prevent the incestuous nature of reviewing in a themed volume, we took the decision to ask Volume Editors to assign no more than one review to another author in the same volume and to draw on methodological reviewers from a broader group of specialists using the Elsevier "Find a Reviewer" tool.
We also took the decision to thank reviewers who provide high quality reviews with a reviewer "thank you" and recognition in the early years of production.
The World as Seen by Editors
As Journal Editors we see quality research but also some disheartening concerns. In the four years of publication, we have experienced the following: (1) authors making similar paper submissions to multiple journals even though they sign the author agreement saying that the work is original; (2) authors making "out of scope" submissions; a paper on pavement life cycle is not really within the aims and scope of the journal, and it is definitely inappropriate for a volume on the Management of Transport in Remote Regions; likewise, a paper on intermodal terminals for freight does not belong in a volume on cycling; (3) even though the journal was launched four years ago and with very clear instructions on the website, we continue to receive unsolicited papers from authors who assume the journal has the conventional format; and (4) in spite of requests that papers have five authors or fewer, we see a trend to put all names in the department on the proposal to get better individual author scoring. Beyond five authors, we have to question how much of the contribution is gaming of the system of scholarly citations driven by granting agencies and institutional funding.
We are not alone in observing other journals gaming the citation system, as noted by Chorus (2015) [6]. We have seen journals require authors to cite papers from the same journal, and we believe this to be ethically wrong but all too frequent. We know that a number of competitor journals do this as a "default" by how their manuscript management system is set up.
We have attempted to identify why so few articles actually pass the bar given our "invitation from known networks" and a web-based Call for Papers process. Consulting the publisher's manuscript submission database, we developed a data set for further evaluation. Working from a full set of all submitted articles since our first year of production (2011), we categorized papers submitted into three categories (Table 3): (1) accepted, defined as published; (2) in process, defined as either under review, under revision, or with the editor; and (3) rejected, defined as all others. The third category has been further subdivided into three sub-categories: (a) rejected for cause, defined as a paper that was reviewed and failed to meet an acceptable standard during the review process; (b) unable to complete, defined as the author declined to revise, the revision was not completed in time, or the editor rejected the paper because the volume had closed for production and the author had been unresponsive; and (c) rejected as an inappropriate submission, defined as all of those that were rejected or withdrawn because they either failed to meet the aims and scope of the journal overall or failed to be appropriate for the themed volume to which they were submitted. To categorize these, we reallocated papers based on Elsevier's status identification. Categories 1 and 2 came directly from the manuscript management system. Sub-category 3a comprised all papers that were rejected after more than one round of review or rejected by the editor (but clearly on topic for the submitted volume). Sub-categories 3b and 3c were subjective, based on our recollection of events and assessment of the volume topic against the submitted paper title and abstract. Inability to complete the article on time is clearly a curse faced by journals of this nature. There is a clearly defined production schedule, and papers that do not adhere to that schedule are in danger of not being included in the volume. More importantly, they cannot be carried over to the next volume, since the theme will be totally different. In conclusion, while the majority of rejections are for cause, it is a slim majority.
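The categorization above is essentially a mapping from manuscript-system status labels to outcome classes, followed by a tally. A minimal sketch of that bookkeeping, assuming hypothetical status labels (the actual vocabulary of Elsevier's manuscript management system is not given in the text):

```python
from collections import Counter

# Hypothetical status labels; the real manuscript-system
# vocabulary is not specified in the text.
CATEGORY_OF_STATUS = {
    "published":        "1. accepted",
    "under review":     "2. in process",
    "under revision":   "2. in process",
    "with editor":      "2. in process",
    "rejected":         "3a. rejected for cause",
    "revision overdue": "3b. unable to complete",
    "withdrawn":        "3c. inappropriate submission",
    "out of scope":     "3c. inappropriate submission",
}

def categorize(statuses):
    """Tally a list of status strings into outcome classes."""
    return Counter(CATEGORY_OF_STATUS.get(s, "unclassified") for s in statuses)

print(categorize(["published", "rejected", "under review", "out of scope"]))
```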
How Do We Measure Success?
A major success of the journal has been to identify emerging themes and issues within the transport business and management arena, as stated in the aim and scope of the journal. For example, published themed volumes include: airport management; accessibility in public transport; management of transport in remote regions; intermodal freight transport and logistics; valuing transportation: measuring what matters for sustainability; port performance and strategy; the marketing of transport services; business travel; cruises and cruise ports: structures and strategies; and operational constraints on effective governance of intermodal transport.
At the time of writing, volumes in process include managing the business of cycling, transportation and trade across international borders, and energy efficiency in maritime logistics chains. In each volume the editorial has been seen as an important means of setting the research agenda and of discussing the gaps in research programs for these new and emerging fields. Where the fields are new, like business travel, cruises and cycling, the volumes have been smaller; but for emergent fields where there is pent-up demand by authors for a transportation management venue, the volumes have more than exceeded a 12-articles-per-volume target.
A second measure of success is growing downloads. With any new journal, it is about word-of-mouth and awareness generating author submissions and downloads of published articles. From 2011 to 2014, seven articles in RTBM had more than 1500 downloads and 25 had 1000 or more downloads each. Figure 1 demonstrates that the journal was able to move beyond the challenge of attracting paper submissions with a themed volume approach and gained a respectable volume of downloads (more than 97,000 since inception) and download growth month-over-month from the early days after its launch. Figure 2 illustrates that the growth was strong, as would be expected in the growth phase of a new product, and exceeded the growth rate of a mature product, Transport Policy (JTRP). A word of caution is required in terms of downloads: since the articles are free to download, it is all too easy to download a paper to look at later and then discard it. A more robust metric might distinguish between free downloads and those which have to be paid for; unfortunately, this level of information is not available. While downloads are a good measure of awareness (and a revenue benchmark for the publisher), a true measure of success is the number of citations received. After our first four years of production, an analysis of article citation data for articles published in 2011-2014 collected by Elsevier reveals that RTBM has an h-index of 5 based on the production of 175 articles. This means that the top five articles published have five or more citations each. This compares with an h-index of 19 (the top 19 articles have 19 or more citations, with 492 papers used to calculate the index) for JTRP.
We see JTRP as an established journal with a solid impact factor and over 20 years to build that success; it has had the time to acquire market presence, is mature and therefore has high awareness, has been able to build a much larger production base, and is seen by authors as a reputable source for literature review and therefore citation. We view RTBM's current h-index as a good score given the time since launch and the lag of almost a year between publication and citation; we will look to see it grow in the coming years.
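The h-index quoted above is the standard bibliometric definition: the largest h such that at least h of a journal's articles have at least h citations each. A minimal sketch of the computation (the citation counts in the example are invented purely for illustration):

```python
def h_index(citations):
    """Largest h such that h articles have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Illustrative only: a journal whose top five papers each have
# five or more citations scores h = 5, matching the
# interpretation given in the text.
print(h_index([12, 9, 7, 6, 5, 3, 2, 1, 0]))  # -> 5
```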
Two hurdles that any new journal faces are the need to be accepted into the citation indices managed by Thomson Reuters in order to get an Impact Factor, and the need for the visibility to be on a journal-ranking list. As for the former, a journal must first demonstrate that it can publish issues in a regular and timely fashion; the evaluation by Thomson Reuters can then take a year or more, and a journal cannot apply until it has passed a three-year production threshold. RTBM applied as early as possible and is patiently waiting to hear. In the interim, the website includes a section on the most cited RTBM articles based on Scopus.
To address the latter is more difficult. Journal rankings, and the emphasis placed on them by business school accreditation organizations and/or funding agencies, can cause authors to ignore the opportunity to publish in RTBM. We agree with McKinnon (2013) [7], who noted several likely outcomes of these bodies' over-emphasis on journal rankings, which can distort the choice of research methodology, lengthen publication times considerably and encourage disloyalty to the specialist journals in a field; he also noted that such emphasis on journal rankings favours an ivory-tower, theoretical approach over practical relevance (and practitioner focus). While we agree that this emphasis deters publication in unranked journals and journals that seek to include practitioners in their readership, both features of RTBM, the use of rankings in many universities' tenure and/or promotion evaluations means that all journal editors, not just RTBM's, must seek to secure a ranking and impact factor or run the risk of the journal being ignored.
To ensure that all potential Volume Editors have a greater likelihood of a successful volume, we require a more detailed volume proposal than is common with special issues submitted to most journals. As there is an expectation of more articles per published volume, the potential Volume Editor needs to have extensive personal networks in the topic, the ability to attract authors to write on the topic, and the ability to attract reviewers. So, in addition to developing the volume's Call for Papers, we ask that they also identify potential authors and encourage those potential authors to write possible abstracts to include with their proposal. This allows us to judge if the volume will meet our aim and scope, and therefore our target reader. We also clarify the decision process we use in our proposal guidelines; the flow chart is reproduced in Figure 3. Once a successful proposal is chosen, a second set of guidelines is provided to the Volume Editor(s) with the aim of guiding them through their first RTBM volume's production. This attention to detail, we believe, provides more personal encouragement and manages expectations better. We have no evidence to suggest that contributors, to date, have had any issue with the production process. Since the themed volume operates on a strict production schedule, the time between submission and publication can be quite short, and this can be seen as a blessing for the authors. This is particularly the case given the importance of peer-reviewed papers as part of the promotion and tenure process.
Conclusions
In conclusion, there are a number of blessings and curses associated with the structure RTBM has adopted. It is important to say, however, that some of the issues the journal has encountered over its first few years are probably just normal "teething problems" that would be faced by any journal start-up, and not necessarily due to it being a "themed" journal. That said, one curse of a journal of this type is the potential loss of journal vision given that each volume involves a different guest Volume Editor. This has not been the experience to date, but it is a possibility. There is also the issue of training editors on the EES manuscript management system each time a new volume is contracted; this is both time-consuming and difficult for those editors unsure about such systems or those who have never held an editor role before. Another issue faced is the need for Volume Editors to chase authors to submit or resubmit papers. There is not the same level of urgency with conventional journals, given their more flexible timeline to production, but with themed volumes there is only one window of opportunity. Finally, we conclude that submitting authors often have a poor understanding of the process and do not carefully check the journal web site before paper submission.
As for blessings, each volume has a focus, unlike the normal journal configuration where papers on a particular topic are spread throughout the various issues. This has the advantage of affording researchers in a particular topic area the opportunity to access the relevant papers in that area. It also allows a more integrated, holistic approach in the editorial, providing a research agenda or guidance for future research. The RTBM format provides the opportunity for editors to take a fresh view on a theme for which they have a particular passion. Themed volumes in a topical area are likely to receive more downloads, resulting in the papers being used by those influential in the field or by practitioners seeking guidance on a particular topic of interest.
RTBM was launched in 2009 and was the result of discussions held with the Executive Publisher for the Transport portfolio at Elsevier. There was a variety of journals published at the time in the area of transportation covering: specific modes of transport (e.g., Journal of Air Transport Management, Journal of Public Transportation); methodological approaches (e.g., Transportation Research Part B); policy (e.g., Transport Policy, Transportation Research Part A); subject-specific areas (e.g., Journal of Transport Geography, Journal of Transportation Engineering, Journal of Transport Economics and Policy, Maritime Policy and Management, Maritime Economics and Logistics, the International Journal of Transport Economics); and technologies (e.g., Transportation Research C).
Figure 1. RTBM's Monthly Download Track Record (August 2011-February 2015). After deleting "Editorial Board", "Errata" and "Correspondence" downloads, the figure shows downloads by month for the 168 RTBM articles and editorials available online January 2011-December 2014. Source: Elsevier internal data, COUNTER-compliant.
Figure 2. Comparison of Growth Rates in Monthly Downloads (August 2011-February 2015). After deleting "Editorial Board", "Errata" and "Correspondence" downloads, the figure compares the indexed growth in downloads by month for the 168 RTBM articles and editorials available online January 2011-December 2014 and JTRP's 455 articles published online over the same period. The indexed growth rate in downloads was set to 1.00 for the first month (August 2011) to put the data on a comparable base given the differences in production size. Source: Elsevier internal data, COUNTER-compliant.
Figure 3. Flow of Volume Proposals: From Concept to Authorization to Proceed.
Table 1. Elsevier's Existing Portfolio of Transportation Titles in 2009. Notes: (1) Impact factor as of 3 August 2015, information obtained from each journal's website; (2) Transportation Research was founded in 1967 and split into Parts A and B in 1979; Part A was initially subtitled "General" from Volume 13 to Volume 25; (3) Logistics and Transportation Review was originally published at the University of California at Berkeley, and then at the University of British Columbia (Vancouver, Canada), before being sold to Elsevier in 1997 and rebranded as Transportation Research E. | 2015-09-18T23:22:04.000Z | 2015-08-20T00:00:00.000 | {
"year": 2015,
"sha1": "bec168039dd0ab769f9b71df04db7a8fa3c7adf7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-6775/3/3/174/pdf?version=1440065522",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "bec168039dd0ab769f9b71df04db7a8fa3c7adf7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Sociology",
"Computer Science"
]
} |
40652091 | pes2o/s2orc | v3-fos-license | Flavobacteriosis in Cultured Freshwater Ornamental Goldfish Carassius auratus
Isolation of Flavobacterium sp. from the gill of telescopic eye goldfish Carassius auratus with white patches on gill, along with its phenotypic, molecular and phylogenetic characteristics, is described in this report. The diseased goldfish from a cemented tank (6.75 m³) had gill rot and white patches on gill with excess mucus secretion. On selective cytophaga agar, inocula from the gills yielded yellow pigmented colonies. A bacterial strain (WPGT1) isolated from the gill of goldfish formed a monophyletic group with other strains of Flavobacterium sp. based on phylogenetic analyses. The strain Flavobacterium sp. WPGT1 (NCBI accession number KP997191) was a Gram negative long rod and weakly pathogenic to C. auratus. It caused only 14.29% mortality at a level of 3.50×10⁶ cells/ml through immersion challenge after skin wounding, while no mortality was recorded in intraperitonially injected goldfish even at 3.50×10⁸ cfu/fish. Understanding the pathology and pathogenesis of this emerging pathogen in cultured goldfish would help manage flavobacteriosis.
Introduction
Aquarium keeping is amongst the most popular hobbies, with millions of enthusiasts worldwide (Livengood and Chapman, 2011). Among ornamental fish, goldfish C. auratus is the most common and of international significance. A variety of diseases, including bacterial diseases, have been reported and characterized in goldfish (Citarasu et al., 2011). Among the bacterial diseases, flavobacteriosis is regarded as a predominant disease of ornamental fish (Moyer and Hunnicutt, 2007). Flavobacteriosis debilitates a wide ecological and phylogenetic spectrum of temperate and tropical freshwater fish (Loch and Faisal, 2015). Various commercially important fish species such as salmonids, eels, carps, goldfish, tilapia and channel catfish are also susceptible to this disease (Suomalainen et al., 2009). Flavobacteriosis in fish is caused by multiple bacterial species within the family Flavobacteriaceae, which are responsible for devastating losses in wild and farmed fish stocks around the world (Loch and Faisal, 2015). Members of the genus Flavobacterium are Gram negative rods that range from 0.3 to 0.5 µm in diameter and from 1.0 to 40.0 µm in length. These organisms are known for their opportunistic pathogenic role in fish (Bernardet and Bowman, 2011). Ornamental fish are cultured on a large scale in various localities of West Bengal in earthen and cemented tanks, and aquarists face varied types of infectious and non-infectious diseases. The present study reports the phenotypic and molecular characterization of Flavobacterium sp. isolated from diseased C. auratus with gill rot and its pathogenicity to goldfish C. auratus.
Isolation and Phenotypic Characterization of Bacteria
During routine fish disease surveillance in December 2015, telescopic eye variety goldfish C. auratus (≈50 g; 14-15 cm) with gill rot, white patches on the gill and excessive mucus secretion from an ornamental fish farm located in Jafarpur (Lat. 22˚19'38.9"N; Long. 88˚14'48.4"E), South 24 Parganas district, West Bengal, India were examined. At the site, the behavioral abnormalities and the gross and clinical signs of the diseased C. auratus were recorded. Diseased fish (n=5) as well as apparently healthy fish (n=5) from the affected cemented tank (6.75 m³) were collected and brought to the laboratory within 2 h of collection in oxygen-filled polythene bags. Prior to sample collection for bacteriology, the fish were rinsed in sterile saline and wiped with sterile paper towel. Inocula from the gills of infected and apparently healthy goldfish were streaked onto selective cytophaga agar supplemented with neomycin 5 μg/ml and polymyxin B 200 IU/ml (SCA; Hawke and Thune, 1992) and incubated at 30°C for 48 h. The SCA plates of infected fish predominantly yielded yellow pigmented colonies of 1-2 mm size. The SCA plates of healthy goldfish had no yellow pigmented colonies. Representative colonies from infected fish were randomly picked and Gram stained for preliminary observation of long rods. For further studies, a yellow pigmented, round, convex colony on SCA showing long rods was picked aseptically, purified by subculturing on cytophaga agar without antibiotics (CA) and maintained on CA slants at 30°C. Phenotypic characterization was done with the VITEK 2 compact system (BioMerieux, France).
Molecular Characterization of Yellow Pigmented Strain WPGT1 Isolated from the Gill
The genomic DNA of the yellow pigmented colony was isolated using a genomic DNA isolation kit (Macherey-Nagel, Germany) as per the manufacturer's protocol. The 16S small subunit ribosomal RNA (16S rRNA) gene was amplified on an Eppendorf Master Cycler Pro S using a set of universal prokaryotic primers, 8F, 5′-AGAGTTTGATCCTGGCTCAG-3′ and 1492R, 5′-GGTTACCTTGTTACGACTT-3′ (Eden et al., 1991). The PCR master mix contained 50 ng of genomic DNA, 10 μM of each primer and 2× PCR Taq Mixture (HiMedia, India). Amplification was done with an initial denaturation at 95°C for 5 min, followed by 35 cycles of denaturation at 95°C for 30 sec, primer annealing at 44°C for 30 sec and extension at 72°C for 60 sec. The final extension was at 72°C for 5 min. The PCR product was analysed on a 1.5% agarose gel containing 0.5 μg/ml ethidium bromide in 1× Tris-acetate-EDTA (TAE) buffer.
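Because the thermocycling programme is fully specified above, its total hold time can be estimated directly. The following is a minimal sketch encoding the programme as data (temperatures in °C and times in seconds taken from the text; real run time would be longer because of ramping between temperatures):

```python
# Thermocycling programme from the text (temperature in deg C, seconds).
initial_denaturation = (95, 300)
cycle = [(95, 30), (44, 30), (72, 60)]   # denature, anneal, extend
n_cycles = 35
final_extension = (72, 300)

hold_time = (initial_denaturation[1]
             + n_cycles * sum(t for _, t in cycle)
             + final_extension[1])
print(f"Total hold time: {hold_time / 60:.0f} min "
      "(ramping between temperatures adds more in practice)")
# Total hold time: 80 min (ramping between temperatures adds more in practice)
```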
Sequencing and Phylogenetic Analyses
The PCR-amplified product was sequenced at the Genomics Division, Xcelris Labs Ltd, Ahmedabad, India. The edited sequence was compared against the GenBank database of the National Center for Biotechnology Information (NCBI) by using the BLAST (Basic Local Alignment Search Tool) program (http://blast.ncbi.nlm.nih.gov). Twenty-two more gene sequences, comprising 10 Flavobacterium spp. and 12 other strains of Gram negative long rods, viz., Flexibacter aurantiacus, Flectobacillus roseus, Chryseobacterium indologenes, Flavobacterium sasangense, Flavobacterium cucumis, Tenacibaculum maritimum and Sphingobacterium thalpophilum, were selected from the NCBI GenBank database. Data analysis and multiple alignments were performed using ClustalW 1.6 (MEGA6). The evolutionary history was inferred using the Neighbor-Joining method (Saitou and Nei, 1987). The bootstrap consensus tree inferred from 1000 replicates was taken to represent the evolutionary history of the taxa analyzed, and branches corresponding to partitions reproduced in less than 50% of bootstrap replicates were collapsed. The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) is shown next to the branches (Felsenstein, 1985). The evolutionary distances were computed using the Kimura 2-parameter method (Kimura, 1980). All positions containing gaps and missing data were eliminated. Evolutionary analyses were conducted in MEGA6 (Tamura et al., 2011). The nucleotide sequence of Flavobacterium sp. WPGT1 has been deposited in NCBI GenBank under the accession number KP997191.
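The Kimura 2-parameter correction used here estimates evolutionary distance while treating transition and transversion differences separately: d = -½ ln(1 - 2P - Q) - ¼ ln(1 - 2Q), where P and Q are the observed proportions of transitions and transversions. As a minimal, self-contained sketch of that calculation (toy sequences; gapped sites are dropped first, mirroring the complete-deletion treatment described above; this is an illustration, not the MEGA6 implementation):

```python
import math

PURINES, PYRIMIDINES = set("AG"), set("CT")

def k2p_distance(seq1, seq2):
    """Kimura (1980) two-parameter distance between two aligned
    DNA sequences. Sites containing a gap ('-') in either sequence
    are removed first (complete deletion), mirroring the treatment
    described in the text. Illustrative sketch, not MEGA6 code."""
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a != "-" and b != "-"]
    n = len(pairs)
    transitions = sum(1 for a, b in pairs if a != b and
                      ({a, b} <= PURINES or {a, b} <= PYRIMIDINES))
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    P, Q = transitions / n, transversions / n
    return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)

# Toy example: two transitions out of ten aligned sites.
print(round(k2p_distance("ACGTACGTAC", "ACGTACATAT"), 4))  # 0.2554
```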
Pathogenicity of Flavobacterium sp. WPGT1 on C. auratus Juveniles by Intraperitonial Injection and Skin Wounding
Farm-grown goldfish C. auratus (weight: 3.85±0.66 g; length: 7.99±1.12 cm) were brought from Piyarapur (Lat. 22⁰47'49"N; Long. 88⁰18'18"E), Hooghly district, India to the laboratory in oxygen-filled polythene bags. The fish were first disinfected by placing them in 5 ppm KMnO₄ solution for 15 min and were maintained in FRP tanks of 500 L capacity @ 75 fish/tank; weak fish were removed immediately. All fish were maintained in the FRP tanks for 20 days and fed twice daily with a balanced basal dry pellet feed (CP9931, CP Pvt. Ltd., Andhra Pradesh, India) @ 2% of body weight. Accumulated wastes and faeces were removed once every three days and 50% of the water was exchanged. Fourteen glass aquaria (60 x 30 x 30 cm) were used for the injected, skin-wounded and control groups. All glass aquaria, after thorough washing and drying, were filled with clean bore-well water to a volume of 30 L each and conditioned for three days. Seven healthy fish were released into each experimental glass aquarium and acclimatized for three days with continuous aeration. All fish were fed with pellet feed and maintained under optimal conditions.
Flavobacterium sp. WPGT1, maintained as a glycerol stock at -20°C, was revived on CA at 30±2°C for 24 h to obtain a young culture. One colony was aseptically picked, transferred to 10 ml of Cytophaga broth (CB; Song et al., 1988) and incubated at 30±2°C for 24 h. Mass culture was done in 500 ml of CB at 30±2°C for 24 h and centrifuged at 7500 rpm at 20°C for 10 min. The pellet thus obtained was washed thrice with sterile physiological saline (0.85% w/v sodium chloride) and suspended in 10 ml of saline. Pathogenicity of Flavobacterium sp. WPGT1 was tested by two methods, viz., intraperitonial injection and immersion of skin-wounded C. auratus in Flavobacterium sp. WPGT1 cell suspension at predetermined doses in duplicate (Adikesavalu et al., 2015; Abraham et al., 2016). Aliquots (0.1 ml each) of Flavobacterium sp. WPGT1 cell suspensions from 10⁰ to 10⁻³ dilutions were intraperitonially (i/p) injected so as to deliver 10⁸ to 10⁵ cells/fish, respectively. The control fish (i/p) received 0.1 ml each of sterile saline. For the skin-wounded fish, scales of all the fish from each aquarium were scraped off gently with a scalpel from the caudal peduncle to the pectoral fin, i.e., against the direction of the scales (skin wounded). The skin-wounded fish from each aquarium were immersed in a bacterial cell suspension (1000 ml) containing Flavobacterium sp. WPGT1 at a level of 3.50×10⁶ cells/ml for 30 min. The fish, along with the suspending medium, were then transferred to the respective aquaria and observed for 28 days. The control group was neither skin-wounded nor dipped in Flavobacterium sp. WPGT1 suspension. The challenged and control groups were maintained in the respective aquaria for 28 days. Behavioral abnormalities, external signs of infection and mortality, if any, were recorded daily.
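The injected doses follow directly from the dilution series: if 0.1 ml of the undiluted (10⁰) suspension delivers 10⁸ cells per fish, the stock must have held roughly 10⁹ cells/ml. A minimal check of that arithmetic (the stock concentration is inferred here, not stated in the text):

```python
# Dose per fish for 0.1 ml of each ten-fold dilution.
# The ~1e9 cells/ml stock is inferred from the stated doses,
# not given explicitly in the text.
stock_cells_per_ml = 1e9
injected_volume_ml = 0.1
for dilution_exp in range(0, 4):          # 10^0 .. 10^-3 dilutions
    dose = stock_cells_per_ml * 10 ** -dilution_exp * injected_volume_ml
    print(f"10^-{dilution_exp} dilution -> {dose:.0e} cells/fish")
# 10^-0 dilution -> 1e+08 cells/fish
# ...
# 10^-3 dilution -> 1e+05 cells/fish
```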
Results and Discussion
The isolation of Flavobacterium sp. from diseased C. auratus at an ornamental fish farm, with gill rot as well as white patches and excessive mucus secretion on the gill (Fig. 1A), during the winter season indicated that it can be an opportunistic fish pathogen capable of causing mortalities in immunosuppressed fish at low water temperature. Based on its colonial morphology and phenotypic reactions (Fig. 1B), it was considered to be a species of the genus Flavobacterium. Flavobacterium spp. have been reported to be pathogenic to fish, causing gill disease with similar symptoms (Loch and Faisal, 2015). Bowman and Nowak (2004) also detected a Flavobacterium sp. from the gills of net-penned Atlantic salmon that concurrently suffered from amoebic gill disease. The phenotypic characteristics of the bacterium as assessed by the VITEK 2 system are presented in Table 1. Phylogenetic analysis based on 16S rRNA gene sequences indicated that the novel sequence belonged to the family Flavobacteriaceae, phylum Bacteroidetes, and fell within the evolutionary radiation of the genus Flavobacterium. The 16S rRNA gene sequence of this bacterium was closely related to Flavobacterium sp. KJ461684, with a 100% node value and 98% DNA homology (Fig. 2). Although the gross and clinical signs observed on the diseased fish were similar to columnaris, the molecular characterization of the test strain revealed that the isolated bacterium belonged to Flavobacterium sp. This observation confirmed that the disease was related to a condition called flavobacteriosis.
Challenged fish had white patches on the gill, tail rot, body discoloration, scale loss and skin peeling at the caudal peduncle, as reported in earlier studies on flavobacteriosis (Jansson et al., 2012; Loch and Faisal, 2015). No mortality was observed in fish i/p challenged with Flavobacterium sp. WPGT1 at a level of 10⁸ cfu/fish; the LD₅₀ value of Flavobacterium sp. WPGT1 was therefore determined to be >3.50×10⁸ cfu/fish. The pathogenicity of Flavobacterium sp. WPGT1 on C. auratus as assessed by the skin wounding-bath experiment resulted in 14.29% mortality within 6 days of challenge. The strain was found to be weakly pathogenic to C. auratus. This result suggested that Flavobacterium sp. can cause mortalities in fish that carry physical or mechanical injuries and confirms earlier observations (Moyer and Hunnicutt, 2007; Loch and Faisal, 2015). Our results suggested that Flavobacterium sp. may be involved in the pathogenesis of fish disease, as with other potential pathogens, in conjunction with adverse environmental or stressful conditions in culture systems or during periods of immunosuppression.
Bengal University of Animal and Fishery Sciences, Kolkata for providing necessary infrastructure facility to carry out the work.
"year": 2016,
"sha1": "fae6b60670c3db9ff9d75a5a3d594f2ac9733587",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/5-4-2016/Sudeshna%20Sarker,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e86000515c2ac9c5e5567ffefc1ad5b7d2a95134",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
LaughNet: synthesizing laughter utterances from waveform silhouettes and a single laughter example
Emotional and controllable speech synthesis is a topic that has received much attention. However, most studies focused on improving expressiveness and controllability in the context of linguistic content, even though natural verbal human communication is inseparable from spontaneous non-speech expressions such as laughter, crying, or grunting. We propose a model called LaughNet for synthesizing laughter by using waveform silhouettes as inputs. The motivation is not simply synthesizing new laughter utterances, but testing a novel synthesis-control paradigm that uses an abstract representation of the waveform. We conducted basic listening test experiments, and the results showed that LaughNet can synthesize laughter utterances with moderate quality and retain the characteristics of the training example. More importantly, the generated waveforms have shapes similar to the input silhouettes. For future work, we will test the same method on other types of human nonverbal expressions and integrate it into more elaborate synthesis systems.
INTRODUCTION
Current speech-synthesis technology can synthesize speech with high naturalness and in a convenient manner for well-contained scenarios such as conversational speech or audiobook narration. To push the limits of the technology further, many studies focused on balancing between developing a seamless end-to-end (E2E) system [1] and putting back some form of control over synthetic speech [2]. For example, Wang et al. [3] proposed a prosody-controlling scheme through manipulating style tokens self-discovered by a sequence-to-sequence text-to-speech (TTS) model, while Shechtman et al. [4] introduced an emphasis indicator that allows controlling the prosody of the generated speech.
While speech is essential to human-machine interaction as it allows the exchange of information, non-speech vocalizations, such as laughter, play an important role in expressing emotions [5]. Many laughter-synthesis systems use the same methodology as TTS. Specifically, laughter is treated as an extension of speech and thus represented by the same symbolic phonemes [6,7] with additional specified contexts [8]. This setup is convenient but inadequate, as a linguistic interface cannot fully express the dynamics of laughter. Different approaches include training auto-encoder (AE) or variational auto-encoder (VAE) [9] models that generate laughter from a latent vector, or assuming that non-speech vocalizations are realizations of emotions [10]. Most of these approaches lack controllability due to their highly abstract input. (This study is supported by JST CREST grant JPMJCR18A6 and by MEXT KAKENHI grant 21K19808.)
The text in TTS can be seen as the control interface of a synthesis model. In other words, the model learns a mapping between a symbolic input, such as a word, character, or phoneme, and a feature-dense output, which can be a waveform or acoustic features. By changing the human-comprehensible input, we change the generated output. In this paper, we propose a laughter-synthesis model called LaughNet and conduct experiments to test the feasibility of using waveform silhouettes as the interface for control. The rest of the paper is organized as follows: Section 2 describes LaughNet and the motivation behind this research, Section 3 provides details about the experimental setup and results, and Section 4 concludes the paper and suggests plans for future work.
LAUGHNET
2.1. Waveform silhouettes as synthesis-control interface
Figure 1 illustrates the waveform silhouette used by LaughNet. Waveform silhouettes are two-dimensional frame-based features extracted from a raw waveform using a series of overlapping windows. This is similar to how a spectrogram is extracted, but instead of the short-time Fourier transform (STFT), simple max and min pooling operations are used to obtain the 'shape' of the waveform, as we want an abstract representation. The continuous min and max values are quantized into several bins to further abstract the representation. By changing the configurations of the pooling window and quantization, we can adjust the level of abstraction of the waveform silhouettes. A high-level abstract interface enables more intuitive control for humans but is less detail-oriented, and vice versa for a low-level abstract interface.
Investigating different levels of abstraction is the main objective of this research, but as a preliminary study, two levels of quantization, as shown in Figure 1, are tested as the input features for laughter synthesis. Our study shares a similar motivation with that of Greshler et al. [11], but instead of manipulating a low-sampling-rate waveform to change the generated content, we use engineered waveform silhouettes as the control interface.
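As a concrete illustration of the extraction just described, the following minimal NumPy sketch computes a (min, max) silhouette with the overlapping-window scheme; the function name and exact interface are our own, not from the paper.

```python
import numpy as np

def waveform_silhouette(wav, win=1024, hop=256, quantize=None):
    """Frame-based (min, max) 'shape' of a waveform.

    wav: 1-D float array in [-1, 1]; returns an array of shape (n_frames, 2).
    The 1024/256 window/hop at 24 kHz follows the paper's setup.
    """
    n_frames = 1 + max(0, len(wav) - win) // hop
    sil = np.empty((n_frames, 2), dtype=np.float32)
    for i in range(n_frames):
        frame = wav[i * hop : i * hop + win]
        sil[i, 0] = frame.min()   # lower envelope
        sil[i, 1] = frame.max()   # upper envelope
    if quantize is not None:      # optional abstraction step
        sil = quantize(sil)
    return sil
```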
Model architecture
To test our hypothesis, we used HiFi-GAN [12], a generative adversarial network (GAN)-based [13] vocoder [14] that transforms a mel-spectrogram into a raw waveform, as the basis of our laughter-synthesis system. As HiFi-GAN is not autoregressive, it can generate waveforms efficiently and with high fidelity, which enables us to quickly test our ideas. The main difference is that we use waveform silhouettes as the input of the generator instead of mel-spectrograms. Readers can refer to the original paper [12] for details; we only summarize the main components here.
[Figure 2: LaughNet. y represents the silhouette input, x is the real natural waveform, while x̂ is the synthetic waveform. Module G is the generator, MSD is the multi-scale discriminator, and MPD is the multi-period discriminator. The setup is similar to HiFi-GAN [12], but we use waveform silhouettes instead of mel-spectrograms.]
The general structure of LaughNet is illustrated in Figure 2. The generator (G) takes a frame-based silhouette and upsamples it to the designated sampling rate. G consists of several up-sampling layers and multi-receptive-field fusion modules that add features from residual blocks of different kernel sizes and dilation rates. The number of parameters can be adjusted, but for the LaughNet model we simply use the V1 configuration described in the original study [12]. In typical GAN fashion, G is trained alternately with one or several discriminators in a minimax game. As in HiFi-GAN, our model includes two types of discriminators: the multi-period discriminator, which learns to discriminate real from fake segments by processing samples at a particular period, and the multi-scale discriminator [15,14], which learns to discriminate by processing samples at a down-sampling scale. To train G, both the feature-matching loss [14] and the mel-spectrogram loss [12] are used.
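To make the training objective concrete, below is a minimal PyTorch sketch of how the generator loss could be assembled from the terms named above. The loss weights (2 for feature matching, 45 for the mel term) follow the HiFi-GAN paper; the function names, the n_mels value, and the exact tensor interfaces are our assumptions, not code from the authors.

```python
import torch
import torch.nn.functional as F
import torchaudio

# Mel extractor matching the paper's framing (24 kHz, 1024-sample window,
# 256-sample hop); n_mels=80 is an assumption, not stated in the paper.
mel_extract = torchaudio.transforms.MelSpectrogram(
    sample_rate=24000, n_fft=1024, hop_length=256, n_mels=80)

def feature_matching_loss(fmaps_real, fmaps_fake):
    """L1 distance between discriminator feature maps of real and fake audio."""
    loss = 0.0
    for fr, ff in zip(fmaps_real, fmaps_fake):
        loss = loss + F.l1_loss(ff, fr.detach())
    return loss

def generator_loss(disc_outputs_fake, fmaps_real, fmaps_fake, x_real, x_fake,
                   lambda_fm=2.0, lambda_mel=45.0):
    """Least-squares adversarial loss + feature matching + mel-spectrogram loss."""
    adv = sum(torch.mean((d - 1.0) ** 2) for d in disc_outputs_fake)
    fm = feature_matching_loss(fmaps_real, fmaps_fake)
    mel = F.l1_loss(mel_extract(x_fake), mel_extract(x_real))
    return adv + lambda_fm * fm + lambda_mel * mel
```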
Laughter data
As high-quality conversational laughter data [16] is spontaneous and difficult to obtain, we decided to use acted laughter, which has clear emotional intent and is more isolated, for our experiments. Specifically, we used video-game-development assets purchased from the Unity Asset Store; it is common practice for small and independent game developers to use such assets in their games, and video-game development is also a practical application scenario of the proposed system. The Laugh SFX package we purchased contains laughter from several male and female actors. For training, eight utterances, four male and four female, were used as the targets. As these sound assets are not designed for research, information about the speakers was not provided; we simply selected utterances with perceivable differences in voice characteristics as different targets. Another set of eight laughter utterances was selected as template, or 'source', utterances, the silhouettes of which were used as the inputs for synthesizing laughter utterances. Our laughter-synthesis system can, in a way, be regarded as a very special voice conversion (VC) system [17], hence the use of similar terminology. The experiment and the evaluation were also structured similarly to those of VC.
As we need to train a laughter-synthesis system with a single example, to achieve stable and decent performance we first trained the model with a multi-speaker speech corpus. Specifically, we used 24.4 hours of speech from 100 speakers of the VCTK corpus [18]. Prior studies showed that transfer learning from speech can help improve the performance of a laughter-synthesis model [7].
Training and evaluation setups
We used 24-kHz waveforms for the experiments. A sliding window 1024 samples in length, moving 256 samples each step, was used to extract silhouettes. Each frame is a two-dimensional feature vector consisting of the min and max values within the window. The continuous values were then quantized, either into equally distributed bins (linear) or using the µ-law algorithm. Three versions of LaughNet were evaluated: linear 256-bin LN256, 8-bit µ-law MU256, and 4-bit µ-law MU016. The hyperparameters used in training were the same as those used for HiFi-GAN [12], which includes both a mel-spectrogram and a feature-matching loss in the generator optimization. Mel-spectrograms were extracted using the same sliding-window configuration as the waveform silhouettes. We first initialized the models using VCTK speech data for 150,000 steps, with each training batch consisting of 16 six-second segments randomly extracted from training utterances. Waveforms were also randomly scaled by a factor λ ∈ [0.3, 1.0]; this helps increase the variety of waveform silhouettes. We then fine-tuned the models to the target laughter for another 50,000 steps. For evaluation, we extracted two six-second segments from each source utterance at random positions to create a test set of 16 silhouettes. Even though the end goal is a system in which users can control the synthetic laughter by directly changing the input, we used silhouettes extracted from source laughter utterances for testing, as this was a preliminary study. Given this arrangement, our experiment was quite similar to voice conversion, as we essentially wanted to change the characteristics of the laughter while retaining the 'shape' of the source waveform. In summary, each version of LaughNet yielded 8 models fine-tuned to the 8 target laughter utterances, which were used to generate new laughter utterances from the 16 silhouettes extracted from the source laughter utterances.
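For reference, the two quantization schemes could be implemented as follows. This is a sketch assuming silhouette values normalized to [-1, 1], with function names of our own choosing.

```python
import numpy as np

def mu_law_quantize(x, bits=8):
    """Quantize values in [-1, 1] with the mu-law companding curve.

    bits=8 gives 256 levels (as in MU256), bits=4 gives 16 levels (MU016).
    """
    mu = 2 ** bits - 1
    # Compand: more resolution near zero, less near the extremes.
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    # Map [-1, 1] to integer bins 0 .. 2**bits - 1.
    return ((y + 1) / 2 * mu + 0.5).astype(np.int64)

def linear_quantize(x, bins=256):
    """Equally distributed bins over [-1, 1] (as in LN256)."""
    return np.clip(((x + 1) / 2 * (bins - 1)).round(), 0, bins - 1).astype(np.int64)
```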
Quality and speaker similarity evaluations
We evaluated the three versions of LaughNet by conducting listening tests in which participants were asked to judge the quality and speaker similarity of the laughter samples. We gathered 16 participants, each of whom did 8 sessions containing 16 quality and 16 similarity questions. For the quality evaluation, listeners were presented with a natural laughter sample (NAT) of a target or a sample from one of the synthesis models and asked to judge its quality. Figure 3a shows the results, in which all models received low scores, with MU016 scoring slightly better than the other two. For the similarity evaluation, listeners were presented with two samples, one from a target and the other from one of the evaluated models. These included natural samples of the same target (NAT) and a natural laughter sample of a random speaker of a different sex than the target (CROSS). Figure 3b shows the results of the similarity evaluation, in which all three LaughNet models had lower scores than NAT but higher than CROSS. Of the three LaughNet models, MU256 scored slightly better than the other two.
In summary, the synthesized laughter samples had low quality but expressed a certain level of individuality, as indicated by the similarity-evaluation results.
Waveform silhouettes as control interfaces
Since the purpose of this research was not simply synthesizing new laughter utterances but developing a novel control interface for synthesis systems, we need a method to evaluate the performance of waveform silhouettes as a means of control. Table 1 shows the mean squared error (MSE) between the input silhouettes and the silhouettes of the generated waveforms. More specifically, the error was calculated using the silhouettes before quantization. Simply speaking, if the interface were perfectly accurate, the error between the input and output silhouettes would be minimal. Interestingly, the two models that used µ-law quantization (MU256 and MU016) had the same error, which is lower than that of the model using linear quantization (LN256). We can say that MU256 and MU016 have a more precise control interface than LN256.
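This metric is simple to reproduce: reusing the waveform_silhouette function from the earlier sketch, the control-accuracy error could be computed as below (again a hedged illustration, not the authors' evaluation code).

```python
import numpy as np

def silhouette_mse(input_sil, output_wav, win=1024, hop=256):
    """Control accuracy: MSE between the input silhouette and the
    (unquantized) silhouette of the generated waveform."""
    out_sil = waveform_silhouette(output_wav, win=win, hop=hop)
    n = min(len(input_sil), len(out_sil))  # guard against length mismatch
    return float(np.mean((input_sil[:n] - out_sil[:n]) ** 2))
```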
While MSE provides an objective evaluation of control accuracy, it is difficult for humans to interpret. Figure 4 presents examples of input silhouettes and silhouettes of the synthesized laughter. Subjectively speaking, we can see that the input silhouette successfully guided the shape of the generated waveform. Furthermore, we performed simple manipulation operations (stretching the silhouettes vertically and horizontally) to test the feasibility of using waveform silhouettes as a means of control. These samples were not evaluated subjectively, but readers can listen to them on the associated web page (https://nii-yamagishilab.github.io/sample-laughnet-waveform-silhouette/). It is important to note that control accuracy needs to be put in the context of a synthesis system: while we want accurate control, we also want LaughNet to synthesize laughter with the characteristics of the training example. Balancing the two objectives is a research topic that we want to explore further in the future. One direction to solve this problem is to increase the level of abstraction of the waveform silhouette and allow the synthesis model to automatically determine the details of the waveform.
CONCLUSION
We proposed a model for synthesizing laughter utterances using waveform silhouettes as input. The experimental results indicate that LaughNet not only can synthesize laughter with moderate quality and the characteristics of the target speaker from just a single training example, but also provides a reliable control interface that can dictate the shape of the generated waveform. This research is a pioneering effort in exploring novel interfaces for controllable synthesis systems. For future work, we will extend the same method to other human sounds, such as grunting, spontaneous filler words [19], and affect bursts [5], and integrate it into a more complex system [16]. While these non-speech sounds do not convey linguistic information, they carry a clear and identifiable emotional meaning. It would be inadequate to develop an emotional-speech-synthesis system or natural human-machine interaction while excluding these expressions.
"year": 2021,
"sha1": "2d35af3c4f2479c9f366f3c53049100c8c4449c8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2d35af3c4f2479c9f366f3c53049100c8c4449c8",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
Comparative study of oxyhydrogen injection in turbocharged compression ignition engines
This paper presents a comparative study of a turbocharged compression-ignition engine equipped with an EGR valve, operated with a maximum flow rate of 1 l/min of oxyhydrogen produced by water electrolysis injected into its intake manifold at two different injection pressures, namely 100 Pa and 3000 Pa, from the point of view of flue gas opacity. We found a substantial reduction of flue gas opacity in both cases compared with conventional diesel operation, but in different proportions.
Introduction
Some main directions of study and points of view from the literature related to the studied phenomenon are presented in the introduction. In naturally aspirated diesel engines, hydrogen injected into the intake manifold can supply a maximum of 15% of the fuel energy [1]. At medium load, an energy share of 10% hydrogen in diesel blends can reduce the smoke index by up to 50%, and at maximum load by up to 17%.
In their review paper, White et al. [2] stated the importance of studies on internal combustion engines adapted to hydrogen-exclusive operation, both compression-ignition and spark-ignition, with applications in various areas of technology. They highlighted some of the issues that require resolution to ensure that H2ICE engines can be safely exploited in everyday human activities. These include detonation, the wide flammability range, combustion rate, supercharging, nitrogen oxide pollution, the use of flue gas recirculation valves to reduce thermodynamic-cycle temperatures, maintaining the engine's specific power, achieving good filling efficiency, the efficiency of the real thermodynamic cycle, direct injection of hydrogen, and others.
Gomes et al. [3] conducted experiments on a one-cylinder diesel engine and showed that it is possible to operate the engine on hydrogen by adding an air preheating system, to obtain temperatures at the end of the compression phase high enough for the air-hydrogen mixture to self-ignite. They reported satisfactory mechanical parameters and a very low pollution level, even for nitrogen oxides. The engine operated with an excess-air ratio of 6, achieving 45% efficiency. The final in-cylinder pressure rose above the values of diesel operation, which limited operation in high-load regimes.
Swaja et al. [4] presented the operation of diesel engines with hydrogen-enriched blends. They showed that operation with a 17% energy substitution rate is possible. Small substitution percentages of up to 5% reduce the self-ignition delay of the mixture and the pressure gradient in the cylinder, which favors smoother engine operation and thus higher durability.
In the case of high-power diesel engines [5], feeding the engine with a 60 l/min H2/O2 fuel mixture decreased total fuel consumption by approximately 12.6% and reduced the levels of pollutant compounds in the flue gas by 9.5% (HC), 7.2% (CO), 4.4% (CO2) and 19.3% (particulates), while increasing the NOx level by 9.9%.
Considering the multitude of results obtained by researchers, with conclusions confirming differing tendencies in the variation of the mechanical parameters of diesel engines as well as of the level of pollutants in the flue gases, measurements were made to observe the variation of flue gas opacity as a function of the amount of oxyhydrogen induced into the intake manifold, at two different injection pressures and with the same amounts of HHO.
The experimental setup
For an easier understanding of the operation of the HHO generator (Figure 2), a drawing of the HHO generator (Figure 3) and a schematic diagram of the assembly of the setup are presented below.
Experimental Procedure
The opacity tests were made in two stages, the first with standard diesel fuel mixed with HHO at pressure p1 and the second with standard diesel fuel mixed with HHO at pressure p2 (p2 > p1), applying the following procedure.
In the first phase, a rapid but not abrupt acceleration was performed up to the maximum speed limited by the vehicle governor; the speed was maintained for 2 seconds, then the accelerator pedal was released, a pause of 2 seconds was left, and the same operation was repeated twice. The following phases were employed in both Stage 1 and Stage 2: Dn - engine fed exclusively with diesel oil; Pi - engine fed with diesel oil and HHO at 0.1 l/min; Pi+1 - engine fed with diesel oil and HHO at 0.3 l/min; and Pi+2 - engine fed with diesel oil and HHO at 0.7 l/min.
At the same time, opacity measurements of the flue gases were made with an AVL opacimeter connected to the MAHA dynamometer, recording the available data series. The maximum HHO supply pressure was p1 = 80 Pa in Stage 1 and p2 = 3000 Pa in Stage 2.
The recorded data were processed and the variation of flue gas opacity was plotted as a function of the oxyhydrogen flow rate induced in the two situations (for pressures p1 and p2, respectively).
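As an illustration of this processing step, the sketch below averages the recorded opacity series per phase and computes the relative change versus the diesel-only baseline. The file name, column names and data layout are hypothetical, since the raw data format is not given in the paper.

```python
import pandas as pd

# Hypothetical layout: one row per acceleration cycle, with the measured
# peak opacity (%), the HHO flow rate (l/min) and the injection pressure (Pa).
df = pd.read_csv("opacity_runs.csv")  # columns: pressure_pa, hho_lpm, opacity_pct

# Mean opacity for each (pressure, flow) phase; hho_lpm == 0.0 is the Dn baseline.
phase_means = df.groupby(["pressure_pa", "hho_lpm"])["opacity_pct"].mean().unstack()

# Relative change versus the diesel-only baseline at the same pressure.
baseline = phase_means[0.0]
reduction_pct = phase_means.sub(baseline, axis=0).div(baseline, axis=0) * 100
print(reduction_pct.round(1))
```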
Results and Discussion
After carrying out the proposed experiments and plotting the variation of flue gas opacity as a function of the amount of HHO induced into the intake manifold, a clear tendency of reduced flue gas opacity can be noticed for both p1 and p2 (Figure 4). This behavior might be explained by a reduction in the amount of diesel particulates formed, due to more complete combustion influenced by the presence of HHO. After the first flame kernels occur, due to self-ignition of the diesel fuel from the pilot injection, the entire mass of the mixture burns more quickly, more homogeneously and with higher intensity. The conditions at the beginning of the main injection are therefore more favorable, combustion becomes even more homogeneous, and ultimately more complete.
Conclusion
Induction of oxyhydrogen into the intake manifold of a turbocharged diesel engine at low flow rates (up to 1 l/min) was performed for two different HHO injection pressures. The following phenomena were observed: a substantial reduction of flue gas opacity in both cases compared with conventional diesel operation; a decrease in flue gas opacity with increasing amounts of added gas in both cases; and lower flue gas opacity when the engine operates on diesel fuel enriched with HHO at 3000 Pa than at 100 Pa.
With long-term use of HHO-enriched fuel, the overall effect of reduced flue gas opacity is observed for all cases considered.
"year": 2018,
"sha1": "eec08305f395552831c7f4108df392a643622450",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/294/1/012012",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "7dc52ac464240e689c53eedc28d5aa0a51fceaa4",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Engineering"
]
} |
The impact of periodontal treatment on inflammatory markers and cellular parameters associated with atherosclerosis in patients after myocardial infarction
Introduction: The aim of this study was to analyze whether periodontal treatment affects the cardiovascular risk profile of patients after myocardial infarction (MI).
Material and methods: The study included 30 patients with chronic periodontitis (ChP). Sociodemographic and medical variables were collected. Patients were provided with scaling and root planing (SRP) 3 months after MI (1st visit). Periodontal examination and blood tests were performed immediately before SRP, then 1 month and 6 months after treatment (2nd and 3rd visit, respectively).
Results: A statistically significant decrease in blood hsCRP concentration and a decrease in the numbers of white blood cells (WBC) and neutrophils between the first and second visits were observed. At 6 months after SRP, the mean platelet volume (MPV) had increased with respect to the value at 1 month after treatment. Multivariate analysis showed that the associations between: 1) change in LDL-C concentration and change in approximal plaque index value (b = -0.546, p = 0.005); 2) change in the number of monocytes and change in plaque index value (b = 0.616, p = 0.01); 3) change in MPV and change in probing pocket depth (b = 0.567, p = 0.018) are all independent of the classic cardiovascular risk factors.
Conclusions: The obtained results indicate the existence of a relationship between the state of periodontal tissues on one hand and mediators of atherosclerosis and the numbers of immunologically competent cells on the other hand.
Introduction
Cardiovascular diseases (CVD) constitute the most common cause of death and are associated with the deaths of 17.1 million people worldwide annually [1]. In contemporary society we can observe a continuous increase in the prevalence of classic CVD risk factors such as smoking and passive smoking, hypertension, overweight, obesity and excessive consumption of sodium, which translates into steadily increasing numbers of cardiovascular events [2]. However, the coexistence of classic CVD risk factors does not fully explain the total cardiovascular risk in a given patient [3]. Recently, the importance of inflammatory and immunological processes has been underlined in the context of their impact on the development of atherosclerotic plaque [4].
Without doubt, one of the most common inflammatory processes in society is chronic periodontitis, which involves periopathogens that can penetrate into the bloodstream through the damaged endothelium of capillaries. The concomitant periodontal inflamed surface area (PISA) in patients with chronic advanced periodontitis is 39 cm² in size [12]. Among the microbiota, the greatest role is attributed to A. actinomycetemcomitans, P. gingivalis, T. forsythia and P. intermedia [13]. These periopathogens, whose presence in atherosclerotic plaque has been confirmed by polymerase chain reaction, appear to induce thrombocyte activation and aggregation through expression of collagen-like platelet aggregation-associated protein, which may be one of the first stages of thrombosis, eventually leading to an acute coronary episode [14]. Moreover, through innate immune mechanisms based on cellular responses, periopathogens provoke and sustain chronic inflammatory responses ongoing at remote sites [15].
Among the modifiable factors and markers of cardiovascular risk are lipid disorders (increased concentrations of triglycerides (TG), total cholesterol (TC) and low-density lipoprotein cholesterol (LDL-C), and decreased concentration of high-density lipoprotein cholesterol (HDL-C)), mediators of the inflammatory reaction (C-reactive protein [CRP]) and markers of fibrinolysis (fibrinogen). Only a few studies have evaluated associations between the state of periodontal tissues and the numbers of immunologically competent cells in blood. In our previously published study we reported an association between dental status and the systemic lipid profile and inflammatory mediators in patients after myocardial infarction (MI) [16]. The results of multivariate analysis showed a relationship between LDL-C concentration, fibrinogen concentration and white blood cell count (WBC) on one hand and periodontitis and the number of lost teeth on the other hand.
The aim of the study was to assess the relationship between the state of periodontal tissues and the biochemical and cellular parameters associated with the development of atherosclerosis in patients with ChP after MI, as well as to verify the impact of non-surgical periodontal treatment on changes in inflammatory mediators and the numbers of selected blood cell elements.
Patient recruitment
Subjects for the study were recruited from the pool of patients at the First Clinic and Department of Cardiology of the Medical University of Warsaw hospitalized due to a recent acute MI, on the second or third day after transfer from the Intensive Cardiac Care Unit to the General Cardiology Ward. All participants were Caucasian Poles. The study group included 30 patients (6 women, 24 men), mean age 53.44 years (±7.65). The conditions for inclusion in the study were: 1) history of MI; 2) age under 65 years; 3) presence of at least 8 teeth (excluding third molars); 4) diagnosed chronic periodontitis. MI (STEMI and NSTEMI) was diagnosed in line with the ESC Guidelines. The following exclusion criteria were applied: 1) neoplastic disease; 2) rheumatic disease; 3) autoimmune disease; 4) chronic liver diseases; 5) chronic kidney disease at stages 4 and 5; 6) stroke.
The study was conducted in accordance with the ethical norms of the Helsinki Declaration of 1975, as revised in 2000. Permission of the Bioethics Committee at the Medical University of Warsaw (approval number KB-145/2011) was obtained for conducting the research. The subjects expressed their informed consent to participate in the project by signing an informed consent form. Social and general medical histories of participants were taken, and the participants were subjected to physical, dental and periodontal examinations.
Patient population
In order to carry out the study, a properly constructed questionnaire was used. Based on information collected from the interview, data on age, income, education, atherosclerotic disease in the family and smoking were obtained. Income was determined on the basis of monthly income per family member: < 800 PLN, 800-1500 PLN, > 1500 PLN. Education was defined as primary, secondary or university. Study participants were categorized as current smokers if they reported smoking at least 10 cigarettes per day continually for at least 5 years, past smokers if they reported smoking in the past, and never smokers otherwise.
Hypertension was defined as systolic blood pressure (SBP) ≥ 140 mm Hg or diastolic blood pressure (DBP) ≥ 90 mm Hg in three consecutive measurements performed at five-minute intervals, or if the patient was taking antihypertensive drugs. The mean SBP and DBP values obtained on the limb with the higher mean pressure were recorded.
Diabetes was diagnosed if fasting blood glucose concentration was above 126 mg/dl or if the patient was taking appropriate medications.
Height (in cm) and body weight (in kg) were measured using a medical scale with a height meter. Body height was measured in a standing position to the nearest 1 cm; body mass was measured to the nearest 0.1 kg. Body mass index (BMI) was calculated by dividing body weight (in kg) by height squared (in m²). BMI 25-29.9 kg/m² was defined as overweight, and BMI ≥ 30 kg/m² as obesity.
Dental examination determined the number of teeth present in the oral cavity and the number of extracted teeth (except for the third molars). Periodontal examination was carried out at 4 sites around all teeth: mesio-buccal (MB), buccal (B), disto-buccal (DB) and lingual (L). Only probing pocket depth (PPD) and clinical attachment level (CAL) were determined in order to make a preliminary diagnosis of chronic periodontitis [17].
Patients who met all the inclusion criteria and agreed to be included in the study program were invited for another visit to the Department of Periodontology and Oral Mucosa Diseases of the Medical University of Warsaw. The visit took place three months after the end of hospitalization.
The periodontal examination was carried out by a calibrated researcher. The examination was conducted under artificial lighting, using a dental mirror and a periodontal probe calibrated every 1 mm (Hu-Friedy PCPUNC 15). Third molars were not included. The dichotomous plaque index (PI) according to O'Leary was determined on four surfaces of all teeth (mesial, distal, lingual and buccal); the index was evaluated as the ratio of surfaces with plaque to all examined surfaces [18]. In order to assess the effectiveness of interdental space cleaning, the dichotomous approximal plaque index (API) according to Lange was evaluated as the ratio of interdental spaces with plaque to all interdental spaces [19]. Bleeding on probing (BoP) according to Ainamo and Bay was also evaluated, at 4 points around all teeth (MB, B, DB and L); BoP was calculated by dividing the number of bleeding points by the number of all examined points [20]. PPD and clinical attachment level (CAL) were assessed at 4 points around all teeth (MB, B, DB, L). PPD was defined as the distance from the gingival margin to the bottom of the pocket as measured by probing (in mm). CAL was defined as the distance between the bottom of the pocket determined by probing and the cementoenamel junction (in mm). Measurements were rounded down to full mm. The number of active (bleeding) periodontal pockets with a depth of ≥ 4 mm was also recorded.
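To make the ratio-based index definitions concrete, here is a minimal sketch; the function names and percentage convention are ours, for illustration only.

```python
def plaque_index(surfaces_with_plaque, surfaces_examined):
    """O'Leary PI: fraction of examined tooth surfaces carrying plaque (%)."""
    return 100.0 * surfaces_with_plaque / surfaces_examined

def approximal_plaque_index(spaces_with_plaque, interdental_spaces):
    """Lange API: fraction of interdental spaces carrying plaque (%)."""
    return 100.0 * spaces_with_plaque / interdental_spaces

def bleeding_on_probing(bleeding_points, examined_points):
    """Ainamo & Bay BoP: fraction of probed points that bleed (%)."""
    return 100.0 * bleeding_points / examined_points

# Example: 4 probing points per tooth on 19 teeth (the study mean),
# 30 bleeding points -> BoP of about 39.5%.
print(bleeding_on_probing(30, 4 * 19))
```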
Precise diagnosis of periodontitis was made in accordance with the case definitions proposed by the Centers for Disease Control and the AAP (Page and Eke definition), as follows: • mild periodontitis: ≥ 2 interproximal sites with CAL ≥ 3 mm and ≥ 2 interproximal sites with PPD ≥ 4 mm (not on the same tooth); • moderate periodontitis: ≥ 2 interproximal sites with CAL ≥ 4 mm or ≥ 2 interproximal sites with PPD ≥ 5 mm (not on the same tooth); • severe periodontitis: ≥ 2 interproximal sites with CAL ≥ 6 mm (not on the same tooth) and ≥ 1 interproximal site with PPD ≥ 5 mm [17]. After thorough periodontal examination, all patients underwent classic non-surgical treatment (scaling and root planing - SRP) in all quadrants using hand and ultrasonic instruments, with 0.2% chlorhexidine solution added to the coolant. Subsequently, tooth surfaces were polished using rubber cups and polishing paste. The patients were then given individual oral hygiene instructions that included plaque control in interdental spaces and were motivated to properly clean all dental surfaces. Additionally, for 2 weeks after SRP, patients were recommended to use a 0.1% chlorhexidine mouthwash.
Follow-up visits
Subsequent visits took place 1 month and 6 months after SRP (Fig. 1). During each visit, fasting blood samples were taken from the patients to determine the previously mentioned biochemical and cellular parameters. A full periodontal examination was carried out according to the previously described protocol. If necessary, teeth were cleaned of soft and hard deposits by means of repeated SRP.
Statistical analysis
Descriptive statistics such as means, standard deviations (SD) and confidence intervals (CI) at the 95% confidence level were calculated for dental and biochemical parameters. Comparisons of means of dental and biochemical characteristics between subsequent measurements were conducted using the t-test for paired data. Relationships between changes (differences between subsequent measurements) in biochemical variables and periodontal parameters were evaluated using Pearson's correlation coefficient and multiple linear regression. In multiple regression analysis, changes in periodontal variables were considered independent variables, while changes in biochemical parameters were considered dependent variables. In the first stage, gender and age of the patients were included as additional independent variables (Model I). In the second stage, other CVD risk factors, such as tobacco smoking, arterial hypertension, diabetes mellitus, BMI, atherosclerosis in the family, income and education, were added (Model II). Statistical significance for all analyses was set at the 0.05 probability level. The analyses were conducted using Statistica 13 software.
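The original analyses were run in Statistica 13; as an illustration of the same three steps, here is a hedged Python sketch using SciPy and statsmodels. All data in it are randomly generated placeholders, not study data.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 30                                          # study sample size
hscrp_v1 = rng.gamma(2.0, 2.0, n)               # placeholder baseline hsCRP
hscrp_v2 = hscrp_v1 - rng.normal(0.5, 0.8, n)   # placeholder 1-month hsCRP

# 1) Paired t-test between consecutive visits.
t_stat, p_val = stats.ttest_rel(hscrp_v1, hscrp_v2)

# 2) Pearson correlation between changes in a biochemical and a periodontal variable.
d_ldl = rng.normal(size=n)   # placeholder change in LDL-C
d_api = rng.normal(size=n)   # placeholder change in API
r, p_r = stats.pearsonr(d_ldl, d_api)

# 3) Multiple linear regression: change in LDL-C on change in API,
#    adjusted for age and gender (the structure of Model I).
age = rng.integers(40, 65, n)
gender = rng.integers(0, 2, n)
X = sm.add_constant(np.column_stack([d_api, age, gender]))
model = sm.OLS(d_ldl, X).fit()
print(model.summary())
```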
Results
Table 1 presents the demographic and general medical characteristics of the study group. Smoking was reported by 22 participants (73%). Twelve individuals (40%) suffered from hypertension and 4 (13%) from diabetes. Over 80% of subjects (24 patients) demonstrated abnormal BMI values. Table 2 presents the assessed periodontal indices and parameters as well as changes in these variables at the defined time points. Patients' oral hygiene was unsatisfactory, but a downward trend in PI and API was observed. Subjects had an average of 19 teeth. Moderate ChP was diagnosed in 12 subjects (40%) and the advanced form in 15 (50%). Table 3 presents the blood concentrations of TC, TG, LDL-C, HDL-C, hsCRP and fibrinogen as well as the assessed morphotic parameters before periodontal treatment, 1 month after and 6 months after SRP. [Table 3 abbreviations: SD - standard deviation, CI - confidence interval, N - number, TC - total cholesterol, TG - triglycerides, LDL-C - low-density lipoprotein, HDL-C - high-density lipoprotein, hsCRP - high-sensitivity C-reactive protein, WBC - white blood cells, NEUT - neutrophils, LYMPH - lymphocytes, MONO - monocytes, PLT - platelets, MPV - mean platelet volume, * - statistical significance.] A statistically significant reduction was observed in blood hsCRP concentration between the first and second visits, as well as a reduction in the numbers of WBC and neutrophils. Six months after the end of periodontal treatment, MPV had increased with respect to the value after 1 month of treatment.
On the basis of univariate analysis (Tables 4 and 5), correlations between changes in selected biochemical parameters and changes in PI, API, PPD and CAL were observed. Multivariate analysis, corrected in the first stage for gender and age (Table 6) and then for the other assessed CVD risk factors, showed that: 1) the change in LDL-C concentration between the 2nd and 1st visits correlated with the change in API; 2) the change in the number of monocytes between the 2nd and 1st visits correlated with the change in PI; and 3) the change in MPV between the 3rd and 2nd visits correlated with the change in PPD; these relationships are all independent of age, gender, income, education, atherosclerosis in the family, smoking, hypertension, diabetes and body weight disorders.
Discussion
Periodontal diseases and CVD have many common risk factors, such as higher prevalence among older men, smokers and patients with diabetes, body weight disorders, dyslipidemia and lower socioeconomic status, some of which are modifiable and some are non-modifiable. In our own study, smoking, hypertension, overweight and obesity were frequent among patients after MI. The extensive INTERHEART study showed that dyslipidemia yields a 49% risk of MI [3]. In the case of patients with stable CHD, raised CRP concentration increases the risk of MI,
and in the case of patients with unstable CHD and a history of MI, elevated CRP worsens the prognosis and increases the risk of complications. CRP concentration is an independent predictor of cardiovascular risk, even after considering classic risk factors such as hypertension and elevated TC level [21]. On the other hand, elevated fibrinogen correlates with atherosclerosis severity and patient mortality [21]. The exact mechanisms of the effect of periodontal treatment on the course of CVD remain unknown, but these processes may be related both to direct control and attenuation of local inflammatory reactions (which can be expressed by changes in the BoP index) and to regulation of the modifiable risk factors common to periodontal diseases and CVD [22]. In this study, 50% of patients were diagnosed with chronic advanced generalized periodontitis. The main etiological factor of periodontitis is the periopathogens present in the bacterial biofilm; therefore, an essential element of treatment is improvement of mechanical home plaque control by the patient through regular brushing and cleaning of the interdental spaces. The main goal of the hygienic phase is to meet the requirements of oral health education (OHE), that is, a reduction of API below 25% and BoP below 10% [23]. Only after fulfilling this goal is it possible to implement the proper scheme of comprehensive periodontal treatment. The most important aspect of therapy is mechanical instrumentation with hand instruments or mechanical scalers [24]. These procedures are performed in order to remove bacterial biofilm together with mineralized deposits from the tooth surface and to smooth the root cement surface (SRP), in separate sections of the dentition at specific time intervals. The treatment strategy based on simultaneous cleaning and preparation of all teeth is aimed at preventing translocation of periopathogens from untreated sites to pockets already subjected to non-surgical treatment. This procedure should be implemented in patients with generalized moderate and advanced ChP with a high risk of cross-contamination of pockets due to large amounts of supragingival deposits [25]. Therefore, in this study we decided to use a treatment protocol consisting of single-visit instrumentation of all teeth. Patients were instructed on optimal oral hygiene, and treatment results were reassessed after one month and after 6 months. Oral hygiene of the subjects was unsatisfactory, but a downward trend in API was observed. However, between the second and third visits, the other periodontal parameters deteriorated. This proves unsatisfactory cooperation from patients. Only 21 subjects came for the third visit, which means that more than 30% of patients declined further treatment. In the case of poor cooperation on the part of patients and inability to obtain satisfactory bacterial plaque control, palliative periodontal treatment should be implemented, with particular attention to remotivation in the field of oral hygiene and chemical control of the supragingival plaque. Blood biochemical parameters with high diagnostic value in the context of periodontal medicine can be divided into mediators of lipid (TC, TG, LDL-C, HDL-C) and glucose (HbA1c) metabolism, mediators of inflammation (hsCRP, IL-6, TNF-α) and markers of thrombosis (fibrinogen).
[Table 4: Correlations between changes in biochemical parameters from baseline to 1 month and dental status]
A certain role is also attributed to the numbers of immunologically competent cells: leukocytes [26], neutrophils [27], lymphocytes [27,28] and monocytes [28]. Most of these parameters were evaluated in our own study. We observed a statistically significant decrease in blood hsCRP concentration and a decrease in the numbers of WBC and neutrophils between the first and second visits. At the same time, improvements in plaque control (decreases in PI and API), reduction of PPD and of the percentage of active periodontal pockets, as well as CAL gain, were observed. Similarly, Li et al. [28] demonstrated that treatment of periodontitis leads to a statistically significant reduction in the number of CD34(+) cells in blood, whereas Bokhari et al. [26] observed a significant decrease in WBC. However, in our research, 6 months after SRP the mean platelet volume (MPV) had increased with respect to the value obtained 1 month after treatment, in parallel with deterioration of periodontal treatment outcomes (increased PI, BoP and PPD, and CAL loss). In this study, the authors also assessed the relationship between changes in the levels of inflammatory mediators and the numbers of morphotic blood elements on one hand, and changes in the assessed periodontal indicators and parameters on the other hand. The relationships were evaluated in the logistic regression model considering classic CVD risk factors (to exclude their distorting effect). Multivariate analysis showed that the relationships between: 1) change in LDL-C concentration and change in API value; 2) change in the number
of monocytes and change in PI values; and 3) change in MPV and change in PPD are independent of CVD risk factors.
[Table 5: Correlations between changes in biochemical parameters from 1 month to 6 months after treatment and dental status]
Teeuw et al. [11] conducted a meta-analysis of the influence of periodontal treatment on the biomarker profile of atherosclerosis, on the numbers of immunologically competent cells and on improvement of vascular endothelial function. The weighted mean difference (WMD) was significant for TC (-0.11 mmol/l, 95% CI: -0.21; -0.01), HDL-C (0.04 mmol/l, 95% CI: 0.03; 0.06), hsCRP (-0.50 mg/l, 95% CI: -0.78; -0.22) and fibrinogen (-0.47 g/l, 95% CI: -0.76; -0.17) for patients undergoing periodontal treatment compared with untreated subjects. In addition, statistically significant WMDs were observed with respect to concentrations of IL-6 and TNF-α. Periodontal treatment improved vascular endothelial function [31]. Interestingly, in patients with a systemic burden (such as CVD or diabetes), changes in the biomarker profile of atherosclerosis and reduction of inflammatory mediators were even more beneficial: WMD for TC was -0.15 mmol/l (95% CI: -0.29; -0.01), for TG -0.24 mmol/l (95% CI: -0.26; -0.22), for HDL-C 0.05 mmol/l (95% CI: 0.03; 0.06) and for hsCRP -0.71 mg/l (95% CI: -1.05; -0.36). This last issue requires more attention. hsCRP is an acute-phase protein strongly associated with an increased risk of ischemic heart disease, ischemic stroke and vascular mortality [32]. Similarly, lipid profile disorders (increased TC, TG and LDL-C and decreased HDL-C) show strong correlations with the occurrence of CVD [33]. In the cited meta-analysis, periodontal treatment in normal-weight, non-smoking patients with CVD or diabetes resulted in a statistically significant decrease in blood hsCRP concentration compared with overweight, smoking patients without CVD or diabetes (ΔCRP: -0.09 mg/l, CI: -0.60; 0.42, p = 0.73). Most likely, this was associated with higher baseline levels of hsCRP, TG, TC and LDL-C in patients with CVD or diabetes compared with patients without systemic diseases. Overweight, obesity and smoking are major risk factors for CVD [34]. In addition, overweight and obesity correlate strongly with blood hsCRP levels, and weight reduction results in a decrease in hsCRP concentration [35]. This may indicate that overweight has a direct effect on hsCRP levels in blood and, in the case of periodontal treatment, masks the effect of this treatment on concentrations of inflammatory mediators. On the other hand, the lack of significant changes in the atherosclerosis biomarker profile may be related to the direct negative effect of both overweight and tobacco smoking on periodontal tissues; research has shown that both variables worsen the effects of periodontal treatment [36]. Our study revealed that the prevalence of smoking and overweight/obesity was very high, which may partially explain the obtained results. The cited meta-analysis did not include studies assessing the effect of periodontal treatment on the numbers of immunologically competent cells, owing to the small number of studies published in this field, which additionally underlines the value of our work.
Most clinical trials have evaluated only secondary endpoints of the effects of periodontal treatment on cardiovascular risk, and over short periods of time. The Periodontitis and Vascular Events Study (PAVE) was a randomized clinical trial that analyzed the effect of conventional treatment of chronic periodontitis (SRP) in 303 subjects as a secondary CVD prophylactic regimen over 25 months [37,38]. Although the incidence of cardiovascular adverse events in the study and control groups was similar (p = 0.85), SRP significantly decreased the concentration of hsCRP in blood (the adjusted odds ratio for hsCRP levels > 3 mg/l at 6 months after treatment versus no treatment was 0.26; 95% CI: 0.09 to 0.72), but this effect was not observed in obese patients.
Our research has several limitations that require attention when interpreting the presented data. First, only MI patients were included in the study. The lack of a control group makes it impossible to compare the results of periodontal treatment, and its impact on cardiovascular risk, with people without a CVD burden. In addition, all subjects were taking anticoagulant drugs, which could affect the BoP values. Measurements of CAL and PPD were performed at four rather than six measuring points, which could have led to an underestimation of the occurrence of periodontitis. Moreover, bias protection was a critical issue. To curb bias, the investigator who carried out the periodontal examinations did not take part in the active treatment; however, due to the high drop-out rate (30%), the risk of bias is high. Some patients could not be reached (changed phone numbers) or refused further participation in the study. Despite the standard periodontitis treatment protocol, the obtained outcomes were not satisfactory: between the second and third visits, deterioration in all assessed periodontal parameters was observed. This indicates that patients after MI require implementation of more stringent treatment regimens, with a greater focus on oral hygiene instruction and a more frequent schedule of follow-up visits, which should be taken into account in future research. The use of observer-blinded trial methodology would also be preferable; nevertheless, the researcher assessing the state of periodontal tissues was calibrated. Much attention should also be paid to errors related to the influence of interfering variables and modifying factors on the obtained treatment outcomes, but there have been no studies assessing the relationship between periodontitis and MI in which all potential interfering variables were fully controlled. In our study, potential confounding variables were highlighted and statistical models were constructed in logistic regression analysis, controlling for the most relevant of these parameters.
Summing up, it can be concluded that randomized clinical trials are needed to assess the treatment of periodontal disease as a preventive factor (primary and secondary) against acute coronary syndromes. In addition to treatment including mechanotherapy, the importance of using immunomodulatory medications in periodontal treatment should be verified. In the long term, these studies should assess both primary endpoints (CVD death, angina pectoris, MI, stroke) and secondary endpoints (modifiable CVD risk factors: blood pressure, lipid profile, blood levels of inflammatory markers such as hsCRP; parameters assessing myocardial function: ejection fraction). Considering its importance, the research topic remains highly relevant.
Conclusions
The obtained results indicate a relationship between the state of periodontal tissues and mediators of atherosclerosis (hsCRP, LDL-C, HDL-C), the numbers of immunologically competent cells (WBC, neutrophils, monocytes) and MPV. Patients after MI should be referred to dentists, and preferably to periodontists, to undergo early diagnosis and periodontal treatment [39]. In this group of patients, the treatment program should place a very high emphasis on education about oral health in the context of general health, and instructions on maintaining optimal oral hygiene should be repeated often. Control visits should be frequent, and breaks between them should not exceed 2-3 months. Obtaining good effects of periodontal treatment may translate into improvement in mediators and markers of inflammation.
"year": 2018,
"sha1": "e1e82ecbe29d176fe9b9a7d47fbcf0c5127ea292",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.termedia.pl/Journal/-10/pdf-34699-10?filename=The%20Impact.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e1e82ecbe29d176fe9b9a7d47fbcf0c5127ea292",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Earth Science & Climatic Change: Spatial and Temporal Variation of Impacts of Climate Change on the Hydrometeorology of Indus River Basin Using RCPs Scenarios, South East Asia
In this study we assessed the spatio-temporal impacts of climate change on the hydrometeorology of the Indus River basin. We used 0.5 × 0.5 degree resolution Coupled Model Intercomparison Project Phase 5 (CMIP5) global climate model (GCM) outputs of precipitation and temperature (maximum and minimum), and VIC (Variable Infiltration Capacity, a macroscale hydrological model) simulations of evaporation and total runoff at the basin outlet to the Arabian Sea, for the 2030s (2035-2064) and 2070s (2071-2100) under the RCP 4.5 and RCP 8.5 (Representative Concentration Pathway) emission scenarios. The ordinary kriging geostatistical interpolation extension of ArcGIS 10.2 was applied for spatial analysis of precipitation, temperature (maximum and minimum) and evaporation over the river basin. Compared with the base period (1971-2005), the future projections show that average multi-model monthly precipitation decreases during the winter and spring months and increases during the summer months, ranging between -25% and +43%. Average seasonal spatial precipitation changes show various ranges of precipitation distribution; for the 2070s under RCP 4.5, average seasonal precipitation decreases in the mid part of the basin by up to 20%. Average temperature increases for both future periods (2030s and 2070s) and both emission scenarios (RCP 4.5 and RCP 8.5), with the maximum temperature change observed in the Himalaya Mountains. All GCMs except MPI project an increase in future average annual evaporation. The average multi-model GCM projections show that average monthly runoff increases more during summer than winter. The increase in downstream runoff is a result of snow and glacial melt in the high-elevation regions of the Indus River basin. The increased flow probably has positive impacts in meeting the water requirements of small-scale irrigation schemes. Moreover, water can be stored in reservoirs during the summer season and distributed to arid areas of the basin. Due to the increased flow during summer, there may be a high chance of flooding in the plain areas of the basin; therefore, precautionary measures have to be taken in order to minimize the possible risks of flooding to agriculture and the welfare of society.
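The spatial analyses described above were carried out with the ordinary kriging extension of ArcGIS 10.2. As a rough open-source illustration of the same interpolation step, the sketch below uses PyKrige; the coordinates, values and variogram model are made-up placeholders, not data from this study.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical point values: longitude, latitude, mean seasonal precipitation (mm).
lon = np.array([67.5, 70.0, 72.5, 75.0, 77.5])
lat = np.array([24.5, 27.0, 29.5, 32.0, 34.5])
precip = np.array([90.0, 180.0, 320.0, 540.0, 760.0])

ok = OrdinaryKriging(lon, lat, precip, variogram_model="spherical")

# Interpolate onto a regular 0.5-degree grid roughly covering the basin extent.
grid_lon = np.arange(66.0, 82.5, 0.5)
grid_lat = np.arange(24.0, 37.0, 0.5)
z, ss = ok.execute("grid", grid_lon, grid_lat)  # z: interpolated field, ss: kriging variance
```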
Introduction
Climate change has a significant effect on the hydrological cycle; its impacts on precipitation and temperature play a major role in disturbing the cycle [1]. The resulting change in the hydrological cycle may have a direct impact on both evapotranspiration and water availability [2]. This disturbance amplifies its impacts on sectors such as agriculture, industry and urban water resources development [3-5]. In this study we investigate the future spatial and temporal impacts of climate change variation on precipitation, temperature, evaporation and runoff of the Indus River basin. The Indus River basin provides major socioeconomic benefits for millions of people living in South East Asia. For example, the agriculture sector consumes more than 95% of the river flow to ensure food security [6]. In particular, the economy of Pakistan relies considerably on the flow of the Indus River basin, which supports a large proportion of irrigated agricultural land and generates electric power [6]. More than 80% of the Indus flow reaching the Punjab plains is derived from seasonal and permanent snowfields and glacial melt from the Himalaya. Monsoon rainfall and snow from the upper Himalayas contribute significantly to the direct runoff in the lower tributary rivers of the Indus [7,8]. Therefore, precipitation at high altitude generally affects the glacial mass balance and ultimately the hydrologic regime of the basin. Currently, 50-60 percent of the total average flows in the Indus system are fed by snow and glacier melt of the Hindu Kush-Karakoram (HKK) part of the Greater Himalayas; the remaining part comes from monsoon rains on the plains [9]. Variability in the distribution and timing of snowfall and changes in snow and ice melt may be amplified by climate change, which has a large influence on the basin's water resources management [10].
Although the basin is of huge importance to the wellbeing and development of its people, only a limited number of studies have been published so far on the future impacts of climate change on the spatial and temporal variation of the hydro-climatology of the river basin. A study by [11] showed that there is significant climate variation over the basin, but that report used historical climate data and covered only the extreme northern part of the basin.
Description of the study area
The Indus River starts from the Tibetan Plateau (China) and drains through India, Afghanistan and Pakistan before it enters the Arabian Sea. The total length of the river is 3,180 km and it has a
total drainage area of 1,230,000 km² [6]. The climate of the basin is shaped by the winter monsoon (December-February), the hot period (March-May), the summer monsoon (June-August) and autumn (September-November), which is the transition period to the winter season. There is dramatic variation in rainfall distribution across the basin (Figure 1). The mean annual precipitation is less than 100 mm in the lower Indus region near the Arabian Sea (the dry part), and greater than 750 mm in the upper part, below the Himalaya Mountains [12].
Hydrology and elevation of Indus River basin
The Indus river flow originates in the mountain headwaters of the Karakoram Himalaya, western Himalaya, and Hindu Kush Mountains [6]. Runoff generated from snowfall and glacier melt in the high mountain areas contributes flow volume to the central plains tributaries, where the majority of water resources developments exist and further developments are planned [13]. The basin spans a wide range of elevations, from -15 to 7,860 meters (m) (Figure 2).
Materials and data
Topography data: The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Digital Elevation Model (DEM) was used to delineate the boundary of the studied catchment. The digital elevation data were downloaded for the study area.
Hydrometeorology data
Discharge data: VIC (Variable Infiltration Capacity) is a semi-distributed macroscale hydrological model first described in [14]. VIC simulation output from five GCMs under the RCP 4.5 and RCP 8.5 future-period scenarios, comprising total runoff at the basin outlet to the Arabian Sea and evaporation for the grid cells of the Indus River basin, was obtained from the Earth System Science (ESS) department, Wageningen University, The Netherlands, and used to examine the hydrological impacts of climate change under the RCP 4.5 and RCP 8.5 emission scenarios.
GCMs climate data
Future climate projections were taken from the Coupled Model Intercomparison Project Phase 5 (CMIP5), an intercomparison project launched by the international climate research community in 2010. The CMIP5 projections of climate change are driven by concentration or emission scenarios consistent with the Representative Concentration Pathways (RCPs) described in [15]. The IPCC Expert Meeting in September 2007, in which 130 researchers and users participated, identified the RCP as a new greenhouse gas concentration scenario and established frameworks and development schedules for the Climate Modelling (CM), Integrated Assessment Modelling (IAM), and Impact Adaptation Vulnerability (IAV) communities for the fifth IPCC Assessment Report [16]. According to the IPCC, the AR5 (Fifth Assessment Report) climate model scenarios based on the RCPs were being finalized at the time of this study. The development of the RCPs helps advance the coordinated production of new integrated socioeconomic, emission, and climate scenarios. Each RCP includes a concentration pathway and corresponding emission and land use pathways, which are used as input to climate models for developing global climate projections. The new scenario projections cover a near term up to 2035, a long term up to 2100, and an extended period up to 2300 [17].
Four different Representative Concentration Pathways have been developed, named according to their radiative forcing levels [17] (Table 1).
For this study we used bias-corrected outputs of five different climate models (MPI-ESM-LR, IPSL-CM5A-LR, HadGEM2-ES, EC-EARTH-DMI and CNRM-CM5) under a mid-range emissions scenario (RCP 4.5) and a high emissions scenario (RCP 8.5). The data series starts on January 1, 1971 and runs to 2100, except for HadGEM2-ES, which extends only to 2099.
Ordinary kriging
The kriging interpolation techniques were implemented with the Geostatistical Analyst tool in ArcGIS. Geostatistical Analyst is an extension of ArcMap used to generate surfaces from point data; it is a powerful, comprehensive and user-friendly package. Surface fitting using Geostatistical Analyst involves exploratory spatial data analysis, calculation and modelling of the surface properties of nearby locations, and surface estimation and assessment of results [18].
There are various types of kriging; ordinary kriging is the most widely used and accepted, and it was selected here. Ordinary kriging has been reported to outperform other geostatistical methods for spatial interpolation of precipitation and temperature [19]. It assumes that no linear trend exists in the data and instead models the spatial autocorrelation using a semivariogram, which allows attribute values to be estimated at unsampled locations [18].
In this study we applied the ordinary kriging spatial interpolation technique using the ArcGIS 10.2 software. The spatial variation of future climate change impacts on precipitation, temperature and evaporation in the 2030s and 2070s, based on CMIP5 output under the RCP 4.5 and RCP 8.5 emission scenarios, was analysed using ordinary kriging. For the temporal analysis, statistical techniques were applied.
The basic form of the empirical semivariogram is given in Equation 1 [20]:

\gamma(h) = \frac{1}{2N(h)} \sum_{i=1}^{N(h)} \left[ Z(x_i) - Z(x_i + h) \right]^2    (1)

where Z(x_i) is the value at point location x_i, h is the separation distance between two point locations, and N(h) is the number of point pairs separated by distance h. The larger the value of \gamma, the less similar the location points. The fitted model is defined by its type and model coefficients, which include the nugget variance, structured variance, sill, range, and gradient.
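To make the estimator in Equation 1 concrete, the minimal Python sketch below computes an empirical isotropic semivariogram by binning point pairs by separation distance. It is an illustration only, not code from the study; the station coordinates and values are invented placeholders.

import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    # Pairwise separation distances and semivariances 0.5*(z_i - z_j)^2
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sv = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)  # count each point pair once
    d, sv = d[iu], sv[iu]
    gamma = np.full(len(bin_edges) - 1, np.nan)
    for k in range(len(bin_edges) - 1):
        in_bin = (d >= bin_edges[k]) & (d < bin_edges[k + 1])
        if in_bin.any():
            gamma[k] = sv[in_bin].mean()  # mean semivariance at this lag
    return gamma

# Hypothetical example: 50 random "stations" sampling a smooth field plus noise
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 100.0, size=(50, 2))
values = np.sin(coords[:, 0] / 20.0) + 0.1 * rng.standard_normal(50)
print(empirical_semivariogram(coords, values, np.linspace(0.0, 60.0, 7)))

A variogram model (with its nugget, sill and range) would then be fitted to these binned values before kriging predictions are made.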
Spatio-temporal impacts of climate change on the hydrometeorology of the Indus River basin
The spatial and temporal impacts of climate change on hydrometeorological variables for the 2030s and 2070s under the RCP 4.5 and RCP 8.5 emission scenarios, based on CMIP5 (Coupled Model Intercomparison Project Phase 5) output for the Indus River basin, were analysed.
Climate change impacts on precipitation
Mean monthly precipitation changes in the multi-model GCM projections for the 2030s and 2070s under both emission scenarios show that precipitation increases during the main rainy season (summer) and decreases during the winter season compared with the baseline period. In a study by [21], a decrease in winter precipitation was also observed from the analysis of long-term observed climate data for the upper Indus basin. The increase is more pronounced in the 2070s than in the 2030s (Figures 3a and 3b). Generally, the average annual precipitation change over the future horizon in the five GCM outputs also indicates a positive change under both RCP emission scenarios. CNRM and HadGEM showed a strong positive change for all periods and RCP emissions, while MPI and IPSL projected a decrease in magnitude compared with the baseline period.
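As an illustration of the kind of baseline comparison described above, the short sketch below computes the multi-model mean monthly percentage change of precipitation between a future period and a base period. The arrays are synthetic placeholders, not the study's data, and the array shapes are assumptions for the example.

import numpy as np

n_models, n_years_base, n_years_fut = 5, 35, 30
rng = np.random.default_rng(1)
# Monthly precipitation [model, year, month]; hypothetical values in mm
base = rng.gamma(2.0, 30.0, size=(n_models, n_years_base, 12))
future = base[:, :n_years_fut, :] * rng.uniform(0.75, 1.43, size=(n_models, 1, 12))

base_clim = base.mean(axis=(0, 1))    # multi-model mean monthly climatology
fut_clim = future.mean(axis=(0, 1))
pct_change = 100.0 * (fut_clim - base_clim) / base_clim
for month, change in zip("JFMAMJJASOND", pct_change):
    print(f"{month}: {change:+.1f}%")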
The spatial distribution of seasonal precipitation for the historical period shows that there is less precipitation at both extremes of the basin: at the very lower peripheral tip of the basin, the low precipitation is associated with high temperature and evaporation, while at the other end it reflects the accumulation of solid snow and ice and the low temperature and evaporation characteristic of that area.
The average seasonal precipitation change for the 2070s under the RCP 4.5 emission scenario, projected from the five different GCM outputs, shows a positive change in major parts of the basin, particularly in the mid part where various water development projects exist and more are planned. However, a decrease in average seasonal precipitation is projected in almost all parts of the basin under the MPI and IPSL GCMs, ranging from -20% to 0% compared with the baseline period (Figure 3c). A study by [22] indicated that the spatial precipitation distribution in the Indus River basin is non-uniform, with a higher summer precipitation increase projected in the upper part and a decrease in the lower part, based on Hadley Centre coupled model output under the A1B emission scenario for 2071-2098. This result closely matches our findings: almost all the GCM models projected an increase in precipitation at the end of the century in the upper part and a decrease in the lower part of the basin. Another study [23], using CMIP5 output under RCP 4.5 and RCP 8.5, also indicated that the upper part of the basin will become progressively wetter in the future, especially during the summer season; however, that report covered only the upper part of the basin, the so-called Karakoram-Himalaya.
Minimum temperature
Likewise, the multi-model results for average monthly and annual minimum temperature showed patterns and changes similar to those of maximum temperature for the future periods under both RCP emission scenarios for the Indus River basin (Figures 5a and 5b).
The projected spatial change in average seasonal (JJA) minimum temperature from the five GCMs exhibits a positive change in magnitude and direction in all parts of the basin, ranging from +1 °C to +4 °C compared with the baseline period. A high positive change in average seasonal minimum temperature is expected in the northern and southern parts of the basin. It has been indicated in different studies that future temperature change in the Himalayas region (high elevation) is occurring at three times the global average rate [24]. As shown in Figure 5c, there will be a larger temperature change in the Himalayan part than in the southern part of the basin. Moreover, the result presented in Figure 5c is in line with the IPCC assessment report, according to which mean annual temperature increases by 3 °C at mid-century over the Asian land mass and by 5 °C by the end of the 21st century [24]. Another study [25] over the Tibetan Plateau of the Himalaya, using CMIP5 output from 24 GCMs under the RCP 8.5 and RCP 2.6 emission scenarios, indicated that temperature will increase over the plateau in the 21st century.
Climate change impacts on evaporation
Generally, multi-model average monthly evaporation increases during the winter season and decreases during the summer season in both future periods and under both RCP emission scenarios. All GCMs except MPI projected that average annual evaporation increases in the 2030s and 2070s under the RCP 4.5 and RCP 8.5 emission scenarios compared with the base period. The maximum change was observed in the HadGEM GCM.
The average seasonal spatial distribution of evaporation during the baseline period shows less seasonal evaporation in the lower and upper parts of the basin, due to water and energy limitation respectively. The maximum evaporation rate occurs in the central part of the region. Overall, the five GCMs showed a similar pattern of seasonal spatial distribution of evaporation over the basin.
The five GCMs projected different spatial patterns of seasonal (JJA) change in evaporation for the 2070s under RCP 4.5 compared with the baseline period. The EC-EARTH GCM exhibited a positive change in all parts of the basin, ranging from 0 to 20%, whereas the CNRM GCM projected a more or less general decrease in evaporation in major parts of the basin (Figures 6a-6c). The strong increase in summer runoff found in this study is closely related to the rise of temperature in the upper part of the basin. A study by [26] suggested that a 1 °C rise in mean temperature arising from climate change would increase runoff in the upper river basin flows by 17%. Although the increase in summer runoff from glacial melt may contribute substantially to the Indus irrigation system, glacial coverage may retreat in the future as temperatures rise [26].
Discussion and Conclusion
Our analyses show that climate change, driven by greenhouse gas emissions into the atmosphere, potentially has a significant impact on the hydrometeorological variables of the basin. The resulting global warming disturbs the hydrological cycle and can create water resources management crises, especially by reducing agricultural production and deteriorating environmental ecosystems. The multi-model GCM temperature projections show that average monthly and seasonal maximum and minimum temperatures increase for both future periods (2030s and 2070s) and both RCP emission scenarios (RCP 4.5 and RCP 8.5). All five GCM projections indicate that average maximum and minimum temperatures increase in the future. The average seasonal spatial temperature change of the five GCMs for the 2070s under RCP 4.5 shows the maximum temperature increase in the northern part of the basin (Himalayan terrain), ranging from +3 °C to +4 °C.
The change in temperature contributes directly to the change in evaporation. The multi-model GCM analysis shows that average monthly evaporation increases during winter and decreases during summer. Even though temperature increases during the summer season, evaporation unexpectedly decreases; this may be due to the large coverage of glacier and snow accumulation in the northern part of the basin, which increases the albedo of the glacier surface and limits the evaporation rate from the surface [27]. Nevertheless, all five GCMs except the MPI GCM projected that average annual evaporation increases in the future under the RCP 4.5 and RCP 8.5 emission scenarios.
The changes in precipitation and temperature have a large impact on the hydrological regime characteristics of the flow. These effects have been noticed in rivers in different parts of the world, where flows have become unpredictable and drought risks are increasing. This study's results show that average multi-model monthly runoff increases for the 2030s and 2070s under the RCP 4.5 and RCP 8.5 emission scenarios, with high runoff expected during the summer season. Generally, almost all five GCMs projected that average annual runoff increases in the future periods. The direct reason for the increase in runoff is the increase in precipitation over the basin, although the annual amount of precipitation is relatively low in major parts of the basin; snow and glacier surface melt from the high-altitude part of the basin during the summer season contributes significantly to the high runoff volume. The increase in average flow may help secure water development structures located downstream of the glacier regions. On the other hand, water structures developed further away may face difficulty in receiving the required flow, and agricultural production may be affected [28]. Overall, the results show that runoff increases in the future. High flow is expected during summer in the snow-fed and glacier-melt watersheds of the basin, which is vital for sustainable agricultural activity in the region through surface and groundwater irrigation, as described in [29].
The projected future changes in hydro-climatic variables shown in this study are similar to the results presented by [30] and [31]; those two papers used CMIP3 output, whereas we used CMIP5 output. The percentage changes in precipitation, temperature, evaporation and runoff presented here are in line with the global projections of future change by the IPCC (Intergovernmental Panel on Climate Change).
Climate change analyses using general circulation models (GCMs) carry various uncertainties and give different results in different parts of the world. Particularly at high elevations, because of the complex terrain, GCMs are unlikely to forecast with a high degree of precision [27]. Moreover, rainfall biases in climate models remain very high, which is why we used bias-corrected data; the bias correction itself adds further uncertainty to our results [32]. Therefore, some additional uncertainty may be present in the analysis results.
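The paper does not state which bias-correction method was applied to the GCM data it received, so the following is purely illustrative: empirical quantile mapping, one widely used approach, sketched with synthetic numbers.

import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    # Map each future model value through the model-vs-observed historical CDFs
    q = np.linspace(0.01, 0.99, 99)
    model_q = np.quantile(model_hist, q)   # model historical quantiles
    obs_q = np.quantile(obs_hist, q)       # observed quantiles at the same levels
    return np.interp(model_future, model_q, obs_q)

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 25.0, 1000)           # hypothetical observed precipitation
model = rng.gamma(2.0, 35.0, 1000)         # model climatology with a wet bias
future = rng.gamma(2.0, 38.0, 200)         # raw future model values
corrected = quantile_map(model, obs, future)
print(corrected.mean(), future.mean())     # corrected mean sits closer to the observed scale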
In conclusion, climate change has large impacts on the hydro-climatological variables of the basin, which play a great role in the hydrological cycle. Runoff increases, particularly during summer. It is therefore advisable to develop large-capacity water schemes that can hold the incoming high flood flows and harvest water for the dry season, supporting agricultural productivity and ensuring food security for the rapidly growing population of the region.
Because this study relies on different climate model projections, it is important to note that the quantitative predictions contain significant uncertainties. Nevertheless, the results obtained in this research indicate possible directions for developing and planning critical policies and strategies to address and manage the anticipated climate change risks to water resource schemes and agricultural productivity, for the better well-being of the millions of people living in the Indus River basin. | 2019-04-24T13:13:37.166Z | 2014-12-10T00:00:00.000 | {
"year": 2014,
"sha1": "dfb3a4f4417c54604856c54b59771a43aef9dc1a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2157-7617.1000241",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "51d28817c9b7a4966c45780b83bc1e682d1b2c6e",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
16656858 | pes2o/s2orc | v3-fos-license | Health system and societal barriers for gestational diabetes mellitus (GDM) services - lessons from World Diabetes Foundation supported GDM projects
Background Maternal mortality and morbidity remains high in many low- and middle-income countries (LMIC). Gestational Diabetes Mellitus (GDM) represents an underestimated and unrecognised impediment to optimal maternal health in LMIC; left untreated – it also has severe consequences for the offspring. A better understanding of the barriers hindering detection and treatment of GDM is needed. Based on experiences from World Diabetes Foundation (WDF) supported GDM projects this paper seeks to investigate societal and health system barriers to such efforts. Methods Questionnaires were filled out by 10 WDF supported GDM project partners implementing projects in eight different LMIC. In addition, interviews were conducted with the project partners. The interviews were analysed using content analysis. Results Barriers to improving maternal health related to GDM nominated by project implementers included lack of trained health care providers - especially female doctors; high staff turnover; lack of standard protocols, consumables and equipment; financing of health services and treatment; lack of or poor referral systems, feedback mechanisms and follow-up systems; distance to health facility; perceptions of female body size and weight gain/loss in relation to pregnancy; practices related to pregnant women’s diet; societal negligence of women’s health; lack of decision-making power among women regarding their own health; stigmatisation; role of women in society and expectations that the pregnant woman move to her maternal home for delivery. Conclusions A number of barriers within the health system and society exist. Programmes need to consider and address these barriers in order to improve GDM care and thereby maternal health in LMIC.
Background
Although maternal mortality and morbidity have received increased attention in the last two decades, they remain a huge public health challenge in many countries. According to the World Health Organization (WHO), approximately 1000 women die from preventable causes related to pregnancy and childbirth every day, with 99% of these deaths occurring in low- and middle-income countries (LMIC) [1]. Haemorrhage, hypertensive disorders, obstructed labour and infection/sepsis are among the leading global causes of maternal mortality [2]. Gestational diabetes mellitus (GDM) directly or indirectly increases the risk of all the above conditions but is rarely mentioned among the causes of maternal mortality and morbidity. Hyperglycaemia may affect <1-19% of pregnancies in LMIC [3][4][5][6][7][8][9][10][11][12][13][14] and is one of the most common medical conditions affecting pregnancy. Hyperglycaemia during pregnancy (GDM and pre-gestational diabetes) increases the risk of maternal and perinatal mortality, obstructed labour, spontaneous abortion, stillbirth and macrosomia [15]. In countries where appropriate care for obstetrical emergencies is lacking, GDM may have particularly severe consequences for the health and well-being of the mother and child. GDM therefore represents an underestimated and unrecognised impediment to optimal maternal and neonatal health in LMIC.
Studies have shown that it is possible to reduce the risk of adverse pregnancy outcomes for women with GDM if proper management is initiated and tight glycaemic control obtained [16,17]. The cost-benefit of screening women for GDM is hotly debated, particularly since the introduction of the International Association of Diabetes and Pregnancy Study Groups (IADPSG) guideline, under which more women are likely to be identified and require care; such calculations depend on the intervention, the underlying prevalence, opportunity cost, local costing and so on. Models that take into account not only the immediate pregnancy outcomes but also the potential for future prevention of type 2 diabetes in the mother and offspring show cost savings or a very favourable cost-effectiveness ratio [18]. Addressing GDM through early detection and proper management therefore constitutes an opportunity to improve maternal health. In the absence of an international consensus, multiple different guidelines on screening and diagnosis of GDM have existed for a long time. This may be changing with the publication of the IADPSG recommendations. While an international consensus on screening and diagnosis of GDM is welcome, it fails to take into account feasibility and applicability in low-resource settings, which would be needed to ensure wider usage. The barriers and challenges to screening and diagnosis and the applicability of various tests have been described by us in a recent paper [19]. It is recommended that women with GDM be screened for diabetes at the earliest around six weeks postpartum [20][21][22]. After delivery most women with GDM return to normal glucose regulation but continue to have a high risk of future diabetes, and some will be found to have overt diabetes, impaired fasting glucose or impaired glucose tolerance. Identifying these women and providing them all with appropriate treatment or preventive care is another opportunity and challenge. Planning appropriate strategies to address these issues will require a better understanding of the barriers currently hindering detection and treatment of GDM. Based on experiences gained from GDM projects supported by the World Diabetes Foundation (WDF), this paper investigates societal and health system barriers hindering such efforts.
Methods
From 2002 to 2010 WDF granted support to 253 projects. To be included in this study, a project had to address GDM and have begun implementing activities before March 2011. Eleven projects from eight different LMIC qualified and were included in the study. Questionnaires were sent to the project partners, and the partners were asked to participate in an interview. As one project partner was implementing two of the included projects, a total of 10 partners participated. All 10 responded to the questionnaire, and interviews were conducted face-to-face with three partners, over the phone with six partners and via email with one partner. Participation in the study was voluntary; before each interview the purpose was explained to the respondents and consent to participate obtained. Permission to audio record the interview was also requested from, and given by, the nine project partners who were interviewed face-to-face or via telephone. Upon enquiry with the Danish Biomedical Research Ethics Committee we were assured that this study was exempt from ethical approval, as it was a questionnaire and interview study without the use of human biological material.
The questionnaire was designed to obtain information about the projects, e.g. whether a project was implemented in public or private health facilities (see Additional file 1), with the intention of gaining a better understanding of the projects and thereby the context of the qualitative data. The interview guide employed for the interviews was semi-structured, with mainly open-ended questions delving into barriers and challenges related to screening, diagnosis, treatment and follow-up of GDM (see Additional file 2). The list of questions and appropriate probes was drafted by KKN and AK based on a literature search, challenges previously reported by WDF project partners and broad issues, e.g. barriers in the health system, that we wanted to cover in the interviews. Finally, some specific information from the questionnaire also triggered questions during the interview.
Content analysis was used to analyse the interviews, which were recorded and transcribed immediately after they were conducted, making the analysis an ongoing activity as it allowed us to be more aware of emerging themes that we could probe further during later interviews. The interviews and questionnaires were then searched for meaning units and coded by developing categories. The categories were reviewed to make sure that no categories were describing the same phenomena, and subsequently organised into core themes.
Results
Five of the projects are implemented in India, two in Latin America and the Caribbean, two in Sub-Saharan Africa, and one each in China and Sudan. Two projects, in Kenya and Cuba, are implemented by either the Ministry of Health or national government institutions. The remaining projects are implemented by local NGOs, local or international research institutions, hospitals or private initiatives; however, all projects collaborate with national or state health authorities. Six of the projects are implemented solely at public/government health centres. The other five projects are implemented at both public/government and private (including faith-based) health centres. See Tables 1 and 2 for more information about the projects.
Health system and societal barriers to GDM detection and treatment
A number of health system and societal barriers were described by the informants. An overview of the barriers is given in Figure 1.
Health system barriers
Lack of trained health care providers and high staff turnover
A main barrier within the health system was the shortage of trained health care providers. This barrier involves two dimensions: having enough health care providers to take care of the patient load and having health care providers with adequate training to provide quality care.
The first issue is the absolute critical shortage of health workers. The WHO estimates that 4.3 million more health workers are required to meet the health Millennium Development Goals (MDGs), a global compact to reduce child mortality, improve maternal health, and combat AIDS, malaria, and other diseases by 2015 [24]. But even this alarmingly high figure significantly underestimates the global need for human resources, because the WHO only accounts for shortages in 57 countries that miss the minimalist target of 2.28 doctors, nurses, and midwives per 1,000 population [25]. This shortage of health resources is further compounded by prioritisation based on perceptions of the importance of particular health issues, and here GDM often loses out: respondents reported that health care planners/providers do not consider GDM important enough to prompt action, as it is not part of national disease surveillance reports. When resources are limited, only issues that are part of surveillance systems receive priority and get resourced.
Traditionally this is an area where the predominant diseases are the communicable diseases and much is required for malaria and HIV/AIDS. You will find that our clinics are so geared for promoting prevention of maternal to child transmission that trying now to introduce the issue of GDM takes time.
Respondent from project in Kenya
Two of the respondents from India mentioned that in their area it is not so much the number of available health care providers, but more an issue of not having enough female health care providers, especially female doctors, as many women do not want to discuss issues related to reproductive health with male doctors.
The second issue is lack of awareness and inadequate training of health care providers on GDM and the links between non-communicable diseases (NCDs) and maternal health. This is another important impediment for detection and treatment of GDM. Especially for those women who require insulin management this can be problematic as some respondents reported that health care providers in general are not sufficiently trained or confident enough in prescribing insulin.
Our health care personnel in the country are still afraid of insulin. They still try to stay far away from insulin, so when they reach a stage where they have to prescribe insulin it becomes a problem. Only few doctors would be used to prescribe insulin.
Respondent from project in Cameroon
Also lack of knowledge among health care providers about proper diet and meal plans for women with GDM were reported by respondents.
Retaining health care providers, who have received training in the area of diabetes and GDM, can be challenging as turnover of staff is quite high in some places particularly when there are limited human resources and if the learning has not been passed on to other health care providers.
It is mainly a problem with staff. Trained staff that might be posted elsewhere, so you are not sure that people you've trained will still be there one, two, three years later; so how to ensure that the message will go across to the whole teamthat is a bit challenging.
Lack of standard protocols
Another barrier mentioned is the lack of standard protocols for diagnosis and management of GDM. Consequently, some of the projects have developed such protocols themselves; yet, some of the respondents also report challenges with the development, dissemination and/or implementation of such protocols. One project for example initially intended to base their protocol on international guidelines, but discovered that many women were unable to provide the required information, making it very complicated to screen based on risk factors as recommended in the guidelines of some organisations e.g. American Diabetes Association 2010, Fifth International Workshop-Conference on GDM 2007 and National Institute for Health and Clinical Excellence 2008 [22,26,27]. These challenges have been explored in greater detail by us in an earlier paper [19].
Lack of consumables and equipment
Lack of test consumables and equipment was also highlighted as a health system barrier to screening, detection and management of GDM. The materials needed include laboratory equipment, glucose solution, glucometers, equipment for monitoring foetal development, and instruments, i.e. computers and software, for record keeping and administration. Without the necessary equipment and consumables it is next to impossible to ensure proper care and follow-up.
Field staff can be trained, field staff can be motivated, field staff can be encouraged and become willing to do it, as long as they have the way and tools to do it. It is no good asking field staff to do something that they don't have the equipment to do, the time to do or the knowledge to do.
Financing of health services and treatment
Another issue is the lack of health financing for screening and treatment, i.e. when the patient is obliged to pay a fee for screening and/or treatment services and consumables. However low the cost, paying out of pocket is a barrier to accessing care and adhering to treatment for many women with GDM in LMIC. Not only is the cost of medication a barrier; even the cost of following the recommended diet can be challenging for many.
The other obstacle will always be whether changing a diet is economically feasible. I think we should really pay a lot of attention to that when we are dealing with GDM.
Respondent from project in Jamaica and Panama
In addition, one of the respondents from India noted that although services within the government health care system may be offered free of charge or at subsidised rates, the lack of trained health care providers in reality leaves some women with GDM with no choice other than to seek care at private health facilities with considerably higher costs and this option is not possible for women with GDM belonging to the poorer segments of society. Thus, the cost of the treatment as well as fee for services, i.e. consultations and tests, in some contexts constitutes a barrier for proper treatment; health financing mechanisms therefore not only need to address access and costs, but also quality and comprehensiveness of the programmes.
Lack of referral systems, feedback mechanisms and follow-up systems
Another issue mentioned is the lack of functioning referral systems and feedback mechanisms, especially in cases where treatment is offered not at the primary health care level but at more specialised clinics. Women who are referred to other clinics for care may be lost between the referring and the reference health facility if neither of the two follows up on whether she actually attends the other institution. Similarly, when there is no feedback, health care providers at the primary health care level often do not refer patients or, if unwilling to deal with GDM, may refer all cases to specialist centres. Moreover, even when women are treated at the same clinic where screening and diagnosis take place, continuous follow-up before, during and after delivery poses a challenge when no follow-up system is in place. This is particularly true of postpartum follow-up and care, when the woman no longer has diabetes (although both mother and child carry a very high risk of future diabetes) and is therefore seen by neither the obstetrician nor the diabetes specialist, becoming lost in the system. Nonetheless, both mother and child may be visiting the same health facility for the well-baby clinic or immunization programme, but the system fails to identify them and provide continued counselling because of the lack of communication between different departments.
Distance to health facility
Transportation to the health centre, in terms of both cost and distance, can also constitute a barrier to early detection, diagnosis and treatment of GDM according to respondents. Distance is a particular problem when the woman lives far from the health facility, since she is required to attend the facility regularly for monitoring. Travelling long distances under difficult travel conditions during advanced pregnancy poses many challenges and often requires that the woman be accompanied by an escort, further adding to the cost and reducing feasibility. It is therefore not surprising that women in most LMIC, particularly from rural areas, reportedly have far fewer antenatal visits.
Perceptions of female body size and weight gain/loss in relation to pregnancy
In some countries societal or cultural issues can hamper treatment of GDM. Some respondents reported on local perceptions of the desirable body size and shape of women, not being conducive to motivating them to improve eating habits and lose weight.
In Jamaica for example the ideal body size is big... So when you are dealing with people who have a body image which means that being large and heavy is quite acceptable and maybe even attractive then it is very difficult to try and get people to change their diet.
Respondent from project in Jamaica and Panama
Moreover, the issue of eating habits or losing weight during pregnancy may be particularly sensitive in some areas.
People are not comfortable about the idea of not gaining enough weight during pregnancy. They just feel it means you are sick, that you have some sort of disease. So they would want to put on some weight during pregnancy, and when I say 'some weight' the understanding of 'putting on some weight' can vary a lot. So the idea of putting on weight during pregnancy is something important to them. In urban areas it won't be the same, but in semi-urban and rural areas they are not even expected to lose weight after giving birth so they are sometimes overfed by the family after delivery just to keep as big as they were during pregnancy.
Respondent from project in Cameroon
Notions like these are not only problematic for treatment of GDM, but also for the postpartum prevention of future onset of type 2 diabetes.
Practices related to pregnant women's diet
Other aspects related to diet were also brought out by the respondents. For instance, one respondent from India noted that it is customary to encourage pregnant women to eat sweets and certain calorie dense, high fat snacks in order for them to have enough energy, and people bring such food as gifts when they visit. Thus, being on diet where such things are banned can be a damper on the celebrations of the pregnancy and child birth within the family and raise issues about the health of the young woman thereby curbing her motivation to eat healthy.
Moreover, it was stated that it would not always be considered appropriate for women in India to have special low-calorie food for herself as she is expected to eat the same as the rest of the family and not attract much attention to herself and her needs.
Societal negligence of women's health
Moreover, cultural notions about women and the importance of their health also emerged as a barrier. Some respondents explained that sometimes the woman's family may not consider her health to be important enough to spend the extra money on healthy foods or treatment. This may especially be the case after delivery as the health of the woman is no longer seen as influencing the health of the baby.
The health of women in India is the most neglected, under-looked and deficient system of the whole country. People cannot be bothered. They are just not bothered about the health of women whether it is diabetes or anything else.
Respondent from project in Punjab, India
Lack of decision-making power among women regarding their own health
In many cultures the woman herself does not make decisions, even those concerning her own health; those decisions are generally made by her husband and/or in-laws, and if they decide that she should not attend antenatal care or should not have a specific test performed, it is very difficult for her to demand the test.
Whether a woman should go for antenatal check-up or not is a decision taken by her husband; if she goes there and she finds that there is some problem, what kind of treatment, which doctors she should consult etc. - all these decisions are being taken by the male counterpart.
Fear of stigmatisation
Another impediment noted by some of the respondents is that it can be highly stigmatising for a woman to be diagnosed with GDM and the consequences of this for her can be intimidating.
That fear inside her that everything will go wrong in her life. Even if it is a risk or recommendation from a doctor, a call from a doctor that 'you are at risk of getting diabetes' or 'the child will get affected with some borderline hyperglycaemia' would probably ruin her family life - her husband would not look upon her nicely or her mother-in-law will always be sarcastic in her remarks.
Respondent from project in Punjab, India
Therefore, some women refuse the test simply because they fear the consequences of its result. Yet, even among women who are diagnosed with GDM postpartum testing for overt diabetes is a challenge because of such fears as a diagnosis of overt diabetes can be devastating for her in financial, emotional and social terms.
Role of women in society
A number of more practical aspects were also mentioned as barriers to GDM detection, diagnosis and treatment. Many of these are related to the woman's role in society, such as having to take care of the children and doing other household chores. Being too busy to attend antenatal care and GDM testing was therefore cited as another barrier to early detection of GDM. The issue revolves around both the time consumed by the test and the time spent on transport to and from the health centre. This is an even bigger issue for the long-term follow-up of women with GDM aimed at future prevention of diabetes. Even if one tries to establish a follow-up mechanism through the well-baby or vaccination programme, it may not work, because the child may be brought to the clinic by somebody else, for example a grandparent, while the woman deals with the household chores. Creating outreach services based on home visits is therefore very important in these contexts.
Expectations that the pregnant woman move to her maternal home for delivery
Some respondents noted that in their area, women tend to move to their maternal home before delivery, adding a further barrier to care delivery and follow-up, as the health care provider in the new area may not have the full records, may not be well versed with the case or may not have the training to deal with GDM.
Discussion
In this study a number of barriers to improving maternal health related to GDM were identified, including lack of trained health care providers -especially female doctors; staff turnover and lack of standard protocols, consumables and equipment; financing of health services and treatment; lack of or poor referral systems, feedback mechanisms and follow-up systems; distance to health facility; perceptions of female body size and weight gain/ loss in relation to pregnancy; practices related to pregnant women's diet; societal negligence of women's health; lack of decision-making power among women regarding their own health; stigmatisation; role of women in society and expectations that the pregnant woman move to her maternal home for delivery.
To our knowledge, only a few studies have previously investigated barriers to management or postpartum follow-up of women with diabetes during pregnancy [28][29][30][31][32], and none of these are from LMIC. Although these studies were conducted in settings very different from ours, there are certain similarities between their findings and ours. Bennet et al., Collier et al., Mersereau et al. and Razee et al. reported lack of concern about women's health, either because the women feel healthy or because they have less time for self-care due to the demands of the baby or other responsibilities, as a barrier to GDM management or postpartum follow-up [28][29][30][31]. Fear of being diagnosed with overt diabetes was also identified by Bennet et al. as a barrier, although the reason behind this fear was not grounded in fear of stigmatisation but rather the prospect of having to follow a strict diet and regularly attend medical check-ups [28]. Collier et al. also identified the cost of health services, diabetic supplies and healthy foods as barriers [29]. Finally, difficulties in accessing care and cultural issues impeding healthy diet and physical exercise were also identified by Razee et al. [29][30][31].
Considering that WHO in the 2006 World Health Report concluded that there is a global shortage of almost 4.3 million doctors, nurses, midwives and support workers [24], it is not surprising that turnover and lack of trained health care providers were mentioned as barriers to GDM services. Seven of the projects included in this study are implemented in countries where the WHO assesses there to be a critical shortage of health service providers [24]. India is one of these countries, and as only around 10% of doctors in South-East Asia are women [24], it is not surprising that respondents from India noted the lack of female doctors as a particular problem.
However, as indicated by the respondents, it is not only a problem of numbers but also a problem of skills and training. Lack of knowledge has also been found to be a problem in the management of type 2 diabetes in LMIC [33,34]. Thus, to ensure that women with GDM receive proper treatment, training of health care providers needs to be initiated or scaled up. Lack of standard protocols for GDM diagnosis and management was also identified as a barrier to early detection and proper management of GDM. The lack of such protocols may reflect the limited attention that GDM has received in many LMIC, the lack of international consensus on the diagnostic criteria for GDM, and the fact that existing protocols in their current form are not feasible to implement in many LMIC [19].
In addition, findings from this study also illustrate that health services and systems are disorganised and inadequately financed and can work as barriers for achieving specific health-related outcomes, in this case GDM detection and treatment. This is far from new, but health system planners and policy-makers should take these structures and aspects into account when initiating GDM services.
A substantial number of the barriers are societal or culture-related e.g. expectations that the woman transfers to her maternal home to deliver. While their relevance may vary, such barriers remain important according to our findings. Yet, they are largely beyond the realm of the health sector and therefore have to be addressed outside it through awareness and policies. Issues related to women's role in society and how much emphasis is given to their health and well-being seems to be of particular concern and the findings from this study indicate that much still remains to be done to ensure women's empowerment including the right to control all aspects of their health as stated in the Beijing Declaration adopted at the UN Fourth World Conference on Women in 1995 [35].
Finally, in this study all participants were WDF project partners and many of them are also practicing health care providers. In order to further illuminate the issue it would be important to undertake studies where women and their families are interviewed about barriers and facilitators for GDM services. Such a study should also focus on barriers within the control of the individual in addition to health system and societal/cultural barriers.
Conclusion
In this paper we examined barriers to GDM detection and treatment in health systems and society. In order to provide effective GDM services to improve maternal health in LMIC, programmes have to consider and address these barriers. | 2017-06-24T06:44:15.596Z | 2012-12-05T00:00:00.000 | {
"year": 2012,
"sha1": "c0ffae8b2b2fbaa8d191af28183415bc0989c9b9",
"oa_license": "CCBY",
"oa_url": "https://bmcinthealthhumrights.biomedcentral.com/track/pdf/10.1186/1472-698X-12-33",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e42988dff45a8e415a3bfe21c8dd966c20be2277",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3649846 | pes2o/s2orc | v3-fos-license | In silico prediction of neuropeptides in Hymenoptera parasitoid wasps
Parasitoid wasps of the order Hymenoptera, one of the most diverse groups of animals, are important natural enemies of arthropod hosts in natural ecosystems and can be used in biological control. To date, only one neuropeptidome of a parasitoid wasp, Nasonia vitripennis, has been identified. This study aimed to identify more neuropeptides of parasitoid wasps, using a well-established workflow previously adopted for predicting insect neuropeptide sequences. Based on publicly accessible databases, a total of 517 neuropeptide precursors from 24 parasitoid wasp species were identified; these included five neuropeptides (CNMamide, FMRFamide-like, ITG-like, ion transport peptide-like and orcokinin B) that were, to our knowledge, identified for the first time in parasitoid wasps. Next, these neuropeptides from parasitoid wasps were compared with those from other insect species. Phylogenetic analysis suggested the divergence of AST-CCC within Hymenoptera. Further, the encoding patterns of CAPA/PK family genes were found to differ between Hymenoptera species and other insect species. Some neuropeptides that were not found in some parasitoid superfamilies (e.g., sulfakinin), or that were considerably divergent between different parasitoid superfamilies (e.g., sNPF), might be related to distinct physiological processes in parasitoid life. Information on neuropeptide sequences in parasitoid wasps can be useful for better understanding the phylogenetic relationships of Hymenoptera and further elucidating the physiological functions of neuropeptide signaling systems in parasitoid wasps.
Introduction
Parasitoid wasps (Order: Hymenoptera) are one of the most species-rich groups of animals, potentially accounting for more than 20% of the insects found globally [1]. Studies on insect parasitoids are important to characterize their biodiversity, understand their evolution and, in some cases, apply their parasitic abilities for practical purposes such as biological control of agricultural pests.
In the present study, publicly accessible sequence data were mined to identify putative precursor sequences of neuropeptides in parasitoid wasps, and mature bioactive peptide sequences were predicted using a well-established in silico workflow (e.g. Huybrechts et al. [10], Veenstra [12], Christie [14], Xu et al. [15], Christie et al. [16], Christie [24]). In total, more than 500 neuropeptide precursors were found from 24 parasitoid wasp species belonging to six superfamilies: Chalcidoidea, Ichneumonoidea, Cynipoidea, Chrysidoidea, Orussoidea and Platygastroidea. All of these superfamilies except Orussoidea belong to the suborder Apocrita. All parasitoid taxa from Orussoidea and most parasitoid taxa from Chrysidoidea are ectoparasitoids or cleptoparasitoids; all parasitoid taxa from Cynipoidea and Platygastroidea are endoparasitoids; and both ectoparasitoids and endoparasitoids occur in the two largest superfamilies (Chalcidoidea and Ichneumonoidea), as reviewed by Whitfield [25]. Mining the neuropeptides of parasitoid wasps might be helpful for further understanding their physiological roles; a sketch of one typical prediction step is given below.
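As a hedged illustration of such an in silico workflow (a simplified sketch, not the exact pipeline of the cited studies), the Python snippet below splits a putative precursor at canonical dibasic cleavage sites (KR/RR/KK) and flags C-terminal glycines, which predict alpha-amidation of the mature peptide. Real pipelines additionally remove the signal peptide (e.g., with SignalP) and consider monobasic sites; the demo precursor here is an invented placeholder.

import re

def predict_mature_peptides(precursor):
    peptides = []
    for frag in re.split(r"KR|RR|KK", precursor):
        frag = frag.strip("KR")          # trim leftover basic residues
        if len(frag) < 3:                # ignore tiny fragments
            continue
        if frag.endswith("G"):           # Gly donates the C-terminal amide
            peptides.append(frag[:-1] + "amide")
        else:
            peptides.append(frag)
    return peptides

# Invented precursor: a mock signal peptide followed by two peptide cassettes
demo = "MKTLLVLALCSFALA" + "SPSLDKRFMRFGKRSDNFMRFGRKQDLEKK"
print(predict_mature_peptides(demo))
# The first fragment still contains the mock signal peptide, which a real
# pipeline would remove before cleavage-site scanning.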
Phylogenetic analysis and sequence alignment
ClustalX software [29] was used to perform multiple sequence alignments in the slow-accurate mode, with a gap-opening penalty of 10 and a gap-extension penalty of 0.1, applying the default Gonnet protein weight matrix. Alignments were visualized using BioEdit v7.0.5.3. Phylogenetic trees of precursor sequences were constructed in MEGA v6.06 [30] using the neighbor-joining method and bootstrap analysis with 1000 replicates. Sequence logos of manually aligned homologous neuropeptide sequences were generated using the online tool WebLogo (http://weblogo.berkeley.edu/logo.cgi) [31].
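For readers who prefer a scripted route, an equivalent neighbor-joining step can be reproduced with Biopython, as in the hedged sketch below; the input file name is a placeholder, the study itself used ClustalX and MEGA, and the bootstrap analysis is omitted here for brevity.

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# "precursors.aln" is a placeholder for a Clustal-format alignment of precursors
alignment = AlignIO.read("precursors.aln", "clustal")
calculator = DistanceCalculator("blosum62")           # protein distances from BLOSUM62
distance_matrix = calculator.get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)  # neighbor-joining tree
Phylo.draw_ascii(tree)                                # quick text rendering of the tree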
Results and discussion
The in silico mining of neuropeptides of parasitoid wasp species
Mining the publicly accessible databases led to the in silico prediction of a total of 517 precursors from 24 parasitoid wasp species belonging to six superfamilies: Chalcidoidea, Ichneumonoidea, Cynipoidea, Chrysidoidea, Orussoidea and Platygastroidea (S1 Table). All the neuropeptide precursors, with the predicted putative mature peptide structures, are shown in S1 Table. Hauser et al. [23] found 30 precursor genes encoding neuropeptides of N. vitripennis. Allatotropin of N. vitripennis was previously identified by Veenstra et al. [32]. Additional neuropeptide genes of N. vitripennis were confirmed in this study, such as CNMamide (CNMa), FMRFamide-like (FMRFa), ITG-like, orcokinin B and an orthologous gene of ion transport peptide-like (ITPL) (S1 Table). For comparison of the neuropeptides of parasitoid wasps and other insect species (e.g., A. mellifera, L. migratoria, R. prolixus, T. castaneum, B. mori and D. melanogaster; Fig 1), phylogenetic analysis of the identified precursor genes and sequence alignment of the predicted mature peptides were conducted (S2 Fig; Figs 2-5). Although constructing phylogenetic trees from insect neuropeptide precursors should in most cases be avoided, owing to the high diversity of precursor sequences (outside the mature peptides) among different insects (e.g., [33][34]), the phylogenetic trees built with neuropeptide precursors in this study showed that most precursors from different parasitoid wasps grouped together, probably because conservation is higher among precursors from very closely related taxa. The numbers of predicted neuropeptide genes in the parasitoid wasp species were lower than those in other insect species (Fig 1). Some neuropeptides that are conserved in other insects were also highly conserved among parasitoid wasps, such as adipokinetic hormone (AKH), arginine-vasopressin-like peptide (AVLP), crustacean cardioactive peptide (CCAP), CCHamide, myosuppressin, and SIFamide (Panels ii, vi, x, xi, xxii and xxxiii in S2 Fig). All peptide sequences of AVLP and CCAP from the different insect species were 100% identical (Panels vi and x in S2 Fig). Moreover, some interesting patterns from evolutionary or physiological perspectives were found in this study.
Fig 2. Phylogenetic tree of AST-CC precursors and alignment analysis of AST-CC sequences in parasitoid wasps and other insect species.
Chrysidoidea sequences in the phylogenetic trees are indicated with blue circles; Ichneumonoidea sequences with light green squares; Chalcidoidea sequences with red triangles; Cynipoidea sequences with light blue rhombuses; and Orussoidea sequences with empty squares. Numbers above branches indicate bootstrap support values from the amino acid sequences; only values above 50% are shown. Identities in alignments are highlighted in dark (100%) and in grey (80-100%).
The CAPA and pyrokinin (PK) genes are known to encode three kinds of insect PRXamide peptides: periviscerokinins (PVKs), pyrokinins (PKs) and trypto-PKs [12,[36][37][38]. The encoding patterns of CAPA/PK genes were found to differ between Hymenoptera species and other insect species (Fig 4). CAPA precursors from Ichneumonoidea, Chalcidoidea and A. mellifera encode only one or two PVKs, whereas those in Argochrysis armilla and ants (e.g. Camponotus floridanus) encode two PVKs and one PK peptide. CAPA precursors in R. prolixus, B. mori and D. melanogaster encode PVKs and trypto-PKs; in contrast, those in L. migratoria and T. castaneum encode PVKs, trypto-PKs, and one PK peptide. PK precursors from Hymenoptera species, B. mori and T. castaneum encode trypto-PK and PK peptides, whereas those in L. migratoria, R. prolixus and D. melanogaster encode only PKs (Fig 4).
Distinct patterns of neuropeptides between different groups of parasitoid wasps
Interestingly, sulfakinin (SK) was only found in cleptoparasitic wasps (Chrysidoidea: Argochrysis armilla and Chrysis viridula; Fig 1; Panel xxxiv in S2 Fig), and was not found in any other wasp group based on BLAST searches in NCBI. When the receptor genes for SK were likewise checked by BLAST analysis in NCBI, no gene encoding an SK receptor was found in any parasitoid wasp species other than the Chrysidoidea wasps. SK was first isolated from Leucophaea maderae and was shown to stimulate hindgut contractions [39,40]. SKs are multifunctional neuropeptides found in many insects (e.g., A. mellifera, Camponotus floridanus, L. migratoria, R. prolixus, T. castaneum, B. mori and D. melanogaster; Fig 1) and are involved in food uptake [41]. The apparent lack of SK in the endoparasitoid taxa could be related to the distinct feeding patterns of the parasitoid life style. However, further studies are warranted to determine whether the absence of SK in endoparasitoid taxa reflects the distinct parasitoid life form or merely limited transcriptome data.
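A presence/absence check of the kind described above can be scripted; the hedged sketch below uses Biopython's remote BLAST interface with a placeholder query file, taxon filter and E-value cutoff. These are assumptions for illustration, not the exact searches performed in the study.

```python
# Hedged sketch of a remote BLAST presence/absence check: search a known
# SK receptor protein against NCBI nr, restricted to one wasp taxon.
# Query file, taxon filter and E-value cutoff are illustrative choices.
from Bio.Blast import NCBIWWW, NCBIXML

query = open("sk_receptor_query.fasta").read()   # hypothetical reference SK receptor
handle = NCBIWWW.qblast("blastp", "nr", query,
                        entrez_query="Chalcidoidea[Organism]")  # taxon restriction
record = NCBIXML.read(handle)

hits = [aln for aln in record.alignments
        if any(hsp.expect < 1e-10 for hsp in aln.hsps)]  # arbitrary cutoff
print("putative SK receptor hits:", len(hits))
```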
In particular, a few neuropeptides (e.g., elevenin, kinin, natalisin, neuropeptide-like precursor 1 (NPLP1), and trissin) were not found in any species of Chalcidoidea, but were present in other parasitoid wasp species and other insect species (Fig 1; Panels xvi, xxi, xxiii and xxvii in S2 Fig). Among them, three neuropeptides, kinin, NPLP1, and trissin, have been reported to be involved in insect feeding processes [3,18,42]. Insect kinins are small neuropeptides that function as myotropic, neuromodulatory, and diuretic hormones acting on the Malpighian tubules of insects [3]. NPLP1 was identified in the salivary glands of R. prolixus [18], suggesting that it plays a role in the hormonal control of salivary secretion. Trissin is dominantly expressed in the frontal ganglion of B. mori [42], indicating its possible role in the regulation of foregut-midgut contractions and food intake. Whether the lack of these neuropeptides in parasitoid wasps is related to their distinct feeding patterns or to limited transcriptome data needs to be investigated in the future.
Elevenin was first identified as a neuropeptide from the abdominal ganglion of the gastropod mollusk Aplysia californica [43]. Similar neuropeptide precursors have been identified from many insect species [12,13]. At present, only one report is available regarding the physiological role of elevenin [44]. In the planthopper Nilaparvata lugens, elevenin regulates body color via the G protein-coupled receptor NlA42, which is expressed in the abdominal integument; this might indicate a direct action of elevenin on the melanization of the cuticle of N. lugens [44]. In the present study, elevenin was not found in any Chalcidoidea species, nor in D. melanogaster and B. mori (Fig 1; Panel xvi in S2 Fig). Phylogenetic analysis of insect elevenin precursor genes showed a significant divergence between Hymenoptera and other insects (Panel xvi in S2 Fig). Sequence alignment of insect elevenin peptides showed high variation among insect species, which share only a C-terminal motif CRGXXX and two cysteine residues (Panel xvi in S2 Fig). However, the elevenin gene sequences were highly conserved within the same subfamily of parasitoid wasps (e.g., Microgastrinae, Opiinae and Chrysidini), so that they could be used as a molecular marker for distinguishing different subfamilies of small parasitoid wasps (e.g., of the family Braconidae; Panel xvi in S2 Fig). As mentioned before, information regarding insect elevenin is limited, and hence it is not yet possible to determine whether elevenin was absent from the Chalcidoidea wasps because of their distinct evolutionary/physiological history, the remarkable sequence diversity in these wasps, or limited transcriptome data. Natalisin was first identified as a functional neuropeptide associated with sexual activity and fecundity in insects [45]. In the present study, natalisin was found in five Ichneumonoidea species and in other Hymenoptera species (e.g., Athalia rosae and Camponotus floridanus), but not in any parasitoid superfamily other than Ichneumonoidea (Panel xxi in S2 Fig). High variation in the copy numbers and peptide sequences of natalisin occurs between Hymenoptera species and other insects, as well as among different Ichneumonoidea species, suggesting that natalisin was not detected in some parasitoid wasps because of this remarkable diversity combined with limited transcriptome data.
Several neuropeptides showed vast sequence differences between Ichneumonoidea and Chalcidoidea, the two major superfamilies of parasitoid wasps. Short neuropeptide F (sNPF) was first identified in Aedes aegypti [46]. The main function of sNPF is likely the regulation of feeding behavior [47]. sNPFs are widespread among parasitoid wasps: sNPF precursors were found in 17 species across the six superfamilies (Figs 1 and 5). This neuropeptide is conserved in parasitoid wasps and possesses a C-terminal motif, -RSPSL/YRLRFamide (Fig 5). Two distinct peptides were predicted from the sNPF precursors in all six Chalcidoidea species, whereas only one sNPF peptide was found in each of the other 11 parasitoid species. All the predicted precursors of the parasitoid wasp species except the six Chalcidoidea species possess the same or similar C-terminal motifs as A. mellifera sNPF (-SQRSPSLRLRFamide; Fig 5). High variation in the N-terminal sequence of the sNPF peptides was found between Chalcidoidea species and other Hymenoptera insects.
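The conserved C-terminal motif makes sNPF candidates easy to screen for; the sketch below encodes the motif quoted above (RSPS(L/Y)RLRFamide) as a regular expression over peptide strings written without the C-terminal amide. The second example peptide is hypothetical.

```python
# Screening predicted mature peptides for the conserved sNPF C-terminal
# motif -RSPSL/YRLRFamide (amide omitted in the plain strings below).
import re

SNPF_MOTIF = re.compile(r"RSPS[LY]RLRF$")

peptides = {
    "A. mellifera sNPF": "SQRSPSLRLRF",     # motif quoted in the text (Fig 5)
    "non-matching example": "AQRSPTYRLRF",  # hypothetical peptide
}
for name, seq in peptides.items():
    print(name, "->", bool(SNPF_MOTIF.search(seq)))
```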
Eclosion hormone (EH) and ecdysis triggering hormone (ETH) are two of the major components of the peptidergic circuit controlling ecdysis in insects [48]. The ETH peptide is highly conserved among Hymenoptera species (Panel ix in S2 Fig), whereas EH peptides differ remarkably among Hymenoptera species, especially between Ichneumonoidea and Chalcidoidea (Panel xv in S2 Fig). EH is a long peptide hormone with six cysteine residues forming three disulphide bridges in most insects. However, only four cysteine residues were found in the EHs of Chalcidoidea species. A low level of identity was found between the putative EH sequences of Nasonia vitripennis and Fopius arisanus (Panel xv in S2 Fig), with an identity score of 45%, calculated using GeneDoc.
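The 45% figure is a pairwise identity over an alignment; for reference, a minimal sketch of such a GeneDoc-style identity score is given below, computed over aligned columns with gaps written as "-". The two fragments in the example are toy sequences, not the actual EH alignments.

```python
# Minimal percent-identity calculation over two pre-aligned sequences,
# in the spirit of the GeneDoc score quoted above. Gap-only columns are
# skipped; a gap aligned to a residue counts as a mismatch.
def percent_identity(a: str, b: str) -> float:
    assert len(a) == len(b), "sequences must come from the same alignment"
    cols = [(x, y) for x, y in zip(a, b) if not (x == "-" and y == "-")]
    matches = sum(x == y and x != "-" for x, y in cols)
    return 100.0 * matches / len(cols)

print(percent_identity("CLICAV-CTH", "CLLCAVDCTH"))  # toy fragments -> 80.0
```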
The phylogenetic and alignment analyses of the above neuropeptides (elevenin, kinin, natalisin, NPLP1, trissin, sNPF, and EH) suggested that the absence of some of these neuropeptides in Chalcidoidea, or their considerable divergence between Chalcidoidea and other Hymenoptera species, might be related to different evolutionary or physiological patterns in the Chalcidoidea. However, further studies are warranted to explore the relationships between the sequence patterns and functional roles of these neuropeptides in parasitoid wasps.
Conclusions
In the present study, publicly accessible databases and a well-established workflow were used for the prediction of neuropeptide sequences. In all, 517 precursors from 24 parasitoid wasp species were identified. Among them, five neuropeptides, i.e., CNMa, FMRFa, ITG-like, ITPL and orcokinin B, were, to our knowledge, identified for the first time in parasitoid wasps.
Comparisons of neuropeptides among parasitoid wasps and other insect species revealed some interesting patterns from evolutionary or physiological perspectives and might be useful for investigating the phylogenetic and divergence relationships among Hymenoptera and other insect groups. Phylogenetic analysis of C-type ASTs suggested the divergence of AST-CCC within Hymenoptera. Further, the encoding patterns of the CAPA/PK family genes differed between Hymenoptera species and other insect species. Some neuropeptides that were not found, or that are considerably divergent, in some superfamilies of parasitoid wasps might be related to distinct feeding habits or other physiological processes in these groups. Sulfakinin was not found in any parasitoid wasp species except the cleptoparasitic wasps. A few neuropeptides (e.g., elevenin, kinin, natalisin, NPLP1, and trissin) were not found in any species of Chalcidoidea but were present in other parasitoid wasp species and other insect species. The sequences of several neuropeptides (e.g., sNPF and EH) show considerable differences between Chalcidoidea and other Hymenoptera insects. However, further studies are warranted to determine whether these patterns are due to the distinct parasitoid life style or to limited transcriptome data.
Analysis of neuropeptidomes in parasitoid wasps can be useful for better understanding the phylogenetic evolution of Hymenoptera and for conducting in-depth analysis of the physiological roles of neuropeptide signaling systems in parasitoid wasps.

Supporting figure legend (S2 Fig): Cynipoidea sequences are indicated with light blue rhombuses; Orussoidea with empty squares; Platygastroidea with empty rhombuses. Numbers above branches are support values from the amino acid sequences and only values above 50% are shown. The numbers of paracopies carrying the motif are shown by the repeat numbers, and numbers in parentheses give the numbers of paracopies predicted from a partial precursor. Identities in alignments are highlighted in dark (100%) and in grey (80%~100%). (PPTX) | 2018-04-03T04:35:46.820Z | 2018-02-28T00:00:00.000 | {
"year": 2018,
"sha1": "f7e45aff21ddf6307e937984fe0aa3afadbccdf8",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0193561&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f7e45aff21ddf6307e937984fe0aa3afadbccdf8",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
101069884 | pes2o/s2orc | v3-fos-license | Soil pollution with heavy metals in industrial and agricultural areas: a case study of Olkusz district
Soil contamination of areas occupied by industrial plants and farms is one of the major environmental problems whose weight is underestimated in Poland and Europe. Such regions are usually not as exposed to direct pollution as highly urbanized industrial areas; on the other hand, they are usually less strictly monitored than protected areas. The District of Olkusz, an example of such a region, is characterized by well-developed agriculture, a regressing local industry and a growing tourism industry. However, it borders Silesia, a heavily industrialized area. This study reports the condition of arable soils in Olkusz District in terms of their contamination with lead, cadmium, zinc and copper. Atomic absorption spectrometry (AAS) was used to determine the concentrations of the metallic elements. Parameters such as pH, clay fraction content and organic matter content were also taken into consideration to assess the bioavailability of the metals. The analytical results showed that, despite the decreasing impact of the local industry, the concentrations of all the studied metals are significantly higher than their average concentrations in Polish soils. Moreover, all the calculated Pearson correlation coefficients between the concentrations of the metals were above 0.9, which means they correlate strongly with each other. The impact of the local pollutants (mainly the Bukowno smelter), in connection with the proximity of Silesia and the high vulnerability of the soils to contamination, precludes agricultural use of the ground in at least half of the cases.
Introduction
One of the main environmental impacts of industry is the progressive change in the chemical composition of ecosystems located around emission sources. Continuous release of heavy metals from anthropogenic sources causes significant changes in the biogeochemical cycle of those elements. Toxic metals, including cadmium or lead, can easily penetrate crops and be incorporated into the food chain. Their presence in living organisms causes the widely described inhibition of important enzymes in metabolic pathways, which leads to many metabolic diseases (Waalkes 2000, Satarug et al. 2003, Choi et al. 2012). In addition, the accumulation of heavy metals in plants causes a stress reaction, which entails noticeable changes in the chemical composition, mainly through the accumulation of amines - betaine, putrescine, etc. (Bergmann et al. 2001, Solanki, Dhankhar 2011). However, microelements such as zinc and copper, appearing in high amounts in a diet and compounded by their ability to accumulate, can also cause diseases such as anaemia or damage to the kidneys and the liver (Haar, Bayard 1971, Das et al. 1997, Bergmann et al. 2001). All the metals mentioned above are emitted to the environment mainly by industrial combustion processes (Cd, Pb) and by the mass use of pesticides and fertilizers - Zn, Cd, Cu (Haar, Bayard 1971, Kim, Fergusson 1994, Das et al. 1997).
The main objective of this study was to determine the content and variability of heavy metals such as cadmium, lead, copper and zinc in soils of Olkusz District. The correlations between the concentrations of the particular metals were also taken into consideration. These problems are crucial not only in the context of the well-developed regional agriculture, but also because of the significant economic transition in the region, which puts emphasis on the development of services, including tourism.
The District of Olkusz is situated in the south of Poland, in Małopolska Province. It is an example of an area occupied by industry and farming, and exposed to heavy metal contamination due to the activity of local industrial plants, of which the largest are the Bolesław Mining and Metallurgy Company and the Emalia Enamelware Factory. Another reason is the influx of toxic substances from industrial plants of the neighbouring Silesian conurbation (Dudka et al. 1995, Verner et al. 1996, Ullrich et al. 1999). In addition, the analyzed area is covered by an admittedly moderately developed road grid, which nonetheless carries heavy traffic. Despite these facts, the share of farmland in the district is approximately 46% of the total area (Bieńkowska et al. 2005, Taradejna et al. 2011). It is also worth mentioning that the soil of the north-western districts of Małopolska (including Olkusz District), despite the region's most severe potential exposure to contamination, is not monitored by the Regional Inspectorate for Environmental Protection in Kraków, and the last screening of the chemistry of Polish arable soils was conducted in 2010-2012 by the Chief Inspectorate for Environmental Protection (Pająk 2008, Siebielec 2012).
Material and methods
In 2010-2013, determinations of zinc, lead, cadmium and copper in arable soils in selected areas of Olkusz District were conducted. The sampling sites were located within a 10-km radius from Olkusz (Figure 1). All the sites lay near national and local roads (5 to 10 m from the roadside).
All the samples were collected from arable land, which is not subject to any form of nature protection. The samples were collected to a depth of 0.3 m below the land surface (topsoil). In every sampling site, up to 15 subsamples were collected from a square of approximately 20 × 20 m in size, and aggregated to obtain a bulk sample weighing up to 1 kg. The bulk soil samples were air-dried, crushed and sieved through a sieve with a mesh size of 2 mm. Afterwards, every primary sample was divided into four equal parts. Three quarters of each sample were discarded and the remaining quarter, called a laboratory sample, was used for analysis. Approximately 3 g of each sample, weighed to the nearest 0.0001 g, was mineralized in aqua regia according to ISO norm no. 11466:1995 (ISO 1995). The resulting solution was assayed for the total content of heavy metals (Zn, Pb, Cd, Cu) with flame atomic absorption spectrometry according to ISO norm no. 11047:1998 (ISO 1998), using a Perkin Elmer AAnalyst 300 apparatus. Each sample was submitted to three determinations, and the average value as well as the relative standard deviation of each sample were calculated. In order to validate the method for accuracy and precision, a certified reference material (CRM044-50G TRACE METALS - SILT LOAM 1) was analysed in an analogous manner for the corresponding elements. The recoveries were as follows: zinc - 84%, copper - 98%, lead - 102%, cadmium - 105%.
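For reference, the per-sample statistics described above (mean and relative standard deviation of the triplicate AAS readings, and recovery against the certified value) amount to the short calculation sketched below; all numbers in it are illustrative, not measured values.

```python
# Mean and relative standard deviation (RSD) of triplicate readings, plus
# CRM recovery. All values are illustrative placeholders.
import statistics

def summarize(triplicate):
    m = statistics.mean(triplicate)
    rsd = 100.0 * statistics.stdev(triplicate) / m
    return m, rsd

readings_zn = [1210.0, 1185.0, 1198.0]  # mg/kg d.m., hypothetical triplicate
mean_zn, rsd_zn = summarize(readings_zn)
print(f"Zn: {mean_zn:.0f} mg/kg, RSD {rsd_zn:.1f}%")

certified_zn = 120.0   # hypothetical certified CRM value, mg/kg
measured_crm = 100.8   # hypothetical measured CRM value, mg/kg
print(f"recovery: {100.0 * measured_crm / certified_zn:.0f}%")  # cf. 84% for Zn
```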
In addition, some soil parameters were determined as well: the pH was measured in 1 mol dm⁻³ potassium chloride solution, according to ISO norm no. 10390:2005 (ISO 2005), using an Elmetron CP-401 pH meter; the clay fraction content was determined by the Bouyoucos areometric method with the Casagrande and Prószyński modification (Ryżak et al. 2009); the organic matter content was estimated using 30% hydrogen peroxide solution according to the EPA protocol (Schumacher 2002).
Results and discussion
The analyzed cultivated soils were classified as Luvisols according to the World Reference Base for Soil Resources classification (Marcinek, Komisarek 2011). The pH of the tested soils ranged from medium acidic to moderately alkaline (Table 1). The measurements of the grain-size composition showed that the content of the clay fraction in most samples did not exceed 10% (there were two exceptions: the sites Bolesław and Braciejówka, where the clay content was 14.2% and 13.6%, respectively). The content of organic matter was lower than 5% in all the samples. The analyses of acidity, clay fraction content and organic matter content classified thirteen of the fourteen analyzed samples into group A of soils (vulnerable to contamination), while one soil (no. 2 - Bolesław) belonged to group B (less vulnerable to contamination), according to the division proposed by the Institute of Soil Science and Plant Cultivation in Puławy - IUNG (Kabata-Pendias et al. 1995). This classification makes it possible to assess the extent of soil contamination with heavy metals taking their bioavailability into account, as well as to calculate the comprehensive indicator (CI) of soil pollution, similarly to the calculations performed in the report on the state of the environment in Małopolska (Pająk 2008). The concentrations obtained for the metals are presented in Table 1.
Different authors present slightly different values for the average content of heavy metals in Polish and European soils. The widely assumed average concentration of zinc in non-polluted soils is approximately 40 mg kg⁻¹; for copper this value is about 6.5 mg kg⁻¹ of dry mass (Wilson, Maliszewska-Kordybach 2000). Other frequently cited authors (Lis, Pasieczna 1995) estimate the content of lead and cadmium at 25 and 0.5 mg kg⁻¹ of dry mass, respectively. In every sample studied, those average values were exceeded by the determined content of zinc and lead. The same was true for cadmium in 13 out of 14 samples and for copper in 9 out of 14 samples. However, there are also two standards in Poland describing the concentrations of heavy metals in soil:
- the Regulation of the Ministry of Environment on standards for soil quality (Żelichowski 2002),
- the classification proposed by the IUNG in Puławy (Kabata-Pendias et al. 1995).
The national limits for soil are 4 mg kg⁻¹ for Cd, 150 mg kg⁻¹ for Cu, 100 mg kg⁻¹ for Pb, and 300 mg kg⁻¹ for Zn (Żelichowski 2002). The highest excess over the ministry's standards for lead and zinc was observed in the north-western part of the region, near the mining and metallurgy company. In the most contaminated sample no. 1, the zinc content was more than forty-fold higher than the standard, while lead and cadmium surpassed the set limits by more than four-fold. The standards for zinc and lead concentrations were also exceeded in the soil samples from Bolesław, Olkusz and Rabsztyn. No excess of the copper concentration was observed, although the content of this metal in sample no. 1 was a few times higher than in the other samples (Table 1).
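The comparison against the national limits is a straightforward ratio test; the sketch below encodes the limits quoted above (Żelichowski 2002) and flags exceedances. The sample concentrations in it are placeholders rather than the values of Table 1.

```python
# Exceedance check against the Polish soil-quality limits quoted above.
LIMITS = {"Cd": 4.0, "Cu": 150.0, "Pb": 100.0, "Zn": 300.0}  # mg/kg d.m.

sample = {"Cd": 17.0, "Cu": 60.0, "Pb": 430.0, "Zn": 12500.0}  # hypothetical site
for metal, conc in sample.items():
    ratio = conc / LIMITS[metal]
    flag = "EXCEEDED" if ratio > 1 else "ok"
    print(f"{metal}: {conc:g} mg/kg = {ratio:.1f} x limit ({flag})")
```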
According to the IUNG classification, the picture of soil contamination with heavy metals is slightly different. This classification categorises soils according to their suitability for farming. Apart from the concentration of a particular metal, it takes into account three more parameters, namely the pH of the soil, the content of the clay fraction and the organic matter content (Kabata-Pendias et al. 1995).
The degrees of contamination (according to the IUNG) in terms of each of the metals studied and the comprehensive indicator (CI) of soil pollution are presented in Table 2.
Table 2. Degrees of contamination (according to the IUNG) with the studied metals and the comprehensive indicator (CI) of soil pollution.

No. | Site        | Group | Zn       | Pb       | Cd  | Cu    | CI
1   | Bukowno     | A     | V        | III      | V   | II    | V
2   | Bolesław    | B     | III      | II       | III | 0     | III
3   | Olkusz      | A     | III      | III      | III | 0     | III
4   | Klucze      | A     | II       | I        | I   | 0     | II
5   | Rabsztyn    | A     | II / III | III      | III | 0 / I | III
6   | Braciejówka | A     | II       | II / III | III | 0     | III
7   | Kosmołów    | A     | II       | II / III | I   | 0     | II / III
8   | Sieniczno   | A     | II       | I        | II  | 0     | II
9   | Kogutek     | A     | I        | I        | I   | 0     | I
10  | Przeginia   | A     | I        | I        | I   | 0     | I
11  | Zederman    | A     | II       | II       | II  | 0     | II
12  | Zimnodół    | A     | I / II   | I        | 0   | 0     | I / II
13  | Osiek       | A     | II       | I        | I   | 0     | II
14  | Witeradów   | A     | II       | I / II   | II  | 0     | II

In the case of zinc and lead, in every sample the content of these metals was beyond the natural degree of contamination (≥ °I) (Kabata-Pendias et al. 1995). Moreover, in more than 85% of the samples (12) for zinc and over 57% (8) for lead, the contamination was so high that crop cultivation should be ruled out for at least some vegetables (degree of contamination ≥ °II). Soil from Bukowno should be completely excluded from agricultural production (°V of contamination). Similar conclusions apply to the cadmium concentration. Only in one sample was the Cd content natural (°0 of contamination). In 8 sampling points, the content of this metal was ranked in the second or higher degree of contamination. Again, sample no. 1 was unsuitable for any agricultural use. In the case of copper, the situation was different. In 12 of the 14 measuring points, the concentration of this element was harmless to agricultural production. In the remaining two cases, the content of copper was only slightly elevated.
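The tabulated CI values are consistent with taking, at each site, the highest single-metal degree of contamination; the sketch below encodes that reading (an inference from Table 2, not a rule stated explicitly in the text), treating split degrees such as "II / III" as half-steps.

```python
# CI as the maximum single-metal degree, with "II / III" read as a
# half-step between II and III. This reproduces the CI column of Table 2.
DEGREES = ["0", "I", "II", "III", "IV", "V"]

def rank(degree: str) -> float:
    vals = [DEGREES.index(part.strip()) for part in degree.split("/")]
    return sum(vals) / len(vals)  # "II / III" -> 2.5

def label(r: float) -> str:
    i = int(r)
    return DEGREES[i] if r == i else f"{DEGREES[i]} / {DEGREES[i + 1]}"

def comprehensive_indicator(zn: str, pb: str, cd: str, cu: str) -> str:
    return label(max(rank(d) for d in (zn, pb, cd, cu)))

print(comprehensive_indicator("V", "III", "V", "II"))       # Bukowno  -> V
print(comprehensive_indicator("II", "II / III", "I", "0"))  # Kosmolow -> II / III
```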
The calculated Pearson correlation coefficients between the concentrations of particular metals ranged from 0.93 for Zn-Pb and Zn-Cu to 0.97 for Pb-Cd and 0.98 for Zn-Cd, depicting strong correlations between the concentrations of the four metals, regardless of the concentration level. The correlations between the concentrations of the heavy metals and the clay fraction content and the pH of the soils proved insignificant (r ≈ 0.3 and r ≈ -0.25, respectively). Because all the samples turned out to be very poor in organic matter, the correlation between the organic matter and heavy metal content may have been affected by a large random error and cannot be considered reliable.
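For illustration, such Pearson coefficients amount to the calculation sketched below, using numpy.corrcoef on per-site concentration vectors; the numbers are placeholders, not the measured data of Table 1.

```python
# Pearson's r between per-site concentration vectors via numpy.corrcoef.
import numpy as np

zn = np.array([12500.0, 980.0, 760.0, 310.0, 540.0, 120.0])  # mg/kg, hypothetical
cd = np.array([17.0, 4.1, 3.2, 1.1, 2.4, 0.4])               # mg/kg, hypothetical

r = np.corrcoef(zn, cd)[0, 1]
print(f"Pearson r(Zn, Cd) = {r:.2f}")  # cf. the 0.98 reported for Zn-Cd
```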
One should stress the difficulties in estimating the impact of the parent rock on the final concentration of the metals in the topsoil (Stuczynski et al. 2003). However, there is some evidence of the dominant impact of pollution resulting from prolonged exposure of these soils to industrial emissions. First of all, the emitters of industrial and municipal combustion processes, which prevail in the emission of heavy metals in Poland, are situated mainly in the more populated southern part of the country (Dudka et al. 1995, Staszewski et al. 2012, Dębski 2013). The spatial distribution of the concentrations of the measured pollutants (higher in the west, lower in the southeast of the studied area - Table 1) suggests that the quantities of heavy metals deposited in the soil by local emitters (mainly the Bukowno smelter of the Bolesław Mining and Metallurgy Company) are, in this case, supplemented by a transboundary influx of contaminants with the prevailing S-W and W winds (Woś 2010). This observation is coherent with the studies of Staszewski et al.
(2012) concerning the condition of the soils of Ojców National Park, which adjoins the studied area to the east, and with the conclusions of Verner and co-authors (Verner et al. 1996), who claim that local stack emission plays the major role in the soil pollution of an area adjacent to the west of Olkusz District. Moreover, studies of the soils in Silesia conducted in the 1990s revealed, on average, two- to three-fold higher metal concentrations in the topsoil than in the subsoil, supporting the hypothesis of a non-lithogenic source of the contamination (Verner et al. 1996, Ullrich et al. 1999).
Conclusions
The soils studied in this work can be classified as moderately and highly polluted with heavy metals, especially with zinc, lead and cadmium. A slight decrease in the concentrations of Zn, Pb and Cd may be observed when comparing the current data with the historical results from the early 1990s for the region around the village of Bukowno (Verner et al. 1996). However, they still exceed most of the Polish standards. The heavy metal contamination is much more severe than in similar urban and rural regions in Europe, e.g., in Belgium (De Temmerman et al. 2003), northern Serbia (Škrbić, Đurišić-Mladenović 2013), northwest Croatia (Sollitto et al. 2010) or even in central and western Poland (Grzebisz et al. 2002, Waroszewski et al. 2009, Jaworska, Dąbkowska-Naskręt 2012). The reported values of heavy metal concentrations are even higher than those recorded in such urban areas as the Kraków agglomeration (Pająk 2008). As mentioned above, the worst pollution was observed in the western, more densely populated part of the region and in the north of Olkusz. The high concentration of heavy metals in acidic soils (which make up more than half of the tested samples) is particularly dangerous because of the possible transformation of the metals into ionic forms, which are more easily assimilated by crops. This concerns mainly such mobile ions as Cd²⁺ and Zn²⁺. Our studies have shown that the strongest impact on soil pollution is still exerted by the local industries, although the impact of Silesia, a region lying on average about 30 kilometres west of the sampling points, is noticeable due to the prevailing western and southwestern winds in southern Poland. Statistically significant correlations between the concentrations of all the analyzed metals indicate the same or similarly located sources of pollution. In conclusion, each of these pollutants can be used as an indicator reflecting the present effect of industrial and non-industrial emission on the arable soil of the region.
The case study of the soils of Olkusz District, an example of a combined industrial and agricultural area, shows that such regions are as vulnerable to contamination as industrial and highly urbanized areas. The results also raise concerns because, despite the unquestionable persistent pollution, the analyzed region maintains well-developed agriculture, arboriculture and beekeeping. Therefore, continuous monitoring of this area, including the dissemination of information about the risk, is required, especially as the region is gradually transforming from an industrial economy to services, including tourism. | 2019-04-07T13:04:28.806Z | 2015-04-01T00:00:00.000 | {
"year": 2015,
"sha1": "c48d68d48094f7c1b9401ad790b197e54f848ef8",
"oa_license": "CCBYNCSA",
"oa_url": "https://ruj.uj.edu.pl/xmlui/bitstream/handle/item/20775/miskowiec_laptas_zieba_soil_pollution_with_heavy_metals_2015.pdf?isAllowed=y&sequence=1",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "0dfa25f347631b9cc399a243e3f471cde1053e08",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
26437779 | pes2o/s2orc | v3-fos-license | The Effect of Membrane Material and Surface Pore Size on the Fouling Properties of Submerged Membranes
We aimed to investigate the relationship between membrane material and the development of membrane fouling in a membrane bioreactor (MBR) using membranes with different pore sizes and hydrophilicities. Batch filtration tests were performed using submerged single hollow fiber membrane ultrafiltration (UF) modules with different polymeric membrane materials including cellulose acetate (CA), polyethersulfone (PES), and polyvinylidene fluoride (PVDF) with activated sludge taken from a municipal wastewater treatment plant. The three UF hollow fiber membranes were prepared by a non-solvent-induced phase separation method and had similar water permeabilities and pore sizes. The results revealed that transmembrane pressure (TMP) increased more sharply for the hydrophobic PVDF membrane than for the hydrophilic CA membrane in batch filtration tests, even when membranes with similar permeabilities and pore sizes were used. PVDF hollow fiber membranes with smaller pores had greater fouling propensity than those with larger pores. In contrast, CA hollow fiber membranes showed good mitigation of membrane fouling regardless of pore size. The results obtained in this study suggest that the surface hydrophilicity and pore size of UF membranes clearly affect the fouling properties in MBR operation when using activated sludge.
Introduction
Membrane bioreactors (MBRs) are a new technology that combines several typical operations (primary sedimentation, activated sludge aeration and sedimentation, and tertiary media filtration) into a single treatment step [1]. MBR has excellent potential for use in a wide range of applications including municipal and industrial wastewater treatment, solid waste digestion, and odor control. Moreover, this technology has many attractive advantages over conventional wastewater treatment processes due to its small footprint, high effluent quality, high volumetric loading, and reduced sludge production [2].
Membranes (flat sheets and hollow fibers) and membrane systems for MBRs are mainly produced by Japanese manufacturers such as Kubota, Toray, Asahi Kasei, and Mitsubishi-Rayon and North American companies such as GE and Koch Membrane Systems. Hollow fibers are mostly used in large plants (>10,000 m³/day) because of their high surface areas and excellent mass transfer properties, while flat sheet membranes are employed for small plants (<5000 m³/day). Thus, the use of hollow fiber membranes in MBR processes has grown recently [3,4].
For these reasons, the use of MBRs for research, technology, and commercial applications is rapidly advancing across the world. The global MBR market had an estimated value of around US $425.7 million in 2014, and the market is rising at a compound annual growth rate (CAGR) of 13.2% and is expected to reach US $777.7 million by 2019, while the Asia-Pacific market segment reached US $121.5 million in 2014 and should reach US $270.6 million by 2019, with a CAGR of 17.4% between 2014 and 2019 [5]. MBR technology has thus shown remarkable growth in the Asia-Pacific region, including Japan. However, membrane fouling is a major hindrance to the wider application of MBRs, as it results in a sharp decrease in the quantity of produced water, as well as an increase in the energy demand and transmembrane pressure (TMP) [6].
Up to now, many studies on membrane fouling in MBR processes have focused on sludge characteristics, such as mixed liquor suspended solids (MLSS) and extracellular polymeric substances (EPS) [6-9]. Additionally, MBR fouling is affected by operating parameters such as solids retention time (SRT), hydraulic retention time (HRT), dissolved oxygen (DO), food-to-microorganism ratio (F/M), and permeate flux [9-11]. These parameters can indirectly affect membrane fouling by altering the sludge properties [6]. Various methods have been proposed to reduce membrane fouling in MBRs, including chemical and physical cleaning, the use of different membrane geometries (e.g., flat sheet, hollow fiber, and tubular membranes), effective membrane module and reactor design (e.g., membrane packing density and aerator position), membrane surface modification (e.g., hydrophilicity and roughness), and biological antifouling strategies such as quorum quenching and enzymatic disruption [12]. Among these strategies, membrane geometry and surface modification to reduce biofouling have attracted much attention [13,14]. In particular, membrane characteristics such as membrane material, pore size, and hydrophilicity are important factors for membrane fouling because they affect interactions between the membrane surface and the mixed liquor in the bioreactor. It is believed that hydrophilic membranes allow greater membrane fouling mitigation than do hydrophobic membranes [15,16]. However, contradictory results have been reported. For example, Matar et al. [17] studied the effect of hydrophobic and hydrophilic membrane surfaces in MBRs, and their results showed that fouling behavior in MBR processes is much less dependent on hydrophilicity than previously thought. Choi et al. [18] reported that pore structure had a greater influence on the development of membrane fouling than the hydrophilicity of the membrane surface. As such, it is still debatable whether or not the hydrophilicity of the membrane affects membrane fouling mitigation. In our previous work, we reported the effect of different polymeric membrane materials (polyvinylidene fluoride (PVDF), cellulose acetate butyrate (CAB), and polyvinyl butyral (PVB)) on the relationship between membrane pore size and the development of membrane fouling in an MBR. The membrane fouling of a PVDF hollow fiber membrane decreased as the membrane pore size increased, whereas CAB membranes with smaller pores showed less fouling than those with larger ones [19].
The effects of membrane hydrophilicity on fouling claimed in previously reported papers are summarized in Table 1. The objective of this study is to assess the role of membrane surface hydrophilicity and surface pore size in MBR systems. Many research groups have investigated the development of fouling on the surface of microfiltration (MF) membranes, as shown in Table 1. In our study, we fabricated ultrafiltration (UF) hollow fiber membranes with various pore sizes using three different polymeric membrane materials: PVDF, polyethersulfone (PES), and cellulose acetate (CA). The development of membrane fouling on each membrane was investigated by batch filtration testing with activated sludge in a laboratory-scale MBR.
Materials
CA (Mw = 30,000, Sigma-Aldrich, St. Louis, MO, USA), PES (Mw = 65,000, Ultrason® E6020P, BASF, Tokyo, Japan), and PVDF (Mw = 322,000, SOLEF6020, Solvay, Tokyo, Japan) were used as polymeric membrane materials. Dimethylacetamide (DMAc) and lithium chloride (LiCl) were purchased from Wako Pure Chemical Industries. Polystyrene latex particles with diameters of 20, 50, 100, and 200 nm were purchased from Duke Scientific Corporation (Palo Alto, CA, USA). A LIVE/DEAD BacLight Bacterial Viability kit was purchased from Thermo Fisher Scientific (Waltham, MA, USA) and used to stain bacteria. A Pierce BCA Protein Assay kit was purchased from Thermo Scientific. Deionized water (18.2 MΩ·cm, Merck Millipore, Billerica, MA, USA) was used to prepare all aqueous solutions and for rinsing. All chemicals were used without further purification.
Hollow Fiber Membrane Preparation
Hollow fiber membranes were prepared using a batch-type wet spinning machine via non-solvent induced phase separation (NIPS) [23]. The preparation conditions are summarized in Table 2. CA and DMAc were mixed for 12 h at 25 °C, and the solution was then left in the solution tank for 2 h to remove air bubbles. Hollow fibers were extruded from a spinneret with inner and outer tube diameters of 0.8 and 2 mm, respectively. They were then transferred into a coagulation bath, which induced phase separation and membrane formation, and the membrane was wound on a take-up winder. The prepared hollow fiber membranes were kept in pure water until characterization. PES and PVDF membranes were prepared in the same way.
Air Bubble Contact Angle Measurement
The air bubble contact angle of each membrane was measured using a contact angle goniometer (Drop Master 300, Kyowa Interface Science Co., Saitama, Japan). A sample was placed in a glass cell filled with deionized water, an air bubble (1 µL) was then released below the sample, and the contact angle between the air bubble and the surface was measured automatically upon contact [24].
Polystyrene Particle Rejection Test
To estimate the pore sizes of the membranes, a polystyrene particle rejection experiment was performed using the cross-flow method with a single hollow fiber membrane. The polystyrene particles were monodisperse polystyrene latex particles with diameters of 20, 50, 100, and 200 nm. The feed solution was prepared by adding the latex particles to an aqueous nonionic surfactant solution (0.1 wt %, Triton X-100), which was forced to permeate through the membrane at a pressure of 0.05 MPa. The polystyrene particle concentrations of the feed and the filtrate were measured using a UV-Vis spectrophotometer (U-2000, Hitachi Co., Tokyo, Japan) at a wavelength of 385 nm. Particle rejection was calculated using the following equation: R (%) = (1 - Cp/Cf) × 100, where R is the rejection, and Cp and Cf are the polystyrene particle concentrations of the permeate and the feed, respectively. We used the particle diameter at which the rejection was 90% as the pore size of each membrane.
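A minimal sketch of the rejection arithmetic and the pore-size criterion is given below: R(%) = (1 - Cp/Cf) × 100, with the pore size read off as the particle diameter at 90% rejection by linear interpolation along the measured curve. The rejection values in the example are placeholders, not the data of Figure 3.

```python
# Rejection R(%) = (1 - Cp/Cf) * 100 and the diameter-at-90%-rejection
# pore-size criterion, interpolated along a (placeholder) rejection curve.
import numpy as np

def rejection(c_permeate: float, c_feed: float) -> float:
    return (1.0 - c_permeate / c_feed) * 100.0

print(f"R = {rejection(0.02, 0.50):.0f}%")  # example concentrations (a.u.)

diameters = np.array([20.0, 50.0, 100.0, 200.0])  # nm, latex particles used
rejections = np.array([80.0, 97.0, 99.5, 100.0])  # %, hypothetical curve

pore_size = np.interp(90.0, rejections, diameters)  # diameter at 90% rejection
print(f"estimated pore size: {pore_size:.0f} nm")
```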
Batch Filtration Test Using Single Membrane Module
Batch filtration tests were conducted to understand the fouling propensity and flux behavior of the membranes in a laboratory-scale MBR. A schematic diagram of the experimental apparatus is shown in Figure 1. Single membrane modules with a membrane fiber length of 7 cm were prepared using the CA, PES, and PVDF hollow fiber membranes. The single membrane modules were installed in a small filtration apparatus with an effective volume of approximately 120 mL. The inside of the filtration apparatus was filled with wastewater supplied by the Shizuoka Prefecture industrial wastewater treatment plant in Japan. The membrane modules were operated at a constant flux of 0.23 m³/m²/day. The MLSS, polysaccharide, and protein concentrations were approximately 4000 mg/L, 14 mg/L, and 150 mg/L, respectively, while the SRT was set and maintained at 15 days. An aqueous solution of 0.85 wt % NaCl was fed into the reactor to prevent concentration of the MLSS. The batch filtration tests were carried out using raw activated sludge without treatment. A detailed procedure has been reported in our previous research [19].
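As a back-of-the-envelope check on the operating point, the sketch below estimates the outer membrane area of the single-fiber module and the permeate flow implied by the constant flux. The 7 cm length and the 0.23 m³/m²/day flux are given above; the 2 mm outer fiber diameter is an assumption taken from the spinneret's outer dimension.

```python
# Outer membrane area of the single fiber and the permeate flow needed to
# hold the imposed constant flux. The fiber outer diameter is an assumption.
import math

length_m = 0.07   # fiber length (given)
od_m = 2e-3       # outer diameter (assumed from the spinneret dimension)
flux = 0.23       # m3/m2/day (given)

area = math.pi * od_m * length_m          # ~4.4e-4 m2
permeate = flux * area                    # m3/day
print(f"area = {area * 1e4:.2f} cm2, permeate = {permeate * 1e6:.0f} mL/day")
```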
Membrane Morphology Observations
A field emission scanning electron microscope (FE-SEM) (JSF-7500F, JEOL Co. Ltd., Akishima, Japan) was used to observe the surface morphology of the membranes. The membranes were fractured into small pieces in liquid nitrogen and then dried overnight in a freeze dryer (FDU-1200 EYELA, Tokyo Rikakikai Co. Ltd., Tokyo, Japan). The dried samples were coated with osmium tetroxide (OsO4) prior to SEM analysis in order to minimize sample damage due to the electron beam and to obtain clear images.
Microbial Floc Attachment Test
To evaluate the propensity of microbial flocs to attach to the surface of each membrane material, batch attachment tests were carried out. Thin films (thickness: 500 µm) without any pores were prepared by casting films of CA, PES, and PVDF with an applicator and drying them. The films were soaked in 10 mL of a mixed liquor suspension for 1 h, and then rinsed with a pre-sterilized 0.85 wt % aqueous solution of NaCl to remove any suspended solids that had accumulated on the film surface. After that, the bacteria attached to the film surface were stained with a LIVE/DEAD BacLight Bacterial Viability Kit (Life Technologies, Carlsbad, CA, USA). The stained films were then observed using a confocal laser scanning microscope (CLSM, FV1000D, Olympus, Tokyo, Japan) at pH 7.0.
Properties of Hollow Fiber Membranes
Figure 2 shows SEM images of the CA, PES, and PVDF hollow fiber membranes prepared using the NIPS process. As shown in Figure 2b, skin layers formed on the outer surfaces of the CA, PES, and PVDF hollow fiber membranes, while the cross-sections shown in Figure 2a indicate that macrovoid structures formed in all the membranes. These macrovoids are typical structures for membranes prepared via NIPS [25]. The three hollow fiber membranes had pores on their inner surfaces, as shown in Figure 2c. Thus, these hollow fiber membranes had asymmetric structures. The pure water permeabilities of the three membranes are shown in Table 3, with the CA, PES, and PVDF hollow fiber membranes showing water permeabilities at one bar of 215, 331, and 233 LMH, respectively. Thus, the water permeability properties of the three membranes were similar.
The rejection properties of the prepared hollow fiber membranes were evaluated using polystyrene latex particles with diameters of 20, 50, 100, and 200 nm. Figure 3 shows the polystyrene particle rejection curves for the hollow fiber membranes. All three membranes showed similar rejection tendencies, and high rejections were obtained even for 30 nm polystyrene latex particles. Therefore, these membranes were classified as UF membranes. According to the criteria used, the CA, PES, and PVDF hollow fiber membranes had mean pore sizes of approximately 30, 30, and 20 nm, respectively. These three membranes, with similar water permeabilities and pore sizes, were used for the following fouling filtration tests.
Fouling Behavior in Batch Filtration Tests
As mentioned in the introduction, whether or not the membrane hydrophilicity affects the mitigation of membrane fouling is still open for debate. In many cases, membranes with different hydrophilicities, water permeabilities, and pore sizes were used for the comparison of fouling properties. Since the water permeability and pore size can also affect the fouling properties, membranes with similar water permeabilities and pore sizes must be used to examine the effect of membrane hydrophilicity more accurately. In this work, three types of hollow fiber membranes (CA, PES, and PVDF) with similar water permeabilities and pore sizes were used to evaluate the effect of material hydrophilicity. The air bubble contact angles of CA, PES, and PVDF were 121°, 115°, and 94°, respectively (Table 3). Thus, the hydrophilicities of the materials increase in the order CA > PES > PVDF. Figure 4 shows the membrane fouling results obtained for the three membranes used for MBR treatment of raw activated sludge. When using the CA membrane, the TMP was almost constant and remained below 5 kPa throughout the test. On the other hand, the development of membrane fouling with the PES and PVDF membranes increased remarkably. After 60 min, the TMPs decreased in the order PVDF > PES > CA, which is in agreement with the hydrophobicities of the membranes. Thus, it is clearly shown that hydrophilic membrane materials are preferable for reducing membrane fouling.
Major membrane foulants are believed to be EPS from bacterial cell lysis, microbial metabolites, and unmetabolized wastewater components [26], including proteins, polysaccharides, nucleic acids, and other polymers [26-28]. Of these major foulants, we believe that hydrophobic fouling materials such as proteins and microorganisms drive the development of fouling on hydrophobic membrane surfaces.
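For orientation, a fouling resistance can be backed out of TMP curves like those in Figure 4 through Darcy's law for constant-flux filtration, TMP = µJ(Rm + Rf). The sketch below does this with an assumed water viscosity and illustrative TMP readings; it is a generic resistance-in-series estimate, not a calculation reported in this study.

```python
# Fouling resistance Rf from Darcy's law at constant flux:
# TMP = mu * J * (Rm + Rf). TMP readings and viscosity are illustrative.
J = 0.23 / 86400.0   # imposed flux, m3/m2/s (from 0.23 m3/m2/day)
mu = 1.0e-3          # Pa*s, water near 20 C (assumption)

def total_resistance(tmp_pa: float) -> float:
    return tmp_pa / (mu * J)  # 1/m

R_clean = total_resistance(2.0e3)    # clean-membrane TMP ~2 kPa (hypothetical)
R_fouled = total_resistance(30.0e3)  # TMP after 60 min ~30 kPa (hypothetical)
print(f"Rf = {R_fouled - R_clean:.2e} 1/m")
```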
Visualization of the Biofouling on the Film Surfaces
To understand the attachment properties of the microbial flocs on the prepared films, batch attachment tests with activated sludge were performed. Figure 5 shows the CLSM images of the surface of each polymer film after the attachment test. The amount of fluorescent microbial flocs on the surface of the CA film was significantly smaller than seen for the PES and PVDF films. It is hypothesized that the hydrophilicity of the CA membrane plays an active role in reducing microbial floc adsorption, which is expected to be beneficial for the antifouling properties of the membrane, as shown in Figure 4.
Effects of Membrane Pore Size on Membrane Fouling
To investigate the effects of membrane pore size on fouling behavior, MBR experiments were conducted using membranes with different pore sizes. Figure 6 shows the relationship between membrane pore size and fouling rate as determined in the batch filtration tests. In this figure, the fouling rates are denoted by the TMP after 60 min of filtration. For the CA hollow fiber membranes, the effect of pore size was unclear. This is because the fouling of the hydrophilic membrane was reduced, so the effect of the pore size was not significant. However, for the PVDF and PES hollow fiber membranes, the fouling propensity decreased sharply with increased pore size. This is in agreement with other research results [18,21,29,30]. Marel et al. [31] reported that PVDF with large pores showed greatly reduced membrane fouling in submerged membrane bioreactors. They carried out fouling filtration tests with flat sheet PVDF membranes with different pore sizes (0.03, 0.1, and 0.3 µm) to investigate the influence of pore size on fouling behavior. The membrane resistance sharply decreased with increasing pore size. Our previous research on PVDF MF membranes showed a similar trend [19]. The development of membrane fouling on the PVDF membranes decreased in the order 0.02 > 0.25 > 0.4 µm. However, the results for the CAB hollow fiber membranes showed the opposite tendency [19]. This indicates that fouling behavior can be affected by complicated factors such as surface chemical properties, surface morphology, pore size, hydrophilic/hydrophobic properties, and solution chemistry. These factors should be considered together for a definite understanding of the fouling behavior in MBR processes.
Conclusions
In this paper, we investigated the effect of membrane hydrophilicity and pore size on the development of UF membrane fouling in an MBR process. Membrane hydrophilicity was found to affect membrane fouling: the hydrophilic CA membrane showed good mitigation of membrane fouling, while dramatic biofouling occurred on the PVDF and PES membrane surfaces due to interactions between hydrophobic foulants, such as proteins and microorganisms, and the hydrophobic membranes. For CA hollow fiber membranes, membrane pore size had no significant effect on fouling mitigation. In contrast, larger pores were found to contribute to the mitigation of membrane fouling for PVDF and PES hollow fiber membranes.
Figure 1. Schematic diagram of the experimental apparatus used in the lab-scale batch filtration test.
Figure 3. Polystyrene particle rejection results for the prepared membranes.
Figure 4. Transmembrane pressure (TMP) changes during batch filtration tests with activated sludge. The pore sizes of the CA, PES and PVDF membranes were 30, 30, and 20 nm, respectively.
Figure 5. Confocal laser scanning microscope (CLSM) images obtained for the surfaces of thin films of (a) CA; (b) PES; and (c) PVDF after use in batch attachment tests with activated sludge. The surfaces of the films were stained with LIVE/DEAD stain.
Figure 6. Relationship between membrane pore size and TMP for membranes made from different polymer materials.
Table 1. Membrane properties used in previous work and claims regarding the effect of membrane surface hydrophilicity on fouling.
Table 2. Preparation conditions for hollow fiber membranes.
Table 3. Properties of CA, PES, and PVDF hollow fiber membranes.
NiCrAl piston-cylinder cell for magnetic susceptibility measurements under high pressures in pulsed high magnetic fields
We developed a metallic pressure cell made of nickel-chromium-aluminum (NiCrAl) for use with a non-destructive pulse magnet and a magnetic susceptibility measurement apparatus with a proximity detector oscillator (PDO) in pulsed magnetic fields of up to 51 T under pressures of up to 2.1 GPa. Both the sample and the sensor coil of the PDO were placed in the cell so that the magnetic signal from NiCrAl would not overlay the intrinsic magnetic susceptibility of the sample. A systematic investigation of the Joule heating originating from metallic parts of the pressure cell revealed that the temperature at the sample position remains at almost 1.4 K until approximately 80% of the maximum applied magnetic field ($H_{\rm max}$) in the field-ascending process (e.g., 40 T for $H_{\rm max}$ of 51 T). The effectiveness of our apparatus was demonstrated by investigating the pressure dependence of the magnetization process of the triangular-lattice antiferromagnet Ba$_3$CoSb$_2$O$_9$.
I. INTRODUCTION
Extreme conditions, such as high pressure, high magnetic field, and low temperature, are occasionally required to search for new properties and phenomena in condensed-matter materials. For instance, the ground states of geometrically frustrated magnets (GFMs) are infinitely degenerate at low temperatures, and exotic physical phenomena such as a quantum spin-liquid state and quantum phase transitions have been reported under extreme conditions [1][2][3]. In GFMs, a high magnetic field lifts the degeneracy and sometimes induces exotic magnetic phases. High pressure alters the magnetic anisotropy and the exchange interactions between magnetic ions in a magnetic material by shrinking its crystal lattice. Recently, the triangular-lattice antiferromagnet Cs$_2$CuCl$_4$, one of the GFMs, was reported to exhibit multiple magnetic-field-induced phase transitions under high pressure at low temperatures [4]. Therefore, experimental techniques that can be used under these extreme conditions are desirable to clarify the physical properties of condensed-matter materials.
The development of measurement techniques under multiple extreme conditions has been undertaken at pulsed high magnetic field facilities. Thus far, the magnetization curves of several magnetic materials, measured by a conventional induction method using pick-up coils, have been reported under pressures of up to 0.95 GPa in pulsed magnetic fields of up to 50 T [5][6][7]. In these studies, a non-destructive pulse magnet and a self-clamped piston-cylinder cell (PCC) made of beryllium-copper (CuBe) or nickel-chromium-aluminum (NiCrAl) were utilized. The magnetization signal was detected by winding pick-up coils with approximately 100 turns around the exterior of the PCC (Fig. 1(c)). Therefore, the measurement signals were degraded by the low sample filling rate in the pick-up coils and by the noise induced by the eddy currents driven in the metallic parts of the PCC by the pulsed magnetic fields. Moreover, the eddy currents cause Joule heating, resulting in a temperature rise of the sample. Hamamoto et al. reported the effect of pressure on the metamagnetic transition in CeRh$_2$Si$_2$ above 6 K in pulsed high magnetic fields using a CuBe PCC [5]. The metamagnetic transition field of CeRh$_2$Si$_2$ was reported to be almost independent of temperature, at least below 15 K, but the temperature change of the sample during the magnetic-field sweep was unknown.
In magnetic materials such as GFMs with a low Néel temperature $T_N$, the magnetic properties are often sensitive to temperature changes at low temperatures, and the measurements to determine these properties need to be taken below the temperature of liquid helium (∼4.2 K). However, it is difficult to use the aforementioned apparatus to study GFMs.
To suppress the Joule heating, the cell body of the PCC was made of NiCrAl alloy, which has a lower electrical conductivity than the CuBe alloy. In addition, the tensile strength of the NiCrAl alloy (∼2.37 GPa at room temperature (RT)) is higher than that of the CuBe alloy (∼1.35 GPa at RT) [8]. However, the magnetic susceptibility of the NiCrAl alloy is approximately ten times larger than that of the CuBe alloy [9]. Therefore, the practical use of a NiCrAl PCC is limited to materials with large magnetization magnitudes. To overcome these problems, we developed a magnetometry technique based on a radio frequency (RF) method using a proximity detector oscillator (PDO) [10,11].
The PDO is an inductance (L)-capacitance (C) self-resonating LC tank circuit based on the widely available proximity detector chip used in modern metal detectors. This device can detect the magnetic susceptibility and/or electrical conductivity of a sample in pulsed high magnetic fields [10,11]. In this technique, the inductance change of a small sensor coil with tens of turns in the LC tank circuit is measured when a magnetic field is applied. The resonance frequency of the LC tank circuit at zero field is $f_0 = 1/(2\pi\sqrt{LC})$. When a sample is placed in the sensor coil, L changes depending on the magnetic susceptibility and/or electrical conductivity of the sample in the magnetic field. Hereafter, we call this technique the LC method. The LC method detects the change in the resonance frequency ($\Delta f$) corresponding to the change in L. When the sample is a magnetic insulator, $\Delta f$ is proportional to the dynamic magnetic susceptibility ($\chi = \Delta M/\Delta H$) scaled by the filling factor,

$\Delta f \propto f_0 \, (V_s/V_c) \, \chi, \qquad (1)$

where $V_s$ is the volume of the sample inside the sensor coil and $V_c$ is the inside volume of the sensor coil. According to Eq. (1), the absolute value of $\Delta f$ increases as the filling rate of the sample against the sensor coil ($V_s/V_c$) increases. The sensor coil typically consists of only 5∼30 turns with a diameter as small as 300 µm. Therefore, an effective approach is to place the small sensor coil, including the sample, inside the small interior space of a high-pressure cell, because the sensor coil then does not detect the magnetization of the pressure cell. Magnetic susceptibility measurements conducted under high pressure by utilizing the LC method in static magnetic fields have been reported [4,12,13]. However, such measurements in pulsed magnetic fields have rarely been reported. Recently, Sun et al. developed a diamond anvil cell (DAC) fabricated mainly of insulating composites that minimize Joule heating in pulsed high magnetic fields. They performed magnetic susceptibility measurements of the quantum antiferromagnet [Ni(HF$_2$)(pyz)$_2$]SbF$_6$ in pulsed magnetic fields of up to 65 T under pressures of up to 5.4 GPa by the LC method [14].
Because of the small sample space in this pressure cell (less than 0.01 mm$^3$), the sensor coil was limited to a diameter of 150 µm and a maximum of four turns, and the sample size was too small, complicating attempts to increase the sensitivity of the measurement by increasing the number of turns.
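As a rough numerical illustration of the LC method, the sketch below evaluates $f_0 = 1/(2\pi\sqrt{LC})$ and a frequency shift of the form of Eq. (1). The tank values L and C and the prefactor k are hypothetical choices made only to land in the reported 35∼42 MHz range; they are not taken from the actual circuit.

```python
import numpy as np

def f_resonance(L, C):
    """Zero-field resonance frequency f0 = 1 / (2*pi*sqrt(L*C)) [Hz]."""
    return 1.0 / (2.0 * np.pi * np.sqrt(L * C))

def delta_f(f0, chi, filling, k=0.5):
    """Frequency shift for a magnetic insulator in the sensor coil.

    Implements the proportionality of Eq. (1): |Delta f| grows with the
    filling factor V_s/V_c and the susceptibility chi.  The prefactor k
    (order unity) is apparatus dependent and purely hypothetical here."""
    return -k * f0 * filling * chi

# Hypothetical tank values chosen to reproduce the reported 35-42 MHz range
f0 = f_resonance(L=1.0e-6, C=18.5e-12)                # ~37 MHz
print(f"f0 = {f0 / 1e6:.1f} MHz")
print(f"df = {delta_f(f0, chi=1e-3, filling=1.0) / 1e3:.1f} kHz")
```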
In this study, we designed a NiCrAl PCC that suppresses the effect of Joule heating on a sample in pulsed high magnetic fields and established a magnetic susceptibility measurement system based on the LC method for use under multiple extreme conditions. Although a PCC generally generates lower pressures than a DAC, the sensitivity of the measurements can be increased by adjusting the number of turns of the coil because of the larger interior space of the PCC. To demonstrate the effectiveness of this apparatus for the study of GFMs, we examined the magnetization processes of the triangular-lattice antiferromagnet Ba$_3$CoSb$_2$O$_9$, a GFM with $T_N$ = 3.8 K, at 1.4 K. The magnetic susceptibility was measured under high pressure in pulsed high magnetic fields.

II. PRESSURE CELL DESIGN AND SETUP

The cylinder of the PCC, the pressure-clamp bolts, the plugs, and the piston backups were made of NiCrAl alloy. The pressure in the sample space was determined from the pressure dependence of the superconducting transition temperature of Sn [15]. The pressure cell was inserted into a SQUID magnetometer (Quantum Design, MPMS-XL 7), and the change in the superconducting transition temperature of the Sn manometer was investigated under high pressure. The outer diameter of the cylinder was 8.6 mm, allowing compatibility with the SQUID magnetometer with an inner bore diameter of 9 mm. Moreover, this size was also suitable for insertion into a $^4$He cryostat with an inner bore diameter of 10 mm in a liquid-helium bath. The length of the cylinder was 65 mm; therefore, the length of the sample space was 10 mm under maximum pressure.

A cross-sectional view of the sample space in the PCC is shown in Fig. 1(b). The pressure medium was Daphne 7373 (Idemitsu Kosan Co., Ltd.). The sample space is filled with Daphne 7373 sealed by NiCrAl plugs with O-rings, Teflon rings, and Cu rings. Cu wires (∼100 µm) pass through the stepped hole of the lower plug, which is filled with STYCAST 2850FT to prevent the pressure medium from leaking. At RT, the pressure medium remained in the liquid state up to a pressure of approximately 2 GPa. For this pressure medium, the pressure difference between 4.2 and 300 K is reported to be approximately 0.15 GPa, irrespective of the initial pressure at 300 K [16]. The sample is usually molded to a height of 5 mm and a diameter of 1.4 mm or less. A Teflon tube with inner and outer diameters of 1.6 and 1.8 mm, respectively, and a length of approximately 10 mm covers the sample and the sensor coil to prevent direct contact between the sample and the inner wall of the PCC. The Sn manometer is inserted in the Teflon tube. High pressure was applied to the pressure cell through the piston, which was clamped using a pressure-clamp bolt at RT. In our preliminary experiments, a NiCrAl PCC with inner and outer diameters of 2.0 and 6.0 mm, respectively, generated a pressure of 0.8 GPa for a maximum applied force of nearly 300 kgf. The advantage of this arrangement is that the applied force can be increased by increasing the thickness of the PCC cylinder. In practice, setting the inner diameter to 2.0 mm and expanding the outer diameter to 8.6 mm enabled a maximum applied force of approximately 1000 kgf. Consequently, the NiCrAl PCC achieved a maximum pressure of P = 2.10 ± 0.02 GPa.

Figure 2 shows a block diagram of the magnetic susceptibility measurement apparatus for pulsed magnetic fields under high pressure using the PDO. Pulsed magnetic fields were generated using a non-destructive pulse magnet and a capacitor bank installed at the AHMF at Osaka University. The pulse magnet, with a bore diameter of 17∼18 mm, is immersed in liquid nitrogen to lower its electrical resistance and to cool down the magnet after the high-field generation. The pulse magnet was capable of generating pulsed magnetic fields of up to 51 T with a pulse duration of 35 milliseconds (ms). The glass Dewar container consisted of a liquid-helium bath containing the PCC with the sample, a vacuum insulation space, and a liquid-nitrogen bath. The sample space can be cooled to 1.4 K at the lowest by evacuating the liquid $^4$He bath.
The design of the PDO circuit surrounding the metal shield box, shown in Fig. 2, was based on the designs in previous reports (Refs. [10,11,17]). To obtain an intense PDO signal, the sensor coil ($L_s$) of 40 µm diameter Cu wire was wound directly around the sample to obtain $V_s/V_c \approx 1$ in Eq. (1), and the number of turns was adjusted accordingly. In this study, the sensor coil was wound to ∼25 turns for the small sample (typical size ∼1 × 1 × 5 mm$^3$) that can be inserted into the PCC. The sensor coil placed in the helium bath was connected to the PDO circuit in the metal shield box at RT with a coaxial cable (Lake Shore Cryotronics Inc., Ultra-Miniature Coaxial Cable type C) of approximately 1 m. The resonance frequency of the entire PDO circuit, including the sensor coil and coaxial cable, depends on the effective inductance ($L_{\rm eff}$) composed of $L_s$, $L_1$, and $L_2$; the mutual inductance $L_m$ among the coils; and the inductance of the connecting coaxial cable ($L_{\rm coax}$). The total effective inductance $L_{\rm eff}$ is thus a combination of these contributions. In this setup, the resonance frequency in zero field ($f_0$) was 35∼42 MHz. The output signals ($f(\mu_0 H) = f_0 + \Delta f$) measured in pulsed magnetic fields were amplified, sent to a two-stage frequency mixer ($f_1$, $f_2$), and filtered to remove high-frequency components. The frequency of the output signal (∼42 MHz) loaded into the digitizer was thereby down-converted to 1.2 MHz. The signal was stored in the digitizer at a rate of 50 MS/s (MS: mega-samples), with one wave consisting of approximately 300 data points, which was sufficient to reconstruct the correct waveform. The average frequency at each point of the discrete magnetic-field trace was computed from 3∼5 successive waves. Consequently, the actual sampling rate corresponded to approximately 240∼400 kS/s (kS: kilo-samples). To evaluate the amount of heat transferred from the heated pressure cell to a sample in the presence of a high magnetic field, we investigated the temperature change in the sample space in pulsed magnetic fields utilizing a commercially available RuO$_2$-tip resistor (KOA Co. Ltd., typical resistance 560 Ω at RT) as a thermometer. The magnetoresistance of this RuO$_2$-tip resistor was calibrated in pulsed magnetic fields below 10 K, and the tip resistor was placed either in the sample space filled with Daphne 7373 or on the outer wall of the PCC. The PCC was inserted into the glass Dewar container filled with liquid $^4$He (∼1.4 K), as shown in Fig. 2. Figure 3(a) shows the temperature changes from the initial temperature $T_0$ = 1.4 K on the outer wall of the PCC in pulsed magnetic fields as a function of time, together with the profile of these magnetic fields, which reached a maximum field of 51.0 T with a duration of 35 ms. The temperature on the outer wall of the PCC rapidly increased as soon as the pulsed magnetic field was generated and exceeded the maximum calibration temperature of 10 K at approximately 20 ms. The apparent thermal equilibrium between 6 and 15 ms in Fig. 3(a) may be a temporary suppression of the temperature increase owing to the endothermic effect of the evaporation of liquid $^4$He by Joule heating. Figure 3(b) shows the temperature changes from 1.4 K and 4.2 K at the sample position inside the PCC in pulsed magnetic fields as a function of time. At the maximum field of 51.0 T, the temperature at the sample position remained at almost 1.4 K until nearly 6.5 ms (approximately 40 T in the field-ascending process). After approximately 6.5 ms, the temperature increased slowly to reach approximately 8 K at 40 ms (at approximately zero field).
Since the sample is covered with a Teflon tube (the thermal conductivity of Teflon at 2 K is of the order of $10^{-4}$ J/(cm·s·K) [19]), and the remaining space is filled with Daphne 7373, the Joule heating from the metal parts of the PCC (the thermal conductivity of NiCrAl at 2 K is of the order of $10^{-3}$ J/(cm·s·K)) is transmitted to the sample position with some delay. Therefore, regardless of the maximum magnetic field, the temperature hardly increased until approximately 6.5 ms, after which it increased slowly. At 40 ms, the temperatures at the sample position were 8, 7, and 6 K for $H_{\rm max}$ = 51.0, 41.6, and 27.1 T, respectively. This is because the sweep rate of the pulsed magnetic field (dH/dt) increases with increasing maximum field, and the Joule heating becomes correspondingly larger. At the initial temperature $T_0$ = 4.2 K, the temperature at the sample position gradually increased until about 2.5 ms (approximately 20 T in the field-ascending process), whereupon it increased rapidly. In pulsed magnetic fields of up to 51.0 T, the period of time after which the temperature at the sample position started to increase was longer at 1.4 K than at 4.2 K. This may be owing to the high thermal conductivity of the superfluid helium below 2.17 K that surrounds the PCC immersed in liquid $^4$He.
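A minimal sketch of the frequency-extraction step described above (one frequency value per group of waves of the down-converted ∼1.2 MHz signal), using zero-crossing timing. The block length, the synthetic test tone and the function name are illustrative assumptions, not the actual analysis code used in the experiment.

```python
import numpy as np

def block_frequencies(sig, fs, block_len):
    """One frequency estimate per block of samples, from the timing of
    rising zero crossings (linearly interpolated between samples)."""
    freqs = []
    for start in range(0, len(sig) - block_len + 1, block_len):
        seg = np.asarray(sig[start:start + block_len], dtype=float)
        s = seg - seg.mean()                               # remove DC offset
        idx = np.where((s[:-1] < 0) & (s[1:] >= 0))[0]     # rising crossings
        if len(idx) < 2:
            freqs.append(np.nan)
            continue
        # sub-sample crossing times via linear interpolation
        t_cross = (idx - s[idx] / (s[idx + 1] - s[idx])) / fs
        freqs.append((len(t_cross) - 1) / (t_cross[-1] - t_cross[0]))
    return np.array(freqs)

# Synthetic test: a 1.2 MHz tone sampled at 50 MS/s, ~4 waves per block
fs = 50e6
t = np.arange(20000) / fs
sig = np.sin(2 * np.pi * 1.2e6 * t)
f_est = block_frequencies(sig, fs, block_len=170)
print(f"mean estimate: {np.nanmean(f_est) / 1e6:.4f} MHz")   # ~1.2000
```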
IV. STUDY OF A TRIANGULAR-LATTICE ANTIFERROMAGNET
We investigated the magnetic susceptibility of Ba$_3$CoSb$_2$O$_9$, one of the triangular-lattice antiferromagnets (TLAs), using the apparatus developed in this study. The Co$^{2+}$ ions with the effective spin S = 1/2 form an equilateral triangular lattice in the ab plane, with both intra- and inter-layer antiferromagnetic exchange interactions [20,21]. Below $T_N$ = 3.8 K, the magnetic structure at zero field is a 120° spin structure in the ab plane. For $H \parallel ab$ plane, as shown in Fig. 4(a), successive quantum phase transitions occur from the Y coplanar state to the up-up-down (uud) state, and from the uud state to the V state, followed by the V$'$ state [18]. In this experiment, a plate-shaped single-crystal sample of Ba$_3$CoSb$_2$O$_9$ was placed inside a sensor coil with 25 turns, which was wound directly in the direction perpendicular to the c axis of Ba$_3$CoSb$_2$O$_9$ (inset of Fig. 4(b)).
The value of $f_0$ of the PDO was approximately 37 MHz at 4.2 K. Figure 4(b) shows the changes in the resonance frequency versus the applied magnetic field ($\Delta f$-$H$) for $H \parallel ab$ plane at 1.4 K and 10 K under ambient pressure without the PCC. The $\Delta f$-$H$ curves comprise both the field-ascending and field-descending processes. The value of $\Delta f$ contains, as a background, both the change in the magnetoresistance of the sensor coil and that of the coaxial cable in the magnetic field [11]. Compared to the $\Delta f$-$H$ curve at 10 K, i.e., above $T_N$, the curve at 1.4 K shows distinct frequency shifts corresponding to changes in the magnetic susceptibility at $H_{c1}$ = 9.4 T, $H_{c2}$ = 15.7 T, $H_{c3}$ = 22.7 T, and $H_{\rm sat}$ = 31.8 T.
To obtain the intrinsic magnetic susceptibility of Ba$_3$CoSb$_2$O$_9$, we subtracted from the $\Delta f$ data at 1.4 K a fitting function determined from $\Delta f$ at 10 K, for which the background contribution is much greater relative to the magnetic signal than below $T_N$, and then adjusted the data such that the subtracted value $\Delta f_{\rm sub}$ is constant at zero above $H_{\rm sat}$. The comparison between the $\Delta f_{\rm sub}$-$H$ curve and the field derivative of the magnetization ($dM/dH$) obtained using the conventional induction method is shown in Fig. 4(c). The $\Delta f_{\rm sub}$-$H$ curve agrees very well with $dM/dH$ obtained by the induction method [18]. The dip between $H_{c1}$ and $H_{c2}$ corresponds to the uud phase, which exhibits a magnetization plateau at one-third of the saturation magnetization in the magnetization curve. The cusps at $H_{c3}$ and $H_{\rm sat}$ are associated with the magnetic transition from the V to the V$'$ phase and with the saturation field, respectively. Figure 5(a) shows the $\Delta f_{\rm sub}$-$H$ curves of Ba$_3$CoSb$_2$O$_9$ for $H \parallel ab$ plane at 1.4 K in pulsed magnetic fields of up to 51 T under pressures of up to 1.97 GPa. The $\Delta f_{\rm sub}$-$H$ curve at ambient pressure in the PCC agrees remarkably well with that without the PCC, as shown in Figs. 5(a) and (b), but the noise in the former case exceeds that in the latter. This was probably caused by a poor connection between the sensor coil and the Cu wires passing through the stepped hole of the lower plug. Since the pulsed magnetic field with the maximum field of 51 T reached approximately 40 T at 6.5 ms from the start of the field generation, $\Delta f_{\rm sub}$ up to $H_{\rm sat}$ is not affected by the increase in the sample temperature caused by Joule heating.
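The background-subtraction procedure described above can be sketched as follows. The polynomial background model, the function name and all synthetic values are assumptions for illustration, since the actual fitting function used for the 10 K data is not specified in the text.

```python
import numpy as np

def subtract_background(H, df_low_T, df_high_T, H_sat, deg=5):
    """Subtract a smooth fit of the high-temperature (background) curve
    from the low-temperature curve and pin the result to zero above
    the saturation field H_sat."""
    H = np.asarray(H, dtype=float)
    coeff = np.polyfit(H, df_high_T, deg)          # fit to the 10 K data
    df_sub = np.asarray(df_low_T) - np.polyval(coeff, H)
    return df_sub - df_sub[H > H_sat].mean()       # enforce zero above H_sat

# Fabricated curves: smooth background plus a magnetic signal below H_sat
H = np.linspace(0.0, 51.0, 500)
bg = 0.02 * H**2 - 0.5 * H
signal = np.where(H < 31.8, -5.0 * np.sin(np.pi * H / 31.8), 0.0)
df_sub = subtract_background(H, signal + bg, bg, H_sat=31.8)
```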
With increasing pressure, the peak at $H_{c2}$ shifted to a higher magnetic field, whereas the peaks at $H_{c1}$ and $H_{\rm sat}$ stayed almost in place up to 1.97 GPa. The peak position at $H_{c3}$ did not change with pressure, but the peak became obscured by the background and was too weak to detect above 1.58 GPa. Based on the pressure dependence of $H_{\rm sat}$, the intra-layer antiferromagnetic exchange interactions did not change significantly. Therefore, the expansion of the uud phase may be accompanied by increasing effects of thermal and/or quantum fluctuations caused by a relative decrease of the inter-layer antiferromagnetic exchange interactions, which enhances the two-dimensionality of Ba$_3$CoSb$_2$O$_9$. Another possibility may be a tilting of the sample direction against the magnetic field from the ab plane toward the c axis caused by the application of pressure [22].
A detailed clarification of the pressure effect on the magnetism of Ba$_3$CoSb$_2$O$_9$ for $H \parallel ab$ plane would require an expansion of the pressure region to beyond 2.1 GPa. The PCC in this study was designed for use in a pulse magnet with a bore diameter of 17∼18 mm. We plan to develop a new PCC with a maximum pressure of 4 GPa by decreasing the inner diameter of the PCC utilized in this study. However, this would shorten the time of heat transfer from the inner wall of the pressure cell to the sample position, thus allowing the temperature in the sample space to increase at lower magnetic fields than in the present study. When we use a pulse magnet with a duration of approximately 200 ms, as planned, the magnetic-field sweep rate in the field-ascending process would be lowered to approximately 1/5 of that of the pulse magnet used in this study. This long duration might suppress the increase of the sample temperature in the PCC, and thus magnetic susceptibility measurements under pressures higher than 2.1 GPa could be conducted in high magnetic fields.
V. SUMMARY
In summary, we developed an apparatus for magnetic susceptibility measurements in pulsed magnetic fields of up to 51 T under pressures of up to 2.1 GPa. The temperature at the sample position in our PCC changed only slightly until approximately 40 T in the field-ascending process in pulsed high magnetic fields up to the maximum of 51 T at 1.4 K. We performed magnetic susceptibility measurements of the triangular-lattice antiferromagnet Ba$_3$CoSb$_2$O$_9$ in pulsed high magnetic fields under high pressures by the LC method using the PDO technique. We succeeded in observing a change in the resonance frequency that corresponded to the field derivative of the magnetization up to above the saturation field.
Imatinib Optimized Therapy Improves Major Molecular Response Rates in Patients with Chronic Myeloid Leukemia
The registered dose for imatinib is 400 mg/d, despite high inter-patient variability in imatinib plasmatic exposure. Therapeutic drug monitoring (TDM) is routinely used to maximize a drug's efficacy or tolerance. We decided to conduct a prospective randomized trial (OPTIM-imatinib trial) to assess the value of TDM in patients with chronic phase chronic myelogenous leukemia treated with imatinib as first-line therapy (NCT02896842). Eligible patients started imatinib at 400 mg daily, followed by imatinib [C]min assessment. Patients considered underdosed ([C]min < 1000 ng/mL) were randomized in a dose-increase strategy aiming to reach the threshold of 1000 ng/mL (TDM arm) versus standard imatinib management (control arm). Patients with [C]min levels ≥ 1000 ng/mL were treated following current European Leukemia Net recommendations (observational arm). The primary endpoint was the rate of major molecular response (MMR, BCR::ABL1 IS ≤ 0.1%) at 12 months. Out of 133 evaluable patients on imatinib 400 mg daily, 86 patients had a [C]min < 1000 ng/mL and were randomized. The TDM strategy resulted in a significant increase in [C]min values, with a mean imatinib daily dose of 603 mg daily. Patients included in the TDM arm had a 12-month MMR rate of 67% (95% CI, 51-81) compared to 39% (95% CI, 24-55) for the control arm (p = 0.017). This early advantage persisted over the 3-year study period, in which we considered imatinib cessation as a censoring event. Imatinib TDM was feasible and significantly improved the 12-month MMR rate. This early advantage may be beneficial for patients without easy access to second-line TKIs.
Imatinib is approved in chronic phase CML (CP-CML) at the dose of 400 mg once daily. Second-generation TKIs have been compared against the standard imatinib dose in first-line chronic phase CML, demonstrating faster kinetics as the molecular response without a survival advantage [10,11]. Thus, international recommendations for first-line CP-CML still include imatinib as a first-line therapeutic option [12][13][14][15]. The safety of long-term imatinib therapy is well established [5] compared to other front-line options, such as dasatinib 100 mg once a day (28% of patients will experience pleural effusions by 5 years [10]), nilotinib 300 mg twice a day (13% of the patients will develop cardiovascular events by 5 years [11]) or bosutinib 400 mg daily (7.8% diarrhea grade 3, 19% increased ALT [16]). The recent release of generic imatinib also raised the question of cost-effective strategies. For example, generic imatinib given as frontline therapy, followed, if necessary, by second-generation TKIs, has been shown to be a cost-effective strategy [17,18].
Imatinib dose optimization has been evaluated in several prospective clinical studies testing the use of high-dose imatinib (600 mg to 800 mg daily). A systematic review and meta-analysis of randomized controlled trials comparing frontline treatment with imatinib 400 mg daily versus high doses concluded that these strategies resulted in an increase in toxic effects with a minimal therapeutic advantage [19,20]. Pharmacokinetic studies pointed out the importance of inter-patient variability in imatinib plasma trough concentrations ([C]min), varying by 55 to 106% among patients under a given dosage [21]. Imatinib [C]min correlates with pharmacodynamic responses, and it has been suggested in a retrospective study that the threshold of 1000 ng/mL was associated with an improved molecular response in patients treated with imatinib 400 mg daily [22][23][24].
Therapeutic drug monitoring (TDM) is routinely considered for the management of medications to avoid or control adverse events and to maximize efficacy. We thus decided to initiate the randomized multicentric "OPtimized Tyrosine kInase Monotherapy for imatinib study (OPTIM-imatinib study)" in order to demonstrate, prospectively, the benefits of TDM based on the [C]min assessment in patients with CP-CML receiving imatinib as front-line treatment.
Patients and Synopsis of the Study Protocol
The OPTIM-imatinib study is a prospective, randomized, phase-2 trial conducted in centers of the French CML group (Fi-LMC). Adult CML patients were eligible if they were (i) newly diagnosed in the chronic phase for less than 13 weeks, not previously treated or treated with IM 400 mg daily for less than 13 weeks, (ii) not previously treated with tyrosine kinase inhibitors other than imatinib and (iii) provided signed, written informed consent. Women of childbearing potential had to use an adequate method of contraception. The study was registered as the OPTIM-imatinib trial, ClinicalTrials.gov NCT02896842. All patients gave their informed consent. Imatinib [C]min was centrally determined by chromatography-tandem mass spectrometry 15 days after enrollment, as previously described [23]. Briefly, after a liquid-liquid extraction, imatinib and its deuterated internal standard were eluted on an XTerra RP18 column with a gradient of acetonitrile-ammonium formate buffer (4 mmol/L, pH 3.2). Imatinib was detected by electrospray ionization mass spectrometry in multiple reaction-monitoring mode. The calibration curves were linear over the range 10-5000 ng/mL. The limit of quantification was set to 10 ng/mL. Patients with a [C]min < 1000 ng/mL were randomized between a dose-increase strategy aiming to reach the threshold of 1000 ng/mL (TDM arm) and standard imatinib management (control arm). Patients with [C]min levels ≥ 1000 ng/mL were observed (observational arm). Imatinib [C]min levels were assessed monthly in the TDM and control arms and every 3 months in the observational arm. All patients started therapy with imatinib 400 mg daily. In the absence of grade ≥ 2 adverse events, patients allocated to the TDM arm were told to increase the imatinib daily dose from 400 mg to 600 mg, and the imatinib plasma level was remeasured. If the threshold of 1000 ng/mL was not achieved, then patients were told to increase the imatinib daily dose again, from 600 to 800 mg. The maximum allowed imatinib dosage was 800 mg daily, and intermediate dosages such as 500 or 700 mg daily were accepted. All patients were managed according to the European Leukemia Net (ELN) 2009 recommendations (amended with the ELN 2013 and ELN 2020 recommendations) for efficacy and toxicity [13][14][15]. The minimum authorized imatinib dosage was 300 mg daily.
Response Definition and Primary Endpoint
The primary endpoint was the percentage of patients achieving a major molecular response (MMR) at 12 months, as defined by a BCR::ABL1/ABL1 ratio on the International Scale (IS) (BCR::ABL1 IS) ≤ 0.1%, according to the European Leukemia Net recommendations for minimal residual disease quantification [25,26]. Molecular assessments were performed in hospital laboratories of the "French quality control network for BCR::ABL1 quantification" (GBMHM, Groupe de Biologie Moléculaire des Hémopathies Malignes) and centrally validated in the reference laboratory for France (Dr JM Cayuela, Hôpital Saint-Louis, Paris, France). BCR::ABL1 IS levels were tested every 3 months over the first 12 months, and every six months until the end of the study [13,14].
Pharmacokinetic Analyses and Secondary Endpoints
Imatinib [C]min was determined by chromatography-tandem mass spectrometry as previously described [23]. Secondary endpoints included (i) safety and efficacy analyses at different time points, (ii) the relationship between plasma exposure and efficacy or tolerance, and (iii) progression-free survival, event-free survival and overall survival. Follow-up data (36 months) were collected.
Statistics
Analysis was performed on an intent-to-treat basis. Baseline characteristics were compared by non-parametric tests: Fisher's exact test for qualitative variables or the Kruskal-Wallis test for quantitative variables. Confidence intervals (CIs) were calculated at the 95% confidence level. Correlations between plasma concentrations of imatinib ([C]min) at steady state and imatinib daily dosage were assessed using linear regression. Censored endpoints (cumulative cytogenetic and molecular response rates and overall survival) and their associated 95% CIs were estimated by the Kaplan-Meier method. The impact of prognostic factors on censored endpoints was assessed using the Cox proportional hazards model. The proportional hazards assumptions were checked using the scaled Schoenfeld residual test. The primary endpoint of this study was the rate of major molecular response at 12 months in the TDM arm (trough plasma level < 1000 ng/mL with adapted therapy). The control arm (trough plasma level < 1000 ng/mL without adapted therapy) provided the estimator of the reference rate. A sample size of 80 randomized subjects was calculated for the TDM and control arms in order to test the null hypothesis H0: p ≤ 0.25 against the alternative H1: p ≥ 0.40 with a one-sided type I error of 5% and 80% power. The observational arm (trough plasma level > 1000 ng/mL) provided the estimator of the best expected response rate. No interim analysis was planned. Toxic effects were assessed continuously.
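For orientation, the sketch below evaluates the standard normal-approximation sample-size formula for a one-sided, one-sample proportion test with these design parameters. It need not reproduce the protocol's figure of 80 randomized subjects, which presumably reflects the exact design actually used, the allocation across two arms, and/or an allowance for attrition; the function name is hypothetical.

```python
import math
from scipy.stats import norm

def n_one_sample_proportion(p0, p1, alpha=0.05, power=0.80):
    """Normal-approximation sample size for H0: p <= p0 vs H1: p >= p1
    with one-sided type I error `alpha` and the given power."""
    z_a, z_b = norm.ppf(1.0 - alpha), norm.ppf(power)
    num = (z_a * math.sqrt(p0 * (1 - p0))
           + z_b * math.sqrt(p1 * (1 - p1))) ** 2
    return math.ceil(num / (p1 - p0) ** 2)

print(n_one_sample_proportion(0.25, 0.40))   # -> 57 under this approximation
```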
Patients' Characteristics
From September 2010 through March 2014, 139 CML patients were recruited and screened. In six patients, the initial [C]min was not assessed (three stopped imatinib before the dosage and three declined the dosage). Thus, 133 patients were studied. In 86 patients (64.6%), initial [C]min value was <1000 ng/mL ( Figure 1, Table 1). These patients were randomized into the TDM arm (43 patients) and the control arm (43 patients).
[C]min was ≥1000 ng/mL in 47 patients, and they were allocated to the observational arm. Median age at diagnosis was 64 years (27 to 87), and sex ratio (M/F) was 2.09. Sokal score was low and intermediate in 82% of patients. No differences in terms of age, sex ratio or Sokal risk score were observed between patients included in the TDM and control arms. However, the median age of the patients with high [C]min was significantly higher than that of patients with low [C]min (67 y versus 61 y, p = 0.007). Sixty-one patients (51%) started imatinib at 400 mg daily before being included in the study, as permitted by the inclusion criteria. Duration of imatinib before inclusion for these patients was 4 weeks. Clinical characteristics of the patients are depicted in Table 1.
Imatinib [C]min Assessment
Imatinib [C]min was lower than the 1000 ng/mL threshold in 64.6% of patients. Table 2 summarizes the [C]min values; a non-significant decrease in [C]min was observed in the control and observational arms. In order to define the optimal threshold for imatinib [C]min, we conducted a ROC analysis on data from the 90 patients included in the control and observational arms, and found that a [C]min of 1031 ng/mL at month 1 was related to the achievement of MMR at month 12.
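To illustrate this kind of threshold search, the hedged sketch below applies a ROC analysis with Youden's index to fabricated [C]min/MMR data; the variable names and all values are invented for illustration and do not reproduce the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Fabricated month-1 [C]min values [ng/mL] and month-12 MMR status
rng = np.random.default_rng(0)
cmin = np.concatenate([rng.normal(850, 250, 45), rng.normal(1250, 300, 45)])
mmr = np.concatenate([np.zeros(45, dtype=int), np.ones(45, dtype=int)])

fpr, tpr, thr = roc_curve(mmr, cmin)
best = int(np.argmax(tpr - fpr))   # Youden's J = sensitivity + specificity - 1
print(f"optimal [C]min cut-off: {thr[best]:.0f} ng/mL")
```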
Safety
Overall, 255 adverse events (AEs) of all grades (including recurrent AEs) were recorded during the 3-year period: 95 in the TDM arm, 69 in the control arm and 91 in the observational arm. Forty-seven (15%) were grade 3-4 AEs (15 in the TDM arm, 13 in the control arm and 19 in the observational arm). Supplemental Table S1 recapitulates the distribution of AEs by category, grade and arm, excluding recurrent AEs. No unexpected AEs were recorded; two patients relapsed from a previously known cancer (one prostate cancer and one adenocarcinoma). A better tolerance profile was observed for patients included in the control arm (considered underdosed by the [C]min criterion) compared to patients allocated to the observational arm or randomized in the TDM arm. Supplemental Figure S1 shows that hematological and skin toxicities were not related to [C]min at inclusion, whereas other symptoms, such as musculoskeletal and gastro-intestinal disorders, were equally present in patients from the TDM and observational arms, suggesting that TDM resulted in a shift from an under-dosed to a well-dosed toxicity profile. Ten patients (7.5%) died during the follow-up period: 7 from cancers, 1 from suicide, 1 from natural death and 1 from CML progression in the observational arm.
Discussion
The OPTIM-imatinib study is the first prospective randomized study testing imatinib TDM in patients diagnosed with CML in first-line treatment. The study met its primary objective: our dose adaptation strategy resulted in a significantly higher rate of MMR at 12 months and higher CI-MMR rates in the TDM arm compared to the control arm. We also confirmed that TDM is feasible. At 12 months, the median [C]min of 971 ng/mL (95% CI; 830-1242) in the TDM arm was not different from that of patients who were not randomized (963 ng/mL (95% CI; 845-1098)) and was significantly higher than that of the control arm (639 ng/mL (95% CI; 494-729)). The resulting mean daily dose of imatinib was, as expected, higher in the TDM arm (603 mg daily at 12 months) than the 391 mg daily in the control arm. We also validated the well-recognized imatinib threshold of 1000 ng/mL for the achievement of MMR at month 12 (1031 ng/mL in our hands).
Despite these encouraging results, the cumulative incidence advantage observed in favor of the TDM arm during the first 12-month period was not significantly conserved at 36 months. This observation is not in line with previous studies comparing imatinib to second-generation TKIs: these studies reported a conserved advantage in terms of CI-MMR after the achievement of a faster response with the use of second-generation TKIs. In these analyses, patients were censored at the time of study treatment cessation [6][7][8]. We therefore analyzed our patients with systematic censoring in case of imatinib cessation and demonstrated a significant benefit in favor of the TDM strategy in the long run, suggesting that imatinib TDM may offer a benefit of a similar magnitude to a switch to second-generation TKIs [10][11][12].
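A minimal sketch of such a censored analysis, assuming the lifelines package and fabricated per-patient records (the study's individual data are not reproduced here):

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical records: time to MMR in months; patients who stopped
# imatinib before reaching MMR are censored at cessation, as described
# in the text.  All values are fabricated for illustration.
df = pd.DataFrame({
    "months": [6, 9, 12, 14, 18, 24, 30, 36, 36, 20],
    "mmr":    [1, 1,  1,  0,  1,  1,  0,  1,  0,  0],  # 1 = MMR, 0 = censored
    "arm":    ["TDM"] * 5 + ["control"] * 5,
})

kmf = KaplanMeierFitter()
for arm, grp in df.groupby("arm"):
    kmf.fit(grp["months"], event_observed=grp["mmr"], label=arm)
    ci_mmr = 1.0 - kmf.survival_function_at_times([12, 36])
    print(arm, ci_mmr.values)   # cumulative incidence of MMR at 12/36 months
```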
Previous comparisons between imatinib 400 mg and imatinib 600-800 mg daily were conducted without the use of TDM. Two single-armed studies of IM800 observed higher MMR rates compared to historical controls [27,28]. Four randomized studies tried to confirm these observations. The TOPS study compared imatinib 800 mg/d to imatinib 400 mg daily in patients newly diagnosed with Sokal high-risk CP-CML. MMR was reached faster at 3 and 6 months with high-dose imatinib than with imatinib at 400 mg daily, but MMR rates were similar between arms at 12 months [29]. A long-term follow-up of the TOPS study showed that MMR rates were identical at 42 months (51.6% vs. 50.2% for 400 and 800 mg/d, respectively), with no survival advantage [30]. Similarly, the German CML IV study also reported higher 12-month MMR rates in patients included in the optimized high-dose imatinib group compared to the standard-dose imatinib group (59% versus 44%, p < 0.001), but without a long-term advantage [31,32]. The French SPIRIT trial showed MMR rates at 12 months that were significantly higher for imatinib 600 mg than for imatinib 400 mg [33]. No other differences were recorded at subsequent time points or in a longer follow-up [34]. The last study was a phase-2 randomized study conducted by the SWOG; its primary endpoint (MR4 at 12 months) was achieved in favor of the imatinib 800 mg daily cohort after a follow-up of 12 months [35].
High-dose imatinib studies reported around 30% more toxicities with imatinib at 600-800 mg daily compared to imatinib at 400 mg daily. We also observed an increase in grade 1-2 toxicities but not in grade 3-4. Moreover, this increase was not observed for hematological or skin toxicities, whereas other symptoms, such as musculoskeletal and gastro-intestinal disorders, were more frequent when [C]min was around the 1000 ng/mL threshold, as was the case in the TDM and observational arms. The DESTINY study reported that a dose reduction of imatinib translated to a better tolerance, which is in line with the toxicity profile of our patients having a [C]min < 1000 ng/mL, irrespective of the daily dose [36]. Permanent discontinuation due to toxicity or refusal was similar for all treatment arms.
In our study, the imatinib dose optimization strategy was dictated by a [C]min measurement performed 7-10 days after inclusion. Our 12-month pharmacological follow-up demonstrated that imatinib [C]min was stable over time in patients treated with imatinib at 400 mg daily. With a single [C]min assessment and based on the 1000 ng/mL threshold, we were able to identify well-dosed patients, who represented only 35.4% of our patient population. The median age of our patients (64 years) may suggest that this observation is applicable to a "real life" CML population. For the remaining patients eligible for a TDM strategy, one assessment allowed us to increase imatinib to 600 mg daily in 72% of them, whereas 22% increased the dosage to 800 mg daily after a second assessment.
In conclusion, only 1/3 of our patients on imatinib 400 mg daily were correctly dosed and may not have required imatinib dose escalation. Two-thirds of the patients were not correctly exposed to imatinib at the standard dose and may have benefited from individualized dose optimization using the TDM strategy. This tailored dose adaption strategy resulted in higher MMR rates at 12 months (67% vs. 39%), a magnitude in line with the results reported with second-generation TKIs. Our results strongly support the use of TDM to optimize and personalize the daily dose of IM front-line therapy.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/pharmaceutics14081676/s1, Figure S1: All adverse event categories representing more than 10% of the patients are represented. In this radar graph, the black line represents the safety profile of the control arm as compared with the TDM arm (dashed line) and the observational arm (grey line); Table S1: Safety profile of the OPTIM-imatinib study.
Two-Phase Fluid Flow Experiments Monitored by NMR
We present a newly developed high-pressure nuclear magnetic resonance (NMR) flow cell, which allows for the simultaneous determination of water saturation, effective gas permeability and NMR relaxation time distribution in two-phase fluid flow experiments. We introduce both the experimental setup and the experimental procedure on a tight Rotliegend sandstone sample. The initially fully water saturated sample is systematically drained by a stepwise increase of gas (Nitrogen) inlet pressure and the drainage process is continuously monitored by low field NMR relaxation measurements. After correction of the data for temperature fluctuations, the monitored changes in water saturation proved very accurate. The experimental procedure provides quantitative information about the total water saturation as well as about its distribution within the pore space at defined differential pressure conditions. Furthermore, the relationship between water saturation and relative (or effective) apparent permeability is directly determined.
Introduction
Petrophysical parameters play an important role in many geological applications and are the subject of various research projects. For reservoir simulations, laboratory fluid flow experiments are crucial in order to determine parameters that can be used to calculate fluid redistribution in the subsurface. Regularly, core analysis is done under ambient conditions on dry plugs or completely water saturated samples, i.e., single-phase fluid flow is measured in order to derive the intrinsic permeability [1,2]. However, when it comes to the characterization of low-permeability material (tight sandstones, shales) below the mD range, the experimental procedures need to be adapted for low flow rates and high fluid pressures. This is especially difficult for the determination of the effective permeability of a fluid in the presence of another fluid, which can be up to three orders of magnitude lower than the intrinsic permeability [3]. In a two-phase fluid flow system, different factors have to be accounted for, which are often strongly coupled and interdependent, such as stress dependence, water saturation and capillary pressure. The latter is highly important, as gas flow through low-permeability (partially) water saturated rocks is usually controlled by capillary pressure, i.e., water is drained from the pores with increasing differential gas pressure [4].
In order to derive correlations between effective permeability, water saturation and capillary pressure, commonly several experiments are carried out in different setups (core-flooding experiments, centrifuge, porous plate, mercury injection) [5,6,7,8,9]. Generally, effective permeability experiments are conducted in different steps, each consisting of pre-saturation and an effective permeability measurement. For each saturation level, the flow setup has to be disassembled and reassembled [10,11]. This procedure is very time consuming and bears the risk of sample damage caused by repetitive loading/unloading cycles. Additionally, changes in water saturation during gas flow experiments cannot be directly measured in such a conventional experimental flow cell. In this work, we present a newly developed NMR flow cell, which can be loaded up to a confining pressure of 30 MPa. The apparatus allows the determination of water saturation during ongoing fluid flow experiments. Here, we introduce the experimental procedure as well as its results on an initially fully water saturated sandstone sample. After sample installation, the drainage process was continuously monitored in terms of changing water saturation and effective gas permeability. For more experimental results obtained with the introduced NMR flow cell, we refer to [12].
Single and Two-Phase Fluid Flow
In a single-phase system, permeability depends only on the rock's intrinsic properties, i.e., pore size distribution and tortuosity [9]. In laboratory experiments, permeability can be determined on cm-sized cylindrical plugs with either water or gas. When using water as the permeating liquid, Darcy's law for incompressible media is used [13]:

$$Q = \frac{k\,A}{\eta}\,\frac{\Delta p}{\Delta L} \qquad (1)$$

The parameters are the volume flow rate $Q$ [m$^3$/s], the differential pressure $\Delta p$ [Pa] across the sample, the fluid viscosity $\eta$ [Pa·s], the cross-sectional area $A$ [m$^2$], the sample length $\Delta L$ [m] and the intrinsic permeability $k$ [m$^2$]. When using gas in the experiments, one has to account for gas compressibility (Eq. 2) and slip flow (Eq. 3).
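A minimal sketch of Eq. (1) applied to a hypothetical tight-sandstone plug; all input values below are invented for illustration.

```python
import math

def darcy_permeability(Q, eta, dL, A, dp):
    """Intrinsic permeability k = Q * eta * dL / (A * dp) [m^2], Eq. (1)."""
    return Q * eta * dL / (A * dp)

# Hypothetical plug: 30 mm diameter, 40 mm long, 1 MPa differential pressure
A = math.pi * (0.030 / 2.0) ** 2                     # cross-section [m^2]
k = darcy_permeability(Q=1.0e-11,                    # water flux [m^3/s]
                       eta=1.0e-3,                   # water viscosity [Pa s]
                       dL=0.040, A=A, dp=1.0e6)
print(f"k = {k:.2e} m^2 = {k / 9.869e-16:.2e} mD")   # ~5.7e-19 m^2
```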
Assuming the validity of the ideal gas law, integration across the sample length and between the up- and downstream pressures, $p_1$ and $p_2$, yields Darcy's law for compressible media [9]:

$$k_{\rm app} = \frac{2\,Q_2\,p_2\,\eta\,\Delta L}{A\,(p_1^2 - p_2^2)} \qquad (2)$$

The coefficient $k_{\rm app}$ [m$^2$] is the apparent gas permeability, where $Q_2$ is the volume flow rate measured at the downstream pressure $p_2$; it is only valid for the given fluid pressure conditions. It is well known that gas permeability increases with decreasing mean fluid pressure ($\bar{p}$), which is due to increasing molecule/pore-wall interactions at low gas densities [14,15,16]. In single-phase flow systems, the so-called "Klinkenberg" or "slip flow" correction is routinely done according to the following linear relationship:

$$k_{\rm app} = k_\infty \left(1 + \frac{b}{\bar{p}}\right) \qquad (3)$$

with $k_\infty$ being the intercept on the y-axis, i.e., the permeability at infinitely high gas pressure. In Eq. 4, the slip factor $b$ [Pa] is directly related to the average or mean pore diameter $d_m$ [m] of the porous medium and to parameters describing the properties of the gas phase ($\lambda_m$ [m], the mean free path length of the gas molecules, $c \approx 1$, the Adzumi constant, and $\bar{p}$ [Pa]). Accordingly, the smaller the pores, the larger the slip flow effect. In two-phase systems, this is extremely difficult to measure and has so far only been studied for pre-defined water saturations [10,11,17,4]. This correction was not done here, as it is out of the scope of the present work. In a two-phase fluid system, one distinguishes between the wetting and the non-wetting fluid. Both phases can be displaced by the other phase, which changes the saturation profile within the porous medium. Hereby, the terms drainage and imbibition are used, which refer to the displacement of the wetting phase by the non-wetting phase and vice versa. For most siliciclastic rocks, water is considered the wetting phase and gas the non-wetting phase. The mobility of both phases (effective permeability) is strongly coupled with fluid saturation and capillary pressure. Increasing capillary pressure results in decreasing water saturation (drainage process), which in turn results in more available fluid pathways for gas flow, and thus an increased effective gas permeability. The smaller the pores, the larger is the capillary pressure that has to be applied to drain water from the pore space. This relationship is described by the Washburn equation for the simplified model of cylindrical pores [18]:

$$p_c = \frac{2\,\sigma\,\cos\theta}{r} \qquad (5)$$

with $p_c$ the capillary pressure [Pa], $\sigma$ [N/m] the interfacial tension between the wetting and non-wetting phase (here water and gas), $\theta$ [°] the surface contact angle of the wetting phase and $r$ [m] the equivalent capillary pore radius.
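The following sketch evaluates Eq. (2) and fits Eq. (3) by linear regression of $k_{\rm app}$ against $1/\bar{p}$; the pressures and permeabilities are fabricated for illustration, and the function names are hypothetical.

```python
import numpy as np

def k_apparent(Q2, p1, p2, eta, dL, A):
    """Apparent gas permeability, Eq. (2); Q2 is the volume flow rate
    measured at the downstream pressure p2 [m^3/s at p2]."""
    return 2.0 * Q2 * p2 * eta * dL / (A * (p1**2 - p2**2))

def klinkenberg_fit(k_app, p_mean):
    """Fit Eq. (3), k_app = k_inf * (1 + b / p_mean), by linear
    regression of k_app against 1/p_mean."""
    slope, k_inf = np.polyfit(1.0 / np.asarray(p_mean),
                              np.asarray(k_app), 1)
    return k_inf, slope / k_inf        # k_inf [m^2], slip factor b [Pa]

# Fabricated example: k_app rises at low mean gas pressure (slip flow)
p_mean = np.array([0.2e6, 0.4e6, 0.8e6, 1.6e6])         # [Pa]
k_app = 5.0e-19 * (1.0 + 3.0e5 / p_mean)                # [m^2]
k_inf, b = klinkenberg_fit(k_app, p_mean)
print(f"k_inf = {k_inf:.2e} m^2, b = {b:.2e} Pa")       # recovers inputs
```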
In non-oil-contaminated reservoir rocks, the contact angle $\theta$ is usually assumed to be zero. In the present study, the pressures detected on the up- and downstream sides during the drainage experiment ($p_1$, $p_2$) correspond to the pressures of the gas and water phase, respectively. Hence, in the two-phase fluid flow experiments conducted in this study, the assumption is made that $p_c$ equals the differential pressure, $\Delta p = p_1 - p_2$.
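As a quick worked example of Eq. (5), the sketch below converts a differential pressure into the equivalent capillary radius that is just drained, assuming a gas-water interfacial tension of about 0.072 N/m at room temperature and $\theta$ = 0 as stated above.

```python
import math

def washburn_radius(p_c, sigma=0.072, theta_deg=0.0):
    """Equivalent capillary radius r = 2*sigma*cos(theta)/p_c, Eq. (5).

    sigma     : gas-water interfacial tension [N/m] (~0.072 at room T)
    theta_deg : contact angle [deg]; assumed 0 for water-wet sandstone
    """
    return 2.0 * sigma * math.cos(math.radians(theta_deg)) / p_c

# A differential pressure of 100 kPa drains pores down to:
print(f"r = {washburn_radius(1.0e5) * 1e6:.2f} um")   # ~1.44 um
```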
NMR T2 relaxation measurements
In this work, the principles of nuclear magnetic resonance (NMR) relaxation measurements are utilized to support petrophysical flow experiments: especially for monitoring the change of water saturation $S_w$, but also for qualitatively resolving the size of the corresponding water-filled pores. The NMR relaxation mechanism results from the interaction of a porous medium with its pore-filling fluid containing a detectable amount of hydrogen protons ($^1$H; here water). In most laboratory and well-logging applications, the magnetic moments (spins) of the hydrogen protons are aligned with a strong static magnetic field $B_0$ and therefore yield a minute net magnetization. The characteristic precession frequency of the magnetization around the static magnetic field is called the Larmor frequency $f_L = \gamma B_0 / 2\pi$ and depends solely on the strength of $B_0$ and the gyromagnetic ratio $\gamma$ of hydrogen. Larmor frequencies of commonly available applications can vary from approx. 0.2 MHz (4.7 mT) to 0.4 GHz (9.4 T) [19,20]. An NMR relaxation measurement is started by applying an energizing electromagnetic pulse at the appropriate Larmor frequency (creating a secondary electromagnetic field $B_1$) and thus tipping all spins away from their equilibrium state. After the pulse is switched off, the spins relax back into their equilibrium state. This relaxation process is measured, and the resulting NMR signal is given by

$$E(t) = E_0 \sum_i \frac{V_i}{V} \exp\!\left(-\frac{t}{T_{2,i}}\right) \qquad (6)$$

with $V$ the total water-filled pore volume and $V_i$ the volume of pore class $i$ relaxing with the characteristic relaxation time $T_{2,i}$. The sum of the individual amplitudes, i.e., the signal extrapolated to $t = 0$, is commonly referred to as the initial amplitude $E_0$; it is directly proportional to the amount of excited hydrogen protons and is therewith a direct measure of the water content and, hence, the saturation $S_w$. The NMR relaxation process itself is a superposition of three independent mechanisms [19,21]: (i) the bulk relaxation of the pore fluid, (ii) the surface relaxation due to the interaction of pore fluid and rock matrix and (iii) diffusional relaxation caused by spins diffusing through a non-uniform magnetic field. In this work, we assume no internal field gradients and therefore neglect diffusional relaxation. However, in the presence of minerals that exhibit high magnetic susceptibilities, diffusional relaxation must be accounted for [22,23,24]. Furthermore, we also assume the so-called fast diffusion regime, where $\rho a / D \ll 1$, with surface relaxivity $\rho$, characteristic pore size $a$ and self-diffusion coefficient $D$ of water [25]. This assumption is reasonable considering the pore sizes of the tight sandstone samples used in this study. Therefore, in a water saturated porous medium the effective relaxation time is given by

$$\frac{1}{T_2} = \frac{1}{T_{2,\rm bulk}} + \rho\,\frac{S}{V} \qquad (7)$$

By inspection of Eq. 7 one can deduce the following straightforward relationship: the larger the pore is, the smaller is its surface-to-volume ratio $S/V$, and hence, the longer is the relaxation time, and vice versa. The surface relaxivity $\rho$ is a mineral parameter that relates surface inhomogeneities to accelerated relaxation and has dimensions of velocity [m/s]. Generally, $\rho$ is assumed constant for a particular type of porous medium and needs to be determined via calibration [26]. Depending on the knowledge of $\rho$ and considering Eqs. 6 and 7, it is possible to directly infer the pore size (or $S/V$ of the pore) from NMR relaxometry data. To find the individual amplitudes $V_i/V$ as a function of relaxation time $T_{2,i}$ (cf. Eq. 6), the so-called relaxation time distribution (RTD), a linear system of equations has to be solved.
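For illustration, the following sketch implements the forward model of Eq. (6) for a synthetic bimodal pore system with the echo spacing used in this study; the chosen T2 values and volume fractions are arbitrary.

```python
import numpy as np

def cpmg_signal(t, T2_bins, fractions, E0=1.0):
    """Multi-exponential CPMG decay, Eq. (6):
    E(t) = E0 * sum_i (V_i/V) * exp(-t / T2_i)."""
    f = np.asarray(fractions, dtype=float)
    f = f / f.sum()                                  # normalized V_i/V
    return E0 * np.exp(-np.outer(t, 1.0 / np.asarray(T2_bins))) @ f

# Bimodal pore system, echo spacing 320 us, 2500 echoes (0.8 s record)
t = 320e-6 * np.arange(1, 2501)
E = cpmg_signal(t, T2_bins=[0.005, 0.1], fractions=[0.4, 0.6])
```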
Generally, and because this inverse problem is ill-posed [27], this is achieved by a regularized (smoothed) least-squares minimization [28] of the form

$$\min_{\mathbf{m}} \left\| \mathbf{G}\mathbf{m} - \mathbf{d} \right\|_2^2 + \lambda \left\| \mathbf{L}\mathbf{m} \right\|_2^2 \qquad (8)$$

with $\mathbf{d} = E(t)$ the NMR data vector, $\mathbf{m} = V_i/V$ the model vector and $\mathbf{G}$ the forward operator (cf. Eq. 6). Here, the smoothness constraint on $\mathbf{m}$ is applied by a first-order derivative matrix $\mathbf{L}$. The regularization parameter $\lambda$ is found via the L-curve criterion and chosen such that the inversion misfit is in the order of the data noise while keeping a sufficiently smooth RTD [29,28].
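A minimal sketch of this inversion, assuming, in addition to the smoothness constraint of Eq. (8), the common non-negativity constraint on the amplitudes (an extra assumption not stated in the text). It stacks the derivative matrix under the kernel and calls non-negative least squares; the regularization value is a placeholder where the text prescribes the L-curve criterion.

```python
import numpy as np
from scipy.optimize import nnls

def invert_rtd(t, data, T2_bins, lam):
    """Smoothness-regularized, non-negative inversion of a CPMG decay
    into a relaxation time distribution (cf. Eq. 8).

    Solves min ||G m - d||^2 + lam ||L m||^2 subject to m >= 0 by
    stacking sqrt(lam) * L under the kernel G."""
    G = np.exp(-np.outer(t, 1.0 / T2_bins))            # forward operator
    n = T2_bins.size
    L = np.diff(np.eye(n), axis=0)                     # first differences
    G_aug = np.vstack([G, np.sqrt(lam) * L])
    d_aug = np.concatenate([data, np.zeros(n - 1)])
    m, _ = nnls(G_aug, d_aug)
    return m                                           # amplitudes ~ V_i/V

# Synthetic bimodal decay: echo spacing 320 us, 2500 echoes, 0.1 % noise
t = 320e-6 * np.arange(1, 2501)
decay = 0.4 * np.exp(-t / 0.005) + 0.6 * np.exp(-t / 0.1)
decay += 1e-3 * np.random.default_rng(1).standard_normal(t.size)
T2_bins = np.logspace(-3.0, 0.0, 60)                   # 1 ms ... 1 s
rtd = invert_rtd(t, decay, T2_bins, lam=1.0)           # lam: see L-curve
```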
NMR Flow Cell Setup
The experimental setup used in this study consists of multiple components, which allow for the combined measurement of fluid flow and NMR. An overview of the complete assembly is given in Fig. 1. The high-pressure flow cell, which is designed for samples of 30 mm in diameter and 15-60 mm in length, is placed vertically in the center of a Halbach magnet. The sample itself is placed between two NMR-inert PEEK (polyether ether ketone) pistons containing conduits for in- and outflow. Grooves on the pistons allow for the even distribution of the gas phase across the sample surface. A rubber sleeve encases the piston/sample arrangement, and O-rings prevent influx of the confining-pressure oil. The rf-coil, a 13 mm long copper radio-frequency coil, is located within the confining compartment surrounding the rubber sleeve and is centrally positioned around the sample plug. In order to apply a constant confining pressure of up to 30 MPa, the compartment is filled with an NMR-inert synthetic oil (Fluorinert™ FC-40), regulated by an HPLC pump (Shimadzu LC-6A). Temperature fluctuations within the flow cell are monitored with a PT 100 resistance thermometer with an accuracy of 0.05 K. On the outside, a wooden cylinder encases the entire setup. The flow cell is connected to a nitrogen gas bottle on the high-pressure side $p_1$ at the base of the setup. The low-pressure side $p_2$ is kept at atmospheric pressure, i.e., the gas flows opposite to gravity. In this system, gravitational forces on the water phase can be neglected, as they are much smaller than the applied differential gas pressures of at least 100 kPa. A water reservoir moisturizes the gas stream in order to prevent drying of the sample. The downstream capillary is connected to either a bubble flow meter for gas flux measurements or a graduated pipette to determine the single-phase water flux (top of the setup). Additionally, two pressure transducers (Keller, 0.05% FSO) at the inlet/outlet of the flow cell continuously monitor the pressure on both sides.
NMR Data Processing
All NMR relaxation measurements presented in this work are conducted with a low-field Halbach NMR setup working at a frequency of 4 MHz [30,31] and using the Carr-Purcell-Meiboom-Gill (CPMG) pulse sequence [32,33]. The shortest possible echo time in our setup is tE = 320 µs and, depending on the saturation and relaxation behavior of the corresponding sample, up to 2500 echoes were recorded, yielding signal lengths of up to 0.8 s.
Initially, each CPMG echo train was averaged until a signal-to-noise ratio of S/N ≈ 200 was reached. Over the course of the drainage experiment, due to the decreasing water saturation (and hence decreasing signal amplitude), this constraint could not be maintained while keeping an equal NMR measurement interval of 30 minutes. According to Curie's law, the magnetization is negatively correlated with temperature, i.e., for water, a temperature decrease of 1 K yields a magnetization increase of ~0.4 % and vice versa. However, in preliminary experiments we found that even small temperature fluctuations have a much stronger effect on the initial amplitude of the NMR signal than expected. For testing purposes, we installed a 100 % water-filled dummy sample (a PVC container filled with degassed tap water) in the flow cell, and the confining compartment was filled with the NMR-inert oil; the confining pressure was left at atmospheric. As in a drainage experiment, the temperatures inside the flow cell and in the laboratory as well as the initial NMR amplitudes were continuously monitored over a period of 30 hours. From Fig. 2, a strong positive correlation between the NMR amplitude (NMRraw) and the temperature measured within the flow cell is evident (correlation coefficient 0.9990, cf. inset in Fig. 2). While the temperature inside the flow cell (solid black line) varied by approximately ±0.1 K, the initial amplitude varied by about ±10 a.u., or 4 %. This value is two orders of magnitude higher than the theoretical change. As the NMR amplitude correlates linearly with the temperature fluctuation (see inset of Fig. 2), the latter can be used to correct the NMR amplitudes. The temperature-corrected signal (NMRcorr) fluctuates by less than 0.2 % around its mean value. For this reason, we always conduct this pre-test on the fully water-saturated sample before starting the actual drainage experiment. We found that a measurement time of approx. 48 hours yielded stable correlations between NMR amplitude and temperature fluctuations. The effect of temperature on bulk relaxation is well understood and can easily be approximated [19]; in the case of our test measurement with 100 % water, the measured bulk relaxation time follows the theoretical relation. Unlike modern NMR devices, our setup is not temperature regulated. We hypothesize that the temperature dependence of the NMR amplitude observed in our measurements is device or setup dependent, e.g., due to temperature effects on the electronic components of the resonant circuit. With the procedure described above, we are able to account for the temperature fluctuations and derive reliable water saturation information from our experiment.
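The correction itself amounts to a linear regression of the initial amplitude against the temperature fluctuation during the pre-test, followed by a subtraction during the drainage experiment. The sketch below illustrates this with synthetic stand-ins for the pre-test data; the slope of 100 a.u./K is an assumption chosen to reproduce the ±10 a.u. per ±0.1 K quoted above.

import numpy as np

# Synthetic pre-test data: amplitude drifts with flow-cell temperature.
rng = np.random.default_rng(0)
temp = 20.0 + 0.1 * np.sin(np.linspace(0.0, 12.0 * np.pi, 96))   # +/-0.1 K
amp_raw = 250.0 + 100.0 * (temp - temp.mean()) + rng.normal(0.0, 0.2, temp.size)

# linear regression of amplitude vs. temperature fluctuation (cf. inset Fig. 2)
slope, intercept = np.polyfit(temp - temp.mean(), amp_raw, 1)
amp_corr = amp_raw - slope * (temp - temp.mean())    # temperature-corrected

# during drainage, saturation = corrected amplitude / amplitude at t = 0
saturation = amp_corr / amp_corr[0]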
Single-Phase Flow -Intrinsic Gas Permeability
To qualitatively and quantitatively compare single-phase and two-phase flow results, we use the NMR flow cell under identical confining pressure conditions for both types of measurements. To determine the intrinsic permeability, the dry sample is installed in the flow cell and the capillaries on the high-pressure side are connected to a nitrogen gas source. After application of the confining pressure, the gas permeability experiments are conducted at different mean fluid pressures: steady-state experiments are performed at different upstream pressures p1, while keeping the downstream pressure p2 at ambient conditions. The volume flow is measured on the outflow side with the attached bubble flow meter until steady-state flow conditions are established. Apparent and intrinsic permeability values are determined from eqs. 2 and 3.
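A minimal sketch of this evaluation is given below. It assumes the common Klinkenberg form k_app = k_int (1 + b / p_mean) for eq. 3 and fits it as a straight line in reciprocal mean pressure; all pressures and permeabilities are illustrative placeholders, not measured values.

import numpy as np

# Klinkenberg evaluation (cf. Fig. 3) with illustrative data points.
p_mean = np.array([0.15e6, 0.3e6, 0.6e6, 1.2e6])      # mean gas pressure [Pa]
k_app = np.array([9.3, 6.7, 5.3, 4.7]) * 1e-18        # apparent permeability [m2]

# linear fit of k_app over 1/p_mean: intercept = k_int, slope = k_int * b
slope, k_int = np.polyfit(1.0 / p_mean, k_app, 1)
b = slope / k_int                                      # slip factor [Pa]
print(f"k_int = {k_int:.2e} m2, b = {b / 1e6:.2f} MPa")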
Two-Phase Flow -Effective Gas Permeability
Prior to all drainage experiments, air is removed from the sample in a vacuum desiccator. Thereafter, the sample is saturated with brine (10.6 g/L MgSO4 solution) and installed in the flow cell, where a confining pressure of 15 MPa is established. To obtain a sufficient amount of data for the temperature correction described above, repetitive NMR measurements (30 min interval) are performed on the fully water-saturated and pressurized sample for a time span of at least 48 hours.
Drainage experiments are performed according to the step-wise procedure usually applied for gas breakthrough determination of sealing lithologies [34]. The drainage experiment is conducted by increasing the gas pressure on the high-pressure side in steps (p1 = 0.3 − 2.5 MPa), while the outflow side is constantly held at atmospheric pressure (p2 = const.). At low differential pressures, only water is displaced from the sample but no gas flows through it. After exceeding a threshold pressure, which equals the capillary pressure of the smallest pore along the percolation path, the gas phase breaks through and a saturation gradient establishes along the sample, i.e., along the pressure gradient. The whole drainage experiment is monitored by NMR measurements (30 min interval), as well as temperature and pressure measurements (30 s interval). Because of the saturation gradient and the sensitive range of the NMR coil of about 2 cm, we measure an average water saturation over this range. The first gas flow detected on the low-pressure side indicates that the capillary breakthrough pressure has been overcome: at this point, sufficient water has been displaced from the pore space and at least one gas-conducting percolation path has been established. The number of percolation pathways increases with increasing gas pressure. Moreover, more pores are desaturated at the high-pressure side than at the low-pressure side, i.e., there is a saturation gradient which changes with increasing gas pressure. The gas outflow rate is measured regularly with a bubble flow meter. After gas outflow and NMR signal (saturation) stabilize, the next pressure difference is applied. From the bubble flow meter and from the NMR data at stationary conditions, we calculate the effective permeability of the gas phase, the water saturation and the relaxation time distribution, respectively. These in turn can be related to the applied pressure difference.
Sample Characterization & Intrinsic Properties
The test plug (R1) used in this study was provided by Wintershall Holding GmbH and is a tight reservoir sandstone from a depth of about 4000 m. It is a coarse-grained sandstone from the Rotliegend formation, containing illite as pore-filling mineral. Some petrophysical properties of sample R1 are provided in Tab. 1. Porosity was determined via Archimedes' principle. The intrinsic (Klinkenberg-corrected) permeability is 4 · 10 m² and the slip factor is b = 0.2 MPa (cf. Tab. 1). In Fig. 3, the apparent permeability as a function of reciprocal mean pressure is shown (black circles). The Klinkenberg fit to derive the intrinsic permeability is depicted by the gray dashed line.
In Fig. 4 a typical NMR measurement is shown. The NMR transient (solid gray line) and the corresponding multi-exponential fit (solid black line) are depicted in Fig. 4a. The signal consists of 750 echoes with an inter-echo time of 320 µs. The signal-to-noise ratio (SNR) of this particular measurement is 180. Figure 4b shows the corresponding inverted relaxation time distribution (RTD). The RTD was derived as described above using a smoothed least-squares fit with 20 relaxation times per decade. The regularization parameter was chosen such that the fitting error (rms = 0.07) equals the noise level of the data (dashed black line in Fig. 4a). The solid gray line in Fig. 4b represents the cumulative relaxation time distribution. The vertical black dashed line in Fig. 4b indicates the characteristic relaxation time of the RTD, often referred to as the logarithmic mean relaxation time T2,lm. The RTD for sample R1 is characterized by one pronounced peak at about 4 ms and a minor peak at about 70 ms. Following the standard classification of RTDs [19], more than 95 % of the signal originates from pores with relaxation times smaller than 33 ms and hence classifies as bulk volume irreducible (BVI) and clay-bound water (CBW). This already indicates the tight character of the sample. A pre-characterization of the intrinsic properties before conducting the actual drainage experiment is generally advisable: the experimental protocol can then be adjusted accordingly, i.e., with respect to the choice of the initial pressure difference or the number of pressure steps. If, for instance, a sample exhibits a narrow pore size distribution, only a few (densely spaced) pressure steps can be applied before the irreducible water saturation is reached.
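Characteristic quantities of an inverted RTD, such as the logarithmic mean relaxation time and the BVI/CBW fraction below the 33 ms cutoff, follow directly from the inversion output. The short sketch below assumes the amplitude vector m and the grid T2 from the inversion sketch above.

import numpy as np

# RTD summary statistics (using m and T2 from the inversion sketch).
T2_lm = 10.0 ** (np.sum(m * np.log10(T2)) / m.sum())   # logarithmic mean T2
bvi_cbw = m[T2 < 33e-3].sum() / m.sum()                # signal fraction < 33 ms
cumulative = np.cumsum(m) / m.sum()                    # cumulative RTD (Fig. 4b)
print(f"T2_lm = {T2_lm * 1e3:.1f} ms, BVI+CBW fraction = {bvi_cbw:.2f}")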
Drainage Experiment
The drainage experiment for sample R1 was conducted at 15 MPa confining pressure. The differential pressure ranged from 0.2 to 2.4 MPa and the sample was drained to a water saturation Sw of 0.13. The temperature over the course of the entire experiment was 19.97 ± 0.21 °C.
During the drainage experiment, we applied five distinct pressure steps (cf. Fig. 5a). At the first differential pressure of 0.2 MPa, no significant saturation decrease and no gas flow could be measured over the course of 24 h. Effective gas permeabilities were measured for the first time once breakthrough occurred and gas flow was established. This happened after approx. 50 h at a differential pressure of 0.4 MPa. In the following, differential pressure was increased approx. every 24 h depending on the equilibration of water saturation and effective gas permeability.
The temperature fluctuation inside the flow cell is depicted in Fig. 5b. The fluctuations show amplitudes of ±0.5 K over the course of the experiment and decrease towards its end. Fig. 5c shows the raw and temperature-corrected NMR amplitudes in gray and black, respectively. There are several events (Fig. 5b, c) where the change of temperature clearly influences the NMR amplitudes. For instance, between 24 h and 72 h the temperature varied between −0.5 K and 0.5 K, yielding an apparent saturation fluctuation of ±10 %. Figure 5c visualizes the impact of the aforementioned temperature correction. Without correction, the water saturation (raw in Fig. 5) increases after the onset of gas flow (≥ 48 h), which is clearly unphysical considering the given experimental setup; after temperature correction it slightly decreases (corr. in Fig. 5).

Effective gas permeabilities increased by more than an order of magnitude during the entire measuring cycle, from 9.9 · 10 m² to 2.2 · 10 m² at differential pressures of 0.4 MPa and 2.4 MPa, respectively. The increase of effective gas permeability with increasing differential pressure clearly occurs due to the drainage of successively smaller pores. This process can be visualized by contiguously plotting the RTDs in a carpet plot, see panel I in Fig. 6. Even though we think that the drainage of successively smaller pores dominates the observed RTD changes, the shift towards shorter relaxation times with decreasing saturation might also be caused by the associated changes in the surface-to-volume ratio of drained pores. For example, water trapped in the corners of a desaturated angular pore contributes to the RTD with considerably smaller relaxation times than the originally saturated pore [35].

Panel I in Fig. 6 shows the relative amplitudes of the RTD (incremental porosity, cf. Fig. 4b) ranging from 0.005 % (white) to 0.35 % (black) over the entire duration of the experiment. For better visualization, all amplitudes smaller than 0.005 % are not shown. The orange points in panel I indicate the logarithmic mean relaxation time of every individual relaxation time distribution; they trace the change of the (logarithmic) mean size of the water-saturated pores over the course of the experiment. For reference, the corresponding relaxation time distribution at full saturation is shown in panel II. In panel III the temperature-corrected saturation Sw is plotted, which is given by the ratio of the initial NMR amplitude E0(t) to the initial NMR amplitude at full saturation E0(t = 0). During the first 48 h of the drainage experiment, the saturation stays constant and all relaxation time distributions are very similar (cf. Fig. 4b). As for the saturation, there is no visible change of relaxation times at the first pressure step of 0.2 MPa (after 22 h), i.e., the pressure is not high enough to establish a percolation pathway in the pore network. When increasing the differential pressure to 0.4 MPa (between 48 − 72 h), gas breakthrough is observed, and the saturation as well as T2,lm decrease only slightly. At 0.7 MPa differential pressure (72 − 90 h), saturation and T2,lm decrease significantly, mainly due to the drainage of larger pores (T2 ≳ 20 ms). After this first strong decrease in saturation, at 1 MPa differential pressure the decrease levels off and all values stay constant for approximately 24 h. The final increase to 2.4 MPa differential pressure yields a rather strong drainage effect and a final water saturation of Sw = 0.13.
Here, the strong drainage relates to pores having relaxation times between 1 and 10 ms. We cross-checked the final saturation by weighing the sample after the experiment; it was found to be Sw = 0.15, comparable to the value derived from the NMR measurement.
History Matching
The simultaneous recording of average water saturation (water production) and pressure gradient throughout a drainage experiment allows the recorded data to be history matched with a two-phase flow simulator. A 1D model is used for the simulation, assuming a homogeneous rock in all three dimensions. The available software packages are similar to reservoir black-oil simulators [36] and are based on the immiscible displacement of the fluid phases [37,38]. They fit the experimental data at distinct time steps, e.g., pressure gradient and water production, by adjusting capillary pressure and relative permeability, as these curves are related to pressure and saturation. Analytical functions such as Corey (power law), LET or log-beta, or measured data, can be selected. In this study the software Cydar© [39] was used to history match the data. In order to constrain the number of solutions, the measured relative gas permeability curve was fitted using Corey exponents and considered invariable throughout the history match. The history match was run by simulating the NMR water saturations and pressure recordings and adjusting the analytical curves of the relative permeability of water and the drainage capillary pressure. Figure 7 shows a comparison of the measured and simulated water production, the measured and simulated pressure gradients and the relative permeability curves of gas and water, as well as the drainage capillary pressure data. The capillary pressure function includes an entry pressure p_e at which gas starts to enter the pore space and displace the water phase. The value of this entry pressure is consistent with the experimental differential pressure at which a decrease of the NMR water saturation was observed. The analytical functions used to simulate the experimental data in 1D follow [40,41]:

S_e = (S_w − S_wr) / (1 − S_wr),  (9)

p_c = p_e − β log S_e,  (10)

k_rw = S_e^(n_w),  (11)

k_rg = (1 − S_e)^(n_g),  (12)

with S_w the measured water saturation, S_wr the residual water saturation, p_e the pore entry pressure, β a fitting parameter, k_rw and k_rg the relative permeabilities of water and gas, respectively, and n_w and n_g the Corey fitting parameters. Figures 7a and 7b show the history match results for water saturation and differential pressure from the drainage experiment of sample R1. In Fig. 7c, the measured relative permeability (circles) is plotted. The fitted relative permeabilities for gas (dashed lines) and water (solid lines) are derived from history matching the monitored water saturation, pressure gradient and effective permeabilities. For the presented data, the derived Corey parameters yield a robust estimation of the relative gas permeability over the entire water saturation range. Additionally, capillary pressure data can be obtained from the history match, as shown in Fig. 7d.
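A minimal sketch of the analytical functions of eqs. 9-12 is given below. The parameter values (S_wr, p_e, beta and the Corey exponents) are placeholders rather than the values fitted for sample R1, and the decadic logarithm in eq. 10 is an assumption here.

import numpy as np

# Analytical history-match functions, eqs. 9-12 (illustrative parameters).
def effective_saturation(s_w, s_wr=0.10):
    return (s_w - s_wr) / (1.0 - s_wr)                 # eq. 9

def capillary_pressure(s_e, p_e=0.3e6, beta=0.5e6):
    return p_e - beta * np.log10(s_e)                  # eq. 10 (log-beta form)

def k_rel_water(s_e, n_w=4.0):
    return s_e ** n_w                                  # eq. 11 (Corey)

def k_rel_gas(s_e, n_g=2.0):
    return (1.0 - s_e) ** n_g                          # eq. 12 (Corey)

s_w = np.linspace(0.15, 1.0, 50)                       # saturation range
s_e = effective_saturation(s_w)
pc, krw, krg = capillary_pressure(s_e), k_rel_water(s_e), k_rel_gas(s_e)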
Conclusion
With the combined NMR and fluid flow measurements, we avoid the lengthy series of separate experiments that is generally needed to characterize the fluid flow properties of rocks (repetitive pre-saturation tests and permeability experiments). The test measurements proved crucial and resulted in significant improvements of the workflow and thus of the data quality. The combination of NMR relaxation measurements and two-phase fluid flow experiments allows for the continuous allocation of water saturation Sw to differential pressure Δp (and thereby capillary pressure pc). Moreover, it provides information about the changing water distribution within the pore space, i.e., about the size of the water-filled pores, over the course of the experiment. Furthermore, with our NMR flow cell setup we directly measure the relationship between water saturation and effective gas permeability keff(Sw). After accounting for temperature fluctuations, the monitored changes in water saturation proved very accurate. However, for future experiments it may be advantageous to place the whole setup in a temperature-controlled container. The data recorded with the NMR flow cell can be utilized for the parametrization of the relations between effective permeability, water saturation and capillary pressure by an inverse simulation of the experiment, i.e., by a history match. Hence, the results can readily enter a reservoir simulation. | 2020-02-13T09:23:03.037Z | 2020-08-01T00:00:00.000 | {
"year": 2020,
"sha1": "acc6f9e5139188dbf69de1e70e76eae39dde8a39",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/06/e3sconf_sca2019_03005.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "eca5103b53605a9570003228cdaf57ca15651094",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
157159320 | pes2o/s2orc | v3-fos-license | Policy Making in Populist Context
In this paper, we study how policy making is affected by different political contexts. The study takes populism in its negative sense and treats it as a communicative strategy that politicians use to get in touch with unrecognized sections of society. A populist context, with its concentration on the role of the people, helps politicians manipulate the masses and use welfare policies as a means of attracting people in order to achieve their goals. A comparison of welfare policies in Iran after 1989 clarifies how Ahmadinejad's populist policy making affected Iranian welfare.
Concentrating on the role of the people can be seen in both democratic and dictatorial systems. Great attention to the will of the people, and the granting of superiority to the people and their wishes, is best captured by a kind of political analysis called populism. Under populist discourse, the people share an identity and interests and form a collective body "which is able to express this will and take decisions" (Torre, 2013).
Canovan also discusses another aspect that may explain the amorphous character of "the people". According to Canovan, in the English-speaking world the term "people" does not only signify a collective unit of analysis; it also refers to the "individual in general", which adds extra elements of ambiguity. As a result, the sovereign people look like a collection of individuals as well as a collective body (Canovan, 1999).
Hossein Abutorabi employs a metaphor to explain the controversial concept of populism: a drove of sheep and goats follows the shepherd's dog when they hear the sound of the flute. Abutorabi compares people who follow with closed eyes, or obey without thinking, to such a drove of sheep and goats.
As a result, Abutorabi does not consider the pure people to be the central idea of populism; rather, he considers people who do not think and who obey without thinking to be central to the theme of populism (Abutorabian, 1999).
Canovan contends that "the people" is a particularly open and indefinite notion. She brings the concept of "the people" out of the theoretical realm and into the political world. Populism is an imprecise concept that may be rendered as bringing "politics to the people" and "the people to politics". Its intrinsic vagueness is exacerbated by its usage in everyday politics. Populism can be thought of as a double-edged knife, born in the age of representative and constitutional politics, that can be devastating for democracy (Canovan, 1999).
Margaret Canovan explained that populism had two common characteristics:
The first is the centrality of the people, and the second is anti-elitism. In defining vague concepts like populism, determining what the concept is may be difficult, but deciding what it is not can give some information about it. Populism is a vague concept which can take on different features and characteristics in different situations, but we can find common features among its different manifestations in different parts of the world. Populism, a concept which many researchers consider difficult to define, can thus be clarified by what it stands against (Brown, 1996).
Mobilization of People in Populist Context
In the field of sociology, populist cases can provide strategic sites for investigating a range of issues: populist mobilization can be considered a political project in which large-scale political projects mobilize ordinary and marginalized social sectors. "Populist rhetoric" denotes an anti-elite discourse that valorizes the role of ordinary people (Johnson, 2011).
Paul Taggart has defined populism by six characteristics: first, hostility to representative politics; second, having a heartland; third, lacking core values; fourth, reacting to a sense of crisis; fifth, being self-limiting; and sixth, being chameleonic (having a highly developed ability to change color) (Taggart, 2007).
Populism is widely used and rarely defined. There is flexibility in the way the term is used, but the identification of a set of core ideas is possible. Populism can be defined as a negative reaction to the institutions of representative politics.
Different Interpretations of Populism
Hossein Abutorabi argued that populism is the result of a lack of reflection and rationality in society. He mentions that when the press and social media do not perform their duties properly, rationality does not grow in society; since rationality is then not considered a necessary element of decision making, we can imagine the emergence of populism (Abutorabian, 2003). Writers such as Mudde (2004, 2007) and Kaltwasser define populism as a set of interrelated ideas about the nature of politics and society, in which the units of analysis are parties and party leaders. Scholars such as Kazin (1995), Laclau (2005), and Panizza (2005) view populism as a political style, a way of making claims about politics through the characteristics of discourse; their units of analysis are texts, speeches, and political discourse. Others, such as Roberts (2006), Weyland (2001), and Johnson (2011), view populism as a political strategy, defining it as a form of mobilization and organization, with a focus on structures and the use of comparative, historical, and case studies (Gidron & Bonikowski, 2004).
Laclau considers populism a category of political analysis which is vague and imprecise. He points to contrasting components, such as a claim for equality of political rights and universal participation for the common people fused with a sort of authoritarianism. Ernesto Laclau considers populism an ideology which protects the rights of the common people against privileged interest groups (Laclau, 2005).
In 1999, Margaret Canovan defined populism as a problem of modernization in which simple rural people with traditional values, who compose the majority, confront financial capital. Populism as a political movement has the support of the mass of the working class or peasantry. Moreover, she mentions that populists rarely call themselves populists and usually reject the term when it is applied to them by others.
Populism as a Communicative Tool
Several scholars see populism as a tool, strategy, technique, tactic, ideology or a certain style of politics. Jagers and Walgrave (2007) consider populism a strategy. In their approach, the defining element of populism is an appeal to the people ("populus"), with which populist parties identify and legitimate themselves. Hawkins (2007) conceptualizes populism as a political discourse. Mudde (2011) points out that political populism is then basically reduced to nothing more than political campaigning techniques. Furthermore, two features of populism are held constant among different authors: "the elites" and "the people".
Cas Mudde defines populism as a thin-centered ideology which separates society into two antagonistic groups, "the pure people" versus "the corrupt elite". Furthermore, populism calls for policies that express the general will of the people. Accordingly, populism is anti-elitist and anti-establishment. Taggart (2007) clarifies that the populist concept is sometimes confused with a style that merely seeks to be popular. Populism is widely used and rarely defined; Taggart (2007) considers populism a negative response to the phenomenon of politics, a reaction against modern politics. Williams (1977) defines populism as an ideology which pits a virtuous and homogeneous people against a set of elites and dangerous "others" who are together depicted as depriving the sovereign people of their rights, values, prosperity, identity and voice.
Other writers, such as de Jasper, Hollanders, & Krouel (2004), discuss populism as an ideology with several constituent elements, all derived from its central aim: to inject the will of the sovereign people into the democratic decision-making process. As mentioned, social scientists have offered a variety of definitions of populism over the past half century, and scholars like Kirk A. Hawkins (2007), who define populism discursively, use different labels.
As stated before, different writers have different understandings of the concept of populism. They approach populism from different angles to analyze political discourses. Every political discourse has its own characteristics and features, but by analyzing these contexts, common points between them can be clarified. Populism can be understood as a phenomenon that describes tactics used by politicians to win power without regard to the dominant conditions or the type of government (de Jasper, Hollanders, & Krouel, 2004).
The American historian Michael Kazin considers populism a democratic expression of political life that is needed from time to time to rebalance the distribution of political power for the benefit of the majority. Through the vehicle of populism, "Americans have been able to protest social and economic inequalities without questioning the entire system". Behind the veil of the political process stands what really matters: the voice of the people. Mises believes that there is a connection between public opinion and public policy. He declares that politicians who consider public opinion tend to win positions of power, while politicians who do not consider public opinion are not successful (Caplan, Stringham, & Mises, 2005).
Political Analysis of Populism
Populism, as a category of political analysis, confronts us with problems. Like all ideologies, populism proposes an analysis that responds to a number of questions: What went wrong? Who is to blame? And what is to be done to reverse the situation? Populists claim that government and democracy, which should reflect the will of the people, have been occupied, distorted and exploited by corrupt elites. A second proposition is that the elites and "others" are to blame for the current undesirable situation. A third proposition is that the people have lost their role in the political system, and populism suggests taking this barrier away. A further proposition is that populism considers the people homogeneous and virtuous; by contrast, the enemies of the people, the elite and "others", are neither homogeneous nor virtuous (Albertazzi & McDonnell, 1988).
Said Hajjariyan (1985) claims that populist governments are not the result of conflict between two social classes; rather, populism can be seen as a consequence of confrontation between the dominant class and those under domination (the people). According to Hajjariyan, the central characteristic of a populist government is that the executive branch has expanded enormously and Montesquieu's separation-of-powers paradigm has been violated. Furthermore, civil society has lost its independence and has been integrated into the government (Hajjariyan, 1985).
Populism, through a complex mechanism and by concentrating on the role of the people, tries to dominate public opinion and the politicians who follow the will of the people. This matters because public policies can have a direct or indirect effect on the lives of the people. From another point of view, public policies can be seen as the output of the political system; they come in different forms, including laws and regulations. Another definition, by Knill and Tosun (2008), considers policy making a strategy for solving social problems by using institutions. Agenda setting, policy formulation, policy adoption, implementation, and evaluation are the stages of the policy cycle. It should be mentioned that in cooperative political systems the policy-making process can be extremely complex; policy making does not consist of a simple, single stage.
Populism in Policy Making Process
Public policy can be considered the result of a decision-making process. What kinds of interests are considered in the policy-making process in populist governments? Is the interest of the people important in populist policy making? Do populist governments consider the interest of the people in the policy-making process? Decision making in Plato's era is different from decision making in the modern and postmodern era. In Plato's era, decision making was an easy task; moreover, there was a proposition that public guardians make the wisest decisions based on the facts available. But as the world has changed, decision making is no longer an easy task that a single expert can perform (Gauvin, 1998).
Public policy emerged as a subfield of political science in the mid-1960s. Public policy is the study of government decisions and actions; policy analysis describes the investigation that produces accurate and useful information for decision makers (Cochran & Malone, 2005).
Political and social scientists make a distinction between policy making and decision making. Decision making, as a component of public policy making, deals with the process of making choices; in other words, it involves making a discrete choice from among two or more alternatives. Public policy encompasses a flow and pattern of action that extends over time and includes many decisions. Public policy, as followed by government in dealing with a problem, has two features: first, public policy is purposive or goal-oriented and, in its positive sense, is based on law; second, policies consist of courses of action which emerge in response to policy demands (Anderson, 2003).
Public policy making can be thought of as a strategy for resolving social problems by using institutions. It does not constitute a single stage; it is a process for attaining these goals, a long series of actions carried out to solve social problems. Agenda setting, policy formulation, policy adoption, implementation and evaluation can be considered the stages of this complex process (Knill & Tosun, 2008).
There are a number of conceptual models that help to clarify our understanding of the relationship between politics and public policies. The major models found in the literature are the institutional model, the rational model, the incremental model, the group model, the elite model, and the process model; these models are complementary.
In the elite model, policy making is determined by the preferences of the governing elite. The essential argument of elite theory is that public policy is not determined by the demands and actions of the people or masses but by a ruling elite whose preferences are put into effect by public officials and agencies (Knill & Tosun, 2008).
Thomas Dye provides a summary of elite theory. First, in this model of analysis, society is divided into the few who have power and the many who do not. The few who govern are not typical of the masses who are governed. Elites share a consensus on the basic values of the social system. Public policy in elite theory does not reflect the demands of the masses (Anderson, 2003).
Opposite to the elite model, in a populist model public policy making is not determined by the elite; the populist model can be considered the inverse of the elite model. In Iran, however, the planned improvements in the social security system did not come into being: during Ahmadinejad's presidency, the ministry of welfare was closed two years after being established, in order to prevent inflation of the bureaucratic system. In his electoral campaigns, Ahmadinejad had promised improvements in welfare policies to assist the poor and vulnerable classes of society, and he attracted the mass of the people with these promises.
Illustration of Populism in Different Contexts
Populist politicians like Ahmadinejad used welfare policies as motivational tactics for mass attraction, but after he won the election campaign he cut subsidies through subsidy reform projects. Different health care projects, such as pregnancy prevention policies, were stopped in health care centers, and women's development centers were converted into family development centers. Also, the welfare ministry was closed in order to reduce the financial load on the government.
Populist politicians like Ahmadinejad used welfare policies as motivational strategies to get in contact with the poor classes of society; after he achieved his goal and became president, he did not keep his promises. During Ahmadinejad's presidency, welfare policies were cut back, neoliberal policies were pursued, and vulnerable classes were left helpless in free-market conditions.
The Legatum Prosperity Index shows no improvement in the life conditions of Iranians during Ahmadinejad's presidency.
Legatum Prosperity Index

As the table shows, Iran's prosperity rank increased during Ahmadinejad's presidency, meaning that welfare decreased in these years. In other words, welfare policies conducted in a populist context did not improve the life conditions of the poor; they benefited politicians seeking to gain power and to remove rivals in political competition.
Conclusion
The dominant discourse in a society determines the policy-making process, and different political contexts behave in ways meant to satisfy the needs of society.
Populism, as communicative rhetoric, gets in touch with poor people through promises that welfare policies will improve their life conditions. With such promises, populism defrauds the masses in order to gain power. Policy making in populist rhetoric does not attempt to improve welfare; it aims to control the masses by manipulating their needs.
This paper investigates populist policies as a political phenomenon affecting country, region and world. The study examines the implementation of populist policies during the Ahmadinejad government from 2005 to 2013. What were these populist policies, and what effects did they have on people's lives? Populist policies under Ahmadinejad are analyzed as outputs of the political system. In order to show the distinctive features of populist policies, a comparative analysis is carried out: how do welfare policies in the Ahmadinejad period differ from welfare policies during the presidencies of Khatemi and Haşemi Rafsanjani? During the presidency of Haşemi Rafsanjani, centralized social welfare policies were implemented in both developed and poor regions of the country. Health centers in undeveloped regions provided health care for all classes, and pregnancy prevention facilities and training were provided for women in all parts of the country. Also, during Rafsanjani's presidency, women's development centers were established in order to improve life conditions in different regions of the country, and subsidies, as welfare policies, were distributed across the country. Welfare policies in Rafsanjani's constructive discourse were implemented widely in order to reconstruct areas ruined by the Iran-Iraq war. After the constructive discourse of Rafsanjani, the reformist discourse of the Khatemi presidency (1995-2005) came to satisfy the needs of society. The reformist discourse attempted to reorganize infrastructure in order to improve the political, social and economic life of the people. In the reform context, need-based policy analysis was conducted in order to improve social welfare. In the last years of President Khatemi, the ministry of welfare was established by parliamentary confirmation; it was planned to provide social welfare and social security for all Iranians. | 2018-12-04T03:46:20.346Z | 2017-06-02T00:00:00.000 | {
"year": 2017,
"sha1": "7bb92cd2f81347aca94370f6e1774d96710cd446",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4236/ojps.2017.73034",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "7bb92cd2f81347aca94370f6e1774d96710cd446",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
119084683 | pes2o/s2orc | v3-fos-license | Ginzburg-Landau-Gor'kov Theory of Magnetic oscillations in a type-II 2-dimensional Superconductor
We investigate de Haas-van Alphen (dHvA) oscillations in the mixed state of a type-II two-dimensional superconductor within a self-consistent Gor'kov perturbation scheme. Assuming that the order parameter forms a vortex lattice, we can calculate the expansion coefficients exactly to any order. We have tested the results of the perturbation theory to fourth and eighth order against an exact numerical solution of the corresponding Bogoliubov-de Gennes equations. The perturbation theory is found to describe the onset of superconductivity well close to the transition point $H_{c2}$. Contrary to earlier calculations by other authors, we do not find that the perturbative scheme predicts any maximum of the dHvA-oscillations below $H_{c2}$. Instead we obtain a substantial damping of the magnetic oscillations in the mixed state as compared to the normal state. We have examined the effect of an oscillatory chemical potential due to particle conservation and the effect of a finite Zeeman splitting. Furthermore, we have investigated the recently debated issue of the possibility of a sign change of the fundamental harmonic of the magnetic oscillations. Our theory is compared with experiment and we have found good agreement.
I. INTRODUCTION
In recent years there has been a renewed interest in the interplay between external magnetic fields and superconductivity in type-II superconductors. It is well known that de Haas-van Alphen (dHvA) oscillations are a useful tool for probing the Fermi surface of metals in the normal state. In type-II superconductors the magnetic field is allowed to partially penetrate the sample in the mixed state. One would then expect magnetic oscillations in the mixed state to give information about the quasi-particle dispersion and the magnetic field dependence of the correlations in the ground state. Magnetic oscillations in the mixed state were observed for the first time in the layered superconductor 2H-NbSe2 over 20 years ago. 1 More recently dHvA oscillations were observed in the organic superconductor κ-(ET)2Cu(NCS)2, 2 the A15 compounds V3Si 3 and Nb3Sn, 4 the borocarbide superconductor YNi2B2C, 5 and the high temperature superconductors YBaCuO 6 and BaKBiO. 7
These experiments have sparked a variety of theoretical investigations, not least in order to understand the interplay between oscillations in the quasi-particle spectra and the ground state condensation energy. The transition line H_c2 between the normal state and the mixed state was shown to exhibit weak oscillations as a function of the magnetic field. 8,9 For high magnetic fields, clean samples, and very low temperatures, H_c2 has been predicted theoretically to be a strongly oscillating function. 10 The mixed state is characterized by the interplay between Landau level quantization due to the magnetic field and Cooper pair formation characteristic of superconductivity. This calls for a theory that takes both effects into account consistently. The theory developed by Maki 11 and Stephen 12 gives a simple picture of the vortex lattice acting as an extra scattering potential on quasi-particles, thereby damping the magnetic oscillations. The theory uses semiclassical approximations and, crucially, fails to impose the physical condition that the vortex lattice is the self-consistent mean field of the Cooper pairs. The problem simplifies when the electrons are confined to form pairs within the same Landau level (diagonal approximation), and this case has been treated by several authors. 13,14 Unfortunately the diagonal approximation ignores the fact that the typical excitation is a superposition of an electron and a hole in different Landau levels but with similar energies. This effect is strongest when the chemical potential µ is such that n_f ≡ µ/(ħω_c) − 1/2 lies either at a Landau level, n_f = n (n integer), or exactly between two Landau levels, n_f = n + 1/2. We then have exact degeneracy between an electron state in Landau level n_f + m and a hole in level n_f − m when n_f = n, and between an electron in level n_f + m + 1/2 and a hole in level n_f − m − 1/2 when n_f = n + 1/2, respectively. A major effect of the self-consistent pairing field is then to mix these two degenerate excitations strongly. Following the results of the diagonal approximation, Dukan et al. 15 have focused on the consequences of a gapless portion of the quasiparticle spectrum. The calculation, which is appropriate for low-lying excitations in 3 dimensions, is not applicable for two-dimensional systems, where the number of gapless points and their dispersion law vary strongly with the magnetic field, and it does not take into account the oscillatory behaviour of the ground state energy as a function of the magnetic field. This oscillatory behaviour of the ground state energy has been considered by P. Miller and B. L. Györffy 16 in the Δ ≫ k_BT limit. Norman et al. 17 have studied the problem numerically and have linked the damping of the magnetic oscillations to the broadening of the Landau levels due to the gap. Recently 18 there have been claims, based partly on Gor'kov theory and partly on an assumed simplified form for the quasiparticle spectrum, that below a certain field H_inv < H_c2 the magnetic oscillations should exhibit a rapid 180° phase shift.
In this paper we develop a new scheme for calculating the Gor'kov expansion terms, treating the quantum effects of the magnetic field exactly. In addition we solve numerically the corresponding Bogoliubov-de Gennes (BdG) equations. Using the developed formalism we study the magnetic oscillations in the mixed state of a type II superconductor. We work in two dimensions, since many organic metals are known to show almost perfect 2D behaviour. Exploiting the symmetry of the magnetic translation group of the vortex lattice, we have been able to calculate the expansion coefficients in the Gor'kov theory exactly to any order, making no restriction on the energy of the center-of-mass of the Cooper pairs. Self-consistency within this approach then transforms to the much simpler problem of minimising a polynomial of a finite number of variables. This allows us to develop an analytical theory for the thermodynamic potential, and thus for the magnetic oscillations close to H_c2, which contains no approximations apart from the assumption of a small order parameter. This establishes a rigorous basis for our theory, compared with earlier attempts. It turns out to be crucial to determine the order parameter self-consistently, since its oscillatory behaviour as the magnetic field varies is the cause of the damping of the dHvA oscillations. We find that the dHvA oscillations are damped in the mixed state as compared to the normal state, in agreement with what is observed experimentally. This is due to the fact that the contribution from the superconducting order parameter to the magnetic oscillations partly cancels the contribution from the normal grand potential. The superconducting order parameter itself is an oscillating function of the magnetic field, with local maxima occurring whenever a Landau level lies at the chemical potential, since electrons can then form Cooper pairs without any cost in kinetic energy. This is the simple physical picture of the damping emerging from our formalism, and it complements the interpretation given by Miller and Györffy. 16 A similar approach has been taken by Maniv et al. 19 Using the semiclassical and various other approximations, they calculate the Gor'kov expansion coefficients for a 2D metal to fourth order in Δ(r) when the motion of the centers of mass of the Cooper pairs is restricted to the lowest Landau level. However, they obtain 20 that the magnitude of the magnetic oscillations exhibits a maximum below H_c2. This is contradicted by our exact calculation of the expansion coefficients and also by our numerical solution of the BdG-equations.
Recently it has been suggested that the degeneracy of the Landau levels should give rise to non-perturbative terms in the expansion of the grand potential, thereby making the traditional Gor'kov theory invalid. 21 We have tested our perturbative theory carefully against an exact numerical solution of the BdG-equations and we do not find any of the predicted non-perturbative effects. The theory based on the Gor'kov expansion agrees very well with the exact solution if we are not too far below H_c2. It is essentially a high temperature expansion, in the sense that it is an asymptotic series as long as the change in the quasiparticle levels as compared to the normal state is not larger than ∼O(k_BT). 22 In a two-dimensional metal the chemical potential is an oscillatory function of the magnetic field when the number of particles N is fixed. When higher harmonics are important (i.e. low temperatures and clean samples), the dHvA signal in the normal state for fixed N looks qualitatively different from the case when the chemical potential is fixed. It is of interest to see what consequences this difference has for the magnetic oscillations in the mixed state.
Examination of the dHvA oscillations in the mixed state in the two cases shows that the superconducting order, for a fixed number of particles, reduces the oscillations in the chemical potential, and that the dHvA oscillations are essentially the same in the two cases apart from a narrow region close to H_c2. Specifically, the rate of damping of the magnetic oscillations is the same when the number of particles is constant and when the chemical potential is constant.
Since the contribution to the magnetic oscillations from the condensation energy is in antiphase with the contribution from the normal grand potential, it has been suggested 18 that this will result in a sign change of the fundamental harmonic of the dHvA oscillations for H ≤ H_inv < H_c2. This would happen if the superconducting contribution were to overwhelm the contribution from the normal grand potential deep enough into the superconducting state. Based on an approximate evaluation of the Gor'kov expansion parameters, one can calculate an expression for H_inv. 18 Using our expressions for the damping, we are able to predict that within the region of validity of the perturbative scheme this effect will not occur.
Hence there is no theoretical reason, within perturbation theory, to expect inversion of the magnetic oscillations. This result agrees with the lack of experimental observation of such an effect. It also agrees with our exact numerical solutions of the BdG equations, which show a complete suppression of the magnetic oscillations deep enough into the mixed state. 23 Although there are currently experimental uncertainties about the value of H_c2 in the organic superconductors, a comparison with experimental results for the quasi-2D superconductor κ-(ET)2Cu(NCS)2 yields good agreement between theory and experiment.
The outline of our paper is as follows. Sec. II sets up the formalism for describing the vortex state using both the Bogoliubov-de Gennes equations and perturbation theory. In Sec. III we compare the results of the perturbation theory with the exact numerical solution. The damping of the magnetic oscillations is discussed in Sec. IV, where we give a physical interpretation of the damping; the effect of a finite Zeeman splitting term is discussed, and the case of a conserved number of particles, as opposed to a conserved chemical potential, is considered.
Using approximate expressions for the damping parameters, we are able to give a simple analytical expression for the rate of damping of the dHvA oscillations close to H_c2. The spin dependence and the temperature dependence of the oscillations can then be extracted. We then examine the validity of the arguments leading to a sign change of the first harmonic of the dHvA oscillations. In Sec. VI we compare our analytical theory with experimental results. Finally we summarize our results in Sec. VII.
A. General representation and BdG-equations
We consider a pure 2D electron gas in the x−y plane with a perpendicular magnetic field H along the z-axis. In the Landau gauge, A = (0, Hx, 0), the single-particle eigenstates can be chosen to be

φ_{N,k}(r) = L_y^{−1/2} e^{iky} φ_N((x − kl²)/l),

where φ_N(x) = (2^N N! √π l)^{−1/2} H_N(x) e^{−x²/2}, with H_N being a Hermite polynomial of order N, and l² = ħc/eH is the magnetic length. The size of the system is L_x × L_y. Band structure effects are assumed to be adequately described by employing an electron effective mass m*.
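For illustration, the oscillator part φ_N of these eigenstates can be generated and checked numerically; the following sketch (Python, not part of the original calculation) works in units of the magnetic length l.

import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, sqrt, pi

def phi(N, x):
    # normalized phi_N(x) = (2^N N! sqrt(pi))^(-1/2) H_N(x) exp(-x^2/2)
    coeffs = np.zeros(N + 1)
    coeffs[N] = 1.0                       # select the N-th Hermite polynomial
    norm = 1.0 / sqrt(2.0 ** N * factorial(N) * sqrt(pi))
    return norm * hermval(x, coeffs) * np.exp(-x ** 2 / 2.0)

x = np.linspace(-20.0, 20.0, 4001)
overlap = np.trapz(phi(3, x) * phi(3, x), x)   # ~1: normalization
cross = np.trapz(phi(3, x) * phi(5, x), x)     # ~0: orthogonality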
The B-field is taken to be uniform within the sample thereby ignoring the partial screening by the supercurrents. This approximation holds for strong type-II superconductors (κ ≫ 1) such as the organics, where the penetration depth is much larger than the coherence length.
In the mixed state of a conventional type II superconductor the order parameter forms a vortex lattice. It is therefore advantageous to use a basis set which incorporates this symmetry. We have chosen to use the set of vortex-lattice functions introduced by Norman et al., 17 labeled by quasi-momenta k in the magnetic Brillouin zone (MBZ) defined by the lattice constants a_x and L_y. The symmetry of the order parameter restricts the pairing to be between electrons with quantum numbers k and −k. 13 By adjusting a_x we can obtain both a triangular (a_x = l(√3π/2)^{1/2}) and a square vortex lattice (a_x = l(π/2)^{1/2}). Throughout this paper we choose to work with the triangular lattice, since we expect the free energy to be minimized by this symmetry (except possibly in the re-entrant regime). 17

We are using mean field BCS-theory with a smooth cutoff in the interaction around the Fermi surface, applicable for weak-coupling superconductors. The mean-field Hamiltonian is

H = Σ_σ ∫d²r ψ̃†_σ(r) [H_0 − µ] ψ̃_σ(r) + ∫d²r [Δ(r) ψ̃†_↑(r) ψ̃†_↓(r) + h.c.] + (1/V) ∫d²r |Δ(r)|²,

with H_0 the single-particle Hamiltonian in the field and ψ̃_σ(r) the field operator weighted by w(n) (defined explicitly below), where the order parameter is defined as

Δ(r) = V ⟨ψ̃_↑(r) ψ̃_↓(r)⟩.

This differs from the conventional BCS Hamiltonian, since we have introduced the weight function w(n). It is necessary to have a smooth cutoff in the pairing interaction, since we would otherwise get non-physical effects arising from Landau levels abruptly entering or leaving the pairing region. The weight function w(n) is chosen to be Gaussian, i.e. w(N) ∝ exp[−(ξ_N/0.5ħω_D)²], where ω_D is the pairing width and ξ_N = (N + 1/2)ħω_c − µ. This approach was introduced by Norman et al., 17 although they used a different weighting function. It should be noted that the above slightly unconventional definition of the order parameter is necessary; otherwise the self-consistency condition is not equivalent to minimising the grand potential Ω with respect to Δ(r) (i.e. δΩ/δΔ(r) = 0). In the vortex lattice case the order parameter can be characterized by a finite number of parameters Δ_j, 17

Δ(r) = Σ_j Δ_j Φ_j(r),

where Φ_j(r) is the vortex-lattice function for center-of-mass Landau level j. The Δ_j's are determined self-consistently, as explained in reference 17. Assuming not only translational but also six-fold rotational symmetry of |Δ(r)| gives the restriction 14 that Δ_j ≠ 0 only for j = 0, 6, 12, . . . , where j ≤ 2N_max. N_max is the highest Landau level participating in the pairing.
Using the above transformation, the corresponding BdG-equations split into a set of equations for each k and can be solved numerically. Norman et al. 17 have carried out an extensive numerical investigation of the quasiparticle spectrum and the magnetic oscillations in the superconducting state. We have developed a similar numerical scheme to solve the BdG-equations. In this way we are able to check our analytical results against an exact numerical solution.
B. Perturbative expansion of the grand potential

Since we are interested in the region near H_c2 where the order parameter is small, it is natural to consider the Gor'kov expansion of the grand potential. This can be done either through the equation-of-motion approach originally used, or by using the grand partition function for the symmetry-broken self-consistent Hamiltonian,

Z = ∫ D(ψ*_σ(r, τ), ψ_σ(r, τ)) e^{−S},

where D(ψ*_σ(r, τ), ψ_σ(r, τ)) denotes functional integration over Grassmann variables. We have defined ψ̃_σ(r) = Σ_{n,k} w(n) φ_{n,k}(r) a_{n,kσ}. The (1/V)|Δ(r)|² term corrects for the double counting of the interaction energy in the Hartree-Fock approximation. Expanding the grand potential Ω = −(1/β) ln Z in powers of Δ(r), we obtain to eighth order

Ω_S − Ω_N = Ω_2 + Ω_4 + Ω_6 + Ω_8,

where each term Ω_{2m} contains m powers of Δ(r) and m powers of Δ*(r) integrated against a kernel built from normal-state propagators, and ω_ν = (2ν + 1)πk_BT/ħ are the Matsubara frequencies. Maniv et al. 19 have calculated the expansion up to fourth order in Δ(r) using essentially semiclassical approximations. They used a variational form of the order parameter which has no symmetry built in initially but restricts the electrons to condense in the lowest center-of-mass Landau level (Δ_{j≠0} = 0).
As will be shown below, this restriction introduces no serious error within the region of interest in the phase diagram. Since it is known 17 that the triangular lattice is the minimal energy configuration (except for the re-entrant regime), we have exploited this symmetry to calculate these expansion terms exactly. Because we are using a smooth pairing cutoff in our Hamiltonian we have, instead of the Green's function for the normal state G⁰_σ(r₂, r₁, ω_ν), a weighted propagator in our kernels; the only difference from the Green's function for the normal state is that we have included the weight functions w(n) in the sum over Landau levels, with single-particle energies

ξ_{nσ} = ξ_n + (g m*/2m₀) σ ħω_c.

Using the symmetry of the vortex lattice, the integrals can be solved. To fourth order, Ω_2 and Ω_4 are expressed through a function f(n₁, n₂, n₃, n₄) of the Landau-level indices (eq. 13) and through the coefficients B^{NM}_j, which encode the overlap of two single-particle Landau levels with the center-of-mass Landau level j of a Cooper pair. The sums over states are restricted to Landau levels lying within the pairing width around the chemical potential. Using the standard method of evaluating Matsubara sums by contour integration, the frequency sums can be performed in closed form. The second order term Ω_2, which determines the H_c2 line, agrees, apart from the inclusion of the weight function, with the result of MacDonald et al. 26 and Rajagopal and Ryan. 27 The sixth and eighth order terms Ω_6 and Ω_8 can also be calculated; they are given in appendix A. Thus we have derived the exact quantum mechanical expressions for the expansion coefficients of Ω_S − Ω_N up to eighth order, assuming a vortex lattice. We have not yet restricted the electrons to form pairs with the lowest possible center-of-mass energy (j = 0). The result is a multidimensional polynomial in the Δ_j. Going to eighth order permits us to check the convergence properties of the series. We could in principle calculate the expansion coefficients to any order but, as usual, the algebra gets more cumbersome with increasing order, and the minimization condition cannot be solved analytically for such high orders.
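The contour-integration step can be illustrated numerically. The sketch below (illustrative, with hbar = k_B = 1 and arbitrary energies) checks the standard closed form of the fermionic Matsubara sum underlying the second-order kernel against a brute-force truncated sum.

import numpy as np

# Check: T * sum_nu 1/((i*w_nu - xi1)(-i*w_nu - xi2))
#          = (1 - f(xi1) - f(xi2)) / (xi1 + xi2),  f = Fermi function.
T, xi1, xi2 = 0.1, 0.7, -0.3
nu = np.arange(-200000, 200000)
w = (2 * nu + 1) * np.pi * T                    # Matsubara frequencies

s_direct = (T * np.sum(1.0 / ((1j * w - xi1) * (-1j * w - xi2)))).real

f = lambda x: 1.0 / (np.exp(x / T) + 1.0)       # Fermi-Dirac distribution
s_closed = (1.0 - f(xi1) - f(xi2)) / (xi1 + xi2)
print(s_direct, s_closed)                        # agree up to truncation error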
C. Self-consistency and minimization of Ω_S
The self-consistent determination of Δ(r) ≡ V⟨ψ_↑(r)ψ_↓(r)⟩ is equivalent to minimising the grand potential with respect to Δ(r).28 In the above formulation, which takes into account the spatial symmetry of the order parameter, this reduces to minimising our multidimensional polynomial with respect to the Δ_j. Although this is a standard numerical problem, it is necessary to make further approximations in order to obtain simple analytical results. The instability towards superconductivity is determined by the sign of the expansion coefficients α_j. Above H_c2 we have α_j > 0 for all j. The transition to the mixed vortex state occurs when one of the α_j becomes negative; the system can then lower its energy by making the corresponding Δ_j nonzero. It has been shown that the instability occurs first in the j = 0 channel.26 So we have α_0 < 0 and α_{j≠0} > 0 for H ≲ H_c2, and therefore Δ_0 ≫ Δ_{j≠0}. We can then make the approximation Δ_{j≠0} = 0, i.e., only consider condensation into pairs with lowest-Landau-level center-of-mass motion. We have checked this approximation by solving the BdG equations numerically both with Δ_{j≠0} = 0 and with all the Δ_j allowed to be non-zero. In the region of interest there is essentially no difference between the two solutions, justifying our approximation.
The grand potential now has the Landau form (α = α_0, γ = γ_0000, etc.)

Ω_S − Ω_N = α Δ_0² + γ Δ_0⁴ + κ Δ_0⁶ + η Δ_0⁸,   (18)

and our self-consistency problem is reduced to a simple one-dimensional minimization problem which can be easily solved. To fourth order we have a Mexican-hat potential when we are in the mixed state (α < 0 and γ > 0), and the minimum of the grand potential is obtained for non-zero Δ_0. Requiring ∂(Ω_S − Ω_N)/∂Δ_0 = 0 yields Eq. (19), which is a cubic equation in Δ_0² and can be solved exactly; to fourth order we obtain Δ̄_0² = −α/2γ. Equation (19) thus yields Δ̄_0 and therefore Δ(r) and Ω_S − Ω_N as functions of H. The value Δ̄_0 which minimizes Ω_S − Ω_N is a function of H and T through the coefficients α, γ, κ, η.
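The one-dimensional minimization can be checked in a few lines; the following sketch (with illustrative coefficient values of our choosing) confirms the fourth-order result Δ̄_0² = −α/2γ and the corresponding condensation energy −α²/4γ:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gap_amplitude(alpha, gamma, kappa=0.0, eta=0.0):
    """Minimise Omega_S - Omega_N = a*d^2 + g*d^4 + k*d^6 + e*d^8 over d >= 0.

    To fourth order (kappa = eta = 0) the minimum is at d^2 = -alpha/(2*gamma)
    when alpha < 0, which the numerical search reproduces.
    """
    omega = lambda d: alpha*d**2 + gamma*d**4 + kappa*d**6 + eta*d**8
    res = minimize_scalar(omega, bounds=(0.0, 10.0), method='bounded')
    return res.x, res.fun

# Example: alpha < 0 in the mixed state
d0, cond = gap_amplitude(-1.0, 0.5)
assert np.isclose(d0**2, 1.0, atol=1e-4)      # -alpha/(2 gamma) = 1
assert np.isclose(cond, -0.5, atol=1e-4)      # -alpha^2/(4 gamma)
```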
Because magnetic quantization has been accounted for exactly, all coefficients, and hence Δ_0 and Ω_S − Ω_N, are oscillating functions of H for a given temperature T. Close to H_c2 the condensation energy Ω_S − Ω_N oscillates 180° out of phase with the normal-state contribution Ω_N. This is the origin of the damping of the magnetic oscillations of Ω_S compared to Ω_N. The physical reason for this effect is rather simple, as will be explained in Sec. IV A.
We now consider the magnetization. The grand potential for a free 2D electron gas, Ω_N, can be calculated analytically when only two Landau levels are partially occupied.29,30 For relatively high T, low H, or small g-factor this assumption breaks down, but it is then straightforward to calculate Ω_N numerically. It should be noted that the chemical potential μ is in general a function of H.
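Such a numerical evaluation takes only a few lines. The sketch below uses our own units, represents the H-proportional Landau-level degeneracy by an overall factor hw_c (arbitrary normalization), and includes the Zeeman shifts ±(gm*/4m_0)ħω_c as defined above; these conventions are assumptions for illustration only.

```python
import numpy as np

def omega_normal(mu, hw_c, T, g_eff=0.0, n_max=500):
    """Normal-state grand potential of a free 2D electron gas (sketch).

    Omega_N = -(deg) * kT * sum_{n,s} ln(1 + exp(-xi_{n,s}/kT)), with
    xi_{n,s} = (n + 1/2)*hw_c - mu + s*(g_eff/2)*hw_c, s = +-1/2, and the
    Landau degeneracy (proportional to H) represented by the hw_c prefactor.
    g_eff stands for g*m*/m0.
    """
    n = np.arange(n_max)[:, None]
    s = np.array([0.5, -0.5])
    xi = (n + 0.5) * hw_c - mu + s * 0.5 * g_eff * hw_c
    return -hw_c * T * np.sum(np.logaddexp(0.0, -xi / T))

def magnetization(mu, hw_c, T, dh=1e-5, **kw):
    """M = -(dOmega/dH)_mu via central differences (H parametrized by hw_c)."""
    return -(omega_normal(mu, hw_c + dh, T, **kw)
             - omega_normal(mu, hw_c - dh, T, **kw)) / (2.0 * dh)
```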
In most of this article we have, for simplicity, kept the chemical potential μ fixed, thereby avoiding having to determine μ self-consistently. The oscillatory effect of the chemical potential is most important for low temperatures (k_B T/ħω_c ≲ 0.2) and very clean samples, such that higher harmonics contribute to the magnetic oscillations. In Section IV C we will show that even in this case one can, to a good approximation, consider the chemical potential constant in the mixed state.
III. COMPARISON BETWEEN NUMERICAL DATA AND PERTURBATION EXPANSION
Recently it has been claimed that the degeneracy of the Landau levels should give rise to non-perturbative terms in the expression for Ω_S − Ω_N, making the Gor'kov theory invalid. For finite temperature there should then be a non-perturbative Δ_0³ term in Eq. (18), resulting in many interesting thermodynamic effects.21 It is therefore important to establish the validity of the perturbation theory developed in the preceding sections, so that we can use it to derive results instead of relying on a cumbersome numerical solution. This is essential when many Landau levels participate in the pairing, since the computation time of the numerical solution becomes very long in this regime. In order to estimate the accuracy of our perturbation expansion, we compare it to an exact numerical solution of the corresponding BdG equations.
As mentioned earlier, we have set up a code which solves these equations self-consistently.
We have chosen parameters such that ω_D/ω_c = 5, V/(ħω_c ℓ²) = 8.2, and k_B T/ħω_c = 0.28 when n_f = 12. In Fig. 1 we show the order parameter Δ̄_0 as a function of the magnetic field; the chemical potential μ is fixed. We have plotted the numerical solution together with the fourth-order and eighth-order perturbative solutions. There is good agreement between the numerical solution and our perturbation expansion at both fourth and eighth order, and the general behaviour of Δ̄_0 is correctly predicted by both. In Fig. 2 we have plotted the condensation energy Ω_S − Ω_N, measuring energies in units of ħω_c. It is apparent that Ω_S − Ω_N has local minima for integer n_f. Since Ω_N has local maxima for integer n_f, the condensation energy oscillates 180° out of phase with the normal-state contribution Ω_N. We therefore get partial cancellation of the normal-state oscillations and a damping of the dHvA oscillations. This is seen in Fig. 3, where we have plotted the magnetization M ≡ −(∂Ω/∂H)_μ for both the normal state and the mixed state. When the superconducting order starts to increase at n_f ≃ 10, we get significant damping of the dHvA oscillations. Again the agreement with the numerical data is good as long as n_f ≲ 12. The eighth-order theory tends to agree better with the numerical data than the fourth-order theory, indicating that the perturbation expansion is valid. Once we go too far into the superconducting state, the perturbation theory starts to disagree with the numerical results, as expected. We see from Fig. 1 and Fig. 2 that the magnitudes of Δ̄_0 and Ω_S − Ω_N are still fairly well described for n_f > 12, but both the fourth- and eighth-order expansions start to pick up spurious oscillations in the order parameter and in the energy: Ω_S − Ω_N actually starts to oscillate in phase with Ω_N according to the perturbation theory. This gives an enhancement of the dHvA oscillations in the mixed state compared to the normal state, as seen in Fig. 3. This is an unphysical effect and is absent in the exact solution. Since this enhancement is confirmed neither numerically nor experimentally, we conclude that perturbation theory in the single parameter Δ_0 breaks down at this point. It can be shown22 that the Gor'kov expansion is convergent if the change in the quasiparticle energies |E^η_k − ξ_η| is not larger than O(k_B T). We have examined the numerically calculated quasiparticle energies as a function of n_f; as expected, and in agreement with Norman et al.,17 the changes in the quasiparticle energies grow beyond this bound once we move deeper into the mixed state, consistent with the observed breakdown of the expansion.
A. Physical interpretation
To get a physical understanding of the superconducting damping of the magnetic oscillations, it is helpful to consider the ground state, which gives the dominant contribution to the grand potential at low temperatures. By analogy with the zero-field case,31 our numerical solution is based on a canonical (Bogoliubov) transformation, where u^η_{Nk} is the coefficient of φ_{Nk}(r) and v^η_{Nk} is the coefficient of φ*_{N,−k}(r) in the Bogoliubov amplitudes u(r) and v(r) for the η'th solution. The corresponding ground state of our mean-field Hamiltonian is built on a state |Ψ⟩ in which all single-particle states with energy less than μ − ω_D are occupied and all single-particle states with energy higher than μ + ω_D are empty. We see that Eq. (23) gives a coherent superposition of states in which the pairs â†_{Nk}â†_{N′,−k}|0⟩ are either occupied or unoccupied. When we have a Landau level at the chemical potential μ (integer n_f), it costs no kinetic energy to make a superposition of states with either occupied or unoccupied pairs formed by electrons in that level. The instability towards superconductivity is therefore largest when μ = (n + 1/2)ħω_c. Since the grand potential of the normal state is at a maximum32 when μ = (n + 1/2)ħω_c, Ω_S − Ω_N and Ω_N oscillate 180° out of phase. This analysis holds both for constant chemical potential and for constant particle number; in the latter case one works with the Helmholtz free energy, but the conclusions are the same. Mathematically, the maximum in the damping comes from the fact that, when the chemical potential is at a Landau level, the sum in Eq. (11) is dominated by the terms with zero denominators, as an application of l'Hôpital's rule to these terms confirms.
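The l'Hôpital step is elementary. Writing the zero-denominator terms in the schematic BCS form (an assumption about the detailed structure of Eq. (11)), one has

```latex
\lim_{\xi \to 0} \frac{1 - 2f(\xi)}{2\xi}
  = \lim_{\xi \to 0} \frac{\tanh(\beta\xi/2)}{2\xi}
  = \frac{\beta}{4},
```

so each such term contributes a factor that grows like 1/k_B T, making α(H) most negative, and the superconducting instability strongest, when a Landau level coincides with μ.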
Hence α(H) has a local minimum and the superconducting order a local maximum. This is the physical picture of the damping of the magnetic oscillations that naturally emerges from our formalism.
Norman et al.17 interpret the damping of the magnetic oscillations as an effect of the broadening of the Landau levels due to superconducting order. An alternative explanation has been put forward by P. Miller and B. L. Györffy,16 which emphasizes the role of non-diagonal pairing. There is in fact an intimate link between the two approaches, which we now elucidate with the following simple calculation. We estimate Ω_S − Ω_N (for simplicity at T = 0) for two cases: (I) the chemical potential is at a Landau level (integer n_f; maximum of the free energy) and (II) the chemical potential lies exactly between two Landau levels (n_f half an odd integer; minimum of the free energy). In both cases we diagonalize the BdG equations approximately, but insist on using degenerate perturbation theory, because the diagonal approximation breaks down. When n_f is an integer, the lowest-lying quasiparticle excitations have the orbital character of the n_f-th Landau level, and perturbation theory yields the quasiparticle energy E_{n_f k} = |F_{n_f k}|. It can easily be seen that the contributions of the other Landau levels to the ground-state energy cancel pairwise within degenerate perturbation theory (essentially because, within degenerate perturbation theory, level repulsion is symmetric with respect to the unperturbed degenerate level). The reduction of the maximum of the free energy in case (I) is therefore of first (linear) order in the pairing self-energy.
A similar calculation for case (II), when n_f is half an odd integer and the free energy is a minimum, gives an energy shift which is of higher than linear order in the pairing self-energy, because degenerate perturbation theory now leads to complete pairwise cancellation for all Landau levels to first order in the pairing self-energy. Therefore the minimum of the oscillation is reduced by substantially less than the maximum, which shows that the damping of the oscillations is a direct consequence of the broadening of the quasiparticle levels accompanied by the mixed orbital character of the quasiparticle excitations.

B. Effect of the Zeeman splitting

When only the first harmonic is important, the amplitude of the oscillations in the mixed state is reduced by a factor cos(π g m*/2m_0). This is the same reduction as in the normal state, and hence the relative damping due to superconductivity is insensitive to spin splitting. This result will be proved in Section V. We have confirmed it by solving the BdG equations numerically with and without a finite Zeeman splitting. The reduction in amplitude in both the mixed and the normal state, compared to the amplitude with no spin splitting, corresponds very well to a cos(π g m*/2m_0) factor in the region where the mixed state is described well by the perturbation expansion. Deeper into the mixed state the numerical results indicate that the effect of spin is suppressed by the superconducting order: the reduction in the amplitude of the magnetic oscillations due to a finite Zeeman term is then less than the cos(π g m*/2m_0) factor. This is because, as the superconducting order increases, the pairing interaction starts to dominate the Zeeman term, and the effect of any finite g-factor is suppressed.
We conclude that, within the region described well by our perturbative expansion, a finite Zeeman term does not alter the rate of damping of the magnetic oscillations due to superconductivity. When only the first harmonic is important, the effect of the Zeeman term is simply a reduction of the oscillation amplitude by a factor cos(π g m*/2m_0) in both the mixed and the normal state. Deeper into the mixed state the superconducting order starts to suppress the effect of the spin splitting, and the magnetic oscillations are less affected by a finite Zeeman term. Hence in this region the relative size of the magnetic oscillations in the mixed state, compared to the normal state, is larger for finite spin splitting, and the damping is less efficient than in the g = 0 case.
C. Conserved number of particles
For two-dimensional systems with a fixed number of particles it is well known32 that the magnetic-field dependence of the chemical potential μ(H) has a strong effect on the magnetic oscillations in a normal metal when higher harmonics are important. For low temperatures and clean samples, the shape of the oscillations looks qualitatively different when the chemical potential is fixed compared to when the number of particles is fixed. Up to now we have mainly considered the case of a constant chemical potential. When the number of particles is held fixed, we must consider the Helmholtz free energy F = Ω + Nμ. The chemical potential is determined by the particle-number equation (24), involving the occupation factors f^η_{σk} = [exp(βE^η_{σk}) + 1]^{−1}. This is a numerically cumbersome problem, since we need to solve the BdG equations self-consistently for a given chemical potential, then calculate ⟨N⟩, and repeat the calculation for a new value of μ until Eq. (24) is obeyed. However, it is essential that the chemical potential be determined self-consistently: if we naively assumed that the chemical potential oscillates as in the normal state, we would obtain persistent magnetic oscillations of the free energy even when the Landau-level structure is completely destroyed by the superconducting order. In Fig. 4 we have plotted the magnetization when the chemical potential is constant (✷) and when the number of particles is constant (*) for a very low temperature. We have chosen parameters such that ω_D/ω_c = 5, V/(ħω_c ℓ²) = 9.0, k_B T/ħω_c = 0.05, and g m*/m_0 = 1 when n_f = 12. For comparison, the solid and dotted lines give the magnetization in the normal state for n_f ≳ 8.2 for conserved μ and N, respectively. We see that there is a significant difference between the two curves only close to H_c2 (n_f ≈ 7.7 at H_c2), where the chemical potential behaves differently in the two cases.
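The outer loop over μ is a one-dimensional root-finding problem on Eq. (24). A sketch follows; E_of_mu is a hypothetical helper wrapping the self-consistent BdG solver, and the number equation uses the standard BdG form.

```python
import numpy as np
from scipy.optimize import brentq

def particle_number(mu, E_of_mu):
    """<N>(mu) from the BdG spectrum. E_of_mu(mu) is assumed to return the
    positive quasiparticle energies E, the coherence factors |u|^2 and |v|^2,
    and beta, after solving the gap equation self-consistently at this mu."""
    E, u2, v2, beta = E_of_mu(mu)
    f = 1.0 / (np.exp(beta * E) + 1.0)
    # Standard BdG number equation: N = sum_eta [ |u|^2 f + |v|^2 (1 - f) ]
    return np.sum(u2 * f + v2 * (1.0 - f))

def chemical_potential(N_target, E_of_mu, mu_lo, mu_hi):
    # <N>(mu) is monotonic in mu, so bracketed root finding is safe
    return brentq(lambda mu: particle_number(mu, E_of_mu) - N_target,
                  mu_lo, mu_hi, xtol=1e-10)
```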
Deeper into the mixed state the oscillatory behaviour of the chemical potential is damped by the superconducting order and it becomes practically constant. This is illustrated in Fig. 5, where we have plotted n_f = μ(H)/ħω_c − 0.5 as a function of the magnetic field (H_c2 ≈ 1.5) when the number of particles N is constant (solid line) and when the chemical potential is constant (dashed line). We see that the oscillations in the chemical potential at constant N are damped in the mixed state. Once the superconducting order has damped the oscillations in the magnetization, it has also damped the oscillations in μ(H), and the behaviour for fixed N is essentially the same as for fixed μ. Thus, although there is some difference in the dHvA signal close to H_c2 for conserved N as opposed to fixed μ, the overall rate of damping of the oscillations is the same in the two cases.
V. SIMPLIFIED FORM FOR THE DAMPING
A. The first harmonic of the condensation energy

To obtain a simple form for the damping, we must take a closer look at the coefficients α(H) and γ(H) given in Eq. (11) and Eq. (12). As mentioned already, the transition to the mixed state occurs when α(H) changes sign. The Gor'kov expansion is most relevant for temperatures such that only the lowest harmonics of the dHvA signal are significant. This allows us to focus on only the zeroth and first harmonics of the relevant quantities. Thus we take α(H) to have the form

α(H) = a_1(1 − H_c2/H) − a_2 cos(2πn_f),   (25)

where a_1 > 0 and a_2 > 0. The coefficients a_1 and a_2 will in general depend weakly on the magnetic field, but we assume they are constant. This is reasonable since, for μ/ħω_c ≫ 1, the rate of change of a_1 and a_2 is very slow compared to the frequency μmc/ħe of the oscillations. The essential physics comes from the sign change of α(H) and its oscillatory behaviour, combined with the features of γ(H) described below. For simplicity we confine ourselves to fourth-order perturbation theory. The fourth-order coefficient γ(H) has the form

γ(H) = g_1 + g_2 cos(2πn_f),   (26)

where g_1 > 0 and g_2 > 0. Again both g_1 and g_2 depend on the magnetic field, but this dependence is weak compared to the strong oscillatory behaviour coming from the Landau-level structure. Note that the first harmonics of α(H) and γ(H) have opposite signs.
In Section V B we will extract estimates of a_2 and g_2 from Eq. (11) and Eq. (12), whereas approximate expressions for a_1 and g_1 are given in Appendix B. Using these approximate forms for α(H) and γ(H) in the fourth-order result Ω_S − Ω_N = −α²/4γ, and assuming g_2 ≪ g_1, we get an approximate form for the first harmonic of Ω_S − Ω_N to first order in g_2/g_1, where Ω(H)_n denotes the n'th harmonic of Ω(H). It should be recalled that this expression is only valid for α(H) < 0. When we are deep enough into the superconducting state to be away from the reentrance region, we have a_1(H_c2/H − 1) > a_2. This means 3a_2²g_2/4g_1² ≪ a_2²/g_1 < (a_1a_2/g_1)(H_c2/H − 1), and we can neglect the small constant term. We thus get the following form for the first harmonic of the grand potential:

(Ω_S − Ω_N)_1 = −[(a_1a_2/2g_1)(H_c2/H − 1) − (a_1²g_2/4g_1²)(H_c2/H − 1)²] cos(2πn_f),   (28)

where the reduction of the normal-state oscillation is written in square brackets; the reduction due to finite temperature enters through a_2 and g_2.32
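As a consistency check of the reconstructed harmonic forms above (the precise phase conventions in Eqs. (25) and (26) are our assumption, chosen so that superconductivity is strongest when a Landau level lies at μ), write x ≡ H_c2/H − 1 and expand −α²/4γ to first order in the oscillatory parts:

```latex
\Omega_S-\Omega_N=-\frac{\alpha^2}{4\gamma}
 \simeq -\frac{a_1^2 x^2}{4 g_1}
 -\underbrace{\left[\frac{a_1 a_2 x}{2 g_1}
   -\frac{a_1^2 g_2 x^2}{4 g_1^2}\right]}_{A(x)}\cos(2\pi n_f).
```

The antiphase amplitude A(x) is maximal at x* = a_2g_1/a_1g_2, where A(x*) = a_2²/4g_2, and changes sign at x = 2x*; this reproduces the statements made in Sec. V C and the spurious enhancement observed in Sec. III.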
B. Calculation of the oscillatory terms
In this section we derive approximate expressions for the coefficients a_2 and g_2.
We are interested in how a_2 and g_2 depend on the parameters n_f, ω_D, and T. It turns out to be fairly straightforward to extract this dependence for the oscillatory terms. First we note an approximate identity for the pairing coefficients, following from the law of large numbers [Eq. (31)], valid when |n_1 − n_2|/n_1 ≪ 1 and n_1 ≃ n_f (i.e., ω_D/ω_c ≡ 2σ ≪ n_f). Using the Poisson summation formula we write the sums in Eq. (11) as integrals I_{l,m}, where ω′_ν = ω_ν/ω_c. The first harmonic of α(H) comes from the terms with |l − m| = 1.
Taking m = 1 and l = 0 yields the integral I_{0,1}, which we approximate by a Gaussian form, since we have assumed 8n_f ≪ σ². The resulting integral can be solved exactly. After some algebra, the calculations outlined above, combined with Eq. (11), lead to the approximate result of Eq. (37): up to a numerical prefactor, a_2 ∝ (k_B T/√n_f) e^{−2π²k_B T/ħω_c}. This result, that a_2 is proportional to 1/√n_f and to k_B T e^{−2π²k_B T/ħω_c} and is independent of ω_D, remains correct even when σ² ≪ 8n_f, as long as min(σ, √n_f) ≫ 1 and 2π²k_B T/ħω_c ≳ 1. Inclusion of spin is equivalent to making the substitutions x → x + gm*/4m_0 and y → y − gm*/4m_0 in the integrals I_{l,m}. This results in a reduction factor cos(π gm*/2m_0) in Eq. (37) if min(√n_f, σ) ≫ g.
The calculations for g_2 are very similar to the ones above. Using Eq. (12) and the Poisson formula we end up with integrals determining the dependence of γ on n_f, T, and ω_D, where Ξ^{j_1,j_2}_{j_3,j_4} is given in Eq. (14). Contributions to the first harmonic g_2 come from the terms with |l_1 + l_3 − l_2 − l_4| = 1. As in the case of a_2, we can neglect the terms with more than one l_i different from zero when 2π²k_B T/ħω_c ≳ 1. Although we do not have any simple expression for Ξ^{x_1+x_4,x_2+x_3}_{x_1+x_2,x_3+x_4}, we can still extract the dependence on T, ω_D, and n_f, because the integral over x_2 . . . x_4 does not vary appreciably with x_1 on a scale ≃ ω′_{ν=0}. Using this result for any well-behaved function f(x) which varies slowly for x ≲ ω′, and taking l_1 = 1, l_2 = l_3 = l_4 = 0, we get the remaining integral. The factors 1/(iω′ ± x_j) in the integrand make the integral largely independent of any long-range behaviour governed by σ and n_f, as long as |ω′| ≪ min(σ, √n_f). We therefore conclude that g_2 is independent of ω_D and that it depends on n_f only through the n_f^{−1} factor coming from the four B^{N,M}_0 coefficients. We also obtain that g_2 is proportional to k_B T e^{−2π²k_B T/ħω_c}, so that, up to a numerical prefactor, g_2 ∝ (k_B T/n_f) e^{−2π²k_B T/ħω_c} [Eq. (40)]; the proportionality constant is found through an exact evaluation of γ given in Eq. (12). Again the effect of spin (i.e., a non-zero g-factor) provides an additional factor cos(π gm*/2m_0) in Eq. (40). It is not surprising that the oscillatory terms a_2 and g_2 are independent of the pairing width ω_D, since the oscillations are a consequence of individual Landau levels passing through the chemical potential. Likewise, the 1/√n_f and 1/n_f dependences reflect the fact that the probability for two electrons, each with energy (n + 1/2)ħω_c, to form a pair with minimum center-of-mass energy is proportional to 1/√n for high quantum numbers, as can be seen from Eq. (31). This proportionality can be explained via simple phase-space considerations.
We have tested the dependence of a_2 and g_2 on the different parameters n_f, ω_D, and T, and we find excellent agreement with our approximate forms.
To facilitate comparison with earlier papers, we now formally treat the order parameter Δ(r) as a free parameter and assume that the oscillatory behaviour of Eq. (18) comes only from the harmonics of the expansion coefficients α, γ, etc. This is of course incorrect, since the self-consistent order parameter is itself an oscillatory function of the field, making results in which the corrections to the harmonics of the dHvA oscillations due to superconductivity are expressed as a power series in Δ11,12,33 of limited validity. However, to compare with the earlier predictions, we ignore for the moment the oscillations in Δ_0 and treat it formally as a free parameter, i.e., (Ω_S − Ω_N)_1 = a_2Δ_0² − g_2Δ_0⁴ + . . . . Here we focus on the Δ⁴ term, since the predictions of different authors disagree for this term. Using Eq. (40) and Eq. (30) we obtain the formal result for the fourth-order term. Stephen12 obtained ∼16Ω_N1/n_f (Δ/ħω_c)⁴ for the same quantity using a different semiclassical approach. The n_f dependence of the two results agrees, but the numerical prefactors are somewhat different. The above arguments for the n_f dependence of g_2 can easily be generalised, yielding that the n_f dependence of the first harmonic of the Δ^{2n} term is n_f^{−n/2}. This agrees with the result obtained by Stephen, whereas it disagrees with the n_f^{−3/2} dependence of the Δ⁴ term obtained by Maniv et al.33 We cannot overemphasize that this scheme for calculating the damping of the oscillations due to superconductivity is incorrect, since it ignores the oscillations of Δ as a function of the field. To include those we have to use a self-consistent order parameter and hence Eq. (28).
One debated issue is the possibility of reentrance for type-II superconductors.10 The oscillatory behaviour of α due to the Landau-level structure opens the possibility of several solutions of α(H) = 0 for a given temperature. This should be reflected in a highly oscillatory behaviour of the transition line H_c2(T, H). Such oscillatory behaviour has never been observed experimentally. Using the approximate expressions for a_1 and a_2 we can estimate the temperature below which there is reentrance and such oscillations in H_c2 should occur in a two-dimensional metal. We find that with no impurity scattering, no Zeeman splitting, and n_f ∼ O(10²), one should observe these oscillatory effects in H_c2 in a 2D metal for temperatures lower than k_B T/ħω_c ≈ 0.3. However, inclusion of spin reduces the amplitude of the oscillations of α by a factor cos(π gm*/2m_0) close to the transition line. Assuming that impurities reduce the oscillations by the usual Dingle factor exp(−2π²k_B T_D/ħω_c), where T_D is the Dingle temperature, we find that there will be no reentrance once the combined thermal and impurity smearing is large enough; for the experimental situation considered in Sec. VI, where |cos(π gm*/2m_0)| ≈ 0.3, this condition is comfortably met, and such experiments will therefore never observe these reentrance effects. The magnetic oscillations in the thermodynamic quantities will of course still be present, since α and γ remain oscillatory.
C. Approximate results for damping
In this section we draw some conclusions from the general form of the damping of the dHvA oscillations due to the growth of the superconducting order, as described by Eq. (28). The first thing we notice is that, in this approximation, the superconducting damping has a simple polynomial form in (H_c2/H − 1). The damping is maximal for (H_c2/H − 1) = a_2g_1/a_1g_2. For (H_c2/H − 1) > a_2g_1/a_1g_2 the damping decreases as we go deeper into the superconducting state, and for (H_c2/H − 1) > 2a_2g_1/a_1g_2 the magnetic oscillations are enhanced by the superconducting order. This explains the observations made in Section III. The in-phase oscillations between the fourth-order Ω_S − Ω_N and Ω_N are due to the oscillatory behaviour of γ(H): since γ(H) oscillates in phase with Ω_N, we get an enhancement of the oscillations of Ω_S compared to Ω_N when the smooth part of α(H) is sufficiently large. Again we must emphasize that this is a sign that our perturbative scheme has broken down, and does not reflect any physical effect.
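In code, the damping profile and its characteristic points read as follows (the coefficient values are illustrative placeholders, not fitted numbers):

```python
import numpy as np

def damping_amplitude(x, a1, a2, g1, g2):
    """Antiphase amplitude A(x) of (Omega_S - Omega_N)_1, x = Hc2/H - 1,
    from Eq. (28): maximal at x* = a2*g1/(a1*g2), zero again at 2*x*."""
    return a1 * a2 * x / (2 * g1) - (a1 * x) ** 2 * g2 / (4 * g1 ** 2)

a1, a2, g1, g2 = 1.0, 0.2, 1.0, 0.05
x_star = a2 * g1 / (a1 * g2)                 # depth of maximum damping
assert np.isclose(damping_amplitude(x_star, a1, a2, g1, g2), a2**2 / (4*g2))
assert np.isclose(damping_amplitude(2*x_star, a1, a2, g1, g2), 0.0)
```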
To make quantitative predictions we need our approximate expressions for the a_i and g_i. Since we only have very good approximations for a_2 and g_2, and for the temperature and spin dependence of a_1 and g_1, we concentrate on properties that can be derived from these results. From Eq. (28) and the temperature dependence of the a_i and g_i we conclude that the first harmonic of the condensation energy (Ω_S − Ω_N)_1 is proportional to k_B T e^{−2π²k_B T/ħω_c}; this means that the magnetic oscillations have the same temperature dependence in the mixed state as in the normal state. This result agrees with the general theory (see Schoenberg,32 Secs. 2.5 and 2.3), valid for any part of the grand potential which is proportional to cos(2πμ/ħω_c), and it is also confirmed by experimental observations.34 Likewise, the effect of spin on (Ω_S − Ω_N)_1 is a reduction of the amplitude by a factor cos(π gm*/2m_0). This is the same reduction factor as for the oscillations in the normal state.32 We thus have no extra damping effects due to spin close to the transition line, where the perturbation theory is valid.
We can now examine whether the arguments based on the Gor'kov expansion leading to a sign change of the first harmonic are valid. Naively one would expect a sign change, since the contribution from the condensation energy to the magnetic oscillations is in antiphase with the normal-state oscillations; when the system is deep enough into the mixed state, the superconducting oscillations would dominate, leading to a sign change of the magnetic oscillations. Extrapolating the rate of damping close to H_c2 obtained from the Gor'kov expansion, Maniv et al.18 have estimated the magnetic field H_inv < H_c2 at which this sign change should occur. We are now able to show that this argument based on the perturbative expansion of the grand potential is incorrect. From Eq. (28) we obtain that the maximum amplitude of the antiphase oscillations of Ω_S − Ω_N is given by a_2²/4g_2. Using our approximate expressions for a_2 and g_2, and comparing this amplitude with the contribution from the normal-state oscillations given in Eq. (30), we see that our perturbation scheme predicts a maximum damping of roughly 50%.
It must be emphasized that this does not mean that the damping of the model described by the Hamiltonian in Eq. (3) has a maximum of 50%. However, using the result above combined with the results of Section III, we can conclude that neither the argument based on the Gor'kov expansion nor the arguments based on a simplified form of the quasiparticle spectrum, both of which lead to an inversion of the first harmonic of the dHvA signal, are valid.
VI. COMPARISON WITH EXPERIMENT
In this section we present a typical result for the damping of the magnetic oscillations based on the exact evaluation of α(H) and γ(H) from Eq. (11) and Eq. (12). We used a set of parameters such that k_B T/ħω_c = 0.25, V/(ħω_c ℓ²) = 2.315, and ω_D/ω_c = 75 when n_f = 175. There is no Zeeman effect and the chemical potential is conserved. In Fig. 6 we have plotted the magnetization for both the normal state and the mixed state, calculated from the perturbative expansion to fourth order, as a function of n_f. The perturbation theory predicts a substantial damping of the oscillations over many periods, reaching a maximum for n_f ≃ 170. At the maximum the first harmonic is damped by approximately 50%, in agreement with the result of the previous section. As we go deeper into the mixed state, the damping decreases according to the perturbative scheme. Based on the results in Section III, we expect the perturbation theory to describe the damping well for n_f ≲ 170. Due to the large number of Landau levels involved in pairing, we have not undertaken the exact numerical calculation for this set of parameters. In Fig. 7 we have plotted M_s calculated from the exact evaluation of α(H) and γ(H) and from Eq. (28). We see that the simplified expression reproduces the perturbative predictions well.
The above parameters approximate the experiment performed by van der Wel et al.2 on the essentially 2D organic superconductor κ-(ET)_2Cu(NCS)_2. To compare with the experimental data we formulate our results in terms of a field-dependent quasiparticle scattering time τ, defined such that e^{−π/ω_cτ} gives the damping of the first harmonic of the dHvA oscillations due to superconductivity. From Eq. (29), using Eq. (28), we obtain τ^{−1}; the approximate equality is only valid for a_2g_1/a_1g_2 ≪ H_c2/H − 1. Using the expressions for a_i, g_i, and Ω_N1 we can now compare this expression with the experimental observations. Unfortunately, the experimental value of H_c2 is uncertain: the transition from the normal state to the superconducting state occurs over a field range of approximately 2 T.35 This gives a smooth variation of τ^{−1} on entering the mixed state, which our theory cannot account for. To model this transition region we use the method introduced in ref. 2, including a Gaussian spread in H_c2. In Fig. 8 we have plotted the experimental data for τ^{−1} (bars), measured in THz, as a function of 1/B, measured in Tesla^{−1}.
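A sketch of the Gaussian-spread evaluation follows; damping_of_x is a stand-in for the perturbative damping factor of the first harmonic obtained from Eq. (28), and the names and the Monte Carlo averaging are our own illustrative choices.

```python
import numpy as np

def tau_inv(hc2_mean, hc2_sigma, H, omega_c, damping_of_x, n_samples=2000):
    """Effective scattering rate 1/tau from exp(-pi/(omega_c*tau)) = <D>.

    D(x) is the damping factor of the first dHvA harmonic at depth
    x = Hc2/H - 1 (a vectorized stand-in for the result of Eq. (28)), and
    <...> is a Gaussian average over the spread in Hc2, following ref. 2.
    """
    hc2 = np.random.default_rng(0).normal(hc2_mean, hc2_sigma, n_samples)
    x = np.clip(hc2 / H - 1.0, 0.0, None)   # no damping above Hc2 (x <= 0)
    D = np.mean(damping_of_x(x))
    return -(omega_c / np.pi) * np.log(D)
```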
The solid line is our theoretical prediction based on Eq. (44), including a Gaussian spread in H_c2. The agreement between theory and experiment is good. It should be noted that we have no fitting parameters apart from H_c2. However, without a more reliable measurement of H_c2, a precise comparison between our theory and the experimental observations cannot be made.
VII. CONCLUSION
In this paper we have examined the dHvA oscillations in the mixed state of a type-II superconductor in the 2D limit, using both a numerical solution of the BdG equations and an analytical theory based on a self-consistent Gor'kov expansion. The use of translational and rotational symmetry has simplified the analysis such that we have been able to calculate the expansion coefficients exactly to any order without using semiclassical or other approximations. Comparison with the exact numerical solution has shown that perturbation theory works well close to H_c2, thereby disproving recent claims of non-perturbative effects. We have found that the condensation energy oscillates in antiphase with the normal-state grand potential, thus producing a damping of the dHvA oscillations, in agreement with numerical and experimental results. The damping is directly connected with the enhancement of superconductivity when a Landau level lies at the chemical potential. We have excluded the possibility of a sign change of the first harmonic of the dHvA oscillations in the mixed state. The effects of spin and of a conserved number of particles, as opposed to a conserved chemical potential, were examined. Using a simple approximate form of our analytical theory, valid when many Landau levels participate in pairing, we have compared our theory with an experiment on the quasi-2D organic superconductor κ-(ET)_2Cu(NCS)_2 and found good agreement. However, due to the experimental uncertainty in H_c2, a precise quantitative comparison is not yet possible.
VIII. ACKNOWLEDGEMENTS
We would like to thank J. Singleton and S. Hayden for many helpful discussions. This work has been supported in part by EPSRC grant GR/K 15619 (VNN and NFJ) and by The Danish Research Academy (GMB).
APPENDIX A:
Using the symmetry of the vortex lattice and making the restriction Δ_{j≠0} = 0, we obtain the sixth-order term Ω_6. Likewise, the eighth-order term Ω_8 for Δ_{j≠0} = 0 involves the coefficients Ξ^{n_1+n_2,n_3+n_4,n_5+n_6,n_7+n_8}_{n_7+n_6,n_5+n_4,n_3+n_2,n_1+n_8}. The Matsubara sums and the k-sums can be calculated as in the fourth-order case.
APPENDIX B:
In this appendix we extract the dependence of a_1 and g_1 on n_f, T, σ, and spin. This is considerably harder than for a_2 and g_2, because there is no oscillatory factor in the integrals that would render the long-range behaviour of the remaining integrand insignificant. It turns out that it is still fairly straightforward to derive the temperature and spin dependence of a_1 and g_1, whereas we have to make some rather drastic approximations to obtain the dependence of g_1 on n_f and σ.
The smooth part (zeroth harmonic) of α(H) comes from the terms I_{l,l} in Eq. (32). We first look at the term l = m = 0. Making the variable substitution v = (x + y)/σ√2, u = (x − y)/σ√2, we obtain an integral in which K = βħω_c/2√2 ≫ 1 determines the temperature dependence. Since K matters only around the region v ≃ 0, which does not contribute significantly to the integral, we conclude that I_{0,0} is independent of the temperature to a very good approximation. Since calculations similar to those in Sec. V B show that for 2π²k_B T/ħω_c ≳ 1 we can neglect the contribution to the zeroth harmonic from the I_{l,l} terms with l ≠ 0, we conclude that a_1 is independent of the temperature for temperatures that are not too low. We have checked this against the exact result given in Eq. (11) and found very good agreement. To obtain the dependence on n_f and σ we make a simplification which is very accurate since K ≫ 1, and which is exact for T = 0. The integral can then be solved; assuming σ² ≫ 4n_f, we obtain the result that a_1 is essentially independent of n_f and σ. The expression for a_1 is also independent of any spin effects for min(√n_f, σ) ≫ g. We have again checked the independence of a_1 of n_f, σ, and spin against the exact result and find very good agreement.
Using Eq. (16) we can rewrite the integral with l_1 = l_2 = l_3 = l_4 = 0 in a form where K = ħω_c/k_B T determines the temperature dependence. Again, for min(√n_f, σ) ≫ g, g_1 is independent of spin effects. As in the case of a_1, it is fairly straightforward to see that, since 1/K ≪ min(√n_f, σ), the integral and therefore g_1 are independent of the temperature to a very good approximation. We have checked this independence against the exact result given in Eq. (12) and find very good agreement.
The dependence on the B^{j_1 j_2}_0 factors in Eq. (12) is unaltered, and we still find that g_1 ∝ 1/n_f for min(√n_f, σ) large, and that g_1 is independent of σ and the temperature. By calibrating g_1 through an exact evaluation based on Eq. (12), we obtain the expression for g_1 defined in Section V A. It should be noted that the dependence of g_1 on n_f and σ in this expression is only approximate and rests on the various simplifications made. We have tested the expression against the exact result and find that the dependence on n_f and σ fits to an accuracy of 20%.
"year": 1996,
"sha1": "e1c98eafca513f36e338a939a5efbd72d82c194f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/supr-con/9608004",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b890b86e38df9fd5279557a29f145302a996e217",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Exploring the Multifunctionality of Mechanochemically Synthesized γ-Alumina with Incorporated Selected Metal Oxide Species
γ-Alumina with incorporated metal oxide species (including Fe, Cu, Zn, Bi, and Ga) was synthesized by liquid-assisted grinding, a mechanochemical synthesis, applying boehmite as the alumina precursor and suitable metal salts. Various contents of metal elements (5 wt.%, 10 wt.%, and 20 wt.%) were used to tune the composition of the resulting hybrid materials. Different milling times were tested to find the most suitable procedure for preparing porous alumina incorporated with selected metal oxide species. The block copolymer Pluronic P123 was used as a pore-generating agent. Commercial γ-alumina (S_BET = 96 m²·g⁻¹) and a sample fabricated after two hours of initial grinding of boehmite (S_BET = 266 m²·g⁻¹) were used as references. Analysis of another sample of γ-alumina, prepared within 3 h of one-pot milling, revealed a higher surface area (S_BET = 320 m²·g⁻¹) that did not increase with a further increase in the milling time; three hours of grinding was therefore set as optimal for this material. The synthesized samples were characterized by low-temperature N2 sorption, TGA/DTG, XRD, TEM, EDX, elemental mapping, and XRF techniques. The higher loading of metal oxide into the alumina structure was confirmed by the higher intensity of the XRF peaks. Samples synthesized with the lowest metal oxide content (5 wt.%) were tested for selective catalytic reduction of NO with NH3 (NH3-SCR). For all tested samples besides pristine Al2O3 and alumina incorporated with gallium oxide, an increase in reaction temperature accelerated the NO conversion. The highest NO conversion was observed for Fe2O3-incorporated alumina (70%) at 450 °C and CuO-incorporated alumina (71%) at 300 °C. CO2 capture was also studied for the synthesized samples, and the sample of alumina with incorporated Bi2O3 (10 wt.%) gave the best result (1.16 mmol·g⁻¹) at 25 °C, while alumina alone adsorbed only 0.85 mmol·g⁻¹ of CO2. Furthermore, the synthesized samples were tested for antimicrobial properties and found to be quite active against the Gram-negative bacterium P. aeruginosa (PA). The measured minimum inhibitory concentration (MIC) values for the alumina samples with incorporated Fe, Cu, and Bi oxides (10 wt.%) were found to be 4 µg·mL⁻¹, while 8 µg·mL⁻¹ was obtained for pure alumina.
Introduction
The synthesis of mesoporous materials with tailored porosity expanded significantly after the discovery of ordered mesoporous silica by the Mobil Oil Company in 1992 [1]. Porous materials may possess micropores (sizes below 2 nm), mesopores (sizes between 2 and 50 nm), and/or macropores (sizes above 50 nm) [2]. Non-silica materials such as carbon [3], alumina [4], metal oxides [5], and metal-organic frameworks [6] have also been developed; materials of this kind find use in catalysis, CO2 capture, and as agents with antimicrobial properties. Although these diverse applications seem to be disconnected, their successful implementation depends on the enlarged specific surface area achieved by well-developed mesoporosity and on the properly modulated surface properties accomplished by the incorporation of metal oxide species into γ-Al2O3. Namely, this study provides extensive experimental data showing a significant impact of metal oxide incorporation on the physicochemical and structural properties of the resulting alumina-based materials. The rationale for selecting γ-alumina as the support for introducing the above-mentioned metal oxides lies in the unique properties of this material, which is widely used as a catalyst and as a support for various catalysts [8,39], as a popular adsorbent for gas- and liquid-phase applications, e.g., CO2 capture [40], and as an antimicrobial compound [41]. Additionally, the above-mentioned metal oxide additives are known for their catalytic and antimicrobial properties [39,41].
Basic Information about the Materials Studied
A schematic representation of the synthesis of γ-alumina with incorporated metal oxides is presented in Figure 1, while the notation of the samples studied, together with basic information, is provided in Table 1.
Compositional Analysis of the Materials Studied
The composition of the selected samples was studied using EDX and elemental mapping. The elemental mapping of Al-Fe10-3 is displayed in Figure 2 and shows that the iron species are uniformly distributed throughout the sample. Similarly, the elemental mappings of pristine alumina and Al-Cu10-3 are shown in Figures S1 and S2, respectively. Additionally, these samples were characterized by TEM; the images of Al-Fe5-3 and Al-Fe10-3 (Figure 3) show the presence of disordered but quite uniform mesopores, and the TEM images of Al-Cu10-3 show a similar distribution of mesopores (Figure S3). Wide-angle X-ray diffraction was also used to elucidate the incorporation of metal oxide species into the alumina framework. The XRD patterns of Al-Fe5-3 and Al-Fe10-3 (Figure S4) show signals characteristic of γ-alumina with some signals originating from the incorporated iron oxide species, as indicated by the reference patterns of gamma alumina (standard ICSD DB card #66559) and hematite (standard ICSD DB card #96075). The weakness of these XRD signals can be attributed to the very high dispersion of the metal oxide species.
To discover the composition of the obtained materials, an X-ray fluorescence analysis was carried out. The XRF spectra of the samples studied, in comparison to the spectrum of pristine Al2O3, are shown in Figure 4.
Low-Temperature N 2 Sorption Analysis
Low-Temperature N2 Sorption Analysis
The textural properties of the mechanochemically synthesized samples were obtained from low-temperature N2 sorption isotherm data. The adsorption isotherms of γ-Al2O3, of the alumina samples with incorporated metal oxide species (10 wt.%), and of the reference samples are shown in Figure 5a together with the corresponding PSD curves. Similarly, the N2 adsorption/desorption isotherms and the respective PSD curves of γ-Al2O3 with 5 wt.% and 20 wt.% metal oxide loading are shown in Figures S6a,b and S7a,b, respectively. The adsorption isotherms obtained for all synthesized samples are of Type IV with an H1 hysteresis loop, characteristic of mesoporous materials [2]. The incorporation of metal oxide species somewhat alters the adsorption isotherms in comparison to those obtained for the samples without metal oxides and the reference samples. Capillary condensation for all metal-incorporated samples occurs at higher relative pressure because of the presence of larger pores. This feature is corroborated by the specific surface area, the pore size, and the pore volume of the samples studied (Al-Me5-3 and Al-Me10-3), as shown in Table 2. Data for the Al-Me20-3 samples are provided in Table S1, and the textural properties of the samples milled for 4, 5, and 10 h are listed in Table S2.
Catalytic Tests
The catalytic performance of the prepared materials was studied for the ammonia-induced selective catalytic reduction of NO in the temperature range of 150-450 °C. The results of the NH3-SCR catalytic tests obtained for the mechanochemically synthesized materials are presented in Figure 6.
During experiments, the N 2 O formation, as a by-product, was constantly measured, and the results are presented in Figure 6b. The amount of N 2 O in the case of all samples with incorporated metal oxides was kept low-not exceeding 20 ppm. The highest byproduct generation was observed for pristine alumina, which may indicate that metal incorporation in the alumina structure decreases N 2 O production. However, and this is worth mentioning, all the catalytic materials obtained had very low nitrous oxide formation (below 30 ppm). Therefore, it is shown that the effective catalytic reduction of NO is due to the effective incorporation of selected metal oxide species into the alumina structure.
Carbon Dioxide Capture Study
Carbon Dioxide Capture Study

The CO2 adsorption isotherms for pristine alumina and all alumina samples with incorporated metal oxides (5 wt.% and 10 wt.%), measured at 25 °C and 1.03 bar, are shown in Figure 7a,b, and the amounts of CO2 captured are listed in Table 1. The general mechanism of CO2 capture is based on the interaction between the acidic CO2 molecule and the basic sites of the metal oxide species in the alumina framework. The development of basic O²⁻ sites on the surface results from high-temperature calcination, which enhances CO2 adsorption [42]: during calcination, the surface hydroxyl groups are removed and some basic sites are formed. The amount of CO2 captured by pristine alumina is 0.85 mmol·g⁻¹, the lowest value compared to those recorded for the alumina samples with incorporated metal oxide species. The observed increase in CO2 capture by the metal oxide-incorporated alumina samples is due to the presence of additional basic sites exposed on the surface. The highest CO2 capture was observed for the alumina samples with 10 wt.% incorporated metal oxides; among them, Al-Bi10-3 (1.16 mmol·g⁻¹) shows the best CO2 adsorption capacity at 25 °C. The CO2 adsorption isotherms and the respective amounts of CO2 captured for the alumina samples containing 20 wt.% of the selected metal oxides are given in the Supplementary Materials (see Table S1).
Antimicrobial Activity against Pseudomonas aeruginosa
Samples of alumina with incorporated metal oxide species, synthesized via the one-pot mechanochemical method, were tested as antimicrobial agents against Pseudomonas aeruginosa (PA). The samples calcined at 600 °C did not disperse completely in the given solvent; they were therefore sonicated in DMSO for 10 h, and the homogeneous dispersion obtained was used for the antimicrobial activity evaluation. PA is a member of the ESKAPE pathogens, a group of Gram-positive and Gram-negative bacteria that can readily evade (i.e., escape) the attack of most clinical antibiotics because of the multidrug resistance (MDR) developed by a variety of phenotypes of these bacteria, which can escape the biocidal action of antibiotics and resist their working mechanisms [43,44]. After incubation of PA (ATCC 15692) for 18 h with alumina as a control and with the metal oxide-incorporated alumina samples, the MIC value was found to be 8 µg·mL⁻¹ for the control and 4 µg·mL⁻¹ for the samples with incorporated Cu (10 wt.%), Fe (10 wt.%), and Bi (10 wt.%) oxides. Similarly, the samples with incorporated Zn (10 wt.%) and Ga (10 wt.%) oxides gave an MIC of 8 µg·mL⁻¹, as shown in Figure 8. The higher MIC for the alumina samples with incorporated Zn and Ga oxides might be caused by poor dispersion and agglomeration of these samples. To further explore the biological activity of pristine alumina and the metal oxide-incorporated samples, the antimicrobial activity was tested against drug-resistant Pseudomonas aeruginosa (DRPA). The activity of pristine alumina (control) was found to be 8 µg·mL⁻¹. The antimicrobial activity in the same strain for the samples with incorporated Cu and Fe oxides was found to be 4 µg·mL⁻¹, and for those with incorporated Bi, Zn, and Ga oxides the MIC was found to be 8 µg·mL⁻¹. Photographs of the MIC measurements against DRPA for all the samples studied are shown in Figure S9. As expected, the samples with 5 wt.% metal oxide loading were found to be less effective than those with 10 wt.% loading; see Figure S10. Surprisingly, the alumina samples with the highest metal oxide loading (20 wt.%) were found to be the least effective among all the samples studied; their lower activity might be due to stronger agglomeration.
In the case of the control sample, the slight deformation and fissures in the cell membrane indicate the cellular activity of the pristine alumina sample. In the case of the samples with incorporated Fe, Cu, Bi, Ga, and Zn oxides, the clear shrinkage and rupture of the cell membrane prove the advantage of metal oxides in the alumina structure, which enhances their biological activity. Thus, this study opens new areas for research concerning mechanochemically synthesized porous samples for biological applications. of the intracellular fluid is the main reason for bacteria-killing. In the case of the control sample, the slight deformation and fissures in the cell membrane indicate the cellular activity of the pristine alumina sample. In the case of the samples with incorporated Fe, Cu, Bi, Ga, and Zn oxides, the clear shrinkage and rupture of the cell membrane prove the advantage of metal oxides in the alumina structure, which enhances their biological activity. Thus, this study opens new areas for research concerning mechanochemically synthesized porous samples for biological applications.
Mechanochemical Synthesis of Metal Oxide-Incorporated γ-Al2O3
The synthesis of metal oxide-incorporated γ-Al2O3 followed a modified version of the method reported by Szcześniak et al. [37]. Briefly, 1.2 g of boehmite and 3.0 g of P123 were added to 2 mL of deionized (DI) water and 2 mL of 200-proof ethanol. Next, 100 µL of 70% HNO3 was added, followed by the addition of the selected metal salt. The control sample was synthesized without the addition of metal salt, as depicted in Figure 1. Commercial γ-alumina, boehmite, and a two-step ground boehmite sample were also prepared as references for this study. The as-prepared mixture was placed in an yttria-stabilized zirconia (Y-ZrO2) grinding jar equipped with eight yttria-stabilized ZrO2 balls (1 cm in diameter each) and milled for 3 h at a rotation speed of 500 rpm in a planetary ball mill (PM200, Retsch). Other milling times (4, 5, and 10 h) were also tested; selected results are shown in the Supporting Information, and 3 h was found to be the most suitable milling time. The resulting paste-like materials were then calcined in a quartz glass boat in air at 600 °C for 4 h at a heating rate of 1 °C/min. This step removed the polymer matrix from the final product, giving γ-phase alumina (which forms in the temperature range of 400-700 °C [39,45]) with the incorporated metal species. The obtained samples were named Al-MeX-Y, where Al = Al2O3; Me = Fe, Cu, Zn, Bi, or Ga; X = the weight percentage of the metal element (5, 10, or 20%); and Y = the total grinding time (3, 4, or 10 h). The selected samples and their notations are listed in Table 1.
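To make the Al-MeX-Y naming convention concrete, here is a minimal, hypothetical Python helper (not part of the original work) that builds labels of this form:

```python
# Hypothetical helper illustrating the Al-MeX-Y naming scheme described above.
def sample_name(metal: str, wt_percent: int, grind_hours: int) -> str:
    """Return a label such as 'Al-Fe10-3' for a metal oxide-incorporated alumina."""
    assert metal in {"Fe", "Cu", "Zn", "Bi", "Ga"}, "metals used in this study"
    assert wt_percent in {5, 10, 20} and grind_hours in {3, 4, 10}
    return f"Al-{metal}{wt_percent}-{grind_hours}"

print(sample_name("Fe", 10, 3))  # -> Al-Fe10-3
```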
Measurements and Characterizations of γ-Alumina with Incorporated Metal Oxide Species
X-ray fluorescence (XRF) analysis (Epsilon 4, Malvern Instruments Ltd., Malvern, UK) and energy-dispersive X-ray spectroscopy (EDX, PTG Prism Si(Li), Princeton Gamma Tech., Plainsboro, NJ, USA) were carried out to determine the elemental composition of the samples. Wide-angle X-ray diffraction (XRD) measurements were collected on a Rigaku MiniFlex 600 X-ray diffractometer operating with a Cu anode at a voltage of 40 kV and a current of 15 mA. The scan rate and the step size were 0.25°·min−1 and 0.02°, respectively, in the range of 10-80°. The XRD patterns were analyzed using the PDXL-2 software. Transmission electron microscopy (TEM) was performed on an FEI Tecnai TF20 FEG microscope operated at 200 kV and equipped with a 4k UltraScan charge-coupled device (CCD) camera to obtain high-resolution digital images of the alumina samples with incorporated metal oxides.
Nitrogen adsorption-desorption isotherms were measured at −196 °C on ASAP 2010/2020 volumetric adsorption analyzers manufactured by Micromeritics Instruments Co. (Norcross, GA, USA), using 99.998% pure liquid nitrogen. Each sample was degassed under vacuum for at least 2 h at 200 °C before the adsorption and CO2 sorption measurements. High-resolution thermogravimetric analysis (HR-TGA) experiments were conducted on a TA Instruments TGA Q500 thermogravimetric analyzer. Thermogravimetric profiles were recorded up to 950 °C in flowing air at a heating rate of 10 °C·min−1.
Calculations
Brunauer-Emmett-Teller (BET) surface areas (S_BET) were calculated from the N2 adsorption isotherms in the relative pressure range of 0.05-0.2. Pore size distributions (PSDs) were obtained from the adsorption branch of the isotherms using the improved Kruk-Jaroniec-Sayari (KJS) method [46]. Pore widths (W_KJS) were determined from the PSD curves at their apex points. The single-point pore volumes were obtained from the maximum amount adsorbed at a relative pressure of about 0.98.
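As a worked illustration of the BET calculation in this pressure range (a minimal sketch with made-up isotherm points, not data from this study), the standard linearized BET treatment can be coded as:

```python
import numpy as np

# Sketch of a BET surface-area calculation from an N2 isotherm, using
# hypothetical example data; constants follow the standard BET treatment
# with the N2 molecular cross-sectional area of 0.162 nm^2.
p_rel = np.array([0.05, 0.08, 0.11, 0.14, 0.17, 0.20])   # p/p0
v_ads = np.array([110., 118., 124., 129., 134., 138.])   # cm^3 STP per g

y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))                  # BET transform
slope, intercept = np.polyfit(p_rel, y, 1)               # linear fit
v_m = 1.0 / (slope + intercept)                          # monolayer capacity

N_A, sigma, V_molar = 6.022e23, 0.162e-18, 22414.0       # /mol; m^2; cm^3/mol
s_bet = v_m * N_A * sigma / V_molar                      # m^2 per g
print(f"S_BET ~ {s_bet:.0f} m2/g")
```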
Selective Catalytic Reduction of NO with Ammonia (NH3-SCR)
The catalytic performance of the selected samples was tested in the ammonia-assisted selective catalytic reduction of NO (NH3-SCR), in a fixed-bed flow microreactor under atmospheric pressure in the temperature range of 150-450 °C [47]. To investigate the catalytic properties of the samples, 200 mg of the catalyst was sandwiched between quartz wool plugs under flowing He. In a typical run, the reaction mixture (800 ppm NO and 800 ppm NH3 in He with a 3% (v/v) addition of O2) was introduced to the catalytic microreactor through mass flow controllers that maintained a total flow rate of 100 cm³·min−1. A catalytic unit downstream of the reactor was used to decompose any NO2 possibly formed back to NO [48]. The concentrations of residual NO and of N2O (a by-product of the reaction) in the final stream were measured every 65 s by a non-dispersive infrared (NDIR) sensor from Hartmann and Braun. NO conversion was calculated according to the following formula:

NO conversion (%) = [(NO_in − NO_out) / NO_in] × 100

where NO_in is the inlet concentration of NO and NO_out is the outlet concentration of NO.
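A one-line numerical check of this formula, with hypothetical NDIR readings rather than measured values from this work:

```python
# Sketch of the NO-conversion formula above with hypothetical readings (ppm).
def no_conversion(no_in_ppm: float, no_out_ppm: float) -> float:
    """Percent NO converted: (NO_in - NO_out) / NO_in * 100."""
    return (no_in_ppm - no_out_ppm) / no_in_ppm * 100.0

print(no_conversion(800.0, 120.0))  # inlet 800 ppm, outlet 120 ppm -> 85.0
```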
CO2 Capture Study
Carbon dioxide adsorption was measured on an ASAP 2020 volumetric adsorption analyzer up to ~1.15 bar at 25 °C. Each sample was degassed at 200 °C (ramping at 1 °C·min−1) for 2 h. A dewar filled with water at ambient conditions was then used to keep the sample at 25 °C during the CO2 capture measurement.
Determination of Minimum Inhibitory Concentrations
Guidelines from the Clinical and Laboratory Standards Institute (CLSI) were adopted to determine the minimum inhibitory concentration (MIC) values by the broth microdilution method. The tested bacterial strains of Pseudomonas aeruginosa (PA) (ATCC 15692) and drug-resistant PA (DRPA) (ATCC BAA-2108) were cultured [46]. Various concentrations of the Al-Me10-3 samples and of pristine alumina as a control were dispersed in Nutrient Broth (NB) medium with the given strain of bacteria at a density of 1 × 10^6 CFU·mL−1. The resulting suspensions were transferred to a 96-well microtiter plate at 200 µL per well (three wells for each compound). The plate was then incubated at 37 °C for 24 h. MIC values were determined as the lowest concentration that inhibited the visible growth of the tested microorganisms as judged by the unaided eye.
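For readers unfamiliar with the broth-microdilution readout, the following small sketch illustrates a two-fold dilution series and the MIC rule used above; the growth pattern is entirely hypothetical:

```python
# Minimal sketch of a two-fold broth-microdilution series and the MIC
# readout rule; the growth results below are invented for illustration.
concentrations = [32.0 / 2**i for i in range(7)]   # 32, 16, 8, 4, 2, 1, 0.5 ug/mL

# True = visible growth in the well at that concentration (hypothetical).
growth = {32.0: False, 16.0: False, 8.0: False, 4.0: False,
          2.0: True, 1.0: True, 0.5: True}

# MIC = lowest tested concentration that still shows no visible growth.
mic = min(c for c in concentrations if not growth[c])
print(f"MIC = {mic} ug/mL")  # -> MIC = 4.0 ug/mL
```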
SEM Images of Bacteria
The morphology of the incubated PA bacteria was characterized by SEM as previously described [41]. First, PA bacteria (1 × 10^9 CFU·mL−1) were treated with the control and Al-Me10-3 samples at the MIC concentration for 2 h; pristine alumina was used as the control sample at a concentration of 8 µg·mL−1. The bacterial suspension was then centrifuged at 3750 rpm for 7 min at 4 °C and resuspended in 1 mL of phosphate-buffered saline (PBS) twice. Subsequently, the bacteria were fixed with PBS containing 2.5% glutaraldehyde. After washing with PBS three times, the bacteria were post-fixed for a few minutes with 1% tannic acid. After fixation, the sample was washed three times with PBS, dehydrated with a series of graded ethanol solutions, dried in air, and coated with gold. SEM images were taken using a Quanta 450 field emission gun scanning electron microscope (FEG SEM).
Conclusions
The one-pot mechanochemical synthesis of metal oxide-incorporated alumina samples, using boehmite as the alumina precursor and liquid-assisted grinding, proved successful and satisfies the main principles of green chemistry as an eco-friendly type of synthesis. The method helps avoid an unnecessary rise in temperature during friction, shear, or mechanical processing. It provided pristine alumina with a surface area of 320 m²·g−1 and a single-point pore volume of 0.96 cm³·g−1. Reference samples such as commercial γ-alumina, boehmite, and the two-step synthesized samples showed lower surface areas and smaller pore volumes. The alumina samples with incorporated metal oxides show a somewhat reduced surface area; however, there is a rise in mesoporosity and hence in the total pore volume. The very high total pore volume is an advantageous feature for the selective catalytic reduction of NO. It was found that the catalytic activity of the metal oxide-incorporated alumina samples is enhanced in comparison with that of pristine alumina. The iron oxide and copper oxide species incorporated into the alumina structure gave the samples with the best catalytic properties in SCR among all those tested, namely high NO conversion and very low N2O formation in the low-temperature range (below 300 °C). The materials obtained in this way are therefore promising for industrial applications. The formation of γ-alumina was achieved by appropriate thermal treatment of the boehmite-polymer composite at 600 °C. High-temperature calcination exposes a higher number of basic sites on the surface and therefore facilitates higher CO2 capture; indeed, the metal oxide-incorporated alumina samples showed an improved CO2 capture capacity at ambient temperature compared with that of pristine alumina. EDX elemental mapping, XRD, TEM, and X-ray fluorescence (XRF) analyses confirmed the presence of metal oxides in the samples, as evidenced by the EDX maps for Al-Fe10-3 and Al-Cu10-3 and by the increase in XRF signal intensity with increasing metal content in each sample. Most of the samples have higher percentages of the respective metal oxides (XRF data) than predicted from the amount of metal salt used in the synthesis; the observed difference may be caused by some losses during sample processing. After this characterization, the biological activity of the samples was tested, and the analyzed samples were found to be quite active against Pseudomonas aeruginosa. The best activity was obtained for the Al-Me10-3 samples: those with Cu, Fe, and Bi show better antimicrobial activity than pure alumina, whereas the samples with incorporated Zn and Ga oxides exhibit MIC values similar to the pristine alumina control. Similarly, the antimicrobial activity against drug-resistant PA (DRPA) was tested, and the samples Al-Cu10-3 and Al-Fe10-3 showed an MIC of 4 µg·mL−1, while the remaining metal oxide-incorporated samples showed 8 µg·mL−1.
Data Availability Statement:
The data presented in this study are available upon request from the authors. | 2023-02-24T17:21:18.468Z | 2023-02-21T00:00:00.000 | {
"year": 2023,
"sha1": "e51555d9764113bfd253926250c168c31727cd26",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9cbd95e6ccf5c0bfd1138aea6ebe871147b0617e",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267886882 | pes2o/s2orc | v3-fos-license | Safety of primary nasotracheal intubation in the pediatric intensive care unit (PICU)
Background: Nasal tracheal intubation (TI) represents a minority of all TI in the pediatric intensive care unit (PICU). The risks and benefits of nasal TI are not well quantified; as such, safety and descriptive data regarding this practice are warranted. Methods: We evaluated the association between TI route and safety outcomes in a prospectively collected quality improvement database (National Emergency Airway Registry for Children: NEAR4KIDS) from 2013 to 2020. The primary outcome was severe desaturation (SpO2 decline > 20% from baseline) and/or severe adverse TI-associated events (TIAEs), using NEAR4KIDS definitions. To balance patient, provider, and practice covariates, we utilized propensity score (PS) matching to compare the outcomes of nasal vs. oral TI. Results: A total of 22,741 TIs [nasal 870 (3.8%), oral 21,871 (96.2%)] were reported from 60 PICUs. Infants were represented in a higher proportion in the nasal TI group than in the oral TI group (75.9% vs. 46.2%), as were children with cardiac conditions (46.9% vs. 14.4%), both p < 0.001. Severe desaturation or severe TIAEs occurred in 23.7% of nasal and 22.5% of oral TIs (non-adjusted p = 0.408). With PS matching, the prevalence of severe desaturation and/or severe adverse TIAEs was 23.6% for nasal vs. 19.8% for oral TI (absolute difference 3.8%, 95% confidence interval (CI): −0.07, 7.7%), p = 0.055. The first-attempt success rate was 72.1% for nasal TI versus 69.2% for oral TI, p = 0.072. With PS matching, the success rate did not differ between the two groups (nasal 72.2% vs. oral 71.5%, p = 0.759). Conclusion: In this large international prospective cohort study, the risk of severe peri-intubation complications was not significantly higher with nasal than with oral TI. Nasal TI is used in a minority of TIs in PICUs, with substantial differences in patient, provider, and practice characteristics compared with oral TI. A prospective multicenter trial may be warranted to address the potential selection bias and to confirm the safety of nasal TI.
Introduction
Many patients admitted to the pediatric intensive care unit (PICU) require tracheal intubation (TI) and mechanical ventilation to support the airway or a failing respiratory system, minimize the work of breathing or the systemic oxygen consumption, control the ventilatory drive, and/or support a failing heart. Intubation is a lifesaving maneuver that has inherent risks [1-4]. Indeed, TI-associated events (TIAEs) were reported in up to 20% of intubation attempts [4]. Tracheal intubation can be performed by the oral or the nasal route. Nasal TI involves passing an endotracheal tube through the naris and into the nasopharynx and the trachea, and typically requires some manipulation with forceps.
The National Emergency Airway Registry for Children (NEAR4KIDS) is an international collaborative quality improvement (QI) initiative and registry of TI from PICUs and emergency departments. In this registry, oral intubation represented 95.8% of all TIs [4]. The choice of the oral versus nasal route for TI is usually determined by the physician's own experience and the clinical context. Each route has its advantages and disadvantages [5-7]. Limited evidence exists for the safety of nasal TI, although the oral route is recommended for rapid sequence intubation [6]. It is assumed that oral TI allows more expeditious management of the airway in emergent situations, and it may therefore cause fewer TIAEs [7]. Nasal TI is, however, associated with a lower rate of unplanned extubations [8] and may increase comfort [6,9]. Data regarding nasal TI in the PICU are scarce. It is therefore important to assess its safety and to provide knowledge of its related risks. To address this knowledge gap, we utilized the NEAR4KIDS database with the aim of assessing the use of nasal TI in multiple PICUs. We hypothesized that patients receiving primary nasal TI have a higher risk of severe peri-intubation-related events, namely desaturation (SpO2 decline > 20% from baseline) and/or severe TIAEs, compared with those receiving oral TI.
Materials and methods
The NEAR4KIDS registry is a quality improvement initiative comprising prospectively collected TI data from 60 international PICUs. This registry collaborative was approved by the Institutional Review Board (IRB) at the Data Coordinating Center under the study title "Observation of Multi-center Quality Improvement Project: Improving Safety and Quality of Tracheal Intubation Practice in Pediatric ICUs" (Children's Hospital of Philadelphia IRB 09-007253). IRB approval or exemption was obtained at each participating site. Data collected for each TI event included patient characteristics (age, primary diagnosis, indication for TI, history or features suggestive of a difficult airway), provider characteristics (discipline and training level), TI characteristics (route of intubation, equipment, medications used), and TI clinical outcomes. Procedures were followed in accordance with the ethical standards of the responsible committee on human experimentation (institutional or regional) and with the Helsinki Declaration of 1975. Each center follows a data compliance plan to ensure that at least 95% of all site TIs are captured with high data accuracy [2,4].
Inclusion and exclusion criteria
In this study, we included primary TIs for children < 18 years of age in the PICU and pediatric cardiac intensive care unit (CICU) from January 2013 to December 2020. Intubations in the operating room, in the ED, or in out-of-hospital locations were excluded [10]. Exchange of an existing endotracheal tube was also excluded.
Exposure and outcome measures
The primary exposure variable was initial nasal TI, defined as the first route reported on the first attempt. Our primary outcome was a composite rate of peri-intubation severe adverse events: severe oxygen desaturation and/or severe TIAEs. Severe desaturation was defined as a pulse oximetry saturation (SpO2) decline of more than 20% from the pre-procedure baseline during the first TI attempt [11,12]. Severe TIAEs, by the NEAR4KIDS definition, included cardiac arrest, esophageal intubation with delayed recognition, emesis with witnessed aspiration, hypotension requiring intervention (intravenous fluid and/or vasopressors), laryngospasm, pneumothorax/pneumomediastinum, or direct airway injury. These events must have occurred within 20 min of the TI attempt in order to meet the operational definition of TIAEs. The definition of TIAEs was described in the shared NEAR4KIDS operational definition documents, and each site PI and data coordinator received training from the Data Coordinating Center. Additional details are available in a prior publication [3].
Our secondary outcomes included the overall TIAE rate (minor and severe) and the number of TI attempts. Minor TIAEs included mainstem bronchial intubation, esophageal intubation with immediate recognition, emesis without aspiration, hypertension requiring therapy, epistaxis, dental or lip trauma, medication error, arrhythmia, or pain and/or agitation requiring additional medication and causing a delay in TI. The data were entered into a secure Research Electronic Data Capture (REDCap®) system hosted by the Data Coordinating Center [13].
Sample size calculation
The minimal sample size and statistical power were estimated a priori. To detect an absolute difference of 4% in the primary outcome (severe oxygen desaturation and/or severe TIAEs), with an estimated incidence of 14% of severe TIAEs related to nasal TI in the NEAR4KIDS registry, a sample size of 14,489 TIs (with a proportion of 4% nasal TI) was necessary to achieve a power of 80%. Summary statistics were provided as percentages for categorical variables and either median with interquartile range (IQR, 25th-75th percentile) or mean and standard deviation (SD) for continuous variables. Categorical variables were compared between groups using the chi-square test, whereas continuous variables were compared using the Wilcoxon rank-sum test. Univariable and multivariable logistic regressions were performed to evaluate the association between nasal TI and the primary composite outcome (rate of severe oxygen desaturation > 20% from baseline and/or severe TIAEs). In the multivariable model, variables chosen a priori were patient age and diagnostic category, TI for indications of respiratory failure and shock, and provider level of training. In addition, the following variables were added because they were unbalanced at baseline and potential confounders: device (video laryngoscope vs. direct laryngoscopy), history of difficult airway, vagolytic and paralytic use, and apneic oxygenation utilization. To further address the imbalance in patient, provider, and practice characteristics, we performed a propensity score (PS) analysis with 1:1 matching without replacement. The PS was calculated for each patient as the predicted probability of nasal TI. With the calculated PS, nearest-neighbor one-to-one matching without replacement was performed with a caliper width no greater than 0.2 times the SD of the logit of the PS to generate matched cohorts in which covariates were balanced. After confirming that we had achieved acceptable balance in the covariates, the association between the exposure and the primary outcome was assessed using the matched cohort.
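For illustration, a power calculation of this general shape can be reproduced with a two-proportion test; the snippet below is a hypothetical sketch (the exact effect-size convention and inputs the authors used are not stated, so the numbers will not match 14,489 exactly):

```python
# Sketch of an a-priori power calculation for two independent proportions,
# assuming a 14% event rate with nasal TI, a 4% absolute difference,
# alpha = 0.05, power = 0.80, and a 1:24 nasal:oral allocation (4% nasal).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = abs(proportion_effectsize(0.14, 0.18))   # Cohen's h for 14% vs. 18%
n_nasal = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, ratio=24.0)
total = n_nasal * (1.0 + 24.0)                    # nasal + oral TIs combined
print(f"nasal TIs ~ {n_nasal:.0f}, total TIs ~ {total:.0f}")
```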
Nasotracheal intubation
From a total of 25,363 encounters reported in the NEAR4KIDS cohort during the study period, 2,622 encounters were excluded based upon the exclusion criteria. We included 22,741 TIs, 870 (3.8%) nasal and 21,871 (96.2%) oral, from 60 PICUs (Table 1). Infants and patients with cardiac conditions more often underwent nasal TI than other demographic groups (p < 0.001).
Nasal TI was used more commonly for procedural indications and less commonly for oxygenation or ventilation failure indications (p < 0.001). Nasal TI was used less frequently for patients with a difficult airway history and by fellow (as compared with attending) physicians.
Primary outcome
In the univariate analysis, the primary outcome (occurrence of either severe desaturation and/or severe TIAEs) was reported in 23.7% of nasal and 22.5% of oral TIs (p = 0.408) (Table 2). Severe desaturation (SpO2 decline > 20% from baseline) occurred in 22.4% of nasal vs. 19.2% of oral TIs (p = 0.312). Severe TIAEs were reported in 2.2% of nasal vs. 5.6% of oral TIs (p < 0.001). However, multivariable logistic regression did not show a higher likelihood of severe oxygen desaturation and/or severe TIAEs with the nasal TI route (OR 1.03, 95% CI: 0.87-1.22, p = 0.704) (Table 3). One-to-one PS matching was possible for 869 patients with nasal TI. The covariates were well balanced between the two groups (Table 4).
Discussion
The aim of this study was to evaluate the association between primary nasal TI and severe desaturation and TIAEs in a large international prospective registry of TIs across PICUs. The occurrence of the primary outcome, either severe desaturation and/or severe TIAEs, was similar in both groups in the univariate analysis, yet severe TIAEs were less common, and severe desaturation more common, in the nasal TI group. After adjusting for the imbalance in patient, provider, and practice characteristics with multivariable logistic regression and in a PS-matched analysis, we did not observe a significant association of nasal TI with severe oxygen desaturation and/or severe TIAEs.
In our study, there were significant differences in patient, provider, and practice characteristics between children undergoing nasal versus oral TI. Infants with cardiac conditions, for instance, were more prevalent in the nasal TI group. This is in concordance with the prior literature, where patients receiving nasal TI were mostly children under 2 years old (88.1%) with cardiac disease (82.2%) [8]. In our study, nasal TI was also associated with a procedural indication for TI. More attending physicians and subspecialists performed nasal TI. This may be explained by the fact that the nasal TI procedure may require more airway experience and technical skill, as it can be more challenging technically [14]. Among 22,741 primary TIs, fewer than 4% were by the nasal route. Our results are consistent with a recent retrospective cohort study of 121 PICUs in the USA, which reported that nasal TI was used in a minority of PICUs and that a similarly small proportion (5.6%) of all 12,088 TIs were nasal [8]. Of note, that study included academic and nonacademic medical centers, while the overwhelming majority of our NEAR4KIDS TI data were from academic centers.
We speculate that the choice of intubation route (i.e., nasal vs. oral) is determined by the physician's experience and the clinical context, such as the patient's physiological tolerance of intubation, because the duration of the TI procedure may be longer in patients undergoing nasal TI. In a study comparing nasal and oral TI on neonatal mannequins by inexperienced providers, a longer intubation time was reported for the nasal group (85 s for nasal TI vs. 48 s for oral, p < 0.001). Lenclen et al. showed that the success rate for intubation with a duration of less than 30 s was higher in the oral TI group (100% vs. 66% for nasal TI, p < 0.001) [14]. In the study by Abdelbaser et al., the median time needed for intubation was significantly longer with nasal TI (31.5 s) than in the oral group (16.0 s) (p < 0.001) [9]. Some may also consider nasal TI the preferred route for prolonged intubation in critically ill children, to improve tube stability and comfort and to decrease unplanned extubation [10]. Christian et al. reported that nasal TI may be associated with a lower occurrence of unplanned extubation compared with oral TI (0.9% vs. 2.9%, p < 0.001) [10]. In prior literature, no statistically significant difference in sinusitis or VAP was found between children with nasal TI and oral TI [8,15]. Nasal intubation (vs. oral) at 24 h of endotracheal intubation has been associated with an increased duration of invasive mechanical ventilation in children with bronchiolitis [16].
In our study, there was no difference in severe TIAEs and/or severe oxygen desaturation between the nasal TI and oral groups. In a randomized controlled trial of nasal versus oral TI evaluating post-extubation airway obstruction, the complications of peri-intubation desaturation and bradycardia and the need for more than one intubation attempt were comparable in both groups [15]. In a recent randomized controlled trial of nasal versus oral TI in infants and neonates who underwent cardiac surgery, the change in SpO2 from baseline during intubation (3.4% vs. 3.2%, p = 0.826) and the need for more than one intubation attempt were similar between the nasal and oral groups [9]. Another study, by Orlowski et al., also described a similar rate of major complications in children who had nasal versus oral TI (11% vs. 10%) [17]. Finally, in a Cochrane review of nasal versus oral TI for mechanical ventilation of newborn infants, the intubation failure rate was greater for nasal than for oral TI, indicating that the former procedure may be more difficult in this age group [18]. However, these studies did not report other peri-intubation adverse TIAEs or severe desaturation. The uniqueness of our study is its thorough evaluation of peri-intubation events, highlighting the importance of these prospectively collected data. This study has several limitations. Our study was unable to report the duration of the TI procedure; this data point would require direct observation or video recording of the TI procedure. Our study was also unable to address outcomes related to mechanical ventilation with a nasal endotracheal tube in place, such as the occurrence of sinusitis, ventilator-associated pneumonia (VAP), and unplanned extubation. In our study, the two groups of nasal versus oral TI were markedly unbalanced, in prevalence as well as in patient, provider, and practice characteristics. We attempted to account for this utilizing multivariable logistic regression and PS analysis, but we cannot exclude residual confounding factors. Although prospectively collected, an underreporting bias for TIAEs and desaturation may exist. In addition, detailed information regarding diagnosis and severity of illness was not recorded and may have influenced the choice of TI route. The types of units included (exclusively PICU versus mixed PICU and CICU) may limit the generalizability of the study findings.
Conclusion
In this large international prospective cohort, children receiving primary nasal TI did not have a higher risk of severe peri-intubation desaturation and/or severe TIAEs compared with those receiving oral TI. Nasal TI was infrequently used and was associated with substantial differences in patient, provider, and practice characteristics. A prospective interventional multicenter trial is warranted to address the potential selection bias and to confirm the safety of nasal TI.
Table 1
Patient, provider, and practice characteristics stratified by route of intubation (N = 22,741).
Table 3
Multivariable logistic regression analysis of the association between the route of tracheal intubation (TI) and severe desaturation and/or severe tracheal intubation-associated events (TIAEs). CI, confidence interval; TBI, traumatic brain injury; TI, tracheal intubation.
Table 4
Characteristics and primary outcome of the nasal TI and oral TI groups after 1:1 propensity score matching without replacement. TBI, traumatic brain injury; TI, tracheal intubation; TIAE, tracheal intubation-associated event; N/A, not applicable. # Standardized absolute mean difference (SAMD), calculated as the absolute value of the difference in average outcome between cases and controls, divided by the square root of the average of the sample variances for cases and controls, multiplied by 100.
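To make the SAMD formula and the logit-scale caliper from the methods concrete, here is a minimal sketch with entirely made-up data (not the registry's analysis code):

```python
import numpy as np

# Minimal sketch of the SAMD balance metric and the PS-matching caliper,
# using invented covariate and propensity-score values.
rng = np.random.default_rng(0)
x_nasal, x_oral = rng.normal(1.0, 1.0, 200), rng.normal(0.8, 1.2, 4000)

# SAMD = |mean difference| / sqrt(average of the two sample variances) * 100
samd = abs(x_nasal.mean() - x_oral.mean()) / np.sqrt(
    (x_nasal.var(ddof=1) + x_oral.var(ddof=1)) / 2.0) * 100.0
print(f"SAMD = {samd:.1f}")

# Caliper: 0.2 times the SD of the logit of the propensity score.
ps = np.clip(rng.beta(2, 30, 4200), 1e-6, 1 - 1e-6)   # pooled PS values
logit_ps = np.log(ps / (1.0 - ps))
caliper = 0.2 * logit_ps.std(ddof=1)
print(f"caliper on the logit scale = {caliper:.3f}")
```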
Table 5
Propensity score-matched analysis: absolute risk differences with 95% confidence intervals for the primary and secondary outcomes. CI, confidence interval; TI, tracheal intubation; TIAE, tracheal intubation-associated event. Please refer to the "Materials and methods" section of the paper. a Desaturation data were not available for all TIs due to non-detectable SpO2 values; the 95% confidence intervals and p-values are from bootstrap resampling. | 2024-02-26T05:08:13.338Z | 2024-02-23T00:00:00.000 | {
"year": 2024,
"sha1": "ec9e7833f8bb5f5b5e65dac11be58313221ce2ff",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ec9e7833f8bb5f5b5e65dac11be58313221ce2ff",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246669898 | pes2o/s2orc | v3-fos-license | Synovitis-acne-pustulosis-hyperostosis-osteitis Syndrome with Bilateral Pleural Effusion
Pleural effusion is a rare manifestation in synovitis-acne-pustulosis-hyperostosis-osteitis (SAPHO) syndrome, which is characterized by the presence of osteoarticular lesions and dermatological involvement. We herein report a 71-year-old man with pleural effusion resulting from SAPHO syndrome. He was successfully treated using corticosteroids and has experienced no recurrence for one year. We should consider SAPHO syndrome when encountering cases of anterior chest pain and pleural fluid.
Introduction
Synovitis-acne-pustulosis-hyperostosis-osteitis (SAPHO) syndrome was first introduced by Chamot et al. in 1987 (1). It is characterized by the presence of osteoarticular and dermatological lesions. Because the syndrome is commonly underrecognized or misdiagnosed, the true incidence and prevalence of SAPHO syndrome are unknown (2), and the etiology, pathogenic mechanism, and targeted therapy of the disease are still unclear. A variety of manifestations are known to be associated with SAPHO syndrome; however, pleural involvement is rare.
We herein report a case of SAPHO syndrome with bilateral pleural effusion.
Case Report
A 71-year-old man presented to our hospital with a 3-month history of anterior chest pain and shortness of breath on exertion. He was an ex-smoker and had a 40-year complaint of lower back pain with morning stiffness. He had been diagnosed with palmoplantar pustulosis about 30 years earlier and had taken topical medication for his skin lesions for several years. Five years before admission, he had noticed anterior chest pain but did not have it examined. A medical checkup at a nearby clinic revealed hypoxia and bilateral pleural effusion. He was referred to our rheumatology clinic on suspicion of connective tissue disease-associated pleuritis and was admitted for further investigation and treatment.
Chest X-ray showed blunting of the costophrenic angles and bilateral pleural effusion (Fig. 1A). Pelvic X-ray revealed joint space narrowing and partial ankylosis of the sacroiliac joints (Fig. 1B). Lumbar spine X-ray showed syndesmophytes and ankylosis of the facet joints, recognized as bamboo spine (Fig. 1C). Cardiac ultrasonography showed a regional abnormality of left ventricular wall motion, but the ejection fraction was 53%, and there was no valvular disease. F-18 fluorodeoxyglucose positron emission tomography/computed tomography (FDG PET/CT) showed bilateral pleural fluid and pleural thickening with FDG uptake [maximum standardized uptake value (SUVmax) 5.0] on both sides (Fig. 2A). Hyperplasia of the sternoclavicular joints with slight FDG uptake (SUVmax 3.2) was also detected (Fig. 2B), but there were no abnormal findings on FDG PET/CT in the lungs, salivary or lacrimal glands, lymph nodes, or other major organs. Contrast-enhanced magnetic resonance imaging (MRI) of the thorax revealed that the bilateral sternoclavicular joints were bony and enlarged, with mild bone marrow edema and contrast enhancement of the right clavicle (Fig. 3).
The laboratory results of the pleural fluid were as follows: cell count 1,500/μL (neutrophils 26.0%, lymphocytes 68.2%), total protein (TP) 4.2 g/dL, LDH 101 U/L, and adenosine deaminase 15.2 U/L. The pleural fluid cultures were negative, and there were no malignant cells on cytology. According to the diagnostic criteria for SAPHO syndrome proposed by Benhamou in 1988 (3), we made a diagnosis of SAPHO syndrome based on the palmoplantar pustulosis, bilateral sternal and spinal osteitis, osteosclerosis of the sacroiliac joints, and suspected pleuritis associated with SAPHO syndrome.
Although infection and malignancy were considered unlikely to be the cause of the pleural effusion, we also considered malignant pleural mesothelioma, IgG4-related disease, and amyloidosis as possible differential diagnoses. On hospital day 15, a left pleural biopsy using thoracoscopy under local anesthesia was performed to examine the cause of the pleural fluid and pleural thickening. The parietal pleura was generally erythematous and hypertrophic with some white plaques. A pathological examination of the pleura revealed nonspecific but severely inflamed fibroconnective tissue with dominant infiltration of lymphocytes (Fig. 4), with no malignancy, no Congo red staining, and no infiltration of plasma cells with an IgG4/IgG ratio exceeding 40%. Given the result of the biopsy, we diagnosed the pleural effusion as having been caused by SAPHO syndrome.
Loxoprofen had been prescribed for his chest pain on admission, but the pleural fluid accumulation worsened. Corticosteroid therapy, in the form of oral prednisolone (PSL) 30 mg/day, was started on hospital day 25. His respiratory symptoms gradually improved, allowing cessation of supplemental oxygen therapy, accompanied by a reduction of the pleural effusion (Fig. 5A). Chest CT performed 18 days after PSL initiation showed that the pleural effusion was markedly reduced compared with before treatment and had almost disappeared (Fig. 5B). PSL was gradually tapered, and he was discharged on hospital day 49. There has been no recurrence of pleural fluid in the year since PSL was initiated.
Discussion
SAPHO syndrome is a rare disease with an estimated prevalence likely less than 1 in 10,000. Most initial reports of the disease were from Japan and northwestern Europe (4). It is difficult to diagnose this disease not only because of its rarity but also because of its diverse clinical manifestations. The most widely applied diagnostic criteria for SAPHO syndrome were proposed by Benhamou in 1988, based on the presence of at least one of the following four inclusion criteria: osteoarticular manifestations of severe acne; osteoarticular manifestations of palmoplantar pustulosis; hyperostosis with or without dermatosis; and chronic recurrent multifocal osteomyelitis with or without dermatosis (3). We diagnosed the present patient based on the presence of osteoarticular manifestations of palmoplantar pustulosis and hyperostosis. Recently, SAPHO syndrome has been proposed as a concept that includes pustulotic arthro-osteitis (PAO), acne arthritis, and chronic recurrent multifocal osteomyelitis (CRMO) (2). From this perspective, our patient was classified as having PAO because he had bilateral hyperplasia of the sternoclavicular joints and bamboo spine with a history of pustulosis palmaris et plantaris.
Our case was an uncommon one because of the presence of pleuritis, which was recognized from findings such as bilateral exudative pleural effusion and pleural thickening. The etiology and clinical characteristics of pleuritis occurring in SAPHO syndrome remain unclear (4). To clarify the clinical features of pleuritis with SAPHO syndrome, we searched for cases similar to our own patient in PubMed/MEDLINE and identified four (Table) (5-8). Interestingly, these cases varied in age but showed little difference in gender. All of the cases had sternal and sternoclavicular pain, suggesting that thoracic inflammation might be related to the pleural involvement of SAPHO syndrome. The pleural effusion could be either unilateral or bilateral, and its characteristics differed among the cases, so no common features could be detected with regard to pleural involvement in SAPHO syndrome. Furthermore, it was difficult to detect any marked differences in clinical characteristics between those with and without pleural involvement among cases with thoracic inflammation in SAPHO syndrome. Although contiguous chronic sternal inflammation is a possible mechanism, it is very important to exclude other causes of pleuritis because of the difficulty of diagnosing SAPHO syndrome-related pleural involvement based on the clinical features alone (6). Hasegawa et al. reported a patient with pleuritis and undiagnosed SAPHO syndrome, similar to our own case, and pointed out that consideration should be given to the possibility of SAPHO syndrome when encountering a case with anterior chest pain and pleural effusion. Furthermore, they noted that imaging studies, such as X-ray and CT, were needed to investigate the presence of osteosclerosis and hyperostosis of the anterior chest wall (7). However, frequent causes of pleuritis include infectious diseases, malignancy, and autoimmune diseases, so caution should be exercised, especially in undiagnosed cases of SAPHO syndrome.
We ruled out the abovementioned differential diagnoses using microbiology, cytology, laboratory data, and imaging; FDG PET/CT was particularly helpful in our case. Interestingly, FDG PET/CT also revealed not only the uptake of FDG in hypertrophic pleura and sternoclavicular joints but also the location of these lesions in close proximity. Previous reports have suggested the usefulness of FDG PET/CT in diagnosing SAPHO syndrome (9). Our observations also raised the possibility that FDG PET/CT might be beneficial for understanding the etiology of pleuritis in SAPHO syndrome. A pleural biopsy was performed in our case, demonstrating nonspecific but severely inflamed fibroconnective tissue with dominant infiltration of lymphocytes; we were thus able to concretely exclude other possible causes of pleuritis. As in our patient, one other case report described how a biopsy specimen of the right pleura revealed nonspecific chronic inflammation (8). Collectively, our case indicates the importance of FDG PET/CT and pathological evaluations for diagnosing pleuritis involved with SAPHO syndrome.
Anti-inflammatory therapy using oral nonsteroidal anti-inflammatory drugs (NSAIDs) is the first-line therapy for SAPHO syndrome. Corticosteroid therapy, which is common for patients who do not respond to NSAIDs and disease-modifying anti-rheumatic drugs (DMARDs), such as methotrexate and sulfasalazine, is used to control peripheral synovitis (10). Tumor necrosis factor (TNF) inhibitors may also be used in patients with refractory disease (11). Regarding cases with pleuritis, the clinical course and treatment history differed among cases (Table). Spontaneous disappearance of pleural effusion without treatment was reported in one case (8), while the pleural effusion was ameliorated by treatment with methotrexate in a case whose clinical manifestations were similar to those of our patient (7). In our patient, loxoprofen administration did not improve the pleural effusion and was eventually discontinued because of elevated liver enzymes. Oral PSL was administered because methotrexate was incompatible with the presence of pleural fluid. After the initiation of this corticosteroid therapy, the pleural effusion diminished. About half of patients with SAPHO syndrome have a chronic course characterized by fluctuating intermittent periods of exacerbation and short improvement (12). After one year of follow-up and gradual tapering of corticosteroid treatment, our patient showed no recurrence of pleural effusion. The present and previous findings suggest that oral corticosteroid therapy and immunosuppressive drugs, such as methotrexate, are effective for improving pleuritis in cases of SAPHO syndrome, especially those without spontaneous improvement.
Several limitations associated with the present study warrant mention. First, there is a possibility that the pleural effusion was due not only to SAPHO syndrome but also the involvement of some other disease. The effect of the corticosteroid therapy could thus not be evaluated properly, as corticosteroids are used to treat many diseases, such as rheumatoid arthritis and other autoimmune diseases. Second, thus far, we have only reported the one-year follow-up of our case, and it is possible that pleural effusion might recur in the future. Morán-Álvarez et al., for example, reported a case with multiple recurrent episodes of pleural effusion (8). Therefore, for our patient, long-term observation to detect pleural effusion is needed.
In conclusion, we herein report a case of anterior chest wall osteitis possibly associated with pleuritis that showed non-specific but severe inflammation on a pleural biopsy. Our patient with SAPHO syndrome-associated pleural effusion was successfully treated with corticosteroids. Even though pleural effusion is a rare manifestation of SAPHO syndrome, SAPHO syndrome should be considered when encountering cases of anterior chest pain and pleural fluid.
Informed consent was obtained to publish this case report.
The authors state that they have no Conflict of Interest (COI). | 2022-02-09T16:32:52.457Z | 2022-02-08T00:00:00.000 | {
"year": 2022,
"sha1": "cf2b4b3a975edc27e20985be33b49141a02043f1",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/internalmedicine/61/17/61_8473-21/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "97aeabfbf8ad9c195c29f07e8a2a0a85b134bcb0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
229244538 | pes2o/s2orc | v3-fos-license | Comparative Effect of Zinc Concentration and Sources on Growth Performance, Accumulation in Tissues, Tibia Status, Mineral Excretion and Immunity of Broiler Chickens
Received: 25/December/2019; Approved: 27/February/2020. ABSTRACT This experiment was conducted to investigate the effect of feeding different concentrations and sources of zinc (Zn) on the growth performance, tissue mineral status, bone morphology, and immunity responses of 0-4-week-old broiler chickens. Four hundred and forty 1-d-old broiler chickens were assigned randomly to 11 dietary treatments with 4 cages per treatment and 10 broiler chickens per cage in a completely randomized design. Dietary treatments were: a corn-soybean meal basal diet (negative control), the basal diet supplemented with 5 g yeast/kg (yeast), and the basal diet supplemented with 20, 50, or 80 mg of added Zn/kg as ZnSO4, Zn-Met, or Zn-yeast in a 3 × 3 factorial arrangement of treatments. The results showed that broilers fed Zn-supplemented diets had greater average weight gain and average feed intake than chickens fed the negative control diet (p < 0.05). Zn deposition in tibia, meat (thigh and breast), and excreta increased (p < 0.01), regardless of source, in response to increasing dietary Zn concentrations. Zn level increased the dry weight of the tibia and its large diameter. The strength of the tibia, as judged by the Seedor index and breaking strength, was improved (p < 0.01) with increased dietary Zn concentration. Furthermore, supplemental Zn up to 50 mg/kg improved the immunity responses of broiler chickens (p < 0.01). It is concluded that supplementation with 50 mg Zn/kg may be sufficient for normal broiler growth up to 28 d of age and that dietary organic Zn could be utilized more effectively when compared with inorganic sources.
INTRODUCTION
Zinc is an essential trace element that acts as a cofactor in many metabolic pathways, including cell proliferation, growth, skeletal development, the immune system, reproduction, hormone secretion, and the antioxidant defense system, as well as many biochemical processes (Swiatkiewicz et al., 2001; Ao et al., 2011; Muszyński et al., 2018). Thus, it is critical to use an optimal supplementation rate of Zn to allow poultry to reach their genetic potential and performance. The National Research Council (NRC, 1994) recommended a minimum dietary level of 35 mg/kg as necessary for optimum productivity in young broiler chickens. It has been reported that 71 ± 13 mg Zn/kg in a maize-soybean meal basal diet was necessary to maximize growth in broilers from hatch to 21 d of age (Huang et al., 2007). However, natural Zn concentrations in common feedstuffs are generally lower than the daily Zn requirement for poultry, leading to the necessity of dietary Zn supplementation. In practice, feed manufacturers and producers formulate diets to contain 100-120 mg supplemental Zn/kg (Shyam Sunder et al., 2008; Feng et al., 2010). On the other hand, high dietary Zn supplementation may affect the balance of other trace elements (Abedini et al., 2017) and can cause toxicity (Carl et al., 2003). High Zn consumption could also lead to increased excretion of Zn in feces, which causes environmental contamination (Pierce et al., 2005).
Research shows that the bioavailability of trace minerals in inorganic forms is low in the poultry carcass (Cao et al., 2002). Therefore, enhancing Zn bioavailability by using more available sources can help to solve such problems. As such, organic mineral sources such as proteinates and amino acid chelates have been increasingly used in recent years because of their greater bioavailability and lower excretion (Pierce et al., 2005). Previous studies have shown that the effects of different mineral sources, organic or inorganic, on production performance vary (Schlegel et al., 2013; Sahoo et al., 2014; Badawi et al., 2017). While in many studies organically bound Zn has been demonstrated to have greater relative bioavailability than inorganic forms (Wedekind et al., 1992; Cao et al., 2000), others have found no differences in bioavailability among organic and inorganic Zn sources (Hudson et al., 2004; Zakaria et al., 2017).
To our knowledge, the bioavailability of Zn-enriched yeast has not previously been fully elucidated, and no comparative study between this form and other organic and/or inorganic forms has been carried out in broilers. Thus, the objectives of the present study were to examine the effects of dietary supplementation with different concentrations of Zn from various sources on growth performance, Zn excretion, leg development, and immune responses in broiler chickens.
Birds, diets, and experimental design
The experimental protocol was approved by the animal-welfare committee of Tarbiat Modares University, and the animals were handled and treated in a humane manner.
Four hundred and forty 1-day-old broiler chickens (Ross 308) were assigned randomly to 11 dietary treatments, each consisting of 4 replicates of 10 broilers, in a completely randomized design. The broiler house was provided with programmed lighting and ventilation. The ambient temperature was gradually decreased from 32 °C on day 1 to 22 °C by the end of the experiment. Broilers were allowed ad libitum access to the experimental diets and to tap water containing no detectable Zn (< 0.001 mg/L). Feed and water were provided using plastic equipment to minimize environmental Zn contamination.
The basal corn-soybean meal diets (Table 1), which were fed in mash form, were formulated to meet or exceed the NRC (1994) requirements for starter and grower broilers, except for Zn. Dietary treatments included the basal diet not supplemented with Zn (control) and the basal diet supplemented with 20, 50, or 80 mg of Zn/kg as feed-grade Zn sulfate (ZnSO4·7H2O), Zn methionine (Bioplex Zinc; Alltech, Inc.), or Zn-enriched yeast (produced on a laboratory scale as described by Kamran Azad et al., 2014). All of the diets were calculated to contain equal concentrations of methionine. The background Zn concentrations of the control diets were 24.4 and 21.6 mg/kg in the starter and grower diets, respectively (Table 1). Body weight and feed intake were recorded at weekly intervals, starting from one day of age. Growth performance was evaluated in terms of average weight gain (AWG), average feed intake (AFI), and feed conversion ratio (FCR) at the end of each feeding period.
Sample collection and measurements
On d 28 of the experiment, 2 broilers from each replicate cage (8 broilers per treatment) were randomly selected within the cage following a 6-h fast, weighed individually, and were then killed.
Chemical analysis
Zinc concentrations in the Zn sources, diets, tap water, and tissues were determined by flame atomic absorption spectrophotometry (Model Avanta S, Atomic Absorption, GBC, Sydney, NSW, Australia) after wet digestion, as described by Sandoval et al. (1998). Briefly, samples of tissues and diets were dried at 105 °C for 12 h. Tissues were pre-digested in HNO3 until charring was complete; then all samples were dry-ashed at 550 °C for 12 h, solubilized in HCl, and filtered through Whatman No. 42 paper.
The right tibia was removed and cleaned of soft tissue before the bone was dried at 105 °C for a minimum of 24 h. After a 72-h extraction in diethyl ether, the bone was dried for 12 h at 105 °C and ashed at 550 °C overnight in a muffle furnace (Yuan et al., 2011). Charred bones were digested as described above. The outer dimensions of the bone were measured with a digital caliper (Mitutoyo, Mizonokuchi Co., Utsunomiya, Japan); these parameters were measured on un-dried tibia samples. For the determination of ash, the bones were placed in an oven for 24 h at 105 °C to dry completely and weighed. The bones were then ashed (650 °C for 14 h) and the ash weight was recorded. The material obtained from the 2 duplicates was pooled.
For the mechanical and geometric characterization of bone, the following measurements were conducted. The Seedor index is the value obtained when the bone weight is divided by its length, as proposed by Seedor et al. (1991); it is used as a bone density indicator: the higher the value, the denser the bone. Bone breaking strength was determined with a materials testing machine (Model H50KS; Hounsfield Co., London, England). The robusticity index was determined using the following formula (Reisenfeld, 1972):

Robusticity index = bone length / cube root of bone weight
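For concreteness, here is a small sketch of these two indices with hypothetical tibia measurements (not data from this experiment; the units are assumptions):

```python
# Hypothetical example of the Seedor and robusticity indices defined above;
# the tibia weight/length values are invented for illustration.
def seedor_index(bone_weight_mg: float, bone_length_mm: float) -> float:
    """Seedor index = bone weight / bone length (higher = denser bone)."""
    return bone_weight_mg / bone_length_mm

def robusticity_index(bone_length_mm: float, bone_weight_g: float) -> float:
    """Robusticity index = bone length / cube root of bone weight."""
    return bone_length_mm / bone_weight_g ** (1.0 / 3.0)

print(seedor_index(3500.0, 70.0))    # -> 50.0 (mg/mm)
print(robusticity_index(70.0, 3.5))  # -> ~46.1
```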
To collect the excreta, 2 birds from each replicate were placed in a cage, and a polythene sheet was attached under the cages. Feed and feathers were carefully removed. The excreta were homogeneously mixed replicate-wise; representative samples were collected in a moisture cup, oven-dried at 105 °C for 24 h, and finely ground for mineral analysis as described previously.
Immune response
On the 14th and 21st days of the experimental period, 2 birds were selected from each group and injected intramuscularly with 2 mL of a 0.5% sheep red blood cell (SRBC) suspension. On the 21st and 28th days of the experiment, blood samples were collected and the serum was used to measure humoral immunity. Antibody titres against SRBC were measured by haemagglutination according to Peterson et al. (1999).
Statistical analysis
To test the effect of supplemental Zn, data were analysed using a single-degree-of-freedom contrast comparing all supplemental Zn treatments with the control treatment. Data were further analyzed by 2-way ANOVA (excluding the control treatment) using the General Linear Model (GLM) procedure of SAS (SAS Institute, 2003). The replicate was considered the experimental unit for all data. The model included the main effects of supplemental Zn level, Zn source, and their interaction. The effect of one of the main factors (Zn level) was assessed using orthogonal contrasts for the linear response of the dependent variables to the independent variable. Duncan's multiple range test was used to assess significant differences among the experimental treatments at the probability level of p ≤ 0.05.
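As an illustration of this two-way layout, here is a hypothetical sketch in Python rather than the SAS GLM code the authors used; the column names and values are invented:

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Sketch of the 2-way ANOVA (Zn level x Zn source, excluding the controls)
# on a made-up per-cage data frame; a real analysis would use all replicates.
df = pd.DataFrame({
    "awg":       [610, 640, 655, 620, 650, 660, 615, 645, 658, 625, 652, 661],
    "zn_level":  [20, 50, 80] * 4,
    "zn_source": ["ZnSO4"] * 3 + ["Zn-Met"] * 3 + ["Zn-yeast"] * 3 + ["ZnSO4"] * 3,
})

model = ols("awg ~ C(zn_level) * C(zn_source)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects and the interaction term
```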
Growth performance
The broiler chickens fed the diet without Zn supplementation had lower (p < 0.01) AWG and AFI than those fed zinc-supplemented diets (Table 2). Similarly, over the entire 4-week period, AWG increased with the dietary Zn content (p < 0.05) up to a dietary concentration of 50 mg Zn/kg; no additional response was observed at higher Zn concentrations. The inadequacy of Zn in the control diet depressed feed consumption (p < 0.05), so that the lowest feed intakes were observed in the un-supplemented groups and the highest in the 50 and 80 mg/kg Zn groups. Moreover, the main effect of Zn level was not significant for feed conversion ratio (p > 0.05). On the other hand, the FCR was not affected by the dietary Zn sources but decreased with increasing Zn levels in both the organic and inorganic groups.
Tibia Zn concentration
Zinc concentration in the tibia was low in the broilers fed diets with no Zn supplementation (Table 3) but increased in proportion to the dose of Zn supplementation of the basal diet, reaching a plateau at 50 mg/kg (p < 0.01). Tibia Zn concentrations were also strongly related to the origin of the Zn source, as organically bound Zn significantly increased the Zn content compared with inorganic supplementation (p < 0.05). There was little difference in tibia Zn between broiler chickens fed Zn-enriched yeast and those fed the commercial Zn methionine source. The interaction between Zn level and source was not significant for tibia Zn status.
Zinc content of meat and excretion
The Zn deposited in breast and thigh muscles reflected the level of dietary Zn (Table 3): the higher the inclusion level, the higher the Zn content of the muscles (p < 0.01). Zn accumulation was more substantial in birds fed diets supplemented with organically bound Zn than in those fed the inorganic Zn counterpart (p < 0.01).
Although the source of zinc affected the Zn content of thigh meat, it did not influence the Zn content of the breast. Zinc excretion increased nearly threefold (p < 0.01), from 175 µg/g (DM) in the control group to 515 µg/g (DM) at the 80 mg/kg zinc level. There was no effect of zinc source on zinc excretion.
Mechanical and geometric characteristics of bone
The mechanical and geometric characteristics of the bones are presented in Table 4. The dry weight, Seedor index, and breaking strength of the tibia were affected by zinc content (p < 0.05); however, there was no further increase in these parameters beyond the 50 mg/kg zinc level. The zinc source had no significant effect on any of the measured mechanical and geometric bone characteristics, and there was no interaction between zinc level and source for any parameter measured.
Immunity
The dietary treatment had no effect on the weight of the bursa of Fabricius, while spleen weight was increased compared with the control group (p < 0.05). There was neither an effect of zinc source nor an interaction between zinc source and zinc level (p > 0.05) on the weights of the bursa of Fabricius and spleen (Table 5). Although there was no significant difference in the primary antibody titer among the birds, the chicks that had received zinc supplementation had a higher secondary antibody titer compared with the control (p < 0.05).
Growth performance
Zinc is an essential trace element for the normal function of numerous important structural proteins, enzymatic processes, lipid metabolism, and hormone production, and is ultimately necessary for the healthy growth and development of chickens. Inadequate Zn in the bird's diet reduces feed consumption and consequently body weight gain, but this can be reversed by Zn supplementation (Bao et al., 2007). Hudson et al. (2004) reported that different sources of Zn significantly affected the body weight gain of broilers. The studies of Wedekind et al. (1992) indicated that the improvement in weight gain in broilers fed Zn-supplemented diets may have resulted, in part, from increased consumption of the basal diet, because anorexia is the most common symptom of Zn deficiency, leading to the depressed growth of broilers. The mechanisms involved in the effects of Zn deficiency on growth are unknown, but a reduction in food consumption may be a protective reaction to allow survival (MacDonald, 2000). An explanation for the increased body weight gain may be the positive effects of Zn methionine on the digestion and absorption of nutrients in the gastrointestinal tract and/or the higher bioavailability of Zn in the form of Zn methionine. On the other hand, given the participation of Zn in protein and carbohydrate metabolism, the reduction in feed intake associated with a lack of Zn (which could reduce feed digestibility) consequently impairs performance parameters in broilers (Bao et al., 2007). These results are in agreement with Huang et al. (2007), who found that Zn supplementation increased feed intake and weight gain.
Tibia Zn concentration
Zinc is the most abundant trace element in bone, being present at concentrations of up to 300 mg/kg (Grynpas et al., 1987), and has been considered an important factor in bone metabolism. Not surprisingly, an adequate Zn concentration is required for the growth, development, and mineralization of bone (Bao et al., 2003). Huang et al. (2009) demonstrated that the Zn content of the tibia was significantly influenced by dietary Zn levels. Previous studies reported that feeding birds diets with a Zn concentration greater than 85 mg/kg did not influence tibia Zn deposition (Wedekind et al., 1992). In this study, however, tibia Zn concentrations increased with Zn supplementation only up to 50 mg/kg, with no further change at 80 mg/kg. Huang et al. (2007) indicated that bone status is commonly used as an indicator of mineral adequacy in poultry diets.
Also in agreement with the present study, other studies indicated that different sources of Zn supplementation affected the levels of Zn in the tibia (Ao et al., 2011; Idowu et al., 2011). Moreover, many research studies (Cao et al., 2000; Huang et al., 2009; Ao et al., 2011) indicated that the dietary supplementation of organic Zn can result in greater accumulation of Zn in the tibia than the same supplemental concentration of inorganic Zn.
According to Loveridge (1992), bone is a complex, heterogeneous tissue that supports the musculature; its growth and development are therefore intimately connected with overall body growth, making tibia Zn concentration a good predictor of whole-body growth. This study indicates that 50 mg/kg Zn might meet the requirement for normal bone mineralization. In the present study, the birds that attained optimal body weight also had the highest tibia Zn.
Zinc content of meat and excretion
Many researchers have indicated that dietary Zn level influences the Zn content of nearly all types of tissues and organs (Park et al., 2004). The concept of "functional foods", i.e., enriching poultry meat with different nutrients, has attracted many researchers in recent decades (Peric et al., 2011). Our results indicate that although Zn enrichment did take place, its magnitude was modest compared with the control: the total Zn of thigh and breast meat in birds with no Zn addition was 26.4 µg/g DM, while the highest level of Zn inclusion (80 mg/kg) resulted in a total Zn content of 30 µg/g DM.
The magnitude of Zn excretion, in contrast, increased nearly threefold, from 175 µg/g DM on the diet with no added zinc to 515 µg/g DM with 80 mg/kg added Zn. This imbalance between the ability to enrich a product and the potential to cause environmental pollution is important to consider. It indicates that there is a limit to enriching meat with zinc (as with most other minerals), beyond which the excess is eliminated through excretion (Keen et al., 2003).
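As a quick arithmetic check on the magnitudes discussed above, the short script below recomputes the fold changes from the values quoted in the text (only the µg/g figures come from the study; the script itself is illustrative).

```python
# Zn excretion, µg/g DM (values quoted in the text)
excretion_control = 175.0   # no added Zn
excretion_high    = 515.0   # 80 mg/kg added Zn
print(f"excretion fold change: {excretion_high / excretion_control:.2f}")  # ~2.94

# Meat enrichment over the same range, µg/g DM (thigh + breast totals)
meat_control, meat_high = 26.4, 30.0
print(f"meat fold change: {meat_high / meat_control:.2f}")  # ~1.14, a ~14% gain
```

The contrast between a roughly 14% gain in meat Zn and a nearly threefold rise in excretion is exactly the enrichment-versus-pollution imbalance referred to above.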
However, Bao et al. (2007) showed that organic minerals decrease mineral excretion. Salim et al. (2010) indicated that chelated minerals (such as organic zinc) can resist interference from dietary antinutritional factors in the digestive tract and reach the intestinal brush border directly, where they are hydrolyzed and absorbed as ions into the blood, resulting in greater availability.
Mechanical and geometric character of bone
This study suggests that 50 mg Zn/kg (from yeast Zn) might meet the requirement for normal bone mineralization. Although Zn supplementation at appropriate levels is essential to optimize bone breaking strength, it has been reported that higher dietary Zn levels, particularly above 80 ppm, appear to interfere with the absorption and utilization of Ca and P and can decrease bone mineralization (Underwood & Suttle, 1981), which was also observed in our study.
Many previous studies have shown the negative impact of Zn deficiency on bone growth and the disorders associated with reduced activity of the growth plate (Brown et al., 1978). Scrimgeour et al. (2007) indicated that when Zn is not supplied in sufficient amounts, the proliferation, differentiation, and survival of bone cells are compromised. It appears that Zn, by increasing the number and activity of osteoblasts, promotes the deposition of calcium in the diaphysis and increases mineralization in the tibia. Masayoshi and Hidetoshi (1989) demonstrated that Zn enhanced the effects of vitamin D on bone metabolism by stimulating DNA synthesis in bone cells. Zn is also involved in increasing bone mineralization and strength by stimulating bone metabolism and bone protein synthesis through increased activity of enzymes such as alkaline phosphatase (Yamaguchi et al., 1988). It has further been suggested that Zn is involved in the production of insulin-like growth factor I (IGF-I), which increases collagen, DNA, and bone matrix synthesis (Hock et al., 1988). Therefore, many effects of Zn on bone metabolism may be related to nucleic acid and protein metabolism.
Immunity
Since the spleen is directly involved in antibody production, its weight is expected to be related to antibody output (Steiniger and Barth, 2000). While there is no concrete evidence on the effect of zinc source on the weights of the bursa of Fabricius and the spleen, there is little dispute regarding the effect of zinc level on these organs. Yu et al. (2005) showed that a diet lacking zinc results in a lower spleen weight. The increase in the weight of the spleen (and not the bursa) in this study was similar to the result of Bartlett and Smith (2003), who showed a slight increase in the weight of lymphoid organs. In their experiments on broilers, they also reported that the weights of the thymus, spleen, and bursa of Fabricius increased linearly with increasing dietary Zn (from 35 mg/kg to 68 mg/kg). These findings could be due to the role of Zn in the growth and function of lymphocytes.
The results of this study indicated that Zn supplementation improves immune responses, as compared to the control. The immune system is dependent on the functions of cellular metabolism. Zn is ubiquitous in cellular metabolism and functions both structurally and catalytically in metalloenzymes (Bartlett & Smith, 2003).
In this study, the birds with higher spleen weights had higher secondary SRBC titers. The maximum secondary SRBC immune response was seen in birds receiving 50 mg/kg zinc (p<0.05), with no difference from the 80 mg/kg level. On the other hand, according to Sunder et al. (2008), humoral and cell-mediated immune responses were significantly higher in broilers supplemented with 80 mg/kg or more of Zn than in those supplemented with less than 80 mg/kg. Hudson et al. (2004) observed a higher cellular immune response to PHA and higher antibody titers against Newcastle disease in broiler breeders fed diets supplemented with organic sources of Zn, as compared to inorganic sources. Zinc is essential for thymulin, a thymic hormone that regulates T lymphocyte maturation. Thus, birds fed diets supplemented with a more available Zn source might have more thymulin activity and therefore mount stronger immune responses through increased maturation of T lymphocytes and activation of B lymphocytes by T-helper cells.
CONCLUSION
This experiment was conducted to examine the effect of different levels of zinc from different sources on broiler chicken performance. The levels were chosen to reflect the earlier proposed zinc requirement (NRC), the levels recommended for present-day faster-growing birds, and a level higher than commonly practiced. Because many earlier experiments to determine the zinc requirement were done with inorganic zinc, it was presumed that organic zinc, being more bioavailable, would enhance broiler performance at lower levels than inorganic Zn. The main objective of our study, apart from general performance criteria, was to evaluate the effect of zinc on tissue and excreta zinc levels and on tibia morphology. According to our results, the optimal dietary Zn requirement for 0-4-wk-old broilers was 50 mg/kg of diet. Zinc levels above 50 mg/kg not only failed to improve performance further but also increased zinc excretion and, ultimately, environmental pollution. Unlike many other studies, our results did not indicate that zinc source (sulfate vs. organic) had much influence on broiler performance within the range tested in this experiment.
DISCLOSURE STATEMENT
No potential conflict of interest was reported by the authors. | 2020-10-29T09:06:08.404Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "6e3fc45e1fe175d7c9d4900945b2a64f4431c5de",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rbca/a/s6bTPYzSHpvjm9pFgGzwFFM/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "fbf772309b0387f9cb977605c9edbfcabe809232",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
1782074 | pes2o/s2orc | v3-fos-license | Anaplasma marginale Infection with Persistent High-Load Bacteremia Induces a Dysfunctional Memory CD4+ T Lymphocyte Response but Sustained High IgG Titers
ABSTRACT Control of blood-borne infections is dependent on antigen-specific effector and memory T cells and high-affinity IgG responses. In chronic infections characterized by a high antigen load, it has been shown that antigen-specific T and B cells are vulnerable to downregulation and apoptosis. Anaplasma marginale is a persistent infection of cattle characterized by acute and chronic high-load bacteremia. We previously showed that CD4+ T cells primed by immunization with an A. marginale outer membrane protein were rapidly deleted following infection. Furthermore, peripheral blood T cell responses to bacteria were not observed after acute infection was controlled, suggesting dysfunctional T cell priming to other A. marginale antigens. The current study more closely investigated the kinetics of A. marginale-specific CD4+ T cell responses primed during infection. Frequent sampling of peripheral blood and spleens revealed that antigen-specific CD4+ T cell responses were first detected at 5 to 7 weeks, but the responses were sporadic and transient thereafter. A similar pattern was observed in animals sampled weekly for nearly 1 year. Paradoxically, by 2 weeks of infection, cattle had developed high titers of A. marginale-specific IgG, which remained high throughout persistent infection. This dysfunctional CD4+ T cell response to infection is consistent with continual downregulation or deletion of newly primed effector T cells, similar to what was observed for immunization-induced T cells following A. marginale infection. The failure to establish a strong memory T cell response during A. marginale infection likely contributes to bacterial persistence.
Many pathogens have evolved strategies to evade innate and antigen-specific host responses, which may result in persistent infection (14,23,40,53). Chronic infections caused by bloodborne pathogens that achieve and maintain high antigen loads can result in progressive dysfunction and eventual apoptosis of antigen-specific effector and memory T or B cells (17,28,38,50,54,55). The immune response to infections that persist with a high antigen load is unlike that following immunization or infection with a pathogen that is rapidly cleared (28). In the latter situation, the lymphocyte response contracts gradually, yielding a population of memory cells that are rapidly recalled upon subsequent encounter with antigen (24,31). Anaplasma marginale is a persistent bacterial pathogen of cattle that is characterized by high levels of bacteremia, attaining levels of 10 8 to 10 9 organisms/ml of blood during acute infection. Control of acute infection generally begins at 4 to 5 weeks, but infection is not eliminated, and recurring peaks of 10 4 to 10 7 organisms/ml of blood occur throughout lifelong persistent infection (15,19,42). Thus, even during persistent infection, a continuous high antigen load is a prominent feature of this infection.
The mechanisms of immune control of A. marginale have not been completely elucidated, but in cattle protectively immu-nized against infection using purified outer membranes (OMs), protection was associated with OM protein (OMP)-specific CD4 ϩ T cell responses, including gamma interferon (IFN-␥) production and proliferation and IgG2 production (10,11). Nonimmunized infected cattle consistently produce high levels of OMP-specific IgG1 and IgG2 (43,44,46,57), which are thought to be CD4 ϩ T cell dependent (42). A. marginale continually undergoes antigenic variation in major surface protein 2 (MSP2) and MSP3 during infection (5,37,40), and variantspecific IgG2 is produced in response to each emerging variant (18), suggesting that IgG2 responses control the newly emerging variants but fail to eliminate the pathogen because new variants continually escape the immune response. Thus, the current paradigm of the role of CD4 ϩ T cells in immunity to this infection is that antigen-specific CD4 ϩ T lymphocyte priming and activation result in T cell expansion and IFN-␥ secretion. IFN-␥, which in cattle induces isotype switching to IgG2 (16), is proposed to promote opsonization of bacteria or infected erythrocytes and to activate macrophages for enhanced phagocytosis and cytokine and nitric oxide production, which help eliminate intracellular bacteria (42).
Two previous studies have shown that cattle immunized with either A. marginale MSP2 or A. marginale MSP1a developed robust antigen-specific memory CD4 ϩ T lymphocyte proliferation and IFN-␥ secretion and then experienced a rapid loss of antigen-specific CD4 ϩ T cell responses following infection (1,22). The loss of the MSP1a-specific response was associated with physical loss of antigen-specific CD4 ϩ T cells from the peripheral blood, monitored with major histocompatibility complex (MHC) class II tetramers (22). In addition, the response to MSP2 as well as to A. marginale homogenate remained undetectable for up to 1 year during persistent infection, but sampling was infrequent (1). This suggested that high-level bacteremia not only downregulated preexisting immunization-induced CD4 ϩ T cell responses to a specific OMP but also impaired responses to these and additional bacterial antigens that should prime T cells during infection. We hypothesize that a continual high antigen load during acute and persistent anaplasmosis prevents the establishment of longlived functional antigen-specific memory T cells.
In the present work, we investigated the kinetics of antigenspecific CD4 ϩ T cell responses, primed by A. marginale infection, over the course of both acute and persistent infection in the peripheral blood and spleens of naïve cattle infected with A. marginale South Idaho or Florida strain organisms. Antigenspecific T cell proliferation and IFN-␥-secreting cells were monitored throughout infection. Our results are consistent with a functional dysregulation of A. marginale-specific CD4 ϩ T lymphocytes primed during infection and failure to establish long-term memory. Paradoxically, high titers of A. marginalespecific IgG were maintained throughout infection, in spite of the impaired CD4 ϩ T cell response. The failure to establish long-term memory T cell responses may be an important mechanism by which this bacterial pathogen modulates the host immune response.
MATERIALS AND METHODS
Cattle. Four 6-month-old Holstein steers (animals C1028b1, C1030b1, 33875, and 33901) were serologically negative for A. marginale and received killed Clostridium sp. vaccine (Vision 7; Intervet) prior to onset of the study. Additionally, animals 33875 and 33901 underwent surgery to marsupialize the body and tail of the spleen as previously described (49). All animals expressed one MHC class II DRB3*1101 allele and one different DRB3 allele, as determined by restriction fragment length polymorphism of exon II and sequencing (39,45,48). Spleen aspirates were obtained as described previously (21). Peripheral blood mononuclear cells (PBMCs) and splenocytes were washed and purified as described previously (2,21,22). Lymphocytes were immediately used in assays or cryopreserved in fetal bovine serum containing 10% dimethyl sulfoxide and stored in liquid nitrogen. All animal studies were conducted using an approved Institutional Animal Care and Use Committee (Washington State University, Pullman, WA) protocol.
Infection with A. marginale. Animals C1028b1 and C1030b1 were infected with A. marginale via transmission feeding of adult male Dermacentor andersoni ticks. Ticks were applied beneath a dermal patch and allowed to transmission feed for 7 days. Animals 33875 and 33901 were infected by intravenous inoculation of 1.0 × 10^6 fresh erythrocytes infected with the A. marginale Florida strain obtained from an acutely infected donor calf and diluted in Hanks' balanced salt solution (HBSS). All cattle were monitored daily by determination of rectal temperature, packed cell volumes (PCVs), and percent infected erythrocytes, as determined by light microscopy of Giemsa-stained blood films.
A. marginale-specific antibody quantitation. Sera were collected approximately weekly during infection, and antibody specific to A. marginale MSP5 was measured. Sera were stored at −20°C until the assays were performed simultaneously on all samples. MSP5-specific antibody was detected using a competitive enzyme-linked immunosorbent assay (C-ELISA) commercial kit (VMRD) as previously described (29) and following the manufacturer's instructions. Immunoblotting was used to determine the titers of IgG antibody specific for A. marginale using sera diluted 1:100, 1:1,000, 1:3,000, 1:30,000, 1:100,000, and 1:1,000,000 in blocking buffer (I-block; Applied Biosystems) containing 5% bovine serum albumin (BSA) and 0.5% Tween 20 (New England Biolabs) essentially as described previously (10) but with the following modifications. A. marginale homogenate (200 µg) was boiled in sample buffer and electrophoresed on 10% Tris-HCl polyacrylamide Criterion gels (Bio-Rad) under denaturing conditions for 1 h at 100 V, and protein was transferred to 0.45-µm-pore-size nitrocellulose membranes at 100 V for 30 min. Membranes were cut into individual strips, dried overnight, and washed repeatedly with blocking buffer. Each strip was incubated with an individual dilution of serum for 1.5 h with rocking at room temperature. Membranes were washed and blocked with blocking buffer and incubated with mouse anti-bovine IgG1 and mouse anti-bovine IgG2 (both from Serotec) at a 1:100 dilution in blocking buffer for 1.5 h. Tertiary antibody was horseradish peroxidase-conjugated goat anti-mouse IgG (Kirkegaard and Perry Laboratories) diluted 1:10,000 in blocking buffer. Blots were washed and developed with enhanced chemiluminescence Western blotting substrate (Thermo Scientific Pierce). Preimmune sera and uninfected erythrocyte (URBC) membranes were used as negative controls, and serum from an A. marginale South Idaho strain-infected calf (C894b1) with known A. marginale-specific IgG1 and IgG2 titers was used as a positive control to standardize the immunoblots.
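The two numerical readouts used in this section can be made explicit with a small sketch. It assumes the conventional competitive-ELISA inhibition formula and scores the immunoblot titer as the reciprocal of the highest serum dilution still giving a detectable band; the function names and input format are hypothetical, so the kit insert should be consulted for the exact calculation.

```python
def percent_inhibition(sample_od: float, neg_control_od: float) -> float:
    """C-ELISA readout: how strongly test serum blocks the labeled MAb.

    Assumes the usual competitive-ELISA formula (not quoted in the text).
    """
    return 100.0 * (1.0 - sample_od / neg_control_od)

def endpoint_titer(dilution_factors, band_detected):
    """Endpoint IgG titer: reciprocal of the highest dilution with a band.

    `band_detected` holds booleans scored from the blot strips
    (hypothetical input format).
    """
    positives = [d for d, seen in zip(dilution_factors, band_detected) if seen]
    return max(positives) if positives else None  # None ~ titer below 100

print(percent_inhibition(0.25, 1.00))  # 75.0 (% inhibition)
print(endpoint_titer([100, 1_000, 3_000, 30_000, 100_000, 1_000_000],
                     [True, True, True, True, False, False]))  # 30000
```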
Quantitative PCR for detection and quantitation of bacteria. To quantify A. marginale in blood during infection, a previously described quantitative TaqMan assay (Applied Biosystems) based on the copy number of the A. marginale msp5 gene was performed (20,47). Peripheral blood collected weekly during infection was washed three times in phosphate-buffered saline (PBS; pH 7.2) with centrifugation to remove leukocytes. Washed erythrocytes were diluted 1:1 in PBS and frozen at −20°C until samples were processed. DNA was extracted from 300 µl of washed erythrocytes using a genPURE kit (Qiagen). Real-time PCRs were performed in triplicate using 100 ng of template DNA per reaction mixture in a total volume of 50 µl including 25 µl TaqMan Universal buffer, 12.5 µg msp5 forward primer, 12.5 µg msp5 reverse primer, 1.25 µg of msp5 probe, and 17.4 µl of nuclease-free water. Once the bacterial copy number was determined for the 100 ng of template DNA, the number of organisms per ml of whole blood was calculated.
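The final scaling step is not spelled out in the text; the sketch below shows one plausible way to do it, assuming that msp5 is a single-copy gene (so one copy corresponds to one organism) and that the extraction yield and the blood volume represented by the extract are known. All parameter names and default values are hypothetical.

```python
def organisms_per_ml_blood(copies_per_reaction: float,
                           template_ng: float = 100.0,
                           total_dna_ng: float = 2_000.0,
                           blood_equivalent_ml: float = 0.15) -> float:
    """Scale an msp5 qPCR copy number up to organisms per ml of whole blood.

    Assumptions (not stated in the text): msp5 is single copy, so
    copies == organisms; `total_dna_ng` is the whole extraction yield;
    `blood_equivalent_ml` is the blood volume the extract represents
    (300 µl of a 1:1 erythrocyte/PBS mix is roughly 150 µl of packed
    cells here; adjust for hematocrit as appropriate).
    """
    copies_in_extraction = copies_per_reaction * (total_dna_ng / template_ng)
    return copies_in_extraction / blood_equivalent_ml

print(f"{organisms_per_ml_blood(5.0e4):.3g} organisms/ml")  # 6.67e+06 with defaults
```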
Lymphocyte proliferation assays. Lymphocytes from fresh or cryopreserved PBMCs and spleen biopsy specimens were assayed in triplicate using 2.5 × 10^5 viable cells/well in complete RPMI 1640 medium (Gibco) as previously described (1,2,10,22). Cells were stimulated in 96-well round-bottomed plates in a volume of 100 µl/well with 15.0 µg/ml of antigen using A. marginale South Idaho or Florida strain homogenate (matching the infection strain) and negative-control URBC or Babesia bovis membranes. T cell growth factor diluted 1:10 and Clostridium sp. vaccine antigen diluted 1:100 in complete RPMI 1640 medium were positive controls (22). Cells were cultured for 6 days at 37°C in 5% CO2, labeled with 0.25 µCi [3H]thymidine for 8 h, harvested, and counted in a beta counter. Short-term cell lines were established by culturing 4.0 × 10^6 cryopreserved PBMCs in 1.5 ml complete RPMI 1640 medium with 5 µg/ml of A. marginale homogenate in a 24-well plate for 6 days. Proliferation assays with these cells were performed as described above using 2.0 × 10^4 lymphocytes with 2.5 × 10^5 fresh autologous irradiated PBMCs/well, which were cultured for 4 days before they were harvested and counted. Results are presented as the mean cpm in response to A. marginale minus the mean cpm in response to the negative-control antigen.
In some experiments, cryopreserved PBMCs or cell lines were depleted of CD4+, CD8+, and/or γδ T cells and cultured in 6- or 4-day proliferation assays. Depletion was accomplished by incubating cells with monoclonal antibodies (MAbs), obtained from the Washington State Monoclonal Antibody Center, specific for CD4+ (ILA-11), CD8+ (7C2B), and γδ (GB21A) T cells at 15 µg MAb per 10^7 cells, rotating at 4°C for 30 min. Cells were repeatedly washed with complete RPMI 1640 medium and MACS bead buffer (Miltenyi) with 5% BSA. MAb-labeled cells were incubated on ice for 20 min with 80 µl of goat anti-mouse IgG magnetic microbeads (Miltenyi) per 10^7 cells. MAb- and bead-labeled cells were depleted by filtration through an LS magnetic column (Miltenyi) under a strong magnetic field and washed with 3 ml of MACS buffer with BSA. Depletion and enrichment of cell subsets were verified by flow cytometry, and cells were assayed for proliferation to antigen as described above for cell lines and PBMCs. Statistical significance was determined for each proliferation assay by comparing the mean counts per minute of cells cultured with A. marginale antigen versus the mean counts per minute of cells cultured with negative-control antigen at the same concentration, using a Student's one-tailed t test with a Welch correction, where significant results had P values of <0.05. The results of proliferation assays with significant A. marginale-specific responses were then compared over the course of infection in reference to preinfection values using a one-way analysis of variance with a Bonferroni correction for multiple comparisons, and results were considered significant where P values were <0.05.
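The significance call described here is straightforward to reproduce; a minimal sketch follows, using SciPy's unequal-variance (Welch) t test with a one-sided alternative. The cpm triplicates are invented for illustration and are not data from the study.

```python
import numpy as np
from scipy import stats

# Triplicate cpm values (illustrative numbers only)
antigen_cpm = np.array([5200.0, 4800.0, 5500.0])  # A. marginale homogenate
control_cpm = np.array([900.0, 1100.0, 1000.0])   # URBC membrane control

# One-tailed Welch t test (unequal variances), as described in the text
t, p = stats.ttest_ind(antigen_cpm, control_cpm,
                       equal_var=False, alternative="greater")
delta_cpm = antigen_cpm.mean() - control_cpm.mean()  # the reported readout
print(f"delta cpm = {delta_cpm:.0f}, P = {p:.4f}, significant: {p < 0.05}")
```

The same pattern, with SFC counts swapped in for cpm, would serve for the ELISPOT comparison described in the next subsection.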
Bovine IFN-γ ELISPOT assays. Bovine IFN-γ enzyme-linked immunospot (ELISPOT) assays were performed in triplicate using 1.0 × 10^6 cryopreserved PBMCs and splenocytes/well cultured with 15.0 µg/ml A. marginale South Idaho or Florida strain homogenate or medium as described previously (1,2). A mixture of 1.0 µg/ml phytohemagglutinin (Sigma-Aldrich) plus 0.01 ng/ml recombinant human interleukin-12 (IL-12) (Genetics Institute) was used as the positive control. Antigen-specific responses were determined by comparing the mean number of spot-forming cells (SFCs) cultured with A. marginale homogenate to the mean number of SFCs cultured with medium, and statistical significance was determined using a Student's one-tailed t test, where a response with a P value of <0.05 was considered significant.
RESULTS

Infection with A. marginale South Idaho strain.
To observe the dynamics of infection-primed antigen-specific CD4+ T cell and antibody responses during acute and persistent infection, two calves were infected by transmission feeding of D. andersoni ticks infected with the South Idaho strain of A. marginale. Animals C1028b1 and C1030b1 developed fever, malaise, and clinical anemia that were concurrent with peak blood levels of bacteria of 4.95 × 10^8 and 4.51 × 10^7 organisms/ml of blood, respectively, on day 31 postinfection (Fig. 1A and B). Both animals resolved the anemia by day 55 postinfection. Maximal decreases in PCVs were 34.5% (C1028b1) and 43.8% (C1030b1). Persistent-phase anaplasmosis was characterized by cyclical waves of bacteremia that peaked as high as 9.6 × 10^6 organisms/ml of blood until day 336 postinfection, the termination of the study (Fig. 1A and B).
Humoral responses to A. marginale measured by the MSP5 C-ELISA showed that animals developed significant levels of A. marginale-specific antibody (>30% inhibition) by day 28 postinfection, and the levels remained elevated (>50% inhibition) throughout persistent infection (Fig. 1A and B). Serum IgG1 and IgG2 titers (Table 1), determined by Western blotting using A. marginale South Idaho strain antigen, showed that A. marginale-specific IgG1 and IgG2 were detectable by day 9 postinfection with titers of 1,000 and 30,000, respectively, for animal C1028b1 and 30,000 and 10,000, respectively, for animal C1030b1. Titers remained high over the course of infection, reaching 100,000 late in infection, consistent with the C-ELISA data.
T cell proliferation assays were performed with fresh PBMCs stimulated with A. marginale South Idaho strain homogenate (Fig. 1C and D). The animals failed to develop significant antigen-specific proliferative responses until days 106 (C1028b1) and 112 (C1030b1) postinfection, and responses were transient, no longer significant 1 to 2 weeks later. Proliferation was not consistently associated with the level of bacteremia or its fluctuation: in animal C1028b1 the response seemed to parallel bacteremia, whereas in animal C1030b1 peaks of proliferation were observed when levels of bacteremia dropped. However, in both animals the initial proliferative responses occurred after bacteremia dropped below 10^6 organisms/ml of blood. Between days 106 and 336 postinfection, significant antigen-specific lymphocyte proliferation was observed eight and seven separate times in animals C1028b1 and C1030b1, respectively, and the responses were similarly transient. These results were repeated using cryopreserved cells (data not shown). In contrast, T lymphocytes responded to the positive-control Clostridium sp. vaccine antigen, and these responses were statistically significant throughout the course of the infection, although they did fluctuate (Fig. 1E and F). These results indicate that the poor A. marginale-specific T cell responses were not due to generalized or nonspecific immune suppression.
To characterize the lymphocytes responding to A. marginale at time points where significant responses were observed, proliferation assays were repeated using short-term cell lines derived from cryopreserved PBMCs from responding time points. Cell lines were depleted of CD4+ cells, CD8+ cells, or γδ T cells using magnetic beads coated with lymphocyte subset-specific MAb (Fig. 1G and H). A. marginale-specific proliferation was maintained when cell lines were enriched for CD4+ T lymphocytes by depleting either CD8+ cells or γδ T cells, but responses were lost when cell lines were depleted of CD4+ T lymphocytes. In one experiment with C1030b1 cells, depleting γδ T cells also resulted in significantly diminished proliferation (Fig. 1H). These results demonstrate that A. marginale-specific lymphocyte proliferation was predominantly due to antigen-specific CD4+ T lymphocytes, although in some experiments γδ T cells may contribute to this response, as shown for certain γδ T cell clones (30).
To enumerate antigen-specific cells and to characterize IFN-γ production over the course of infection, IFN-γ ELISPOT assays were performed using cryopreserved PBMCs stimulated with A. marginale. PBMCs from time points that had antigen-specific CD4+ T lymphocyte proliferation did not show statistically significant numbers of A. marginale-specific IFN-γ-secreting cells compared to the background numbers of IFN-γ-producing cells (cultured without antigen); indeed, for the majority of time points sampled there were fewer SFCs in antigen-stimulated cultures. Significant numbers of antigen-specific SFCs were noted only once in each calf, on days not associated with significant proliferation (Table 2).
Infection of spleen-marsupialized cattle with A. marginale Florida strain. The lack of antigen-specific CD4+ T lymphocyte responses during acute infection is difficult to reconcile with early production of antigen-specific IgG and development of high and persistent IgG titers, which is typically dependent on CD4+ T lymphocyte help. This led us to determine whether antigen-specific CD4+ T lymphocytes were present in the spleen, where infected erythrocytes are presumably removed (36), and whether responses occurred more transiently than we may have detected by sampling animals weekly. Also, it was possible that exposure to ticks induced an early immune suppression to A. marginale (51). To address these possibilities and to determine whether similarly poor T cell priming occurred in response to a different A. marginale strain, naïve animals with spleens surgically marsupialized to permit frequent sampling by needle biopsy were infected intravenously with the non-tick-transmissible Florida strain, and the sampling frequency was increased. Florida strain-infected animals showed typical signs of acute infection, including fever, malaise, and bacteremia, peaking at 4.78 × 10^9 (calf 33875) and 6.58 × 10^9 (calf 33901) organisms/ml of blood on days 31 and 26 postinfection, respectively (Fig. 2A and B). Anemia was quite severe and was associated with decreases in PCVs of 61.8% (animal 33875) and 62.3% (animal 33901). Both cattle resolved the anemia by day 69 postinfection and remained persistently infected, with levels of bacteremia ranging from 10^4 to 10^8 organisms/ml of blood. PBMCs and splenocytes were collected every 2 to 3 days during acute infection and then approximately weekly over the remainder of the study to monitor lymphocyte responses.
FIG. 1. (A and B) A. marginale-specific antibody was measured weekly throughout infection with the MSP5 C-ELISA, and results are presented as percent inhibition, where values of >30% inhibition were considered significant. (C and D) A. marginale-specific lymphocyte proliferation was determined using fresh PBMCs. (E and F) Proliferation specific for an unrelated vaccine antigen, a Clostridium sp. vaccine antigen, was determined using cryopreserved PBMCs. Asterisks denote significant responses compared with those on URBC or B. bovis membranes, where P is <0.05. All graphs are superimposed over levels of bacteremia, presented as log10 number of organisms/ml of blood. (G and H) Cryopreserved lymphocytes from selected time points with significant A. marginale-specific proliferation were cultured for 1 week and depleted of CD4+, CD8+, or γδ T cells. Depleted cells were used in proliferation assays with A. marginale South Idaho strain antigen. Significant decreases in proliferation compared to that for the nondepleted cell line are denoted with asterisks, where P is <0.05.

Antigen-specific antibody and lymphoproliferative responses in A. marginale Florida-infected animals. Florida strain-infected animals produced MSP5-specific antibody by day 20 postinfection (Fig. 2A and B). Antibody levels increased sharply and by day 27 postinfection were maintained at 83.49 and 94.56% inhibition in the two animals, respectively, for the duration of the study. IgG1 and IgG2 titers determined by Western blotting using A. marginale Florida strain homogenate (Table 3) were 30,000 and 3,000 for the two animals, respectively, on the first day of sampling (day 13 postinfection). Titers fluctuated between 3,000 and 30,000 thereafter. These data and those in Table 1 show that both IgG1 and IgG2 were produced in response to infection.
Antigen-specific T lymphocyte responses in PBMCs were detected earlier in Florida strain-infected animals than in South Idaho strain-infected animals, on days 36 (animal 33875) and 48 (animal 33901) postinfection, concurrent with declining levels of bacteremia (Fig. 2C and D). The earlier detection of antigen-specific lymphocyte proliferation in the blood may be due to more frequent sampling. However, similar to South Idaho strain-infected animals, antigen-specific proliferation in Florida strain-infected animals was transient, lost within 1 to 2 weeks following initial detection, and recurred sporadically on days 69 and 82 (calf 33875) and on day 69 (calf 33901) postinfection. Cryopreserved cells from all responsive time points were retested in lymphocyte proliferation assays, and similar proliferation results were obtained at least twice (data not shown). Overall, our results with PBMCs indicate that antigen-specific T cell responses do occur in the peripheral blood during acute infection, although the responses are transient and detectable for no more than two consecutive weeks. Additionally, responses prior to the peak of infection, in the face of ascending and high levels of bacteremia, could not be detected, despite increasing the sampling frequency to every 2 to 3 days.
Significant antigen-specific proliferation in the spleen was initially detected in animal 33901 on day 48, which coincided with significant proliferation in the peripheral blood, and in animal 33875 on day 89 postinfection. These responses were transient, as in the peripheral blood, and were absent the following week, recurring sporadically on day 104 (animal 33875) and days 89 and 104 (animal 33901) postinfection (Fig. 2E and F). The responses were also repeatable using cryopreserved cells (data not shown). These data suggest that the poor PBMC responses to A. marginale were not explained by sequestration of responding cells in the spleen during acute or persistent infection.
Lymphocytes were cultured over the course of infection with the Clostridium sp. vaccine antigen as a positive control (1,22). PBMCs and splenocytes maintained significant proliferative responses to this antigen at all time points in both animals, although the levels varied, again demonstrating that the poor A. marginale-specific T cell responses were not due to generalized immune suppression (Fig. 2G and H).
Antigen-specific lymphocytes in PBMCs were further characterized by depleting CD4+ T cells or CD8+ and γδ T cells and repeating the proliferation assays (Fig. 2I). PBMCs maintained significant A. marginale-specific proliferation when the cells were enriched for CD4+ T lymphocytes following depletion of CD8+ and γδ T cells but showed insignificant proliferation following depletion of CD4+ T cells. This again indicates that antigen-specific lymphocyte proliferation was predominantly mediated by CD4+ T lymphocytes.
DISCUSSION
Cattle that survive infection with the intracellular bacterium A. marginale are incapable of completely eliminating the organism and remain persistently infected for life, although they are asymptomatic and otherwise immunocompetent (41). Our work has focused on understanding how A. marginale escapes and modulates the immune response to facilitate long-term persistence in this natural disease model. In previous studies we found that cattle immunized with A. marginale outer membranes were completely protected from infection, and protection correlated with OMP-specific CD4+ T cell responses, including IFN-γ and IgG2 production (10,11). Conversely, when cattle were immunized with native MSP2 or a recombinant partial MSP1a, which did not elicit protection against infection, there was a rapid loss of immunization-induced antigen-specific T cell responses, in one case documented as a loss of specific CD4+ T cells, concurrent with peak levels of bacteremia during acute infection (1,22). Furthermore, the animals failed to develop new A. marginale-specific peripheral blood T cell responses during infection. This suggested that infection with A. marginale may impair priming of additional CD4+ T cells in response to infection. The current study was designed to systematically monitor Anaplasma-specific T cell responses during acute and long-term persistent infection and to determine whether specific T cells were sequestered in the spleen. We also examined responses to the Clostridium vaccine antigen, which were significant throughout A. marginale infection in both spleen and peripheral blood. The results support the generation of an abnormal memory CD4+ T cell response during A. marginale infection and argue against sequestration of cells in the spleen or a generalized immune suppression.
Our data show that the predominant lymphocytes that proliferate in response to A. marginale during infection are CD4+ T cells, based on the findings of the depletion experiments. This is logical, as A. marginale infects erythrocytes, which do not express MHC molecules. Thus, exogenous antigen must be taken up and presented by professional antigen-presenting cells, which favors CD4+ T cell priming. However, in one experiment, depletion of CD8+ and γδ T cells (Fig. 2I) did not significantly reduce the proliferative response.

FIG. 2. (A and B) A. marginale-specific antibody was measured weekly with the C-ELISA. (C to F) A. marginale-specific lymphocyte proliferation was performed with fresh PBMCs (C and D) or splenocytes (E and F). Cells were collected every 3 to 7 days during acute infection and in general weekly thereafter. Asterisks denote significant responses compared with those on the URBC membrane, where P is <0.05. (G and H) Proliferation specific for an unrelated vaccine antigen, Clostridium sp. vaccine antigen, was determined using fresh PBMCs and splenocytes, and responses were significant at all time points (although they are not indicated by asterisks). All graphs are superimposed over levels of bacteremia, presented as log10 number of organisms/ml of blood, for 110 days following infection. (I) Cryopreserved lymphocytes from selected time points with significant A. marginale-specific proliferation were depleted of either CD4+ cells or CD8+ cells and γδ T cells, and proliferation assays were performed. Significant decreases in proliferation compared to that of the nondepleted cell line are denoted with asterisks, where P is <0.05.

TABLE 3 footnote: Serum was collected on specified days postinfection (p.i.) and frozen at −20°C until Western blots were performed simultaneously. Blots for calves 33875 and 33901 were prepared with 20 µg of Florida strain initial body lysate. All responses were interpreted as a response to a protein of ~35 kDa, corresponding to MSP2. Serum was diluted 1:100 to 1:1,000,000 with I-block. A value of <100 denotes a nondetectable antigen-specific response.

The mechanisms resulting in impaired antigen-specific T cell responses during A. marginale infection are not known. It does not appear that exposure to ticks can explain the dysfunctional response, as this was observed in cattle inoculated intravenously with infected blood as well. One possibility is that transient T cell responses result from periodic escape from a regulatory environment, such as that imposed by regulatory T cells, a mechanism that has been well described in many other persistent bacterial and viral infections (3). Another explanation is that impaired T cell responses result from continual deletion of newly primed antigen-specific CD4+ T cells in response to a high antigen load. Mechanistically, this may occur through activation-induced cell death from chronic antigen stimulation of the T cell receptor, resulting in deletion of antigen-specific T cell clones (28). It is likely that naïve T cells are continually primed to new antigenic variants of the immunodominant MSP2 and MSP3 that arise during the course of infection (6,37), as we have shown that both conserved and hypervariable regions of MSP2 are immunogenic (1,2,7-10,18). There may also be continuous priming and expansion of CD4+ T cells to subdominant OMP epitopes (33,34) over the course of persistent infection, as has been described in chronic viral and mycobacterial infections (27,52).
Dysfunction of antigen-specific CD4+ T cells has been described in other persistent blood-borne infections characterized by a chronic high antigen load. Examples include human immunodeficiency virus (HIV) (56) and mouse models of malaria and Brugia pahangi microfilaremia (25,55). During infection with HIV, CD4+ T cells undergo progressive dysfunction characterized by a loss of IL-2 production but a retained ability to produce IFN-γ. This results in short-lived effector cells incapable of proliferation and therefore undetectable in antigen-specific proliferation assays. Such cells have been successfully rescued in vitro by culturing with IL-2 (56). In the mouse malaria model, adoptively transferred antigen-specific CD4+ T cells were rapidly deleted from blood and tissues in an IFN-γ-dependent manner following infection (55). During infection with B. pahangi microfilariae, CD4+ T cells had defective proliferation but did produce IFN-γ; the T cells underwent apoptosis ex vivo in response to antigen in an IFN-γ-dependent manner (25). In other models of infection-mediated T cell apoptosis, T cell-produced IFN-γ was shown to be mechanistically involved in the dysfunctional response by driving intrinsic and extrinsic pathways of apoptosis (13,32). We rarely detected antigen-specific IFN-γ-secreting T cells following A. marginale infection (Table 2), and culturing PBMCs from nonresponding time points with antigen and several concentrations of IL-2, from suboptimal to optimal, failed to elicit an antigen-specific proliferative response greater than that to IL-2 alone (data not shown). These results suggest that the lack of proliferation at many time points sampled throughout infection is not simply due to a lack of IL-2 production or to inhibitory effects of IFN-γ.
Paradoxically, in spite of the inability to detect T cell responses early in infection, all cattle produced high titers of antigen-specific IgG1 and IgG2 as early as 9 days postinfection and maintained high titers for up to 1 year. Several possibilities may explain the isotype switching and high titers of IgG1 and IgG2 in the absence of detectable antigen-specific CD4+ T cell responses. One is that antigen-specific T cells were present in other lymphoid organs that were not sampled, such as lymph nodes, lungs, liver, bone marrow, or gut. Interestingly, T cells specific for Plasmodium yoelii sporozoite antigens were primed in mouse lymph nodes draining the site of mosquito bites (12). In tick-transmitted A. marginale South Idaho strain-infected cattle, T cell priming could similarly occur in draining lymph nodes, but the cells should then traffic to the spleen, where infected erythrocytes are removed. In cattle infected intravenously with the Florida strain, one would expect T cell priming and expansion to occur in the spleen. Despite sampling repeatedly early in infection, we failed to detect antigen-specific proliferative and IFN-γ responses in the spleen. It is also possible, but less likely, that IgG is produced in a CD4+ T cell-independent (TI) manner (4,26,35), although TI antibody responses are generally characterized by higher and shorter-lived antigen-specific IgM responses and seldom account for prolonged high-affinity IgG antibody responses.
In summary, we provide evidence that A. marginale-specific CD4+ T cells primed during infection develop a poor memory response. Downregulation of the T cell response may prevent prolonged and likely deleterious systemic inflammation in the infected host in response to continual high levels of bacteria. A. marginale-mediated immune regulation would be beneficial for the pathogen as well as the host, which acts as a reservoir to ensure pathogen survival at concentrations high enough for efficient tick transmission of A. marginale to other naïve animals within areas of endemicity. | 2018-04-03T03:15:38.075Z | 2010-10-13T00:00:00.000 | {
"year": 2010,
"sha1": "d0bbba5b53f59475a2004d60ee898ff117084307",
"oa_license": null,
"oa_url": "https://doi.org/10.1128/cvi.00257-10",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "530ec03521859ca71a34006c42dabf84a3d39e60",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
36813600 | pes2o/s2orc | v3-fos-license | Ultraviolet-C Overexposure Induces Programmed Cell Death in Arabidopsis, Which Is Mediated by Caspase-like Activities and Which Can Be Suppressed by Caspase Inhibitors, p35 and Defender against Apoptotic Death*
Plants, animals, and several branches of unicellular eukaryotes use programmed cell death (PCD) for defense or developmental mechanisms. This argues for a common ancestral apoptotic system in eukaryotes. However, at the molecular level, very few regulatory proteins or protein domains have been identified as conserved across all eukaryotic PCD forms. A very important goal is to determine which molecular components may be used in the execution of PCD in plants, which have been conserved during evolution, and which are plant-specific. Using Arabidopsis thaliana, we have shown that UV radiation can induce apoptosis-like changes at the cellular level and that a UV experimental system is relevant to the study of PCD in plants. We report here that UV induction of PCD required light and that a protease cleaving the caspase substrate Asp-Glu-Val-Asp (DEVDase activity) was induced within 30 min and peaked at 1 h. This DEVDase appears to be related to animal caspases at the biochemical level, being insensitive to broad-range cysteine protease inhibitors. In addition, caspase-1 and caspase-3 inhibitors and the pan-caspase inhibitor p35 were able to suppress DNA fragmentation and cell death. These results suggest that a YVADase activity and an inducible DEVDase activity possibly mediate DNA fragmentation during plant PCD induced by UV overexposure. We also report that At-DAD1 and At-DAD2, the two A. thaliana homologs of Defender against Apoptotic Death-1, could suppress the onset of DNA fragmentation in A. thaliana, supporting an involvement of the endoplasmic reticulum in this form of the plant PCD pathway.
Programmed cell death (PCD) is involved in some plant-pathogen interactions (1) and in normal developmental processes during the plant life cycle. For example, it plays a role in the germination of seeds, the differentiation of the tracheary elements, reproduction, flower senescence (2), and senescence (e.g. Ref. 3). Building on the ancestral form of PCD, plants are expected to have evolved their own pathways to cope with plant-specific features such as the presence of cell walls that prevent dead cells from being engulfed by neighboring cells. Light dependence may be another example of a specific aspect of at least some forms of plant PCD. Induction of cell death requires light in a number of lesion-mimic mutants in Arabidopsis, lsd1 (4) and acd11 (5), and in maize, lls1 (6). Light is also required for PCD induced by the mycotoxin fumonisin B1 (7). Whether and how light is required for activating cell death or for its execution is not yet clear.
Despite these specificities, some cellular aspects appear to be conserved in animals and plants, including DNA fragmentation (laddering), protoplast shrinkage, and chromatin condensation (8). In addition, caspase-like activities have been detected in plants (9). In animals, caspases are specifically activated during PCD. In particular, caspases initiate cell death by degrading several proteins essential for cell integrity (e.g. poly(ADP-ribose) polymerase, lamins, and gelsolin). Caspase activities can be measured using fluorogenic peptide substrates and can be blocked by the same peptide substrates coupled to an aldehyde. Caspase-like activities have been detected and measured in plant PCD induced during the hypersensitive response (HR) (10) or after a heat shock of suspension cells (11). In support of a role for these caspase-like activities in plant PCD, experiments in tobacco showed that, during PCD induced by menadione in protoplasts, caspase inhibitors can block the induction of DNA fragmentation and of poly(ADP-ribose) polymerase cleavage (12). Caspase inhibitors (Ac-DEVD-CHO and Ac-YVAD-CHO) have also been shown to block PCD after pathogen induction (10). Expression of p35, a caspase inhibitor, has been reported to reduce the onset of apoptosis in embryonic callus in maize (13). This protein specifically inhibits caspases in insects, nematodes, and humans by blocking their active sites (14,15). Key amino acid residues essential for inhibitor function have been identified using point mutations (15). More recently, it was shown that transgenic tomato plants bearing the p35 gene are protected against Alternaria alternata f. sp. lycopersici (AAL) toxin-induced death and pathogen infection, confirming that p35 can suppress PCD in plants (16).
The possible common origin of PCD in multicellular organisms and the conservation of some features of apoptosis might be expected to be partially reflected at a molecular level. However, despite the completion of the Arabidopsis genome sequence, only a few plant genes have been identified as orthologs of mammalian genes involved in apoptosis, e.g. At-BI1 (Arabidopsis thaliana BAX Inhibitor-1) (17,18); At-DAD1 (A. thaliana Defender against Apoptotic Death-1) (19,20); and cytochrome C, AIF, and PIG3 (reviewed in Ref. 8; see Ref. 21). The plant Bax inhibitor is localized in the endoplasmic reticulum (ER), and overexpression of the Bax inhibitor has recently been shown to suppress pathogen-induced plant PCD (22,23). Finally, although there are no caspase orthologs, it has been proposed that plant metacaspases are homologous and functionally equivalent to animal caspases (24).
We have shown in a previous report that UVC stress induces apoptosis-like changes in Arabidopsis (25). These include detection of a DNA ladder, changes in nucleus morphology (crescent shape), and nucleus fragmentation. In protoplasts, DNA fragmentation was detected using the TUNEL reaction and correlated with the onset of cell death measured using a vital dye (25). UVC radiation has often been used to study various physiologically relevant responses to DNA damage; in particular, it has been shown to induce apoptosis in animal cells (26). UV radiation can damage many aspects of plant processes at the physiological and DNA levels (27), and our study showed that this cellular damage can trigger PCD in response (25).
In this report, we show that cell death induced by UVC can be inhibited specifically by caspase inhibitors and is possibly mediated by caspase-like activities that cleave Asp-Glu-Val-Asp (DEVDase) or Tyr-Val-Ala-Asp (YVADase). The DEVDase activity was induced prior to DNA fragmentation detected by the TUNEL reaction, and caspase inhibitors blocked the onset of DNA fragmentation. Using protoplast transfection, we show here that the DNA fragmentation induced by UVC can be suppressed by transient overexpression of p35 or At-DAD genes. These findings confirm that UV-induced cell death is a form of PCD and suggest a role for At-DAD or for ER stress in the PCD of plants.
EXPERIMENTAL PROCEDURES
Plant Material-Seeds from A. thaliana (Columbia 0) were sterilized, sown under aseptic conditions on agar plates containing Murashige and Skoog salts and vitamins (Duchefa Ltd.) supplemented with 20 g/liter glucose, and grown at 21°C with a 14-h photoperiod.
Protoplast Preparation-A. thaliana leaf protoplasts were isolated from 3-4-week-old seedlings essentially as described (25,28). After isolation, protoplasts were resuspended in culture medium (Murashige and Skoog salts and vitamins supplemented with 0.4 M sucrose, 0.4 M mannitol, and 100 mg/liter cefotaxime) at a density of 10^6 protoplasts/ml. For each treatment, 10^6 protoplasts were used and then incubated in one 15-ml tube (Sarstedt, Inc.).
Plasmid Constructions-At-DAD1 cDNA (GenBank™/EBI accession number X95585) was excised from a library prepared in the λ-ZAP vector (Stratagene). At-DAD2 cDNA (GenBank™/EBI accession number AF030172) was cloned in the pMOSBlue vector (Amersham Biosciences). Plasmids p35S::At-DAD1 and p35S::At-DAD2 were constructed by inserting a BamHI/SphI fragment from the cDNA construction into the BamHI/SphI sites of the pDH51 plasmid (a gift of Dr. Paszkowski, Friedrich Miescher Institute, Basel, Switzerland). p35S::p35 was constructed by inserting an XbaI/SacI fragment from the pPRM-35K-ORF mammalian expression vector containing p35 cDNA (a gift of Professor P. Friesen, University of Wisconsin, Madison, WI) into the XbaI/SacI sites of the pBI221 plasmid. p35S::p35-D87A was generated using p35S::p35 and mutagenic primers according to the QuikChange protocol (Stratagene). The pRTL2-GUS plasmid (a gift of Professor James Carrington, Texas A&M University, College Station, TX) was used as a transfection control. To obtain an At-DAD1-GFP fusion, At-DAD1 cDNA was amplified using PCR primers 3′-GFP:F (CAGTCCATGGTGAAATCGACGAGTAA) and 3′-GFP:R (CAGTCCATGGCTCCGAGGAAGTTGATGATCA), which add a new NcoI site to either end of At-DAD1 cDNA. The At-DAD1 PCR product was digested using NcoI (Roche Applied Science) and cloned into the NcoI site immediately 5′ to the EGFP coding sequence in the pK100 construct. The resulting p35S::DAD1-EGFP cassette was then excised using HindIII, cloned into the HindIII site of the plant binary vector pPZP111b (a gift of Dr. S. Michaels, University of Wisconsin, Madison, WI), and used in onion bombardment experiments. pK100 was created by excising the EGFP-1 gene (Clontech) with NcoI/NotI and cloning it into the multiple cloning site of the pDH51 vector. pEGFPer was created by adding a basic chitinase secretion peptide and the ER retention signal KDEL to EGFP.

Protoplast Transfection-For transfection, protoplasts were resuspended at 10^6 protoplasts/ml in 0.4 M mannitol, 15 mM MgCl2, and 0.1% MES (pH 5.6) and kept for 30 min on ice. Protoplasts were then mixed with 50 µg of plasmid DNA in a 6-well Petri dish (Corning), and 1 ml of 40% polyethylene glycol (PEG) was carefully added. Protoplasts were incubated for 30 min at room temperature and washed sequentially three times (5 min each) with 2, 4, and 8 ml of W5A solution (25). Protoplasts were then pelleted by low-speed centrifugation (60 × g, 5 min), resuspended in 1 ml of culture medium, and cultured for 24 h at room temperature. Transfection efficiency was between 40 and 60%. A transfection control using β-glucuronidase assays was routinely carried out.
Semiquantitative RT-PCR and Transgene Expression-Total RNA was extracted from aliquots of protoplasts transfected with the various plasmids using the Invitek kit (Invisorb). 5 µg of total RNA was treated with 5 units of RQ1 RNase-free DNase (Promega) in a 40-µl total volume. cDNAs were obtained using the ProSTAR™ first-strand RT-PCR kit (Stratagene) according to the manufacturer's instructions with the following modifications. 1.2 µl of oligo(dT) primer was added to 2 µg of total RNA in a final volume of 15.2 µl in Milli-Q water. In the fourth step of the protocol, the reagent volumes were reduced by 2.5-fold. The absence of transfected plasmid or genomic DNA in the cDNA samples was verified by PCR amplification without a reverse transcription step. Semiquantification of target transcripts was carried out using PCR and gene-specific primers, selecting a number of cycles in the linear range of amplification. Actin-2 (GenBank™/EBI accession number U41998) was used as a reference gene to compare expression levels in different batches.
UVC Irradiation of Plants and Protoplasts-Seedlings were irradiated in open Petri dishes (9-cm diameter; Corning) using a UV Stratalinker 2400 (Stratagene) fitted with 254-nm UVC light bulbs. Protoplasts were irradiated in an open 6-well Petri dish at 10⁶ protoplasts/ml of culture medium. For caspase inhibition experiments, 10⁵ protoplasts in 100 μl of culture medium were irradiated in an open 6-well Petri dish. Seedlings and protoplasts were kept in white light (400-700 nm, 100 μmol m⁻² s⁻¹; OSRAM L 36W/21-840 Hellweiss Lumilux Cool White) after UV treatment unless stated otherwise. A UVC sensor fitted inside the Stratalinker irradiation chamber measured the UVC energy delivered in each experiment. The UVC doses used varied between 10 and 50 kJ/m². The UVC bulbs used emit a sharp wavelength band centered at 253.7 nm.
In Situ Detection of Cell Death Using Evans Blue-Protoplast samples to be analyzed were incubated with 0.04% Evans blue for 5 min. Blue cells were scored as dead cells using a light microscope.
In Situ Detection of Nuclear DNA Fragmentation-The TUNEL reaction was carried out on fixed protoplasts according to Danon and Gallois (25). The protoplasts on slides were viewed with a fluorescence microscope (Zeiss Axioplan) using an FITC Blue 450-490 filter (Zeiss). For each sample, photographs of multiple microscopic fields were taken, and TUNEL-positive nuclei were scored on prints.
Effect of Caspase Inhibitors in Vivo-Protoplasts were incubated for 1 h at room temperature with a caspase inhibitor (Ac-DEVD-CHO or Ac-YVAD-CHO (100 μM); Bachem Ltd.) and irradiated with 10 or 15 kJ/m² UVC. After various incubation times, protoplasts were harvested and incubated in the presence of Evans blue or fixed for the TUNEL reaction.
Protein Extraction and Caspase-like Activities-A. thaliana seedlings were grown for 3 weeks in Petri dishes (9-cm diameter) and then irradiated with 50 kJ/m² UVC and incubated under our standard culture conditions. At different time points, the plants were frozen in liquid nitrogen and ground to a powder with a mortar and pestle. The powder was collected in 1.5-ml tubes and kept at −80 °C until processing. The samples were then resuspended in assay buffer (20% glycerol, 0.1% Triton, 10 mM EDTA, 3 mM dithiothreitol, 2 mM phenylmethylsulfonyl fluoride (PMSF), and 50 mM sodium acetate) and incubated with 200 μM Ac-DEVD-AMC or Ac-YVAD-AMC (Bachem Ltd.). AMC fluorescence was detected using a Labsystems Fluoroskan II every 5 min during a 1-h reaction. The enzymatic activity was calculated as the slope of the product concentration as a function of time. This activity was then standardized to the quantity of total proteins present in the sample (Bradford assay, Bio-Rad). For inhibitor assays, samples were resuspended in assay buffer and incubated for 1 h with a caspase inhibitor.

Transfection of Onion Cells-Microprojectiles and DNA were prepared according to Hull et al. (30). Pieces of onion epidermis still attached to the ground tissue were placed on Petri dishes containing MS30 medium solidified with 10 g/liter Bacto-agar. Bombardments were carried out according to Hull et al. (30), and onion cells were incubated in the dark for 24 h at 22°C. The epidermis was peeled off, and GFP-positive cells were observed with a Leica fluorescence microscope under blue wavelength excitation light using an FITC Blue 450-490 filter.
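The activity calculation described in the protein-extraction paragraph above (the slope of AMC product accumulation over the 1-h reaction, normalized to the total protein from a Bradford assay) is straightforward to express in code. The readings, calibration factor, and protein amount below are invented placeholders, not values from the study.

```python
import numpy as np

# Hypothetical Fluoroskan readings: AMC fluorescence every 5 min for 1 h.
time_min = np.arange(0, 65, 5)                       # 0, 5, ..., 60 min
fluorescence = np.array([12, 55, 98, 140, 185, 226,
                         270, 312, 355, 399, 440, 484, 527])

UNITS_PER_PMOL = 4.0                 # placeholder calibration factor
amc_pmol = fluorescence / UNITS_PER_PMOL

# Enzymatic activity = slope of product vs. time (pmol AMC per min).
slope, _ = np.polyfit(time_min, amc_pmol, 1)

# Standardize to total protein (placeholder Bradford result, in mg).
total_protein_mg = 0.35
specific_activity = slope / total_protein_mg

print(f"DEVDase activity: {specific_activity:.1f} pmol AMC min^-1 mg^-1")
```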
UV-induced PCD Is Light-dependent-To examine the effect of light on PCD induced by UVC, 5-day-old seedlings were subjected to increasing amounts of UVC and kept in the dark or in the light for 72 h. In the light, seedlings developed an obvious bleaching of their leaves with doses of 10 kJ/m² and above. In contrast, seedlings kept in the dark showed no bleaching (Fig. 1A). There was still no bleaching of seedlings kept in the dark for up to 7 days (data not shown). To confirm that bleaching was a measure of cell death, the same treatment was applied to protoplasts prepared from seedlings, and cell death was measured by Evans blue, a marker of plasma membrane integrity. Dead protoplasts accumulated the dye, whereas live protoplasts excluded the dye (Fig. 1B). When treated with UV and kept in the light, up to nearly 100% of the protoplast population was scored dead. The death rate in the dark was as little as 10% and comparable with untreated samples, although treated protoplasts appeared shrunken compared with untreated protoplasts (Fig. 1C).
DEVDase and YVADase Activities in UV-irradiated Seedlings-To assess whether cell death in UV-irradiated plants is partially homologous to animal PCD at the molecular level, we assayed for caspase-1-like activity, a protease that cleaves the substrate Tyr-Val-Ala-Asp (YVADase), and caspase-3-like activity, a protease that cleaves the substrate Asp-Glu-Val-Asp (DEVDase). Caspase-like activities were tested in extracts from irradiated and untreated seedlings using two caspase substrates: Ac-DEVD-AMC and Ac-YVAD-AMC (200 μM). YVADase activity was present in untreated plant extracts, and no induction of YVADase activity was detected after UVC irradiation (data not shown). In contrast, induction of DEVDase activity was detected 30 min after irradiation and reached a peak at 1 h (Fig. 2A). Various protease inhibitors were tested to assign this DEVDase activity to a specific class of proteases. The protein extract was preincubated for 1 h with caspase inhibitors Ac-DEVD-CHO and Ac-YVAD-CHO (200 μM) before adding the substrate Ac-DEVD-AMC (200 μM). This final concentration is in the range used in animal studies. Inhibition with broad-spectrum cysteine and serine protease inhibitors that do not inhibit animal caspases was also tested: pepstatin A (3 mM), leupeptin (4 mM), or E-64 (10 μM). Inhibitor analysis (Fig. 2B) showed that the DEVDase activity was totally inhibited by Ac-DEVD-CHO and only partially by Ac-YVAD-CHO or by pepstatin A and leupeptin. E-64 had little effect on DEVDase activity, in agreement with inhibition data for animal caspases (31).
It has been proposed that cysteine proteases of non-caspase families (papain and legumain) are involved in PCD in plants (reviewed in Ref. 32). In addition, it has been shown that legumain can cleave Ac-YVAD-AMC, a substrate specific for caspase-1 (33). But in contrast to the DEVDase activity, the activities of these two families of cysteine proteases were suppressed after UVC irradiation (data not shown).
In the experiments presented here, the assay buffer used contained PMSF (serine protease inhibitor) and EDTA (metalloprotease inhibitor), two class-specific inhibitors that do not inhibit animal caspases. The inhibitors are used to distinguish possible plant caspase-like activities from other protease activities. In the absence of inhibitors in the assay buffer, there was a background activity in the non-induced samples that masked the activation of the DEVDase activity (data not shown). In the presence of the chosen inhibitors, the background activity was reduced, and activation results were clearer. The use of class-specific inhibitors to assay caspase-like activity in crude extract is consistent with protocols used in studies of animal caspases. (i) Stennicke and Salvesen (34) have reported that caspase substrates can be cleaved in crude extract by proteases other than caspases. (ii) PMSF (serine proteases) and EDTA (metalloproteases) do not inhibit animal caspases (31,35). (iii) These inhibitors have been used in buffers to purify animal caspase-1 (35).
Further experiments showed that, in contrast to animal caspases, which are active at pH 7, DEVDase was active at pH 5 (data not shown). The activity was also relatively insensitive to a salt concentration of up to 200 mM (Fig. 2B), which has been shown to have an inhibitory effect on the detection of some caspase-like activity in plants (36).
In conclusion, UVC radiation induced DEVDase activity, which was specifically inhibited by Ac-DEVD-CHO and which was not significantly affected by other protease inhibitors. YVADase activity was detected in both induced and noninduced tissues.
DEVDase and YVADase Are Required for Induction of DNA Fragmentation and Cell Death-We have shown previously that, in plants, UVC radiation indirectly induces fragmentation of the genomic DNA, resulting in the formation of a DNA ladder (25). As in animal apoptosis, the rungs are multiples of 180 bp. This DNA fragmentation can also be detected in situ using the TUNEL reaction, which labels the free 3′-OH DNA extremities (25) and constitutes a marker of plant PCD. In animal cells, caspase-3 activates the DNase responsible for the appearance of the DNA ladder. Having detected DEVDase and YVADase activities in Arabidopsis extracts, we therefore investigated a possible link between these activities, cell death, and DNA fragmentation. Caspase inhibitors were analyzed for their potential ability to block DNA fragmentation and to rescue UV-irradiated protoplasts. Protoplasts were prepared using 3-week-old seedlings grown in vitro, incubated for 1 h at room temperature with a caspase inhibitor (Ac-DEVD-CHO or Ac-YVAD-CHO (100 μM)), and subsequently irradiated with UVC. After 4 h of culture in the light, protoplasts were harvested, fixed, and analyzed for DNA fragmentation using the TUNEL reaction. Non-irradiated protoplasts showed a background of 10% TUNEL-positive cells (Fig. 3A). This DNA fragmentation could be suppressed by YVAD-CHO and, to a lesser extent, by DEVD-CHO. Under the experimental conditions used, UV irradiation clearly induced DNA fragmentation, pushing it up to 35%. This induction could be totally prevented when protoplasts were preincubated with either of the two caspase inhibitors used.
The same experiment was repeated, and cell death was measured using Evans blue. 4 h after UV treatment, cell death reached 60% in the control sample without inhibitors. Preincubation in the presence of DEVD-CHO or YVAD-CHO reduced cell death by one-half. Suppression was less efficient than in the previous experiment, probably because Evans blue detected some necrosis (accidental death) that was not suppressed by the inhibitors. Nevertheless, DEVD-CHO and, surprisingly, YVAD-CHO were able to suppress both DNA fragmentation and cell death.
Transfection of p35 in Protoplasts Inhibits DNA Fragmentation and Cell Death-Transient expression analysis is a powerful tool to analyze the function of plant genes potentially involved in controlling PCD. Our UV and protoplast system is particularly suited for such analysis, as it allows protoplast transfection and cell death quantification. To assess the potential of this approach and to confirm the results obtained with the caspase inhibitors, we selected p35, an apoptosis inhibitor gene of viral origin that specifically suppresses caspase activity in animal cells to prevent the death of infected cells (14). Several mutations, including D87A, are known to prevent the inhibition of the caspase enzymes by p35 (15). To test whether the effect of p35 is conserved in plant cells, we made a construct with a p35 or a p35-D87A coding sequence under the control of the cauliflower mosaic virus 35S promoter. Preliminary experiments showed that the PEG treatment used or the transfection of plasmid DNA (50 μg of p35S::GUS) did not in itself protect protoplasts from the effect of UVC and did not prevent the DNA fragmentation response (see Fig. 5) (data not shown). Protoplasts were then transfected with 50 μg of p35S::p35 plasmid. After 24 h of incubation to allow expression of the transfected gene, protoplasts were irradiated, and samples were harvested and analyzed using the TUNEL reaction (Fig. 4A). RT-PCR experiments were carried out on the transfected samples and showed that the transgene was expressed. Transfection with the p35S::GUS plasmid showed that β-glucuronidase activity was not affected by UV treatments, suggesting that the proteins were not destroyed by UV treatments (data not shown). In the mock transfection, the frequency of TUNEL-positive cells increased from 20 to 65% after 4 h. Transfection of p35S::p35 reduced the DNA fragmentation to a frequency of only 40% TUNEL-positive cells. In a second experiment, p35S::p35 or p35S::p35-D87A was transfected, and the effect of UV was analyzed at 4 h. RT-PCR experiments showed that the transgenes were expressed at a similar level. The point mutation D87A abolished the inhibitory effect of p35 (Fig. 4B). The same results were obtained using Evans blue to measure cell death, where the p35S::p35-D87A sample reached 70% dead cells, whereas the samples expressing functional p35 reached only 40% dead cells. As shown above, cell death measured using Evans blue correlated with the TUNEL results.
Cotransfection with another reporter construct such as a p35S::luciferase is clearly not necessary in our system, as the S.D. is already sufficiently small to allow meaningful comparisons. This experiment confirmed the results obtained using synthetic caspase inhibitors and indicates that the inhibition of DNA fragmentation is specific to caspase-like activity. It also emphasizes the potential of protoplast transfection as a means to test the activity of putative suppressor genes.
Transfection of the At-DAD Genes in Protoplasts Inhibits DNA Fragmentation-DAD1 was originally discovered in hamster cells, where the cell line carrying the dad1 mutation dies via apoptosis (19). One A. thaliana homolog (At-DAD1) has been reported in the literature, and we demonstrated that the function of this protein is conserved between plants and animals by transformation of the original mutant hamster cell line with At-DAD1 (20). We identified a second homolog (At-DAD2) in the genome of A. thaliana (GenBank TM /EBI accession number AF030172). The two At-DAD genes share the same organization, with variations only in intron sizes (data not shown). At the amino acid level, the At-DAD1 and At-DAD2 sequences are 95.7% identical, with three of the five amino acid changes localized in the N-terminal cytoplasmic domain of the protein (Fig. 5A). The prediction of the membrane-spanning domains and the orientation of the protein N terminus relative to the cytoplasm have been confirmed experimentally for animal DAD1 (37).
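The 95.7% identity figure is a positional count over an alignment. As a toy illustration only (the 20-residue strings below are invented stand-ins, not the real At-DAD sequences), percent identity can be computed like this:

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity of two pre-aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

a = "MSTKVLLPWAGHIYRDEQNC"   # invented stand-in
b = "MSTRVLLPWAGHIYKDEQNC"   # invented stand-in, 2 substitutions
print(f"{percent_identity(a, b):.1f}% identical")   # prints 90.0% identical
```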
To investigate the subcellular localization of DAD1 in plants, we constructed a C-terminal fusion of At-DAD1 with EGFP. Transfection of onion cells indicated that the At-DAD1-EGFP fusion has the same subcellular localization pattern as ER-targeted EGFP, indicating that the plant DAD1 protein is located in the ER (Fig. 5B). Using RT-PCR, we found that both genes were expressed at similar levels in all tissues and under all conditions tested (data not shown). Given that DAD1 may have an apoptosis suppressor role in animals, we wondered whether overexpression of At-DAD1 or At-DAD2 could protect cells from PCD in A. thaliana. To this end, we overexpressed both cDNAs in protoplasts before UVC irradiation. Protoplast samples were transfected with various plasmid constructions: p35S::GUS (negative control), p35S::At-DAD1, or p35S::At-DAD2. After 24 h of culture, RT-PCR experiments showed that the transgenes were expressed at a similar level. Protoplasts were irradiated with UVC, and aliquots were harvested after irradiation to carry out the TUNEL reaction. Fig. 5C shows that, in irradiated protoplast samples subjected to a mock transfection with no DNA or to transfection with p35S::GUS, the proportion of TUNEL-positive cells doubled, increasing from 25 to 50% 2 h after UVC irradiation. In contrast, in samples transfected with p35S::At-DAD1 or p35S::At-DAD2, a large proportion of the protoplasts were protected from UVC-induced PCD, as the percentage of TUNEL-positive cells reached only 30%. Overall, this suggests that the plant DAD proteins have the ability to suppress or significantly delay the induction of DNA fragmentation in transient expression assays. We obtained similar suppression of DNA fragmentation using At-BI1.³ DAD1 is localized in the ER membrane and is a possible anchorage protein for a structural unit within the oligosaccharyltransferase complex (37). This raises the possibility that DAD1 overexpression affects the induction of DNA fragmentation by limiting or preventing ER stress in UVC-treated cells. One possibility is that overexpression of DAD1/2 stabilizes or increases N-glycosylation. Therefore, we subjected the transfected protoplasts to ER stress downstream of glycosylation by pretreatment for 2.5 h with 20 μg/ml tunicamycin. This prevents glycosylation, promotes misfolding, and has been used to trigger the accumulation of unfolded proteins in protoplasts (38). Within the 6 h of our assay, At-DAD1 protected protoplasts from UVC-induced PCD as efficiently with or without tunicamycin in the culture medium (Fig. 5D).

DISCUSSION

UVC is a very convenient trigger to induce PCD in plants and protoplasts in a reproducible manner. UVC has been used in animal studies to study the triggering of apoptosis following DNA damage and the activation of p53. Whether a similar pathway is triggered or not in plants remains to be shown because no homolog of p53 has been identified to date. We have shown that UV-induced cell death displays apoptotic hallmarks (25), and we show here that it is a light-dependent process, possibly mediated by caspase-like activities. We take these last two results as additional evidence that UVC induces PCD in plant protoplasts. Using protoplasts is a reductionist approach that is a strength when studying intrinsic PCD pathways at the cellular level. Whole plants are complex systems in which to study cell death at the cellular level. Protoplasts are therefore an attractive alternative.
It is possible that protoplasts are primed for cell death during isolation and do not recapitulate all aspects of PCD in plants, but our study showed that this is not detrimental to the analysis of the execution of PCD at the cellular level. At later stages, the findings made using protoplasts will be integrated at the whole plant level. As an example, our p35 results in protoplasts are validated by studies using stable transformants in other experimental systems where p35 expression has been shown to reduce cell death (13,16).
We show here that this form of PCD shares a light requirement with other reported forms of PCD such as HR (4,5) or the one induced by fumonisin B₁ (7). One explanation for this light requirement might be that reactive oxygen species generated during photosynthesis (39) are implicated in the PCD process. The generation of reactive oxygen species has been shown to be involved in HR, a typical example of PCD in plants (40). It has been suggested that HR may need functional chloroplasts, although a mechanism for the involvement of chloroplast function in HR has not been established (39). Another possible explanation is based on the discovery that a salicylic acid synthesis pathway is localized in the chloroplast (41). The fact that salicylic acid is an important signal molecule for PCD in plants could indicate a role for chloroplasts in this process and explain light dependence. Interestingly, after exposure of Arabidopsis to UVC, salicylic acid has been reported to accumulate together with EDS5 mRNA, an essential component of salicylic acid-dependent signaling for disease resistance in Arabidopsis (42). Taken together, these results indicate that UVC induces salicylic acid biosynthesis in chloroplasts. It will thus be very interesting to investigate further possible connections between UVC irradiation, salicylic acid production, light, and PCD in the future.
We report here, using A. thaliana protoplasts, that caspase-1 and caspase-3 inhibitors can prevent DNA fragmentation and cell death induced by UVC irradiation. We used both YVAD and DEVD inhibitors and substrates to make our study comparable with other plant studies (e.g. Refs. 10 and 11). We also demonstrated that expression of the baculovirus pan-caspase inhibitor p35 was able to block PCD induced by UV in plants in a specific manner because the null mutant p35-D87A was unable to suppress DNA fragmentation. It has been shown that the D87A mutation prevents the cleavage of p35 by animal caspases and causes the protein to lose its caspase inhibitor activity (15). The fact that caspase inhibitors are able to suppress cell death and DNA fragmentation suggests that proteases, which could be called caspase-like, are involved directly or indirectly with the execution of PCD induced by UVC. In support of this, we showed that, in our system, DEVDase activation took place within the first hour after induction of cell death. This is before the peak of TUNEL-positive cells measured at 4 h (25). This is different from other reports of late activation of caspase-like proteases (10,11). This timing and the inhibition of DNA fragmentation by DEVD-CHO and p35 provide indirect evidence that the detected DEVDase possibly mediates the activation of DNA fragmentation. This DEVDase activity appears to be different from the one described in suspension cells by Korthout et al. (36), as it was relatively insensitive to salt concentration and had a different pH optimum.

FIG. 5. Transfection of the At-DAD genes suppresses UVC-induced DNA fragmentation. A, At-DAD1 and At-DAD2 sequences are 95.7% identical. DAD proteins contain three membrane-spanning domains. Different color backgrounds map the predicted transmembrane domains. B, shown is the ER localization of an At-DAD1-EGFP fusion. Constructs were bombarded into onion epidermis cells, and EGFP fluorescence was detected after 24 h of incubation. The plasmids used were p35S::DAD1-EGFP, p35S::EGFPer (a positive control for ER localization), and p35S::EGFP (a cytosolic control). Images show a close-up of cytosolic strands inside onion cells. A reticulum pattern is visible with At-DAD1-EGFP and EGFPer. This pattern is absent with EGFP. C, protoplasts were treated with PEG (control) or transfected with 50 μg of various plasmid constructs: p35S::GUS (negative control), p35S::At-DAD1, or p35S::At-DAD2. After 24 h of incubation, protoplasts were irradiated with UVC and kept in continuous light, and aliquots from the same transfection tube were taken 0, 2, and 4 h after irradiation to carry out the TUNEL reaction. Error bars are S.D. values for replicates. D, shown is the effect of tunicamycin on the suppression of DNA fragmentation by At-DAD1. Protoplasts were treated with PEG or transfected with 50 μg of plasmid p35S::At-DAD1 (DAD1). After 24 h, aliquots of the PEG and DAD1 samples were preincubated in 20 μg/ml tunicamycin for 2.5 h (PEG + T, DAD1 + T). All samples were subsequently irradiated with UVC and kept in continuous light. Aliquots from the same transfection tube were taken 0, 2, and 4 h after irradiation to carry out the TUNEL reaction. Error bars are S.D. values for replicates.
Although we observed a correlation between the activation of DEVDase and DNA fragmentation, the process involved may be different from that described during animal apoptosis. No caspase ortholog has yet been reported in plants, although it has been suggested that the distant metacaspase family contains functional homologs (24). However, purification experiments or expression in heterologous systems is required to tell us whether metacaspases possess caspase-like enzymatic activities. Caspase-1, which cleaves YVAD, is not involved in animal apoptosis, but in inflammation pathways instead (43). Therefore, inhibition of plant cell death by YVAD-CHO represents a difference between animal and plant cell death. In addition, we have measured a YVADase activity at various time points after irradiation, but no clear induction was detected, although the inhibitor data suggest that YVADase is required for the completion of cell death. We can speculate that a constitutive YVADase activity is necessary to prime the cell for cell death, but is not sufficient to induce cell death. For example, it is possible that YVADase activity is required to preactivate the DEVDase by cleavage. An alternative explanation is that the YVADase activity is repressed in vivo before PCD, but can be measured in cell extracts because its inhibition is relieved artifactually upon tissue homogenization.
In the absence of class-specific protease inhibitors (PMSF (serine proteases) and EDTA (metalloproteases)) and after UVC overexposure, there was a high background activity. In contrast, in the presence of these inhibitors, the background activity decreased, and the DEVDase was clearly up-regulated. This increase cannot be a result of an artifactual increase in the DEVDase activity due to the presence of inhibitors because we compared the activity before and after UVC treatment in the same extraction buffer. The most likely explanation is that, in the absence of class-specific inhibitors, several protease families may contribute non-specifically to the DEVDase activity. In the presence of inhibitors, this non-specific background is reduced.
The DEVDase activity detected could have been due to a non-specific cleavage of caspase substrates by other cysteine proteases such as papain and legumain. Cysteine proteases of the papain family are associated with PCD induced by H₂O₂ in soybean cells (44) and are also induced in tracheary element differentiation in Zinnia elegans (45). Legumains, another family of cysteine proteases, are expressed in senescent tissue of A. thaliana (46). In addition, we have shown that legumain can cleave the caspase substrate Ac-YVAD-AMC (33). In our experimental system, we have evidence that the DEVDase activity detected was not due to non-specific substrate cleavage by legumain or papain because the latter activities were down-regulated after UV treatment, and, in contrast, the DEVDase activity was up-regulated. Moreover, E-64 inhibited the papain activity, but not the DEVDase activity detected. Finally, leupeptin, pepstatin, and E-64 do not inhibit animal caspases and did not inhibit the DEVDase activity in our assay, which therefore behaves biochemically as an animal caspase. This suggests that DEVDase may be a true caspase-like protease, possibly a metacaspase. The proteases responsible for caspase-like activities in plants remain to be identified to establish their exact specificity and to establish whether plant PCD relies on a network of specific proteases. The results presented here form a sound basis for future purification schemes.
A transient assay using protoplasts has several advantages over using stable transformants to investigate the function of genes involved in PCD. (i) It circumvents the difficulties inherent to the study of lethal genes. (ii) It allows the easy quantification of a subtle induction or suppression effect that may be missed when scoring transgenic plants for altered cell death. In this context, we tested the suppression potential of the DAD1/2 genes. Our results point to a role of the DAD genes in suppressing PCD in plants. One conceptual difficulty for the role of DAD1 in PCD is its localization at the ER and its part in the N-glycosylation complex. In contrast, this protein was suggested to have a suppressor activity in PCD by studies in Caenorhabditis elegans, where its overexpression protects some of the cells destined to die by apoptosis during development (47). In addition, knockout mutants in mice show an increased level of apoptosis in cultured embryos (48) or an altered interdigital cell death in heterozygotes (49), suggesting a role for DAD1 in developmental PCD. All these results favor a direct or indirect anti-apoptotic role for this gene in animal apoptosis. Moreover, a physical interaction with a member of the Bcl-2 protein family was reported (50), which may be correlated with DAD1's capacity to suppress apoptosis.
We found that At-DAD1 is localized in the ER and that its overexpression can rescue protoplasts from PCD independent of glycosylation. This suggests that the DAD proteins might be bifunctional proteins involved with the oligosaccharyltransferase complex and with PCD. It should be noted that the ER has been proposed to be a new gateway to PCD in animal cells, with the involvement of ER calcium (51) and of a specific caspase (caspase-12) localized and activated in the ER (52). Interestingly, At-BI1 is also localized in the ER and is able to suppress cell death induced in plants by BAX overexpression (53) or induced by a pathogen (22).
It remains to be shown whether PCD suppression by DAD1 is direct or indirect. An indirect involvement could be via attenuation of an induced ER stress. A direct involvement could be via a physical association with a PCD regulator. Consequently, this work provides exciting new possibilities to investigate the molecular regulation of PCD in plants. | 2018-04-03T03:51:19.721Z | 2004-01-02T00:00:00.000 | {
"year": 2004,
"sha1": "16af6d6a74d8e4bf4e52edd94f19555dd17dc430",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/279/1/779.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "fe0f06f32c24510d04744cbf9942905ba9b10bc5",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
119532494 | pes2o/s2orc | v3-fos-license | The Analytical Solution of Radiation Transfer Equation for a Layer of Magnetized Plasma With Random Irregularities
The problem of radio wave reflection from an optically thick, plane-stratified layer of magnetized plasma with a monotonic density profile is considered in the present work. The plasma electron density irregularities are described by a spatial spectrum of arbitrary form. The small-angle scattering approximation in invariant ray coordinates is proposed for analytical investigation of the radiation transfer equation. An approximate solution describing the spatial-and-angular distribution of radiation reflected from the plasma layer is obtained. The obtained solution can be applied, for example, to ionospheric radio wave propagation.
Introduction
The basic goal of the present work is to derive a solution of the transfer equation describing the spatial-and-angular distribution P(ρ, ω) of radio radiation reflected from a plane-stratified layer of magnetized plasma with random irregularities.
The radiation transfer equation (RTE) in a randomly irregular magnetized plasma was obtained in the work [1] under rather general initial assumptions. In particular, the average properties of the medium were assumed to vary smoothly both in space and in time. In the work [2] the radiation energy balance (REB) equation describing radiation transfer in a plane-stratified layer of stationary plasma with random irregularities was deduced. Invariant ray coordinates, which allow one to take wave refraction into account in a natural way and to represent the equation in its simplest form, were used there. In the work [3] it was shown that the REB equation is a particular case of the radiation transfer equation obtained in [1] and can be deduced from the latter by a transition to the invariant ray coordinates. The REB equation thus allows one to investigate the influence of multiple scattering in a plane-stratified plasma layer on the characteristics of radiation. In particular, it enables one to determine the spatial-and-angular distribution of radiation leaving the layer if the source directivity diagram and the irregularity spatial spectrum are known. A few effects whose description requires coherent summation of wave amplitudes (for example, the phenomenon of enhanced backscattering) are excluded from consideration. However, multiple scattering effects are, as a rule, much stronger. This is particularly true for ionospheric radio propagation.
The numerical methods for solving the transfer equation developed in neutron transport theory and in atmospheric optics are of little use for analysis of the REB equation. They are adapted, basically, to the solution of one-dimensional problems with isotropic scattering and a plane incident wave. In the case of magnetized plasma, the presence of regular refraction, the aspect-sensitive character of scattering by anisometric irregularities, and the high dimensionality of the REB equation (it contains two angular and two spatial coordinates as independent variables) complicate the construction of an effective numerical algorithm for its solution. In this situation it is expedient to solve the REB equation in two stages. The first stage consists of obtaining an approximate analytical solution that allows one to carry out a qualitative analysis of its properties and to reveal its peculiarities. At the second stage, numerical estimation methods can be applied to the analytical solution, or numerical methods for solving the initial equation can be designed that take into account the information obtained at the first stage. Therefore, the problem of obtaining approximate analytical solutions of the REB equation is of interest.
We begin the present paper with a detailed exposition of the invariant ray coordinate concept. Then the possibility of using the small-angle scattering approximation in invariant coordinates is discussed. Two versions of the REB equation solution are obtained. An analysis of the obtained solutions concludes the paper.
Invariant ray coordinates and the radiation energy balance equation
It is convenient to display the electromagnetic wave propagation in a plane-stratified plasma layer graphically with the aid of the Poeverlein construction [4,5]. We shall briefly describe it. Let the Cartesian coordinate system have its z axis perpendicular, and the x0y plane parallel, to the plasma layer. We shall call such a coordinate system "vertical." It is assumed that the external magnetic field vector H lies in the z0y plane. The modulus of the radius-vector of any point inside the unit sphere centered at the coordinate origin corresponds to the value of the refractive index n_i(v, α), where i = 1 relates to the extraordinary wave and i = 2 to the ordinary one, v = ω_e²/ω², ω_e is the plasma frequency, ω is the wave frequency, and α is the angle between the radius-vector and the magnetic field H. The refractive index surface corresponding to a fixed value of v and to all possible directions of the radius-vector is a body of revolution about an axis parallel to the vector H (see fig. 1).
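The explicit form of n_i(v, α) is not reproduced in the text above. For orientation only, a standard closed form for the two refractive indices of a cold, collisionless magnetized plasma is the Appleton-Hartree formula; the symbol u = ω_H²/ω² (ω_H being the electron gyrofrequency) is introduced here for the illustration and is not a symbol from this paper.

```latex
% Appleton-Hartree refractive indices (standard background result):
% v = \omega_e^2/\omega^2, u = \omega_H^2/\omega^2,
% \alpha = angle between the wave vector and the magnetic field.
\[
n_{1,2}^2(v,\alpha) = 1 -
\frac{v(1-v)}
     {1 - v - \tfrac{1}{2}\,u\sin^2\alpha
      \pm \sqrt{\tfrac{1}{4}\,u^2\sin^4\alpha + (1-v)^2\,u\cos^2\alpha}}
\]
```

The two signs correspond to the two characteristic waves; which sign maps to the ordinary (i = 2) and which to the extraordinary (i = 1) wave depends on the frequency range, so the correspondence is left unspecified here.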
The convenience of the described construction (in fact, an example of a coordinate system in the space of wave vectors k) becomes evident when drawing the wave trajectory: it is represented by a straight line parallel to the z axis. This is a consequence of the generalized Snell law, which also requires equality of the angles of incidence onto and exit from the layer (θ) and constancy of the wave vector azimuth angle (ϕ). Note that the intersection point of a wave trajectory with the refractive index surface at a given value of v determines the current direction of the wave vector in the layer (it is antiparallel to the radius-vector) and the current direction of the group velocity vector (it coincides with the normal to the refractive index surface). The projection of a wave trajectory onto the x0y plane is a point whose radius-vector has modulus sin θ and makes the angle ϕ with the x axis. Thus, the coordinates θ, ϕ completely define the shape of the whole ray trajectory in a plane layer and outside it and are, in this sense, invariant along this trajectory.
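A compact way to see why the trajectory is a vertical straight line in the Poeverlein construction is the conservation of the horizontal wave-vector components in a plane-stratified medium. The sketch below states this in the notation used here; the symbol θ_k for the local wave-normal polar angle inside the layer is introduced only for this illustration.

```latex
% In a plane-stratified medium the horizontal components of the wave
% vector are conserved, hence
\[
n_i(v,\alpha)\,\sin\theta_k = \sin\theta = \mathrm{const},
\qquad \varphi = \mathrm{const},
\]
% i.e. the projection of n_i \hat{k} onto the x0y plane is fixed,
% which is exactly the vertical-line trajectory of the construction.
```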
Radiation of an arbitrary point source of electromagnetic waves within the solid angle θ ÷ θ + dθ; ϕ ÷ ϕ + dϕ corresponds to the energy flux in k-space inside a cylindrical ray tube parallel to the z axis with cross section sin θ d(sin θ) dϕ = sin θ cos θ dθ dϕ. In the case of a regular plasma layer (without random irregularities) this energy flux is conserved and is completely determined by the source directivity diagram (1), where P is the energy flux density in the direction determined by the angles θ, ϕ through the point ρ on some base plane situated outside the layer and parallel to it (in the ionospheric case it is convenient to choose the Earth's surface as the base plane), and z is the distance from the base plane (height in the ionospheric case). We shall assume in the present paper that the function z(v) is monotonic in the region of wave propagation and reflection. If random irregularities are absent and the radiation source is a point source, the variable ρ in (1) is in fact superfluous, since there is a one-to-one relation between it and the arrival angles θ, ϕ of a ray. When scattering is present, the radiation energy is redistributed over the angular variables θ, ϕ and in space, which is described by the variable ρ. In this case P satisfies the radiation energy balance equation (2) [2,3], where C(z; θ, ϕ) is the cosine of the inclination angle of the ray trajectory corresponding to the invariant angles θ and ϕ; |∂(α, β)/∂(θ, ϕ)| is the Jacobian of the transition from the angular coordinates θ, ϕ to the wave vector polar and azimuth angles α and β in the "magnetic" coordinate system (whose 0z axis is parallel to the magnetic field); σ is the differential scattering cross section describing the intensity of the scattered wave with wave vector coordinates α, β in the magnetic coordinate system (the corresponding invariant coordinates are θ′ and ϕ′) that arises from the interaction of the wave with wave vector coordinates α_0, β_0 (invariant coordinates θ and ϕ) with the irregularities. The vector function Φ(z; θ′, ϕ′; θ, ϕ) represents the displacement of the arrival point on the base plane of a ray having angular coordinates θ′ and ϕ′ after scattering at level z, relative to the arrival point of an incident ray with angular coordinates θ, ϕ. It is essential that in a plane-stratified medium the function Φ is determined only by the smoothed layer structure v(z) and depends neither on the horizontal coordinate of the scattering point nor on the coordinate ρ of the incident and scattered rays. Note also that the ratio Φ(z; θ, ϕ; … It is possible to check that equation (2) satisfies the energy conservation law: when integrating over all values of θ, ϕ possible for level z and over all ρ, its right side turns into zero. This is natural, since in the absence of true absorption energy does not accumulate inside the plasma layer.
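The ray-tube cross section quoted above follows from a one-line differentiation, shown here for completeness:

```latex
\[
\sin\theta \, d(\sin\theta)\, d\varphi
  = \sin\theta \,(\cos\theta\, d\theta)\, d\varphi
  = \sin\theta \cos\theta \, d\theta\, d\varphi .
\]
```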
Analyzing the expression for the differential scattering cross section in a magnetized plasma (see, for example, [6]), it is easy to verify that the symmetry relation (3) holds, where ϑ_g is the angle between the wave vector and the group velocity vector and n is the refractive index. Using (3), equation (2) can be presented in the form (4), where Q(z; θ, ϕ; θ′, ϕ′) = σ(θ, ϕ; θ′, ϕ′) C⁻¹(z, θ, ϕ) sin θ′ |dΩ′_k/dΩ′|, and the quantity Q̃(z; θ, ϕ; θ′, ϕ′) ≡ Q(z; θ, ϕ; θ′, ϕ′) sin θ cos θ is symmetric with respect to interchange of the primed and unprimed variables. The REB equation in the form (4) has its most compact and complete appearance. It is clear on physical grounds that (4) must have a unique solution for a given initial distribution P_0(θ, ϕ, ρ). The obtained equation can be used directly for numerical calculation of the spatial distribution of signal strength in the presence of scattering. However, as already noted in the introduction, this approach leads to essential difficulties. The subsequent sections describe a method of constructing an approximate analytical solution of the energy balance equation.
3 Small-angle scattering approximation in the invariant ray coordinates

Let us consider an auxiliary equation (5), which differs from (4) only by the absence of the prime on the variable ω marked by the arrow, where the notation ω = {θ, ϕ}, dω = dθ dϕ has been used for compactness. Equation (5) is easily solved analytically by means of a Fourier transformation over the variable ρ. The solution has the following form: P(z, q, ω) = P_0(q, ω) S(z, 0; q, ω), where the quantity S is defined by expression (7). One should note that integration over z in this and subsequent formulae corresponds, in fact, to integration along the ray trajectory with parameters θ, ϕ. The domain of integration over ω′ includes rays whose reflection level h_r(ω′) > z.
Let us now transform equation (4), seeking its solution in the form (9). Thus, the auxiliary equation (5) allows one to present the solution of equation (4) in the form (9). This representation is exact as long as approximate expressions for the quantities P̃ and X are not used. Substituting (9) into equation (4), one obtains equation (10) for the unknown function X. We shall now assume that the most probable difference between the angles ω′ and ω is small. A heuristic basis for this assumption is provided by analysis of the Poeverlein construction (fig. 1). Examining the Poeverlein construction, it is easy to see that scattering near the reflection level, even through large angles in wave vector space, entails only small changes of the invariant angles θ, ϕ. This is especially true for irregularities strongly stretched along the magnetic field (in this case the tips of the scattered wave vectors form the circles shown in fig. 1 as patterns A and B). One should also note that the changes of the invariant angles θ, ϕ are certainly small if scattering occurs with a small change of the wave vector direction. This situation is typical of irregularity spectra in which irregularities with scales larger than the sounding wavelength dominate. Thus, the small-angle scattering approximation in invariant coordinates has a wider domain of applicability than the common small-angle scattering approximation.
Scattering with small changes of θ, ϕ entails a small value of Φ. This follows directly both from the meaning of this quantity and from the fact that Φ(z, ω, ω) = 0. Let us make use of this to expand the quantity X on the right side of equation (10) in a Taylor series in the small quantities ω′ − ω and Φ.
Note that making a similar expansion of the function P in the initial equation (4) would be incorrect, since the function P need not possess the property of continuity. For example, in the case of a point source, P_0 is a combination of δ-functions. As will be shown later, the function X is expressed through P_0 by means of repeated integration, and hence the differentiability condition is fulfilled much more easily for it. Keeping only first-order small quantities after the expansion, we obtain a first-order partial differential equation (11), together with its characteristic system and the initial conditions for it at z = 0. It is necessary to emphasize the distinction between the quantity ρ′, which is a function of z, and the invariant variable ρ. Solving the characteristic system, we obtain expression (12), where z_0 is the z-coordinate of the base plane. In general, expression (12) gives the exact solution of equation (11). However, since we are already within the framework of the invariant coordinate small-angle scattering approximation, which assumes a small value of A_ω(z, ω), the problem can be simplified somewhat. Setting A_ω ≅ 0 and omitting the index 0 on the invariant coordinates ω, we arrive at the approximate representation (13) for the function X, whose integrand contains dω′ Q(z; ω; ω′) Φ(z, ω, ω′). Thus, in the invariant coordinate small-angle scattering approximation, the solution of the REB equation (4) is represented as a sum of two terms (see (9)), the first of which is (14), where (1/(2π)²) ∫ d²q P_0(q, ω) exp(i q·ρ) = P_0(ρ, ω), and the second of which is given by expression (13).
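As a reminder of the technique applied here, a first-order linear PDE of this advective type is integrated along its characteristics. The generic form below uses purely illustrative notation (F, a, b, and S are placeholders, not quantities from this paper):

```latex
% Generic method of characteristics for a first-order linear PDE:
\[
\frac{\partial F}{\partial z}
 + a(z,\omega)\,\frac{\partial F}{\partial \omega}
 + b(z,\omega)\,\frac{\partial F}{\partial \rho} = S(z,\omega,\rho)
\quad\Longrightarrow\quad
\frac{d\omega}{dz} = a,\qquad
\frac{d\rho}{dz} = b,\qquad
\frac{dF}{dz} = S\bigl(z,\omega(z),\rho(z)\bigr),
\]
% so F is obtained by integrating the source S along the characteristic
% curves (\omega(z), \rho(z)) launched from the initial data at z = 0.
```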
The solution takes its simplest form if one again uses the smallness of the quantity Φ and expands the second exponential in formula (14) in a series. Keeping only first-order small quantities after the expansion, one obtains (15). The last operation is the more accurate, the faster P_0(q, ω) decreases as |q| → ∞. The solution of the radiation energy balance equation obtained in the present section in the form (9), (14), (13), or in the form (15), expresses the spatial-and-angular distribution of the radiation intensity passing through the scattering plasma layer in terms of the spatial-and-angular distribution of the incident radiation, that is, in essence, in terms of the source directivity diagram.
Alternative approach to solving the REB equation
The method of solving the REB equation described in the previous section is based on representing the quantity P(z, ρ, ω) as a sum of the singular part P̃(z, ρ, ω) and the regular part X(z_0, ρ, ω). The regularity of X(z_0, ρ, ω) made it possible to use the Taylor expansion over the variables ρ and ω on the right side of equation (10) and to transform the integro-differential equation (10) into the first-order partial differential equation (11).
However, the stated approach is not the only one possible. The REB equation can be transformed directly using the Fourier representation (16) of the function P(z, ρ, ω). Substitution of (16) into (4) gives equation (17) for the Fourier image P(z, q, ω). The quantity P(z, q, ω) is a differentiable function even when P(z, ρ, ω) has singularities. Therefore, in the invariant coordinate small-angle scattering approximation it is possible to use the expansion (18). Substituting (18) into (17), we obtain a first-order partial differential equation whose characteristic system, with initial conditions P = P_0(q, ω), ω = ω_0 at z = 0, has the solution (21). This solution of the REB equation turns into expression (14) for P̃ when Ã(z, q, ω) → 0. But the latter limiting transition corresponds exactly to the invariant coordinate small-angle scattering approximation used in the previous section in deriving (13) and the subsequent expressions. Let us note, however, that in (21), in contrast with (9), no additional terms appear. This allows one to assume that, in the approximation used, the relation X(z, ρ, ω) ≪ P(z, ρ, ω) holds. Additional arguments in favor of this assumption are presented in the following section.
Analysis of the solution of the REB equation
We shall show, first of all, that the obtained solution satisfies the energy conservation law. For this purpose it is necessary to integrate the left and right sides of (15) over ω and ρ, after multiplying them by sin θ cos θ. The domain of integration over the angles is defined by the condition that both the wave ω and the wave ω′ reach the same level z (since their mutual scattering occurs at level z). To satisfy this condition one should add the factors Θ[h_r(ω) − z] and Θ[h_r(ω′) − z] to the integrand, where Θ(x) is the Heaviside step function and h_r(ω) is the maximum height which can be reached by a ray with parameters θ, ϕ. The integration can now be extended over all possible values of the angles, i.e., over the interval 0 ÷ π/2 for θ and over the interval 0 ÷ 2π for ϕ. Then (15) becomes (22): ∫ P(ω) sin θ cos θ dω = ∫ P_0(ω) sin θ cos θ dω + …, where P(ω) and P_0(ω) are the results of integration of P(z_0, ρ, ω) and P_0(ρ, ω), respectively, over the variable ρ. Owing to the antisymmetry of the integrand with respect to interchange of the primed and unprimed variables, the last term in (22) is equal to zero. Thus, equation (22) reduces to ∫∫ P(z_0, ρ, ω) sin θ cos θ dω d²ρ = ∫∫ P_0(ρ, ω) sin θ cos θ dω d²ρ (23), expressing the energy conservation law: the total flux of radiation energy through the plane remains constant regardless of scattering, as it should in the absence of real (dissipative) absorption. It is not difficult to check that equality (23) is also valid for the exact solution in the form (9) and for the solution in the form (21). With respect to the solution in the form (9), the above discussion reveals one curious peculiarity: the total flux of radiation energy through the base plane is determined by the first term, P̃. The second term (X) gives zero contribution to the total energy flux.
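The vanishing of the scattering term rests on a general fact worth displaying explicitly: for a kernel symmetric under interchange of its arguments, the gain-loss double integral cancels identically (generic notation, not tied to the paper's equation numbers):

```latex
% If K(\omega,\omega') = K(\omega',\omega), then for any f
\[
\iint K(\omega,\omega')\,\bigl[f(\omega') - f(\omega)\bigr]\,
      d\omega\, d\omega' = 0 ,
\]
% since relabeling \omega \leftrightarrow \omega' leaves the integral
% unchanged while reversing the sign of the bracket.
```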
The results of the present section give strong grounds to believe that the spatial-and-angular distribution of the radiation is determined mainly by the first term in the solution (9). The second term represents a correction to the solution that can be neglected in the invariant coordinate small-angle scattering approximation. The validity of this statement can be checked by detailed numerical study of the properties of the approximate solutions of the REB equation obtained here.
Conclusion
In the present work the heuristic basis for using the invariant coordinate small-angle scattering approximation in solving the RTE for a magnetized plasma layer has been considered. Within the framework of this approximation, two versions of the analytical solution have been obtained. They describe the spatial-and-angular distribution of radiation reflected from a monotonic plasma layer with small-scale irregularities.
Final physical conclusions about the influence of multiple scattering effects in a plasma layer on the spatial-and-angular characteristics of radiation will be possible on the basis of detailed numerical study of the obtained solutions. Such study is the subject of our other works. | 2019-04-14T03:16:16.032Z | 1997-09-03T00:00:00.000 | {
"year": 1997,
"sha1": "28024f82303028a354ec80eea4fe61d372c5844d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "28024f82303028a354ec80eea4fe61d372c5844d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
231970596 | pes2o/s2orc | v3-fos-license | A cross-sectional study of latent tuberculosis infection, insurance coverage, and usual sources of health care among non-US-born persons in the United States
Abstract

More than 70% of tuberculosis (TB) cases diagnosed in the United States (US) occur in non-US-born persons, and this population has experienced less than half the recent incidence rate declines of US-born persons (1.5% vs 4.2%, respectively). The great majority of TB cases in non-US-born persons are attributable to reactivation of latent tuberculosis infection (LTBI). Strategies to expand LTBI-focused TB prevention may depend on LTBI-positive non-US-born persons' access to, and ability to pay for, health care. Our objectives were to examine patterns of health insurance coverage and usual sources of health care among non-US-born persons with LTBI, and to estimate LTBI prevalence by insurance status and usual sources of health care. Self-reported health insurance and usual sources of care for non-US-born persons were analyzed in combination with markers for LTBI using 2011–2012 National Health and Nutrition Examination Survey (NHANES) data for 1793 sampled persons. A positive result on an interferon gamma release assay (IGRA), a blood test which measures immunological reactivity to Mycobacterium tuberculosis infection, was used as a proxy for LTBI. We calculated demographic category percentages by IGRA status, IGRA percentages by demographic category, and 95% confidence intervals for each percentage. Overall, 15.9% [95% confidence interval (CI) = 13.5, 18.7] of non-US-born persons were IGRA-positive. Of IGRA-positive non-US-born persons, 63.0% (95% CI = 55.4, 69.9) had insurance and 74.1% (95% CI = 69.2, 78.5) had a usual source of care. IGRA positivity was highest in persons with Medicare (29.1%; 95% CI: 20.9, 38.9). Our results suggest that targeted LTBI testing and treatment within the US private healthcare sector could reach a large majority of non-US-born individuals with LTBI. With non-US-born Medicare beneficiaries' high prevalence of LTBI and the high proportion of LTBI-positive non-US-born persons with private insurance, future TB prevention initiatives focused on these payer types are warranted.
Introduction
Preventing tuberculosis (TB) in the non-US-born United States (US) population is a national public health priority. More than 70% of TB cases diagnosed in the US occur in non-US-born persons; the same population has experienced less than half the recent TB incidence rate declines of US-born persons (declining 1.5% vs 4.2%, respectively, from 2018 to 2019). [1] Approximately 90% of TB cases in non-US-born persons arise not from recent transmission but from reactivation of latent TB infection (LTBI), most likely acquired in their countries of origin. [2] Accordingly, identification and treatment of LTBI in persons from high TB incidence countries is critical to TB prevention and elimination and is recommended by clinical practice guidelines. [3,4] In the US, LTBI-related services are most often delivered in local public health departments but such organizations lack the capacity to provide the volume of services needed to implement current recommendations. [5,6] Conversely, private sector providers in the US (e.g., private physicians, community health centers) may have the capacity to initiate targeted LTBI testing and treatment. However, the success of initiatives to increase these TB prevention activities within the private health care sector depends on LTBI-positive non-US-born persons' access to health insurance and health care providers. While most non-US-born persons have health insurance and/or a usual source of health care, [7,8] the health care access of non-US-born persons with LTBI is unknown. We conducted descriptive analyses of data from the 2011 to 2012 National Health and Nutrition Examination Survey (NHANES) to assess TB prevention opportunities in the private health care sector.
Data source and study design
NHANES is a biennial cross-sectional survey of a nationally representative sample of civilian noninstitutionalized individuals in the US. This survey is conducted by the National Center for Health Statistics of the US Centers for Disease Control and Prevention. It uses a complex, multistage probability sampling design; data are collected from interviews and physical examinations. [9] The 2011-2012 NHANES dataset, which is the most recent survey with TB-related questions and laboratory measurements, was used for this analysis. This project was reviewed and approved by the North Texas Regional Institutional Review Board as exempt category research.
Our study included non-US-born noninstitutionalized respondents ages 6 years or older who had interferon gamma release assay (IGRA) test results and non-missing data for self-reported insurance and usual source of healthcare variables. IGRAs are whole-blood tests that are used to diagnose LTBI by evaluating cell-mediated immune response to Mycobacterium tuberculosis. [10] The 2011-2012 NHANES tested for LTBI in persons aged ≥6 years with a tuberculin skin test and a QuantiFERON-TB Gold In-Tube IGRA. [11] We used IGRA results because persons with prior Bacille Calmette-Guerin (BCG) vaccines may experience false positive results with tuberculin skin tests, and IGRAs more accurately predict the progression of LTBI to TB in non-US-born persons. [12] Those below age 6 years were excluded because IGRAs are not recommended for young children. [13] IGRA results were available for 91.7% of non-US-born respondents aged ≥6 years who received physical examinations.
Study variables
IGRA results were categorized as positive or negative; IGRA positivity was used as a proxy for infection. In addition to IGRA positivity, our study focused on health insurance and usual sources of health care. We created a health insurance variable that categorized coverage as Medicare, Medicaid/Children's Health Insurance Program, private insurance, other or unspecified insurance, or no insurance (Supplemental File 1, http://links.lww.com/MD/F721). Our categorical usual source of health care variable included no usual source of care, clinic/health center, doctor's office/health maintenance organization (HMO), and other/not specified. The "no usual source of care" category included persons who said they had no usual source of care and those answering, "hospital emergency room." "Other/not specified" included respondents giving their usual source of care as "hospital outpatient department," "some other place," and those with an unspecified usual source of care (Supplemental File 1, http://links.lww.com/MD/F721).
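As an illustration of the kind of recoding this implies (the response strings and default handling below are hypothetical stand-ins, not actual NHANES codes), the usual-source-of-care categorization might look like:

```python
# Hypothetical recode of survey responses into the study's categories.
USHC_MAP = {
    "none": "No usual source of care",
    "hospital emergency room": "No usual source of care",  # study's rule
    "clinic": "Clinic/health center",
    "health center": "Clinic/health center",
    "doctor's office": "Doctor's office/HMO",
    "hmo": "Doctor's office/HMO",
    "hospital outpatient department": "Other/not specified",
    "some other place": "Other/not specified",
}

def recode_ushc(response: str) -> str:
    """Map a raw response to a study category; unspecified answers
    default to Other/not specified, as described in the text."""
    return USHC_MAP.get(response.strip().lower(), "Other/not specified")

print(recode_ushc("Hospital Emergency Room"))  # -> No usual source of care
```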
Demographic variables that were evaluated in this study included gender and age. Participants were identified as either male or female and age was categorized as 6 to 34 years, 35 to 49 years, 50 to 64 years, and 65 years or older. Duration of US residence for non-US-born individuals was categorized as being present in the US for less than 5 years, 5 years or more, or unspecified; this categorization reflects the increasing probability of having insurance, a key driver of healthcare utilization. [14] Federal poverty level (FPL) was categorized as above 137%, less than or equal to 137%, and missing to reflect Medicaid eligibility thresholds in expansion states. [15]
Statistical analysis
Univariate analysis was performed to evaluate the demographic characteristics of the sample. These variables were further examined for LTBI prevalence by each subcategory in a bivariate analysis. We estimated demographic category percentages by IGRA status, IGRA percentages by demographic category, and 95% confidence intervals for each percentage; these estimates yielded the sample distribution and IGRA positivity by category. The relative standard error (RSE) for each estimate was calculated and the number of observations was evaluated to assess the reliability of each estimate. In accordance with NHANES analytic guidelines, estimates for which the RSE exceeded 30% and/or the number of observations was less than 30 were deemed unreliable and notated as such in tables. [9,16] We used Stata SE 15.1 (StataCorp) to adjust for weights and complex survey design (Supplemental File 2, http://links.lww.com/MD/F722).
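As a toy sketch of the estimation and reliability-screening steps (the analysis itself was run in Stata with full accounting for strata and primary sampling units; the data and variable names below are simulated and hypothetical):

```python
import numpy as np

def weighted_proportion(flag: np.ndarray, weights: np.ndarray) -> float:
    """Survey-weighted proportion of a 0/1 indicator (e.g., IGRA positivity)."""
    return float(np.sum(flag * weights) / np.sum(weights))

def is_reliable(estimate: float, se: float, n_obs: int) -> bool:
    """NHANES-style reliability screen: RSE at most 30% and at least 30 observations."""
    rse = se / estimate if estimate > 0 else float("inf")
    return rse <= 0.30 and n_obs >= 30

# Simulated data; a real analysis must use the NHANES strata/PSU design
# variables (as done in the paper with Stata) to obtain design-based SEs.
rng = np.random.default_rng(0)
igra_pos = (rng.random(500) < 0.16).astype(float)
weights = rng.uniform(0.5, 3.0, size=500)
p = weighted_proportion(igra_pos, weights)
naive_se = np.sqrt(p * (1 - p) / igra_pos.size)  # ignores the design; illustration only
print(p, is_reliable(p, naive_se, igra_pos.size))
```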
Demographic characteristics
Data for 1793 respondents were analyzed using NHANES weights to represent the total population of non-US-born persons aged ≥6 years. After weighting, an estimated 15.9% [95% confidence interval (CI) = 13.5, 18.7] of non-US-born persons were IGRA-positive. Respondents aged ≥65 years were most often IGRA positive (32.0%, 95% CI: 23.6, 41.7%) (Table 1). Males and those with FPL ≤137% also had disproportionately higher IGRA positivity within the sample, at 17.5% (95% CI: 14.5, 20.5) and 16
Discussion
These results have implications for both public health practice and private sector medical care. More than 70% of TB cases in the US occur in non-US-born persons [1] and roughly 90% of these cases are attributable to reactivation of remotely-acquired LTBI. [2] These statistics, along with our findings, suggest a need for public health agencies to engage with private sector health care providers and payers who serve non-US-born persons. Arrangements in which public health practitioners or consulting medical experts provide technical support to private sector providers to increase targeted testing and treatment could accelerate domestic TB elimination efforts. Our finding that nearly two-thirds of non-US-born individuals with LTBI have health insurance suggests that payers can facilitate LTBI care, with financial benefit to providers. Policy change can also expand TB prevention opportunities in the private sector. Because the US Preventive Services Task Force has given LTBI testing of high-risk persons a "B" rating, [4] the Affordable Care Act (ACA) mandates that such testing be covered by most private health insurance plans with no patient cost sharing. Similarly, the Centers for Medicare and Medicaid Services (CMS), which finances care for over 100 million people, has the option to cover testing without cost sharing. Lower out-of-pocket costs increase the use of preventive care, [17] so these policies could substantially increase LTBI screening. However, CMS has not taken the administrative steps to include LTBI testing on their list of covered preventive services. [18] This is a particular concern for Medicare beneficiaries because LTBI prevalence increases with age; accordingly, we found that Medicare beneficiaries had high rates of IGRA positivity (Table 1). Since Medicare is the largest third-party payer in the US, a CMS National Coverage Determination that adds LTBI testing to Medicare's list of covered preventive services would be a powerful lever to increase targeted screening. While the majority of IGRA-positive persons had insurance, a sizable minority did not. Policy changes could move private sector providers to expand LTBI care to uninsured persons. State Medicaid plans may opt to cover treatment of LTBI-positive low-income persons who would otherwise be ineligible for Medicaid. [19] Seven states currently elect this option; if more states did so, it could help TB elimination efforts in the private sector. In addition, Medicaid 1115b waivers support use of Medicaid funds for special initiatives that could include uninsured persons, such as LTBI testing and treatment initiatives at some community health clinics. [20] Our findings suggest such projects are useful, and their lessons learned will be important.

(Table 1 notes: * estimates and 95% CIs may be unreliable because the relative standard error (RSE) exceeds 30%; † estimates and 95% CIs may be unreliable due to small sample size. CHIP = Children's Health Insurance Program, HMO = health maintenance organization, LTBI = latent tuberculosis infection, USHC = usual source of health care. LTBI was identified based on interferon gamma release assay results; all proportions account for the complex survey design of the NHANES. Source: NHANES. Persons whose usual source of health care was a hospital emergency department were categorized as having no usual source of health care; "other/not specified" includes hospital outpatient departments, "some other place," and missing responses.)
However, LTBI-related TB prevention efforts focused solely on community clinics would exclude many at-risk persons. Our results suggest that physicians' offices/HMOs serve as the usual source of care for the largest proportion of non-US-born persons with LTBI. Local and state TB programs' and national public health agencies' engagement with private physicians and HMOs has the potential to increase targeted LTBI testing and treatment and greatly advance the nation's TB elimination goal.

(Table 2. Weighted estimates of demographic and health system characteristics of non-US-born persons in the US aged ≥6 years in 2011-2012, by interferon gamma release assay (IGRA) test results. Notes: * estimates and 95% CIs may be unreliable because the relative standard error (RSE) exceeds 30%; † estimates and 95% CIs may be unreliable due to small sample size; percentages do not sum to 100 because missing and unspecified data are not shown; abbreviations and coding as in Table 1. Source: NHANES.)

Our analysis has limitations. Because the ACA was implemented after the 2011-2012 NHANES, post-ACA increases in insurance coverage and health care utilization were not captured, [7] so our findings are likely underestimates. Additionally, the NHANES sample excludes noncivilian, institutionalized, and homeless persons, so these populations were not represented in our analysis. Hence, undocumented persons and refugees who recently arrived in the US may be underrepresented. We were also unable to examine IGRA positivity in children younger than 6 years, although young children with LTBI are highly susceptible to progression to TB. We did not have access to specific country of origin, so our data likely include non-US-born persons from countries with low TB incidence (e.g., Canada, Australia, western European countries). The relatively small sample of non-US-born persons resulted in unreliable estimates for certain categories of some variables. However, estimates of overall insurance coverage and having usual sources of care were robust and these results provide important, actionable insights.
Conclusion
Compared to the general US population, non-US-born persons face a higher TB risk and health care barriers. However, we found that health insurance and/or a usual source of health care are common among non-US-born individuals, including IGRA-positive individuals. These findings point to significant opportunities to advance TB prevention in the US by leveraging the reach and capacity of private health care. | 2021-02-21T06:16:04.361Z | 2021-02-19T00:00:00.000 | {
"year": 2021,
"sha1": "380f4635dd33846c477223058935fd9fbbd3651b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1097/md.0000000000024838",
"oa_status": "GOLD",
"pdf_src": "WoltersKluwer",
"pdf_hash": "9df35a8f650b66d1bce6cf7834a5fc0b83ef7786",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15713191 | pes2o/s2orc | v3-fos-license | The autotaxin-lysophosphatidic acid–lysophosphatidic acid receptor cascade: proposal of a novel potential therapeutic target for treating glioblastoma multiforme
Glioblastoma multiforme (GBM) is the most malignant tumor of the central nervous system (CNS). Its prognosis is one of the worst among all cancer types, and it is considered a fatal malignancy, incurable with conventional therapeutic strategies. As the bioactive multifunctional lipid mediator lysophosphatidic acid (LPA) is well recognized to be involved in the tumorigenesis of cancers by acting on G-protein-coupled receptors, LPA receptor (LPAR) antagonists and LPA synthesis inhibitors have been proposed as promising drugs for cancer treatment. Six LPARs, named LPA1–6, are currently recognized. Among them, LPA1 is the dominant LPAR in the CNS and is highly expressed in GBM in combination with the overexpression of autotaxin (ATX), the enzyme (a phosphodiesterase, which is a potent cell motility-stimulating factor) that produces LPA. Invasion is a defining hallmark of GBM. LPA is significantly related to cell adhesion, cell motility, and invasion through the Rho family GTPases Rho and Rac. LPA1 is responsible for LPA-driven cell motility, which is attenuated by LPA4. GBM is among the most vascular human tumors. Although anti-angiogenic therapy (through the inhibition of vascular endothelial growth factor (VEGF)) was established, sufficient results have not been obtained because of the increased invasiveness triggered by anti-angiogenesis. As both ATX and LPA play a significant role in angiogenesis, similar to VEGF, inhibition of the ATX/LPA axis may be beneficial as a two-pronged therapy that includes anti-angiogenic and anti-invasion therapy. Conventional approaches to GBM are predominantly directed at cell proliferation. Recurrent tumors regrow from cells that have invaded brain tissues and are less proliferative, and are thus quite resistant to conventional drugs and radiation, which preferentially kill rapidly proliferating cells. A novel approach that targets this invasive subpopulation of GBM cells may improve the prognosis of GBM. Patients with GBM that contacts the subventricular zone (SVZ) have decreased survival. A putative source of GBM cells is the SVZ, the largest area of neurogenesis in the adult human brain. GBM stem cells in the SVZ that are positive for the neural stem cell surface antigen CD133 are highly tumorigenic and enriched in recurrent GBM. LPA1 expression appears to be increased in these cells. Here, the author reviews research on the ATX/LPAR axis, focusing on GBM and an ATX/LPAR-targeted approach.
Introduction
Glioblastoma multiforme (GBM) is the most highly malignant type of brain tumor. Despite the use of optimal treatments and an evolving standard of care (maximal safe resection with concurrent temozolomide (TMZ) chemotherapy and radiation therapy), the median survival of patients diagnosed with GBM is only 12 to 16 months [1]. GBM cells are highly motile and invade the normal brain parenchyma diffusely, resulting in poor prognosis [2]. Because the blood-brain barrier (BBB) is disrupted in GBM, some components in plasma or serum may be able to affect the cell motility of GBM [3]. The most plausible components are considered to be autotaxin (ATX) and lysophosphatidic acid (LPA). ATX is a potent cell motility-stimulating factor that is identical to lysophospholipase D and that produces a bioactive phospholipid, LPA, from lysophosphatidyl choline. ATX is overexpressed in GBM. In addition, LPA1, the LPA receptor (LPAR) responsible for LPA-driven cell motility, is predominantly expressed in GBM [4]. These important results suggest that the ATX/LPAR axis may be a target for GBM therapy. Here, the author reviews current findings, focusing on the ATX/LPA/LPAR cascade as a promising route toward improved treatment of GBM.
GBM and therapeutic difficulty
GBM is the most aggressive type of tumor of the central nervous system (CNS), and its prognosis is one of the worst among all cancer types. GBM has diffuse, invasive, and highly angiogenic characteristics, which result in a high recurrence rate. Following surgical resection, the current standard of therapy involves concurrent administration of the DNA alkylating agent TMZ with radiation, followed by adjuvant TMZ [1]. In addition to TMZ, another agent, implants of biodegradable polymers containing the alkylating agent carmustine, was approved for clinical use. Although a phase III trial has suggested a modest survival benefit [5], the study had several methodological problems and resulted in frequent toxicities, such as brain edema, infection, and seizures. A direct comparison of carmustine with standard chemotherapy with TMZ is lacking [6]. For the treatment of recurrent GBM, none of the available salvage treatments has clearly shown improved survival [6]. The chemicals in this class of alkylating agents are highly mutagenic. This reveals another aspect of a number of anti-cancer treatments: in addition to their effects in reducing or eliminating tumors, X-rays and certain traditional cytotoxic agents are also carcinogenic, and their short-term success in producing clinical remission may be counterbalanced by the later appearance of independently arising, second-site tumors that are a consequence of their mutagenic action [7].
Thus, at present, GBM is considered a fatal malignancy that is incurable with conventional therapeutic strategies. Given the resistance of GBM to conventional therapeutic approaches, an urgent need exists to develop alternative strategies to complement or improve current approaches and improve long-term patient survival. Strategies under development include novel adjuvant chemotherapeutics to be combined with standard care, as well as novel molecularly targeted approaches against the tumor and its environment.
LPA, LPA receptors, and GBM

LPA (1- or 2-acyl-sn-glycerol 3-phosphate) is one of the simplest natural phospholipids, consisting of a single fatty acyl chain, a glycerol backbone, and a free phosphate group. LPA is a major active constituent of serum, and unlike most other phospholipids, it is also water soluble [8]. LPA is a main membrane-derived multifunctional lipid mediator that is best known for its ability to stimulate proliferation, migration, and survival of many cell types, both normal and malignant [9]. LPA has been implicated in the pathogenesis of several conditions, including cancer [8], atherosclerosis and cardiovascular disease [10,11], Alzheimer disease [12], psychiatric disorders such as schizophrenia and bipolar disorders [13,14], ischemic cerebrovascular disease [15], and hydrocephalus [16].
LPA is a bioactive phospholipid that stimulates cell proliferation, migration, and survival by acting on its cognate G-protein-coupled receptor (GPCR). Aberrant LPA production, receptor expression, and signaling probably contribute to cancer initiation, progression, and metastasis [8]. Although LPA production may partially occur intracellularly, most LPA is produced extracellularly by secreted enzymes. Three pathways mediate the production of LPA: (1) cleavage of lysophosphatidylcholine by lysophospholipase D (lysoPLD) in blood, (2) deacylation of phosphatidic acid by phospholipase A2 in inflammatory cells, activated platelets, endothelial cells, and cancer cells, and (3) non-enzymatic, mild oxidation of low-density lipoprotein [17,18].
The primary molecular mechanism was reported in 1996 with the cloning of the first cognate receptor for LPA [19]. Currently, at least six different LPA receptors (LPARs; LPA1-LPA6) have been identified that share a common GPCR structure [20]. All six receptors are expressed throughout the body during development and adulthood in unique spatiotemporal patterns [21].
Based on the amino acid sequence, LPA1-LPA3 share 50% homology and belong to the endothelial differentiation gene (Edg) family. Within the brain, LPA1 is the most highly expressed, although LPA2 and LPA3 are also present [20]. In 2003, Noguchi et al. successfully identified LPA4 (p2y9/GPR23) through ligand screening of orphan GPCRs sharing high amino acid sequence homology with the human platelet-activating factor receptor, a known GPCR [22]. The remaining LPARs, including LPA4-LPA6, are structurally distinct from the Edg family and are closely related to the purinergic receptor family (non-Edg family) [23]. Non-Edg family members have a higher affinity for alkyl-LPA species compared to the Edg family members that have higher affinity for the acyl variants [22].
Initial studies suggested that the brain is rich in LPA and LPARs [24][25][26] and contains enzymes for the synthesis and degradation of LPA [27]. LPA induces numerous responses related to the morphological, pathological, and clinical functions of the CNS [28][29][30][31][32][33][34][35][36][37][38]. The constant level of LPA1 expression in undifferentiated and differentiated astrocytes suggests that LPA1 primarily mediates the LPA-induced stimulation of DNA synthesis [39]. LPA1-LPA3 are expressed at extremely low levels in the normal adult brain, but expression is upregulated following brain injury [40]. Following injury or ischemia of the CNS, LPA activity increases in the cerebrospinal fluid [41,42]. LPA concentrations probably increase in the CNS when the BBB is impaired, including after brain injury, cerebral ischemia, and GBM. LPA1, the LPAR responsible for LPA-driven cell motility, is predominantly expressed in GBM [4,43].
ATX and GBM
ATX, a 125-kDa glycoprotein, is a multifunctional phosphodiesterase that was originally isolated from melanoma cells as a potent cell motility-stimulating factor [44]. ATX is identical to lysoPLD and catalyzes the production of LPA from lysophosphatidyl choline [18]. ATX not only possesses lysoPLD activity, but it also is a lipid carrier protein that efficiently transports LPA to its receptors, LPA1-LPA6 [45]. All biological effects of ATX are thought to be attributable to LPA production and subsequent receptor stimulation [46]. ATX is very widely expressed, with mRNA detected in essentially all tissues including high levels of expression in brain [47]. ATX is also present in plasma [9]. ATX is highly expressed in a variety of cancers [48][49][50][51][52] including GBM [53,54], and is implicated in tumor progression, invasion, and angiogenesis. ATX overexpression in GBM may facilitate invasion and migration through endothelial cells in an autocrine manner, as well as promote neovascularization in the tumor core through paracrine signaling [54].
Most brain cancer cells express high levels of ATX, with the highest expression in the SNB-78 glioblastoma cell line (derived from GBM) [4]. In addition, GBM tissue samples derived from surgical specimens show extremely high ATX expression [4]. GBM may acquire its high invasiveness through autocrine production of LPA by ATX [18]. Inhibition of ATX by its specific inhibitor PF-8380 (Pfizer inflammation research, Missouri, USA) leads to decreased invasion and enhanced radiosensitization of GBM cells [55]. Furthermore, inhibition of ATX leads to diminished tumor vascularity and delayed tumor growth of GBM [55]. As a secreted phosphodiesterase, ATX may be an attractive druggable therapeutic target for GBM.
Angiogenesis, hypoxia, pseudopalisading necrosis, and LPA

GBM is among the most vascular human tumors [56]. Tumors require angiogenesis to maintain a constant nutrient supply. As the tumor grows, it disrupts preexisting blood vessels. Newly formed brain tumor blood vessels possess a defective BBB that contributes to the pathogenesis of tumor-associated edema [57], are associated with an increased risk of intratumoral hemorrhage [58], and are responsible for contrast enhancement on computed tomography and magnetic resonance imaging [59].
Intravascular thrombosis within the tumor can clearly accentuate and propagate tumoral hypoxia and necrosis. Intravascular thrombosis within the tumoral tissue of GBM is a frequent intraoperative finding by neurosurgeons [60]. In many instances, intravascular thrombosis is seen within or adjacent to the regions of pseudopalisading necrosis, leading to the proposition that vaso-occlusion due to thrombosis may directly initiate or propagate hypoxia and necrosis in GBM [60]. A plausible contributing factor to intravascular thrombosis in GBM is the access of plasma-clotting factors to tumoral tissue. LPA plays a role in regulating platelet function and thrombosis [11]. Plasma ATX is associated with platelets during aggregation and is concentrated in arterial thrombi [61].
Pseudopalisading necrosis has always been a histopathological curiosity in GBM. No other tumor in humans demonstrates such a histopathological alteration [62]. Mamun et al. reported that cerebral ischemia/hypoxia promotes rich pseudopalisading necrosis in a rat glioblastoma model [62] combined with middle cerebral artery occlusion modified from a reported method [63,64]. Vascular occlusion and intravascular thrombosis lead to tissue hypoxia in perivascular regions. The tumor cells then become hypoxic and undergo apoptosis or necrosis, eventually leading to a central necrotic zone [62]. Hypoxic tumor cells produce angiogenic factors, the most predominant of which is vascular endothelial growth factor (VEGF). Hypoxia also induces nuclear accumulation of the hypoxia-inducible factor (HIF) alpha and beta complex, resulting in transcriptional activation of VEGF [65], and can upregulate expression of VEGFR2, a VEGF receptor, in endothelial cells. VEGF signaling contributes to the highly angiogenic nature of GBM [66].
In pathological states, the role of LPA in angiogenesis becomes important [67]. LPA stimulates cancer cell secretion of VEGF and triggers angiogenesis [68]. Lee et al. reported that LPA induces VEGF via HIF-1 alpha activation [69]. Tissue hypoxia is a critical factor for tumor aggressiveness and metastasis in cancers. HIF-1 alpha plays a critical role in enhancing and/or sensitizing the role of LPA in cell migration and invasion in hypoxic conditions [70]. Both LPA1 and LPA3 are involved in LPA-induced VEGF secretion [71].
Bevacizumab, a monoclonal antibody that recognizes VEGF, is currently clinically approved for GBM treatment. Unfortunately, although bevacizumab treatment prolongs progression-free survival in a subset of patients, only minimal improvements in overall survival are observed, and patients invariably relapse [72,73]. This may be explained by several previous observations. As the brain is a highly vascular organ, GBM cells can spread diffusely without necessarily requiring neovascularization [74,75]. Inhibition of tumor angiogenesis can modulate patterns of tumor invasion [76,77]. Increasing evidence suggests that anti-angiogenic therapy can lead to enhanced tumor cell invasion [76][77][78][79]. Although the exact mechanisms responsible for this increased invasiveness are unknown, researchers have speculated that a decreased supply of oxygen and nutrients may act as a stimulus for tumor cell migration [80]. Lamszus et al. reported an interesting double-pronged inhibitory regimen for this condition: a combined treatment directed at VEGFR-2 and the epidermal growth factor receptor (EGFR) [80]. This strategy parallels the ATX/LPA axis-targeted approach. In addition to LPA, ATX plays a role similar to that of VEGF in angiogenesis [81]. Thus, inhibition of the ATX/LPA axis may be beneficial in GBM as a double-pronged therapy that includes anti-angiogenic therapy and anti-invasion therapy.
Invasion, cell motility, and LPA
Invasion is a defining hallmark of GBM, just as metastasis characterizes other cancers. Drivers of GBM invasion include autocrine signals propagated by secreted factors that signal through receptors on the tumor. Various autocrine motility factors are expressed by invasive GBM cells. Most autocrine and paracrine interactions involved in GBM invasion constitute known signaling systems during CNS development that involve the migration of precursor cells that populate the developing brain [82]. LPA1, the LPAR responsible for LPA-driven cell motility, is predominantly expressed in GBM [4]. The pattern of invasion of GBM does not seem to be random, but rather seems to follow the path of blood vessels and more prominently myelinated axons [83]. GBM usually invades along white matter tracts. According to a study using post-mortem human brain tissue, in the normal brain including the cortex and corpus callosum, LPA1 is expressed in the white matter along fibers resembling myelinated axons [84]. This is consistent with the presence of LPA1 on white matter tracts of adult mouse brains and human cerebral cortex [85]. The LPA1 antagonist Ki16425 (Kirin Brewery Co., Takasaki, Japan) effectively suppresses LPA-induced motility of glioblastoma cells. Thus, the motility of these cells appears to depend on ATX and LPA1 [4]. LPA1-induced cell motility of GBM was also shown in a previous report by Manning et al. [43].
Rho family GTPases including Rho and Rac are presumed to modulate various cellular functions such as cytoskeletal reorganization, cell motility, invasion, and proliferation. LPA is especially important in cell adhesion, and LPA signaling has an obvious impact on both focal adhesions (Rho) and lamellipodia (Rac) [86,87]. Cell motility is tightly controlled by the activities of Rho and Rac in a coordinated manner. The balance of Rho and Rac activities is a critical determinant of cell movement [88]. Each LPAR differentially contributes to these activities.
Recently developed molecular targeted approaches to GBM involving signaling pathways such as EGFR, PDGFR, PI3K/AKT, and RAS predominantly address key pathways involved in cell proliferation, whereas recurrent tumors regrow from the cells that have invaded the brain and may be temporally less proliferative [83]. A novel approach that targets this invasive subpopulation of tumor cells and its environment will be necessary to improve the prognosis of GBM. From this point of view, a new treatment approach targeting LPA and LPAR (LPA1) may be promising.
Subventricular zone (SVZ), neural stem/progenitor cells, and LPAR
Migration is a phenomenon that is mainly present during development [89]. In the adult CNS, the only cells thought to maintain the capacity for motility are stem cells or precursor cells in the subependymal layer that may be recruited for regeneration and repair. Research on human cancers including brain tumors has revealed that tumor stem cells often constitute only a small proportion (<5 %) of the neoplastic cells in these tumors. Most of the cells in the neoplastic stem cell population that are not actively dividing have proven to be quite resistant to commonly used cytotoxic drugs, which preferentially kill rapidly proliferating cells [7].
Cells originating from the stem-cell reservoir have been hypothesized to be a source of glioma cells [90,91]. Recent evidence suggests that the heterogeneity seen in GBM may be related to the cells of origin, which have stem cell-like characteristics [92][93][94]. GBM contains a subset of stem-like cells that express the gene for the neural stem cell surface antigen CD133 and are capable of self-renewal, tumor propagation, and differentiation into multiple lineages [93,95]. This population may play an important role in tumor recurrence because they are resistant to chemotherapy and radiation therapy and are capable of initiating tumors that recapitulate GBM histology [92,93]. A putative source of glioma cells is the SVZ, the largest area of neurogenesis in the adult human brain [96]. Neural stem cells (multipotent neural progenitor cells) line the lateral ventricles in the SVZ, and recruitment of these progenitor cells may play a role in the aggressive behavior encountered in GBM [97]. In animal studies, the SVZ demonstrates increased susceptibility to tumorigenesis compared with cortical regions [98][99][100]. Experiments and clinical findings provide evidence that neuronal progenitor cells in the SVZ with a high migratory potential are involved in the aggressive GBM subtype [101]. GBMs that contact the lateral ventricles have been associated with multifocal dissemination [93,102] and worse overall survival than nonperiventricular GBMs [103]. Jafri et al. demonstrated that patients with GBM involving the SVZ have decreased overall survival and progression-free survival, which may have prognostic and therapeutic implications [97]. A comparison of long-term survivors and short-term survivors with GBM showed that tumor location with regard to the SVZ is significantly associated with survival [104].
The LPA1 receptor was originally identified from neuronal progenitor cells in the ventricular zone of the developing brain and was initially termed ventricular zone gene-1 (vzg-1) [19]. Human neural progenitors express functional LPARs that regulate cell growth and morphology [105]. CD133(+) stem cells are highly tumorigenic [106] and enriched in recurrent GBM [107]. Lysophospholipids have been reported to regulate a diverse range of stem cell processes including proliferation, survival, differentiation, and migration in adult and embryonic stem cells and progenitors [108]. LPA inhibits neuronal differentiation of neural stem/progenitor cells derived from human embryonic stem cells [109]. Expression of the LPA1 receptor is increased in CD133(+) GBM stem cells [110].
Taken together, LPAR (LPA1) may be significantly involved in the aggressive behavior and poor prognosis encountered in GBM. Thus, therapies that target LPA1 may be potentially beneficial in GBM treatment, especially in preventing invasion, re-growth, and recurrence.
A novel potential therapeutic approach against GBM
Because LPA is well recognized to be involved in the tumorigenesis and metastasis of a variety of cancers, LPAR antagonists and LPA synthesis inhibitors (ATX inhibitors) have been proposed to be promising drugs for cancer treatment [8]. Almost half of all drugs in current use target members of the GPCR family, making LPARs attractive targets for therapeutic development. Structure-function analysis, molecular modeling, and studies of receptor structure are already contributing to the development of novel receptor-selective antagonists [8]. As previously mentioned, the LPA1 antagonist Ki16425 (Kirin Brewery Co., Takasaki, Japan) effectively suppresses the LPA-induced motility of glioblastoma cells [4]. Ki16198 (Kyowa Hakko Kirin Co., Ltd., Tokyo, Japan), which specifically inhibits LPA1 and LPA3, is a promising orally active LPAR antagonist for inhibiting the invasion and metastasis of pancreatic cancer cells [111].
An LPA4 agonist is another possibility. LPA4 attenuates LPA1-driven migration and invasion, indicating functional antagonism between the two subtypes of LPAR [112]. Thus, an LPA4-selective agonist may have some beneficial effects for the treatment of GBM, although the expression levels of LPA4 in GBM are relatively low [4]. Although LPA4 (P2y9/GPR23) was originally isolated from brain, high expression of LPA4 is not detected in brain [22]. This may be explained by the observation that specific types of cells in restricted areas express LPA4. Rhee et al. reported that in an immortalized hippocampal progenitor cell line, high-level expression of LPA1 and moderate-level expression of LPA4 were detected [113], suggesting that LPA4 may affect LPA1 activity in brain tumors and/or their environment. To determine the potential role of LPA1- and LPA4-targeted compounds in GBM, the following questions remain to be answered in future studies. (1) How does LPA1 react to various species of LPA within the tumor environment, with and without VEGF inhibition? (2) Is LPA4 expressed in neighboring astrocytes or the peritumoral environment with regard to the helper nature of mature astrocytes? (3) How will this expression affect the progression of GBM? LPA species with both saturated fatty acids (16:0, 18:0) and unsaturated fatty acids (16:1, 18:1, 18:2, 20:4) have been detected in serum, plasma, and activated platelets [114][115][116]. Interestingly, these LPA species exhibit different biological activities [117][118][119], possibly by the differential activation of the different LPARs. For example, LPA with an unsaturated fatty acid induces proliferation and de-differentiation of smooth muscle cells, whereas LPA with a saturated fatty acid does not [118,119]. These observations clearly indicate the biological significance of LPA species in vivo, and they may influence GBM. Whether transactivation of EGFR (by G12/13 activation from LPA) may also be suppressed by an LPA1 antagonist remains unknown and should be elucidated.
In addition to direct pharmacological modulation of LPARs, several groups have targeted the upstream enzyme ATX for potential therapeutics. As ATX expression accounts for at least half of plasma LPA levels [120], these drugs ultimately attenuate LPA signaling.
Several ATX inhibitors have been synthesized. These include the small molecules ONO-8430506 [121], PF-8380 [55], and gintonin, which is a plant-derived LPA/ginseng glycolipoprotein complex that results in feedback inhibition of ATX through LPAR signaling [122]. Whether these inhibitors are reversible may also be an important factor for actual clinical application.
A new type of multi-drug therapy in which several drugs with synergistic effects are administered simultaneously may be beneficial. The potential benefits of targeting ATX/LPAR were shown in murine breast cancer models using a combination of an ATX inhibitor and an LPAR antagonist [123]. Moreover, Schleicher et al. reported that BrP-LPA, a novel dual-function pan-LPAR antagonist/ATX inhibitor, enhances radiation-induced endothelial cell death, disrupts endothelial cell biological function, and reduces glioma cell viability and migration [124]. BrP-LPA treatment prior to irradiation represses GBM tumor growth in vivo [124]. A monoclonal antibody that specifically binds and neutralizes LPA has been developed and is undergoing preclinical evaluation [125,126].
The predictable potential side effects of these drugs require careful attention. One report has shown that LPA1 deficiency leads to a schizophrenia-type pathology in mice [85]. Such devastating side effects must be considered when developing new drugs. Designing new drugs that retain the desired effects without causing undesirable ones may be feasible. Currently, no approved drugs targeting ATX/LPAR are available for clinical use. A detailed analysis of the pharmacological properties of synthetic inhibitors, including solubility, toxicity, pharmacokinetics, bioavailability, and permeability, will be important in the effort to move these drugs into clinical use [127]. Clinical trials serve as the penultimate step on the path toward clinical use in the treatment of human cancers including GBM [128]. Several LPAR-specific analogues and small molecules have been synthesized. To date, at least three compounds have passed phase I and phase II clinical trials for idiopathic pulmonary fibrosis and systemic sclerosis [129][130][131][132]. Although no LPAR-targeting cancer drugs have reached clinical trial stages thus far, pharmaceutical investigation is progressing rapidly [21].
Finally, the author would like to propose a future putative protocol for treating GBM in the following order: (1) maximum tumor resection using multimodal intraoperative information such as that provided by intraoperative neuroimaging, neuro-navigation, photodynamic diagnosis, neurophysiological monitoring, histology, and possibly photodynamic therapy during surgery; (2) conventional radiation/TMZ therapy; (3) ATX- and/or LPAR-targeted therapy instead of TMZ maintenance therapy; and (4) in the case of tumor recurrence, tumor removal (or no removal) followed by ATX- and/or LPAR-targeted therapy. Theoretically, this protocol may extend both progression-free survival and overall survival of GBM patients compared to the present standard therapy. The use of LPAR agonists/antagonists and/or ATX inhibitors seems to be an attractive strategy, and such drugs may be promising for the treatment of GBM.
Conclusion
Therapeutic approaches targeting the ATX-LPA-LPAR cascade may be a realistic addition to the treatment of GBM in the near future.
Competing interests
The author declares that he has no competing interests.
Authors' contributions

ST wrote the manuscript and read and approved the final manuscript. | 2016-05-12T22:15:10.714Z | 2015-06-18T00:00:00.000 | {
"year": 2015,
"sha1": "8f1b6c73fda2039ad8999031fd10a33fd0234138",
"oa_license": "CCBY",
"oa_url": "https://lipidworld.biomedcentral.com/track/pdf/10.1186/s12944-015-0059-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f1b6c73fda2039ad8999031fd10a33fd0234138",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
230799268 | pes2o/s2orc | v3-fos-license | (Non)local logistic equations with Neumann conditions
We consider here a problem of population dynamics modeled on a logistic equation with both classical and nonlocal diffusion, possibly in combination with a pollination term. The environment considered is a niche with zero-flux, according to a new type of Neumann condition. We discuss the situations that are more favorable for the survival of the species, in terms of the first positive eigenvalue. Quite surprisingly, the eigenvalue analysis for the one dimensional case is structurally different from the higher dimensional setting, and it sensibly depends on the nonlocal character of the dispersal. The mathematical framework of this problem takes into consideration the equation $$ -\alpha\Delta u +\beta(-\Delta)^su =(m-\mu u)u+\tau\;J\star u \qquad{\mbox{in }}\; \Omega,$$ where $m$ can change sign. This equation is endowed with a set of Neumann conditions that combines the classical normal derivative prescription and the nonlocal condition introduced in [S. Dipierro, X. Ros-Oton, E. Valdinoci, Rev. Mat. Iberoam. (2017)]. We will establish the existence of a minimal solution for this problem and provide a thorough discussion on whether it is possible to obtain non-trivial solutions (corresponding to the survival of the population). The investigation will rely on a quantitative analysis of the first eigenvalue of the associated problem and on precise asymptotics for large lower and upper bounds of the resource. In this, we also analyze the role played by the optimization strategy in the distribution of the resources, showing concrete examples that are unfavorable for survival, in spite of the large resources that are available in the environment.
Introduction
We consider here a biological population with density u which is self-competing for the resources in a given environment Ω.
These resources are described by a function m, which is allowed to change sign: the positive values of m correspond to areas of the environment favorable for life and produce a positive birth rate, whereas the negative values model a hostile environment whose byproduct is a positive death rate of linear type.
The competition for the resource is encoded by a nonnegative function µ. Resources and competition are combined into a standard logistic equation. In addition, the population is assumed to present a combination of classical and nonlocal diffusion (the cases of purely classical and purely nonlocal diffusions are also included in our setting, and the results obtained are new also for these cases). The population is also endowed with an additional birth rate possibly provided by pollination and modeled by a convolution operator (the case of no pollination is also included in our setting, and the results obtained are new also for this case).
The environment Ω describes an ecological niche and is endowed with a zero-flux condition of Neumann type. Given the possible presence of both classical and nonlocal diffusion, this Neumann condition appears to be new in the literature: when the diffusion is of purely classical type this new prescription reduces to the standard normal derivative condition along ∂Ω, and when the diffusion is of purely nonlocal type it coincides with the nonlocal Neumann condition set in R^n \ Ω that has been recently introduced in [DROV17]; but in case the population is subject to both the classical and the nonlocal dispersion processes the Neumann condition that we introduce here takes into account the combination of both the classical and the nonlocal prescriptions (interestingly, without producing an overdetermined, or ill-posed, problem).
The main question addressed in this paper is whether or not the environmental niche is suited for the survival of the population (notice that life is not always promoted by the ambient resource, since m can attain negative values). We will investigate this question by using spectral analysis and providing a detailed quantification of favorable and unfavorable scenarios in terms of the first eigenvalue compared with the resource and pollination parameters. The mathematical formulation of these features leads to the equation

(1.4) $$-\alpha\Delta u+\beta(-\Delta)^s u=(m-\mu u)u+\tau\,J\star u\qquad\mbox{in }\Omega.$$
We observe that the operator in (1.4) is of mixed local and nonlocal type, and also of mixed fractional and integer order type. Interestingly, the nonlocal character of the operator is encoded both in the fractional Laplacian and in the convolution operator given by J.
The use of the convolution operator in biological models to encode the interaction of the population with the resource at a certain range has a well-established tradition, see e.g. [ABVV10, CDM13, ACR13, Cov15, BCV16, BCL17, CDV17] and the references therein.
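To fix ideas, the action of such a term can be sketched numerically; in the toy example below the Gaussian kernel is an arbitrary stand-in for the interaction kernel J.

```python
import numpy as np

# Toy illustration of the pollination term J * u: a smoothing kernel J
# averages the population density u over a finite interaction range.
x = np.linspace(0.0, 1.0, 200)
h = x[1] - x[0]
u = np.maximum(0.0, np.sin(np.pi * x))      # a sample density profile
r = 0.05                                     # interaction range (arbitrary)
kernel_x = np.arange(-4 * r, 4 * r + h, h)
J = np.exp(-(kernel_x / r) ** 2)
J /= J.sum() * h                             # normalize so the kernel integrates to 1
Ju = np.convolve(u, J, mode="same") * h      # discrete version of J * u
print(float(Ju.max()))
```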
As for the nonlocal diffusive operator, for the sake of concreteness we stick here to the prototypical case of the fractional Laplacian, but the arguments that we develop are in fact usable in more general contexts including various interaction kernels of singular type.
We endow the problem in (1.4) with a set of Neumann boundary conditions that correspond to a "zero-flux" condition according to the stochastic process producing the diffusive operator in (1.4). This Neumann condition appears to be new in the literature and depends on the different ranges of α and β according to the following setting. If α = 0, we consider the nonlocal Neumann condition introduced in [DROV17], thus prescribing that

(1.6) $$\mathcal{N}_s u(x):=\int_\Omega \frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy=0\qquad\mbox{for every }x\in\mathbb{R}^n\setminus\overline\Omega.$$

If instead β = 0, we consider the classical Neumann condition ∂_ν u = 0 on ∂Ω, while when both α and β do not vanish we prescribe both conditions simultaneously, namely

(1.8) $$\partial_\nu u=0\ \mbox{ on }\partial\Omega\qquad\mbox{and}\qquad\mathcal{N}_s u=0\ \mbox{ in }\mathbb{R}^n\setminus\overline\Omega.$$

Summarizing, we consider the problem

(1.9) $$-\alpha\Delta u+\beta(-\Delta)^s u=(m-\mu u)u+\tau\,J\star u\qquad\mbox{in }\Omega,$$

endowed with the (α, β)-Neumann conditions above. We remark that the prescription in (1.8) is not an "overdetermined" condition (as it will be confirmed by the existence result in Theorem 1.1 below).
In this setting, the (α, β)-Neumann conditions provide an "ecological niche" for the population with density u, making Ω a natural environment in which a given species can live and compete for a resource m, according to a competition function µ. In this setting, the parameter τ , as modulated by the interaction kernel J, describes an additional birth rate due to further intercommunication than just with the closest neighbors, as it happens, for instance, in pollination. As a matter of fact, the role of the (α, β)-Neumann conditions is precisely to make the boundary and the exterior of the niche Ω "reflective": namely when an individual exits the niche, it is forced to immediately come back into the niche itself, following the same diffusive process, see Section 2 in [DROV17] (see also [Von19] for a thoroughgoing probabilistic discussion about this process).
As a technical remark, we also observe that our (α, β)-Neumann condition is structurally different (even when α = 0 and s = 1/2) from the case of bounded domains with reflecting barriers presented in [MPV13,PV18], and the diffusive operator taken into account in (1.9) cannot be obtained by the spectral decomposition of the classical Laplacian in Ω (except for the special case of periodic environments, see e.g. Section 2.3 and Appendix Q in [AV19a]).
The possible presence in (1.9) of two different diffusion operators, one of classical and the other of fractional flavor, has a clear biological interpretation, namely the population with density u can possibly alternate both short and long-range random walks, and this could be motivated, for instance, by a superposition between local exploration of the environment and hunting strategies (see e.g. [VAB + 96,DHMP98,CCL07,CHL08,CDM12,KLS12,CCLR12,SV17, MV17]). A detailed presentation of this superposition of stochastic processes will be presented in Appendix B; see also [DV] for the detailed description of the local/nonlocal reflecting barrier also in terms of the population dynamics model.
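As a caricature of this superposition (a sketch of ours, ignoring the (α, β)-Neumann reflection and the precise 2s-stable law), one may simulate a walker that alternates Gaussian steps with heavy-tailed long jumps chosen with probabilities proportional to α and β:

```python
import numpy as np

def mixed_walk(n_steps: int, alpha: float, beta: float, s: float, seed: int = 0) -> np.ndarray:
    """
    Crude caricature of the process behind -alpha*Laplacian + beta*(-Laplacian)^s:
    each step is either a Gaussian increment (classical diffusion) or a
    symmetric heavy-tailed increment with tail ~ |z|^(-1-2s) (long jumps),
    chosen with probabilities proportional to alpha and beta. Free space only.
    """
    rng = np.random.default_rng(seed)
    p_local = alpha / (alpha + beta)
    steps = np.empty(n_steps)
    for k in range(n_steps):
        if rng.random() < p_local:
            steps[k] = rng.normal(scale=0.1)
        else:
            # Pareto-type tail with exponent 2s and a random sign
            steps[k] = rng.choice([-1.0, 1.0]) * 0.1 * rng.pareto(2.0 * s)
    return np.cumsum(steps)

path = mixed_walk(10_000, alpha=1.0, beta=1.0, s=0.6)
```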
The notion of solution of (1.9) is intended here in the weak sense, as it will be discussed precisely in formula (2.5). See however [GM02,BDVV20] for a regularity theory for weak solutions of the equations driven by the mixed order operators as in (1.9).
Our first result in this setting is that the problem in (1.9) admits a minimal energy solution (under very natural and mild structural assumptions). To state it, it is convenient to define

(1.10) $$\bar q:=\begin{cases} \dfrac{2^*_s}{2^*_s-2} & \mbox{if }\beta\neq0\mbox{ and }n>2s,\\[4pt] \dfrac{2^*}{2^*-2} & \mbox{if }\beta=0\mbox{ and }n>2,\\[4pt] 1 & \mbox{if }\beta=0\mbox{ and }n\leqslant2,\mbox{ or if }\beta\neq0\mbox{ and }n\leqslant2s, \end{cases}\qquad\mbox{that is}\qquad \bar q=\begin{cases} \dfrac{n}{2s} & \mbox{if }\beta\neq0\mbox{ and }n>2s,\\[4pt] \dfrac{n}{2} & \mbox{if }\beta=0\mbox{ and }n>2,\\[4pt] 1 & \mbox{otherwise}. \end{cases}$$

As customary, the exponent 2*_s denotes the fractional Sobolev critical exponent for n > 2s and it is equal to 2n/(n−2s). Similarly, the exponent 2* denotes the classical Sobolev critical exponent for n > 2 and it is equal to 2n/(n−2). We remark that q̄ ⩾ n/2, and we have:

Theorem 1.1. Assume that m ∈ L^q(Ω), for some q ∈ (q̄, +∞), and that (m + τ)^3 µ^{-2} ∈ L^1(Ω).
Then, there exists a nonnegative solution of (1.9) which can be obtained as a minimum of an energy functional.
The precise definition of energy functional used in Theorem 1.1 will be presented in (3.1): roughly speaking, the energy associated to Theorem 1.1 will be the "natural" functional for the variational methods, and its Euler-Lagrange equation will correspond to the notion of weak solution.
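For orientation, a natural candidate for such a functional (written here only as a heuristic sketch, up to the normalizing constant of the fractional Laplacian and under the assumption that the kernel J is symmetric; the precise functional is the one in (3.1)) is

$$\mathcal E(u):=\frac{\alpha}{2}\int_\Omega|\nabla u|^2\,dx+\frac{\beta}{2}\iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy-\frac12\int_\Omega m\,u^2\,dx+\frac13\int_\Omega\mu\,u^3\,dx-\frac{\tau}{2}\int_\Omega(J\star u)\,u\,dx,$$

whose critical points formally satisfy an equation of the type (1.9) together with the (α, β)-Neumann conditions.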
While the functional analysis part of the proof of Theorem 1.1 relies on standard direct methods in the calculus of variations, the more interesting part of the argument makes use of a structural property of the nonlocal Neumann condition that will be presented in Theorem 2.1 (roughly speaking, the nonlocal Neumann condition in (1.6) will be instrumental to minimize the Gagliardo seminorm, thus clarifying the energetic role of the nonlocal reflection introduced in [DROV17]).
Though the result in Theorem 1.1 has an obvious interest in pure mathematics, our main analysis will focus on whether problem (1.9) does admit a nontrivial solution (notice indeed that u ≡ 0 is always a solution of (1.9)). In particular, in view of Theorem 1.1, a useful mathematical tool to detect nontrivial solutions consists in proving that the minimal energy configuration is not attained by the trivial solution (hence, in this case, the solution produced by Theorem 1.1 is nontrivial). The question of the existence of nontrivial solutions has a central importance for the mathematical model, since it corresponds to the possibility of a population to survive in the environmental condition provided by the niche. Interestingly, in our model, the survival of the population can be enhanced by the possibility of exploiting resources by long-range interactions. Indeed, we stress that the resource m in (1.4) is not necessarily positive (hence, the natural environment can be "hostile" for the population): in this configuration, we show that the survival of the species is still possible if the "pollination" birth rate τ is sufficiently large. The quantitative result that we have is the following:

Theorem 1.2. Assume that m ∈ L^q(Ω), for some q ∈ (q̄, +∞), and that (m + τ)^3 µ^{-2} ∈ L^1(Ω).
Then,
(i) if m ≡ 0 and τ = 0, then the only solution of (1.9) is the one identically equal to zero;

(ii) if

(1.11) $$\int_\Omega \big(m(x)+\tau\big)\,dx>0,$$

then (1.9) admits a nonnegative solution u ≢ 0.
A particular case of Theorem 1.2 is when the resource m is nonnegative. In this situation, Theorem 1.2(i) gives that no survival is possible without resources and pollination, i.e. when both m and τ vanish identically (unless also µ vanishes identically, then reducing the problem to that of mixed operator harmonic functions), whereas Theorem 1.2(ii) guarantees survival if at least one between the environmental resource and the pollination is favorable to life. Precisely, one can immediately deduce from Theorem 1.2 the following result:

Corollary 1.3. Assume that m ∈ L^q(Ω), for some q ∈ (q̄, +∞), that m is nonnegative, and that (m + τ)^3 µ^{-2} ∈ L^1(Ω). Then, (1.9) admits a nonnegative solution u ≢ 0 whenever m ≢ 0 or τ > 0.
Problems related to Corollary 1.3 have been studied in [CDV17] under Dirichlet (rather than Neumann) boundary conditions.
From the biological point of view, assumption (1.11) states that the environment is "in average" favorable for the survival of the species. It is therefore a natural question to investigate the situation in which the environment is "mostly hostile to life". To study this phenomenon, when m ∈ L^q with q > n/2, with m₊ ≢ 0 and ∫_Ω m(x) dx < 0 (as customary, we freely use the standard notation m₊ := max{m, 0} for the positive part of m), we denote by λ₁ the first positive eigenvalue associated with the diffusive operator in (1.9). More precisely, we consider the weighted eigenvalue problem

(1.13) $$-\alpha\Delta u+\beta(-\Delta)^s u=\lambda\, m(x)\, u\qquad\mbox{in }\Omega,$$

with (α, β)-Neumann condition.
As it will be discussed in detail in Proposition 4.1 here and in [DPLV], problem (1.13) admits the existence of two unbounded sequences of eigenvalues, one positive and one negative. In this setting, the smallest strictly positive eigenvalue will be denoted by λ₁. When we want to emphasize the dependence of λ₁ on the resource m, we will write it as λ₁(m). We also denote by e an eigenfunction corresponding to λ₁, normalized in such a way that ∫_Ω m(x) e²(x) dx = 1.
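For readers who wish to experiment numerically, the sketch below (ours, not the paper's) approximates the smallest positive eigenvalue of the weighted problem in the purely local one-dimensional case α > 0, β = 0, with classical Neumann conditions, via a finite-difference generalized eigenvalue computation; the grid and the sample resource are arbitrary choices.

```python
import numpy as np
from scipy.linalg import eig

def smallest_positive_eigenvalue(m: np.ndarray, alpha: float = 1.0) -> float:
    """
    Approximate the first positive eigenvalue of
        -alpha u'' = lam * m(x) * u  on (0, 1),   u'(0) = u'(1) = 0,
    i.e. the purely local case (beta = 0) of the weighted problem, by
    finite differences; m is the resource sampled on a uniform grid.
    """
    n = m.size
    h = 1.0 / (n - 1)
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -1.0, 2.0, -1.0
    A[0, 0], A[0, 1] = 1.0, -1.0      # one-sided stencil enforcing u'(0) = 0
    A[-1, -1], A[-1, -2] = 1.0, -1.0  # one-sided stencil enforcing u'(1) = 0
    A *= alpha / h**2
    w = eig(A, np.diag(m), right=False)    # generalized eigenvalues A v = lam M v
    w = w[np.isfinite(w)]
    w = np.real(w[np.abs(w.imag) < 1e-8])  # keep the (numerically) real ones
    return float(w[w > 1e-8].min())        # skip the trivial eigenvalue at 0

x = np.linspace(0.0, 1.0, 400)
m = np.where(x < 0.1, 4.0, -1.0)  # favorable patch, hostile elsewhere; average < 0
print(smallest_positive_eigenvalue(m))
```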
The first eigenvalue will be an important threshold for the survival of the species, quantifying the role of the necessary pollination parameter τ in order to overcome the presence of a hostile environment in average. The precise result that we obtain is the following one:

Theorem 1.4. Assume that m ∈ L^q(Ω), for some q ∈ (q̄, +∞), and that (m + τ)^3 µ^{-2} ∈ L^1(Ω).
Once again, in Theorem 1.4, the case described in (i) is the one less favorable to life, since the combination of both the resources and the pollination is in average negative, while the case in (ii) gives a lower bound on the pollination parameter τ which is needed for the survival of the species, as quantified by (1.15).
We recall that the link between the survival ability of a biological population and the analysis of the eigenvalues of a linearized problem is a classical topic in mathematical biology, see e.g. [Ske51,Bel97,BHR05a,BHR05b,KS07,KAS12,MPV13,Maz20a,Maz20b] (yet, we believe that this is the first place in which a detailed analysis of this type is carried over to the case of mixed operators with our new type of Neumann conditions).
In light of (1.15), a natural question consists in quantifying the size of the first eigenvalue. Roughly speaking, from (1.15), the smaller λ₁, the smaller is the threshold for the pollination guaranteeing survival; hence configurations with small first eigenvalues correspond to the ones with better chances of life.
To address this problem, since the eigenvalue λ₁ = λ₁(m) depends on the resource m, it is convenient to consider an optimization problem for λ₁ in terms of three structural parameters of the resource m, namely its minimum, its maximum and its average, in order to detect under which conditions on these parameters the first eigenvalue can be made conveniently small. More precisely, given m̲, m̄ ∈ (0, +∞) and m₀ ∈ (−m̲, 0), we consider the class of resources

(1.16) $$\mathcal M:=\Big\{ m\in L^\infty(\Omega)\ \mbox{ s.t. }\ -\underline m\leqslant m\leqslant\overline m\ \mbox{ a.e. in }\Omega\ \mbox{ and }\ \frac{1}{|\Omega|}\int_\Omega m(x)\,dx=m_0\Big\}.$$

We will also consider the smallest possible first eigenvalue among all the resources in M, namely we set

(1.17) $$\lambda:=\inf_{m\in\mathcal M}\lambda_1(m).$$
When we want to emphasize the dependence of λ on the structural quantities m̲, m̄ and m₀ that characterize M, we will adopt the explicit notation λ(m̲, m̄, m₀).
Our main objective will be to detect whether or not λ can be made arbitrarily small in a number of different regimes: we stress that the smallness of λ corresponds to a choice of an optimal distribution of resources that is particularly favorable for survival.
The first result that we present in this direction is a general estimate controlling λ with O(1/m̄), provided that the maximal hostility of the environment does not prevail with respect to the maximal and average resources. In terms of survival of the species, this is a rather encouraging outcome, since it allows the existence of nontrivial solutions provided that the maximal resource is sufficiently large. The precise result that we have is the following:

A direct consequence of Theorem 1.5 gives that when the upper and lower bounds of the resource are the same and get arbitrarily large, then λ gets arbitrarily small (hence, in view of (1.15), there exists a resource distribution which is favorable to survival). More precisely (to avoid notational confusion, we reserved the name of m for the resource in (1.4), and we denote by m a "free variable" dimensionally related to the resource):

We now investigate the behavior of λ for large upper and lower bounds on the resource (maintaining constant the other parameters). Interestingly, this behavior sensibly depends on the dimension n. In this setting, we first consider the asymptotics in dimension n ⩾ 2: we show that large upper and lower bounds are both favorable for life for a given m₀ < 0, according to the following two results:

While Theorem 1.7 is somehow intuitive (large resources are favorable to survival), at a first glance Theorem 1.8 may look unintuitive, since it seems to suggest that a largely hostile environment is also favorable to survival: but we remark that in Theorem 1.8, being m₀ given, an optimal strategy for m may well correspond to a very harmful environment confined in a small portion of the domain, with a positive resource allowing for the survival of the species.
Quite surprisingly, the structural analysis developed in Theorems 1.7 and 1.8 is significantly different in dimension 1. Indeed, for n = 1, we have that λ does not become infinitesimal for large upper and lower bounds on the resource, unless the diffusion is purely nonlocal with a strongly nonlocal fractional parameter. Namely, we have the following two results.

An interesting feature of Corollary 1.6, Theorems 1.7 and 1.8, and formulas (1.25) and (1.26) in terms of real-world applications is that their proofs are based on the explicit constructions of suitable resources: though perhaps not optimal, these resources are sufficiently well located to ensure the maximal chances of survival for the population, and their explicit representation allows one to use them concretely and to build on this specific knowledge.
We also think that the phenomenon detected in Theorems 1.9 and 1.10 reveals an important role played by the nonlocal dispersal of the species in dimension 1: indeed, in this situation, the only configurations favorable to survival are the ones in (1.25) and (1.26), that are induced by purely nonlocal diffusion (that is α = 0) with a strongly nonlocal diffusion exponent (that is s ⩽ 1/2, corresponding to very long flights in the underlying stochastic process).
To better visualize the results in Theorems 1.7, 1.8, 1.9 and 1.10, we summarize them in Table 1. For typographical convenience, in Table 1 we used the "check-symbol" ✓ to denote the cases in which λ gets as small as we wish (cases favorable to life) and the "x-symbol" ✗ to mark the situations in which λ remains bounded away from zero (cases unfavorable to life which require stronger pollination for survival).
Table 1:

                               Large m̄    Large m̲
  n ⩾ 2                           ✓           ✓
  n = 1 and α > 0                 ✗           ✗
  n = 1, α = 0 and s > 1/2        ✗           ✗
  n = 1, α = 0 and s ⩽ 1/2        ✓           ✓

We stress that the optimization of the resources plays a crucial role in the survival results provided by Corollary 1.6, Theorems 1.7 and 1.8, and formulas (1.25) and (1.26): that is, given m₀ < 0, very large but badly displayed resources may lead to non-negligible first eigenvalues (differently from the case of optimal distribution of resources discussed in Corollary 1.6, Theorems 1.7 and 1.8, and formulas (1.25) and (1.26)).
To state this phenomenon precisely, given m_0 < 0 and Λ > −4m_0, we introduce the class M♯_{Λ,m_0}. Roughly speaking, the resources m in M♯_{Λ,m_0} have a prescribed average equal to m_0 and attain maximal positive and negative values comparable with a large parameter Λ, and a natural question in this case is whether large Λ's provide sufficient conditions for the survival of the species. The next result shows that this is not the case, namely the abundance of the resource without an optimal distribution strategy is not sufficient for prosperity:

Theorem 1.11. Given m_0 < 0 and Λ > −4m_0, we have that sup_{m ∈ M♯_{Λ,m_0}} λ₁(m) = +∞.

Interestingly, the proof of Theorem 1.11 will be "constructive", namely we will provide an explicit example of a sequence of badly displayed resources which make the first eigenvalue diverge: a telling feature of this sequence is that it is highly oscillatory, thus suggesting that a hectic and erratic alternation of highly positive resources with very harmful surroundings is potentially lethal for the development of the species.
We recall that the investigation of the roles of fragmentation and concentration for resources is a classical topic in mathematical biology, and, in this sense, our result in Theorem 1.11 confirms the main paradigm according to which concentrated resources favor survival (see e.g. [BHR05a, BHR05b, LLNP16]); however, there are several circumstances in which this general paradigm is violated and fragmentation is better than concentration, see e.g. the small diffusivity regime analyzed in [LLL20, MRB20, LNY20]. In any case, the analysis of fragmentation and concentration for mixed operators with our Neumann condition is, to the best of our knowledge, completely new.
We also remark that the results presented here are new even in the simpler cases in which no classical diffusion and no pollination term is present in (1.4), as well as in the cases in which the death rate and the pollination functions are constant.
The rest of this paper is organized as follows. In Section 2 we will introduce the functional framework in which we work and the notion of weak solutions, also providing a new result showing that the nonlocal Neumann condition naturally produces functions with minimal Gagliardo seminorm (this is a nonlocal phenomenon, which has no counterpart in the classical setting, and will play a pivotal role in the minimization process).
Then, in Section 3 we prove the existence results in Theorems 1.1 and 1.2. In Section 4 we study the eigenvalue problem in (1.13), and we give the proof of Theorem 1.4. Not to overburden this paper, some technical proofs related to the spectral theory of the problem are deferred to the article [DPLV].
In Section 5, we deal with the proofs of Theorem 1.5, Theorems 1.7 and 1.8 when n ≥ 3, and Theorems 1.9 and 1.10.
When n = 2, the proofs of Theorems 1.7 and 1.8 require some technical modifications of logarithmic type, hence they are deferred to Appendix A.
The proof of Theorem 1.11 is contained in Section 6. Finally, Appendix B contains some probabilistic motivations related to the diffusive operators of mixed integer and fractional order.
2. Functional analysis setting
In this section we define the functional space in which we work. First, we recall the space H^s_Ω introduced in [DROV17], whose definition is recalled below. As customary, by u ∈ L²(Ω) in (2.1) we mean that the restriction of the function u to Ω belongs to L²(Ω) (we stress that functions in H^s_Ω are defined in the whole of ℝⁿ). Also, all functions considered will be implicitly assumed to be measurable.
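For the reader's convenience, the space from [DROV17] is usually written as follows (this display is recalled from the cited reference and corresponds to (2.1)):

```latex
H^s_\Omega := \Big\{ u:\mathbb{R}^n\to\mathbb{R} \ \text{measurable} \ :\ \|u\|_{H^s_\Omega}<+\infty \Big\},
\qquad
\|u\|_{H^s_\Omega} := \Big( \|u\|_{L^2(\Omega)}^2
  + \iint_{Q}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy \Big)^{1/2},
```

where Q := ℝ²ⁿ ∖ ((ℝⁿ ∖ Ω) × (ℝⁿ ∖ Ω)), so that the Gagliardo-type seminorm sees all interactions except those between two points both lying outside Ω.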
Furthermore, we define the space X_{α,β}. In light of this definition, X_{α,β} is a Hilbert space with respect to the natural scalar product, for every u, v ∈ X_{α,β}. We also define the corresponding seminorm. From the compact embeddings of the spaces H¹(Ω) and H^s_Ω (see e.g. Corollary 7.2 in [DNPV12] when α = 0), we deduce the compact embedding of X_{α,β} into L^p(Ω), for every p ∈ [1, 2*) if α ≠ 0, and for every p ∈ [1, 2*_s) if α = 0. We say that u ∈ X_{α,β} is a solution of (1.9) if the weak formulation in (2.5) holds.

Now we show that, among all the functions in H^s_Ω, the ones minimizing the Gagliardo seminorm are those satisfying the nonlocal Neumann condition in (1.6). This is a useful result in itself, which also clarifies the structural role of the Neumann condition introduced in [DROV17]: defining E_u as in (2.6), the inequality in (2.7) holds; also, the equality in (2.7) holds if and only if u satisfies (1.6).
Proof. We remark that the notation E_1 in (2.6) stands for E_u when u ≡ 1. Moreover, without loss of generality, we can suppose that the quantities involved are finite, otherwise the claim in (2.7) is obviously true. In addition, we introduce the function φ as in (2.8). Now, we observe that a pointwise estimate holds for every y ∈ ℝⁿ ∖ Ω. Accordingly, (2.9) becomes an estimate valid for every y ∈ ℝⁿ ∖ Ω, in which the equality holds if and only if φ(y) = 0. Integrating over ℝⁿ ∖ Ω (or, equivalently, over ℝⁿ ∖ Ω̄), we get the integrated version, where the equality holds if and only if φ ≡ 0 in ℝⁿ ∖ Ω. From this observation and (2.8) we obtain (2.7), as desired.
3. Existence results and proofs of Theorems 1.1 and 1.2

The proof of Theorem 1.1 is based on a minimization argument. More precisely, given the functional setting introduced in Section 2 (recall in particular (2.2)), in order to deal with problem (1.9), we consider the energy functional E: X_{α,β} → ℝ defined in (3.1). As a technical remark, we observe that our objective here is to distinguish between trivial and nontrivial solutions, so as to detect appropriate conditions for the survival of the species, and we do not dwell on the distinction between nonnegative-and-nontrivial versus strictly positive solutions. For the reader interested in this point, we mention however that, under appropriate conditions, one could develop a regularity theory (see e.g. Theorems 3.1.11 and 3.1.12 in [GM02]) that allows the use of a strong maximum principle for smooth solutions (see e.g. Theorem 3.1.4 in [GM02]). Now, we prove that the functional in (3.1) is the one associated with (1.9):

Lemma 3.1. The Euler-Lagrange equation associated to the energy functional E introduced in (3.1) at a nonnegative function u is (1.9).
Proof. We compute the first variation of E, and we focus on the convolution term in (3.1) (the computation for the other terms being standard; see in particular Proposition 3.7 in [DROV17] for the term involving the Gagliardo seminorm, which is the one producing the nonlocal Neumann condition).
For this, we introduce the auxiliary notation in (3.2). For any φ ∈ X_{α,β} and ε ∈ (−1, 1), we expand the convolution term of E(u + εφ) and compute its derivative at ε = 0. Now, since J is even (recall (1.2)), the two resulting terms coincide. Using this in (3.2), we obtain the desired first variation, which concludes the proof.
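As a point of reference, here is a minimal sketch of how the evenness of J enters, assuming the convolution term in (3.1) has the standard quadratic form (the sign and the coupling constant in front of it are as in the elided display and are omitted here):

```latex
\mathcal{J}(u) := \frac12\int_\Omega\int_\Omega J(x-y)\,u(x)\,u(y)\,dx\,dy
\ \Longrightarrow\
\frac{d}{d\varepsilon}\Big|_{\varepsilon=0}\mathcal{J}(u+\varepsilon\varphi)
 = \frac12\int_\Omega\int_\Omega J(x-y)\big(u(x)\varphi(y)+u(y)\varphi(x)\big)\,dx\,dy ,
```

and, since J(x − y) = J(y − x), swapping the roles of x and y shows that the two terms coincide, so the first variation equals ∫_Ω (J ∗ u)(x) φ(x) dx (with the convolution restricted to Ω).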
As a consequence of Lemma 3.1, to find solutions of (1.9), we will consider the minimization problem for the functional E in (3.1). First, we show a useful inequality, namely (3.3). Proof. By the Cauchy-Schwarz inequality, we obtain (3.4). Now, using the Young inequality for convolutions with exponents 1 and 2 (see e.g. Theorem 9.1 in [WZ15]), we obtain the bound on the convolution term, where (1.3) has been also used. This and (3.4) give (3.3), as desired.
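The convolution inequality invoked here is the endpoint case of Young's inequality; in the form used it reads (a standard statement, recalled for convenience):

```latex
\|J * v\|_{L^2(\mathbb{R}^n)} \;\le\; \|J\|_{L^1(\mathbb{R}^n)}\,\|v\|_{L^2(\mathbb{R}^n)},
```

namely the case p = 1 and q = r = 2 of ‖f ∗ g‖_{L^r} ≤ ‖f‖_{L^p} ‖g‖_{L^q} with 1 + 1/r = 1/p + 1/q.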
We are now able to provide a minimization argument for the functional in (3.1):

Proposition 3.3. Assume that m ∈ L^q(Ω), for some exponent q above the threshold introduced in (1.10), and that the condition in (3.5) is satisfied. Then, the functional E in (3.1) attains its minimum in X_{α,β}. The minimal value is the same as the one occurring among the functions u ∈ L^p(Ω) with finite energy and such that N_s u = 0 a.e. outside Ω.
Moreover, there exists a nonnegative minimizer u, and it is a solution of (1.9).
Moreover, we use the Young Inequality with exponents 3/2 and 3 to obtain that From this and (3.7) we have that (3.8) We point out that the quantity κ is finite, thanks to (3.5), and it does not depend on u.
Recalling (3.1), formula (3.8) implies a coercivity-type lower bound. Now, we take a minimizing sequence u_j, and we observe that, in light of Theorem 2.1, we can assume that the Neumann condition is satisfied along the sequence. We can also suppose that the energies are uniformly bounded, where (3.9) has been also exploited. This implies a uniform bound and, as a consequence, the sequence is bounded. Moreover, by the Hölder inequality with exponents 3/2 and 3, a further uniform bound holds. From this and (3.11), and using compactness arguments, we can assume, up to a subsequence, that u_j converges to some u ∈ L^p(Ω) (for every p ∈ [1, 2*_s) if α = 0, and for every p ∈ [1, 2*) if α ≠ 0, see e.g. Corollary 7.2 in [DNPV12]) and a.e. in Ω, and also that |u_j| ≤ h for some h ∈ L^p(Ω) and every j ∈ ℕ (see e.g. Theorem IV.9 in [Bre83]).
Hence, if x ∈ ℝⁿ ∖ Ω, by the Dominated Convergence Theorem, the nonlocal averages of u_j converge as j ր +∞. Accordingly, in light of (3.10), when x ∈ ℝⁿ ∖ Ω, a limit value can be assigned as j ր +∞ (we stress that until now u was only defined in Ω, hence the last step in (3.12) is instrumental to define u also outside Ω). As a consequence, we obtain that u_j converges a.e. in ℝⁿ. Now, recalling (3.6), we obtain (3.13); also, (3.14) and (3.15) hold. From (3.13), (3.14) and (3.15) we conclude that the nonlocal terms of the energy converge. We also have, by the Fatou Lemma and the lower semicontinuity of the L²-norm, the corresponding lower bounds. Gathering together these observations, we conclude that E(u) ≤ lim inf_j E(u_j), and therefore u is the desired minimum. Also, since E(|u|) ≤ E(u), we can suppose that u is nonnegative. Finally, u is a solution of (1.9) thanks to Lemma 3.1.
The claim of Theorem 1.1 follows from Proposition 3.3. Now, we provide the proof of Theorem 1.2, relying also on the existence result in Theorem 1.1: Proof of Theorem 1.2. Thanks to Theorem 1.1, we know that there exists a nonnegative solution to (1.9).
We now prove the claim in (i). For this, we assume that m ≡ 0 and τ = 0, and we argue towards a contradiction, supposing that there exists a nontrivial solution u of (1.9).
We notice that, since u ≥ 0 and µ is bounded below by a positive constant in Ω, a sign condition holds. As a consequence, taking v := u in (2.5), we obtain a contradiction, and therefore the claim in (i) is proved. Now we deal with the claim in (ii). From Theorem 1.1 we know that there exists a nonnegative solution u to (1.9) which is obtained by the minimization of the functional E in (3.1) (recall Proposition 3.3). We claim that (3.16) u does not vanish identically.
4. Analysis of the eigenvalue problem in (1.13) and proof of Theorem 1.4

In this section we focus on the proof of Theorem 1.4. For this, we need to exploit the analysis of the eigenvalue problem in (1.13) (some technical details are deferred to the article [DPLV] for the reader's convenience).
The first result towards the proof of Theorem 1.4 concerns the existence of two unbounded sequences of eigenvalues, one positive and one negative:

Proposition 4.1. Let q be as in (1.10). Suppose that m₊, m₋ ≢ 0 and that the integrability assumption holds. Then, problem (1.13) admits two unbounded sequences of eigenvalues, one positive and one negative. In particular, under (1.14), the first positive eigenvalue admits the variational characterization in (4.3).

The proof of Proposition 4.1 is contained in [DPLV]. The first positive eigenvalue λ₁, as given by Proposition 4.1, has the following properties:

Proposition 4.2. The first positive eigenvalue λ₁ of (1.13) is simple, and the first eigenfunction e can be taken such that e ≥ 0.
See [DPLV] for the proof of Proposition 4.2.
With this, we are now ready to give the proof of Theorem 1.4:

Proof of Theorem 1.4. Thanks to Theorem 1.1, we know that there exists a nonnegative solution to (1.9). We first prove the claim in (i). For this, we assume that m ≤ −τ, and we suppose by contradiction that there exists a nontrivial solution u of (1.9).
We observe that, applying (3.3) with v := u and w := u, we obtain (4.4). Hence, taking u as a test function in (2.5), using (4.4) and recalling that u ≥ 0 and that µ is bounded below by a positive constant, we reach a contradiction, whence the first claim is proved. Now we show the claim in (ii). From Theorem 1.1 we know that there exists a nonnegative solution u to (1.9) which is obtained by the minimization of the functional E in (3.1) (recall Proposition 3.3). We claim that (4.5) u does not vanish identically.
To prove this, we show that (4.6) 0 is not a minimizer for E .
For this, we take an eigenfunction e associated to the first positive eigenvalue λ₁, as given by Proposition 4.2. Namely, we take e ∈ X_{α,β} such that the eigenvalue equation (4.7) holds for every v ∈ X_{α,β}. By taking v := e in (4.7), we obtain the corresponding identity. We also remark that, thanks to (1.14), we can use the characterization of λ₁ given in formula (4.3) of Proposition 4.1, and hence we can normalize e in such a way that

(4.9) ∫_Ω m e² dx = 1.
5. Optimization on m and proofs of Theorems 1.5, 1.7, 1.8, 1.9 and 1.10

This section is devoted to the understanding of the optimal configuration of the resource m, which is based on the analysis of the minimal eigenvalue problem given in (1.17).
First of all, we will see that the optimal resource distribution attaining the minimal eigenvalue in (1.17) is of bang-bang type, namely concentrated on its minimal and maximal values m̲ and m̄ (this is the content of Proposition 5.2). This property is based on the so-called "bathtub principle", see Lemma 3.3 in [DGT10] (or [LL97, LY06]), which we recall here as Lemma 5.1 for the convenience of the reader.
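In one standard formulation (Theorem 1.14 in [LL97], recalled here as background with notation adapted to the present setting; it is not a result of this paper), the bathtub principle asserts that

```latex
\min\Big\{\int_\Omega f(x)\,\varphi(x)\,dx \ :\ 0\le\varphi\le 1,\ \int_\Omega \varphi\,dx = G\Big\}
```

is attained by φ = χ_{{f<t}} + c χ_{{f=t}} for suitable t ∈ ℝ and c ∈ [0, 1] fixed by the mass constraint; in particular, when |{f = t}| = 0 the optimizer is a characteristic function, which is exactly the bang-bang structure exploited below.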
Proof of Proposition 5.2. We define λ̃ := inf_{m∈M̃} λ₁(m), where M̃ is the set introduced in (5.2), and we claim that λ = λ̃, as stated in (5.3). To this end, we observe that, since M̃ ⊂ M, the inequality in (5.4) holds. Moreover, by the definition of λ in (1.17), we have that for every ε > 0 there exists m_ε ∈ M such that λ + ε ≥ λ₁(m_ε). Then, we denote by e_ε the nonnegative eigenfunction associated to λ₁(m_ε), and we obtain the variational bound in (5.5). We also observe that, in light of Lemma 5.1, the corresponding representation holds for a suitable D_ε ⊂ Ω satisfying (5.1). Plugging this information into (5.5), and letting m⋆_ε := m̄ χ_{D_ε} − m̲ χ_{Ω∖D_ε}, we obtain the corresponding bound. Hence, taking the limit as ε goes to 0, we get that λ̃ ≤ λ. This, combined with (5.4), establishes (5.3), as desired.
In light of Proposition 5.2, from now on, when optimizing the eigenvalue λ₁(m) as in (1.17), we will suppose that m belongs to the set M̃ introduced in (5.2). Now we provide the proof of Theorem 1.5.
Proof of Theorem 1.5. We take a ball B ⊂ Ω with the properties required below. We can assume, up to a translation, that Ω ⊂ {x_n > 0}, and, for every ξ ≥ 0, we define the set Ω_ξ. We observe that |Ω_ξ| is nondecreasing with respect to ξ, and we define ξ* accordingly. We claim that the map ξ ↦ |Ω_ξ| is continuous, namely that (5.7) holds for every ξ̄ > 0. To this end, we first show the pointwise convergence of the characteristic functions in (5.8). For this, we consider several cases. If x = (x′, x_n) ∈ Ω_ξ̄, then either x ∈ B or x_n < ξ̄. If x ∈ B, then x ∈ Ω_ξ for each ξ > 0, and accordingly χ_{Ω_ξ}(x) = 1 = χ_{Ω_ξ̄}(x), which implies (5.8).
By (5.8) and the Dominated Convergence Theorem we obtain (5.7), as desired.
We also notice that if ξ = 0, then Ω_ξ = B, and therefore, by (1.18) and (5.6), the measure constraint is strict. This and the continuity statement in (5.7) guarantee that ξ* > 0. Moreover, the continuity in (5.7) implies the identity in (5.9). Now, we set D := Ω_{ξ*}, and we observe that D satisfies (5.1), thanks to (5.9). Also, we obtain the desired eigenvalue bound for some positive constant C depending on Ω and d_0. This completes the proof of Theorem 1.5.
With the aid of Theorem 1.5 we now prove Corollary 1.6, by arguing as follows:

Proof of Corollary 1.6. The claim follows from Theorem 1.5, applied with the appropriate choice of parameters.

The next goal of this section is to prove Theorem 1.7. For concreteness, we give here the proof for n ≥ 3, and we defer the case n = 2 to Appendix A.
The idea to prove Theorem 1.7 is to use the function ϕ in (5.11) and the resource m = m̄ χ_D − m̲ χ_{Ω∖D}, with D as in (5.13), as competitors for the minimization of λ in (1.17).
In this setting, we notice that, since m ∈ M̃, recalling (5.1), the parameters are linked by the average constraint. This says that

(5.14) sending m̄ ր +∞ is equivalent to sending ρ ց 0,

since m̲, m_0 and |Ω| are fixed quantities in this argument.
In light of these observations, the next lemmata will be devoted to estimating, in terms of ρ, the quantities involving ϕ that appear in the minimization of λ.
We point out that, in dimension n = 2, the argument to prove Theorem 1.7 will be similar, but we will need to introduce a logarithmic-type function as in (A.1) instead of a polynomial-type function as in (5.11) (as often happens when passing from dimension 2 to higher dimensions).
The first result that we have in this setting deals with the H¹-seminorm of ϕ:

Lemma 5.3. Let n ≥ 3 and ϕ be as in (5.11). Then, lim_{ρց0} ∫_Ω |∇ϕ|² dx = 0.

Proof. By the definition of ϕ in (5.11), we have that ∇ϕ ≠ 0 only if x ∈ B₁ ∖ B_ρ. Accordingly, using polar coordinates, we obtain (5.15). Now, we point out the elementary bound in (5.16). This and (5.15) entail that, for every γ > 0, the desired estimate holds, which concludes the proof.
Now, we deal with the Gagliardo seminorm of ϕ. For this, we point out the following useful inequality:

Lemma 5.4. Let x, y ∈ ℝⁿ ∖ {0} and γ > 0. Then, there exists C_γ > 0 such that the inequality in (5.17) holds.

Proof. We can assume that |x| ≤ |y|, the other case being analogous. In this way, formula (5.17) boils down to (5.18). To prove (5.18), we first claim that (5.19) holds for every t ≥ 1, for a suitable C_γ > 0. Indeed, we set f(t) := Ct + t^{−γ} − (C + 1), for some positive constant C (to be chosen in what follows), and we observe that the expression of f′ in (5.20) holds for any t ≥ 1. As a result, taking C := γ + 1, we obtain that f′(t) > 0. This and (5.20) give that f(t) ≥ 0 for every t ≥ 1, which implies (5.19).
Furthermore, recalling (5.11) and making use of (5.17), we obtain a first estimate on the Gagliardo seminorm. Hence, noticing that, for every x ∈ B₁ ∖ B_ρ and every y ∈ Ω ∖ B₁,

1 − |x| ≤ |y| − |x| ≤ |x − y|,

we conclude the corresponding bound. Accordingly, recalling (5.16), we obtain the desired control. We now claim a further estimate. For this, we observe that, by (5.11), a pointwise bound holds. Hence, from (5.17) we get the claim, up to renaming C > 0.
(5.32)

Moreover, we observe that the smallness condition holds as long as ρ is small enough. As a consequence, recalling (5.11) and (5.13), and using the Dominated Convergence Theorem, we find the limit of the weighted integral. From this and (5.32), and recalling (5.14), we conclude that the limit quantity is positive, since m_0 ∈ (−m̲, 0), as desired.
We are now in the position to give the proof of Theorem 1.7 for n ≥ 3.
Proof of Theorem 1.7 when n ≥ 3. The strategy of the proof is to use the auxiliary function ϕ as defined in (5.11) and the resource m := m̄ χ_D − m̲ χ_{Ω∖D}, with D as in (5.13), as a competitor in the minimization problem (1.17). Indeed, in this way we obtain an upper bound on λ in terms of ϕ. Moreover, Lemmata 5.3 and 5.5 provide the vanishing of the gradient and Gagliardo terms. This, combined with (5.14) and Lemma 5.6, gives the desired result.
Now we deal with the proof of Theorem 1.8. The main strategy is similar to that of the proof of Theorem 1.7, but in this setting we introduce a different auxiliary function (and this of course impacts the technical computations needed to obtain the desired results). Namely, we define the function ψ in (5.34). We point out that c♯ > 0, since m_0 < 0 < m̄. We also set

(5.36) D := Ω ∖ B_ρ.
We remark that, in this setting, since m ∈ M̃, recalling (5.1), the parameters are again linked by the average constraint. This says that

(5.37) sending m̲ ր +∞ is equivalent to sending ρ ց 0,

since m̄, m_0 and |Ω| are fixed quantities in this argument. The reader may compare the setting in (5.13) and (5.14) with the one in (5.36) and (5.37) to appreciate the structural difference between the two frameworks. Now, we list some useful properties of the auxiliary function ψ. Noticing that the function ψ in (5.34) differs by a constant from the function −ϕ in (5.11), we obtain the following two results directly from Lemmata 5.3 and 5.5:

Lemma 5.7. Let n ≥ 3 and ψ be as in (5.34). Then, lim_{ρց0} ∫_Ω |∇ψ|² dx = 0.
Lemma 5.8. Let n ≥ 3 and ψ be as in (5.34). Then, the Gagliardo seminorm of ψ vanishes as ρ ց 0.

We now deal with the weighted L²-norm of the auxiliary function ψ:

Lemma 5.9. Let n ≥ 3 and ψ be as in (5.34). Then, the limit of the weighted integral exists and is positive.

Proof. Recalling (5.34) and (5.36), we can split the weighted integral. Hence, recalling (5.33) and using the Dominated Convergence Theorem and (5.35), we deduce the limit of the first term. Moreover, recalling (5.36), (5.1) and (5.35), the remaining contribution can be computed. As a consequence of this and (5.38), and recalling (5.37), we find that the limit is positive, since m_0 < 0 < m̄. Now we are ready to give the proof of Theorem 1.8 for n ≥ 3.
Proof of Theorem 1.8. The strategy of the proof is to use the auxiliary function ψ as defined in (5.34) and the resource m := m̄ χ_D − m̲ χ_{Ω∖D}, with D as in (5.36), as a competitor in the minimization problem (1.17). Indeed, in this way we obtain an upper bound on λ in terms of ψ. Moreover, from Lemmata 5.7 and 5.8 we deduce that the gradient and Gagliardo terms vanish as ρ ց 0. This, combined with (5.37) and Lemma 5.9, implies the desired result.
Having completed the cases n ≥ 3 and deferred the case n = 2 to Appendix A, we now focus on the case n = 1, by providing the proofs of Theorems 1.9 and 1.10.
For this, when n = 1 we first establish the following lower bound for λ (as defined in (1.17)):

Lemma 5.10. Let n = 1 and α > 0. Then λ is bounded below by a positive constant.

Proof. Without loss of generality, we can set α = 2. We take an arbitrary resource m in the set M̃ defined in (5.2). Moreover, we denote by e an eigenfunction associated to the first eigenvalue of problem (1.13), namely a solution of the corresponding eigenvalue equation. In light of Proposition 4.2 here and Corollary 1.4 in [DPLV], up to a sign change, we know that e is nonnegative and bounded, and therefore we set a := inf_Ω e and b := sup_Ω e. By construction, we have that a ∈ [0, b], and we can also normalize e so that b = 1; in this way a ∈ [0, 1]. We also take x_k, y_k ∈ Ω such that e(x_k) → a and e(y_k) → 1 as k ր +∞. We observe that if there exist x̄ and ȳ belonging to the same connected component of Ω such that |e(x̄) − e(ȳ)| ≥ (1 − a)/10, then

(5.42) (1 − a)² ≤ C ∫_Ω |e′|² dx, for some C > 0.

Indeed, for x̄ and ȳ as in the assumption of (5.42), we have the corresponding estimate for some positive C. Accordingly, we obtain the desired result in (5.42). Now we claim that (5.43) holds. To prove this claim, we need to consider different possibilities according to the possible lack of connectedness of Ω. For this, we first remark that, with no loss of generality, we can suppose that

(5.44) a < 1,

otherwise (5.43) is obviously satisfied.
Furthermore, being Ω a bounded set with C¹ boundary, it can have at most a finite number of connected components (otherwise, there would be accumulating components, violating the assumption in (1.1)). Hence, if Ω is not connected, we can define d₀ to be the smallest distance between the different connected components of Ω. We also let d₁ be the diameter of Ω and d₂ the smallest diameter of all the connected components of Ω (of course, d₀, d₁ and d₂ are structural constants, and the other constants are allowed to depend on them, but we will write d₀, d₁ and d₂ explicitly in the forthcoming computations whenever needed to emphasize their roles). To prove (5.43), we distinguish two cases, (5.45) and (5.46): in the first one, either Ω has one connected component, or it has more than one connected component satisfying the additional condition in (5.45). Let us first discuss case (5.45). If Ω has one connected component, then we can exploit (5.42) with x̄ := x_k and ȳ := y_k for k sufficiently large, and the claim in (5.43) plainly follows. Thus, to complete the study of (5.45), we suppose that Ω is not connected and, in the setting of (5.45), we find x̄, ȳ ∈ Ω with the required properties. In this framework, we have that

(5.48) x̄ and ȳ belong to the same connected component.
Having completed the analysis of case (5.45), we now focus on the setting provided by case (5.46), and we define r as in (5.49). We observe that r > 0, due to (5.44), and that, if ϑ ∈ Ω with |ϑ − x_k| ≤ r, then (5.50) holds. Indeed, suppose not. Then, the assumption in (5.46) guarantees an estimate which leads to a contradiction with our assumption. This proves (5.50), and similarly one can show that if τ ∈ Ω with |τ − y_k| ≤ r, then

(5.51) |e(τ) − e(y_k)| ≤ (1 − a)/10.
We also remark that, testing the weak formulation of (1.13) against a constant function, one obtains the corresponding identity, up to renaming C > 0. Furthermore, from (5.53) we deduce a lower bound. Consequently, since the map [0, 1] ∋ t ↦ (1 − t)³/(1 + t) is decreasing, we obtain the corresponding estimate.
Combining this information and (5.56), we deduce the desired lower bound.
Taking the infimum of this expression, we find the desired result.
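Incidentally, the monotonicity of t ↦ (1 − t)³/(1 + t) used in the proof above can be checked by a one-line differentiation (a routine verification, recorded here for completeness):

```latex
\frac{d}{dt}\,\frac{(1-t)^3}{1+t}
= \frac{-3(1-t)^2(1+t)-(1-t)^3}{(1+t)^2}
= -\,\frac{(1-t)^2\,(2t+4)}{(1+t)^2}\ \le\ 0
\qquad\text{for every } t\in[0,1].
```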
With this, we are in the position to give the proof of Theorem 1.9, which follows directly from the lower bound in Lemma 5.10. Having established Theorem 1.9, we now deal with the case in which α = 0, namely when only the nonlocal dispersal is active. This case is considered in Theorem 1.10, according to two different ranges of the fractional parameter s. For this, we divide the proof of Theorem 1.10 into two parts.
Proof of Theorem 1.10 when s ∈ (1/2, 1). We denote by e the eigenfunction associated to the first eigenvalue of problem (1.13), normalized as in (5.60). We stress that, in view of (5.61), the corresponding bound holds. In particular, by (5.60), we can find x̄ and ȳ in Ω such that the oscillation estimate holds, for some c ∈ (0, 1) depending only on s and Ω (in particular, this c is independent of m). Indeed, if the left-hand side of (5.65) is larger than 1, we are done; therefore we can suppose, without loss of generality, that

∬_Q (e(x) − e(y))² / |x − y|^{1+2s} dx dy ≤ 1.
We also notice that, in view of (5.63), a further bound holds. By inserting this inequality into (5.67), we conclude the proof in this range of s. Now we prove Theorem 1.10 in the case s ∈ (0, 1/2]. This case is somehow conceptually related to the case n ≥ 2, since the problem boils down to a subcritical situation.
We suppose without loss of generality that a convenient normalization holds, and we define the function ϕ in (5.72). Here, c⋆ > 0 is the constant introduced in (5.12), and we set

(5.73) D := B_ρ.
For our purposes, we recall the following basic inequality:

Lemma 5.11. For every x, y ∈ ℝⁿ ∖ {0}, the inequality in (5.74) holds.

Proof. Without loss of generality, we assume that |y| ≤ |x|. To check (5.74), we take t := |x|/|y| ≥ 1, and we see that the claim follows, as desired.
With this, we now list some properties of the auxiliary function ϕ in (5.72).
Moreover, recalling (5.71), for some C > 0. This implies Now, we observe that for a suitable C > 0.
Furthermore, taking R > 0 such that Ω ⊂ B_R, a corresponding estimate holds. This and (5.81) give the next bound. Now, we take k ∈ ℕ such that

(5.83) 1/2^{k+1} < ρ ≤ 1/2^k,

and we decompose the double integral accordingly in (5.84). Moreover, we remark that if x ∈ B_{1/2^i}, y ∈ B_{1/2^{j+1}} and |x| ≥ |y|, a pointwise bound is available. We point out that, if x ∈ B_{1/2^i} and |y| ≤ |x|, then a further estimate holds. In light of this fact and (5.74), we obtain (5.86). In addition, if x ∈ B_{1/2^i}, y ∈ B_{1/2^{j+1}} and |x| ≤ |y|, a symmetric estimate holds. As a result, we obtain (5.87). Furthermore, if x ∈ B_{1/2^i} ∖ B_{1/2^{i+1}} and y ∈ B_{1/2^j}, with i ≤ j − 4, we see that the corresponding bound holds. Then, we insert this information into (5.87) and we conclude the estimate. Hence, changing the index of summation by setting ℓ := j − i + 1, the resulting series can be controlled. We plug this information and (5.86) into (5.85) and we find that the total contribution is bounded in terms of C⋆, where C⋆ := 2(C + 3 · 2^{12−10s}). We observe that the remaining term can be estimated, and consequently, by (5.88), the conclusion follows. From this and (5.83), the desired bound follows, which yields the claimed control on the Gagliardo seminorm. We also remark that

(5.89) (log |x| / log ρ)² χ_{B₁∖B_ρ} ≤ 1.
We are now ready to complete the proof of Theorem 1.10 in the case s ∈ (0, 1/2].

6. Badly displayed resources, hectic oscillations and proof of Theorem 1.11

This section contains the proof of Theorem 1.11, relying on an explicit example of a sequence of highly oscillating resources which make the first eigenvalue diverge. The technical details go as follows.

Proof of Theorem 1.11. We suppose that B₄ ⊂ Ω and we consider η ∈ C^∞₀(B_{3/2}, [0, 1]) with η = 1 in B₁ and ‖η‖_{C¹(ℝⁿ)} ≤ 8. We let m_ω be a suitable constant (chosen so that the average constraint in (1.27) is matched) and define

m(x) := m_ω + Λ η(x) sin(ω x₁),

with ω > 0 to be taken arbitrarily large in what follows.
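The engine of the construction is that the spatial average of the oscillatory part vanishes as ω grows (one integration by parts already gives an O(1/ω) bound, which is the smallness exploited just below). This can be checked numerically; the bump profile here is a hypothetical one-dimensional stand-in for η, not the η of the proof:

```python
import numpy as np

def bump(x):
    """Smooth bump supported in (-1.5, 1.5): exp(-1/(1-(x/1.5)^2)) inside, 0 outside."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.5
    out[inside] = np.exp(-1.0 / (1.0 - (x[inside] / 1.5) ** 2))
    return out

x = np.linspace(-2.0, 2.0, 400001)
dx = x[1] - x[0]
for omega in (10, 100, 1000):
    integral = np.sum(bump(x) * np.sin(omega * x)) * dx   # Riemann sum
    print(f"omega = {omega:5d}   integral of eta*sin(omega x) = {integral:+.3e}")
# The printed values shrink rapidly with omega (for a smooth bump the decay
# is in fact faster than any power of 1/omega).
```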
We remark that the structural property (6.1) holds. Moreover, integrating by parts, the average of the oscillatory term is seen to be arbitrarily small provided that ω is large enough: in particular, we can suppose that the smallness condition (6.2) is in force. Also, for every x ∈ Ω, a pointwise bound on m holds. Furthermore, for large ω, we have the estimate in (6.4). In view of (1.27), (6.1), (6.3) and (6.4), we obtain that

(6.5) m ∈ M♯_{Λ,m_0}.

Now, we take into account a function ϕ ∈ X_{α,β} such that ∫_Ω m(x) ϕ²(x) dx = 1.
Then, integrating by parts, we obtain a corresponding identity. As a consequence, if 9Λ/ω ≤ −m_ω (which is the case for ω large, in view of (6.2)), the bound in (6.6) follows. Let now ζ ∈ (−1, 1) and E′ ∈ ℝ^{n−1} with |E′| ≤ |ζ|, and set E := (ζ, E′) ∈ ℝⁿ. We use the trigonometric identity

sin y = (cos y cos ζ − cos(y + ζ)) / sin ζ, for all ζ ∈ ℝ ∖ (πℤ) and y ∈ ℝ,

together with the notation Φ := η ϕ² and a change of variable, to obtain a representation of the oscillatory integral. Since the error terms can be estimated, we thereby discover that, if E ∈ B₁ and ω ≥ 2, the resulting bound holds for some C > 0.
In particular, recalling also (6.2), it follows that there exists r₀ ∈ (0, 1), possibly depending on m₀, Λ and n, such that, if ζ ∈ (−r₀, r₀) and ω is sufficiently large, the corresponding estimate holds. We also observe that, given an additional parameter κ > 0, to be taken conveniently small in what follows, a further bound is available. Then, we plug this information into (6.7) and we conclude that, if r₀ is small enough and ω is large enough, the estimate in (6.8) holds. We also remark that, in our notation, E₁ = ζ, and accordingly the final bound follows for some C̃ > 0.
Appendix A. Proofs of Theorems 1.7 and 1.8 when n = 2

The main strategy followed in this part is similar to the case n ≥ 3, but when n = 2 we have to define different auxiliary functions. We start with the proof of Theorem 1.7. For this, we recall the setting in (5.10) and (5.12), and we define the logarithmic-type auxiliary function ϕ in (A.1). The analogue of Lemma 5.3 is obtained by a direct computation.

Lemma A.2. Let n = 2 and ϕ be as in (A.1). Then, the Gagliardo seminorm of ϕ vanishes as ρ ց 0.

Proof. As in the proof of Lemma 5.5, we have to consider several integral contributions (given the different expressions of the competitors, the technical computations here are different from those in Lemma 5.5). First of all, we estimate the contribution near the origin. Moreover, assuming ρ < 1/4, a second contribution can be bounded. Consequently, from (5.74) (used here with |y| = ρ), the term involving (1 − |x|)²/(|x|² |x − y|^{2+2s}) is controlled by 1/(log ρ)² as ρ ց 0, where we took R > 2 sufficiently large such that Ω ⊂ B_R. In addition, utilizing again (5.74), we first obtain a pointwise bound; hence, in light of (5.12) and (5.14), the desired conclusion follows. With this preliminary work, we can complete the proof of Theorem 1.7 in dimension n = 2, by arguing as follows:

Proof of Theorem 1.7 when n = 2. We use the function ϕ in (A.1) and the resource m := m̄ χ_D − m̲ χ_{Ω∖D}, with D as in (A.2), as competitors in the minimization problem in (1.17). In this way, we obtain an upper bound on λ. Combining this with Lemma A.3 and (5.14), we obtain the desired result in Theorem 1.7.
We now focus on the proof of Theorem 1.8 when n = 2. For this, we introduce the corresponding auxiliary function, where c♯ is the constant introduced in (5.35).

Appendix B. Probabilistic motivations

Moreover, we set the relevant scaling parameters, for a given α > 0 independent of the time step.
We observe that the case in (B.4) corresponds to having the nearest-neighbour walk scaled by a suitably large factor (for small h), while the case in (B.5) corresponds to having the usual notion of nearest-neighbour random walk, with the probability 1 − p that the particle follows it being large (for small h).
(B.6)
With a formal Taylor expansion, we observe that

u(x + hj, t) + u(x − hj, t) − 2u(x, t) = h² D²_x u(x, t) j · j + O(h³),

therefore the latter sum in (B.6) can be rewritten accordingly as N ր +∞ (i.e., as h ց 0). Hence, recognizing a Riemann sum in the first term of the right-hand side of (B.6), and taking the limit as h ց 0 (that is, τ ց 0), we formally conclude that

∂_t u(x, t) = (p / (2c)) ∫_{ℝⁿ} [u(x + y, t) + u(x − y, t) − 2u(x, t)] / |y|^{n+2s} dy + ((1 − p) / (2n)) Δu(x, t),

which is precisely the heat equation associated with the operator in (1.9) (up to defining correctly the structural constants).
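A toy Monte Carlo version of the walk just described may help the intuition. The sketch below uses hypothetical parameters and samples the long jump from a symmetric power law with tail exponent 2s (a one-dimensional stand-in for the |y|^{−n−2s} kernel); the precise coupling between h, τ and p required for the scaling limit is the subject of this appendix and is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_walk(n_steps: int, h: float, p: float, s: float) -> np.ndarray:
    """One-dimensional walk: with probability p take a heavy-tailed jump with
    P(|J| > r) = (h/r)^(2s) for r >= h; otherwise take a nearest-neighbour
    step of size h. Returns the full trajectory."""
    x = np.zeros(n_steps + 1)
    for k in range(n_steps):
        sign = rng.choice((-1.0, 1.0))
        if rng.random() < p:
            u = 1.0 - rng.random()                              # u in (0, 1]
            x[k + 1] = x[k] + sign * h * u ** (-1.0 / (2.0 * s))  # inverse-CDF sampling
        else:
            x[k + 1] = x[k] + sign * h
    return x

traj = mixed_walk(n_steps=10_000, h=0.01, p=0.3, s=0.8)
print("final position:", round(float(traj[-1]), 4),
      " max |x|:", round(float(np.abs(traj).max()), 4))
```

Occasional very long excursions (the "flights" mentioned in Section 1) coexist with the diffusive nearest-neighbour motion, mirroring the superposition of the classical and fractional Laplacians.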
"year": 2022,
"sha1": "613cb6ce40b0450ff48f72bb1a7e061c7b970db4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "613cb6ce40b0450ff48f72bb1a7e061c7b970db4",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Quantifying and Mitigating Wind‐Induced Undercatch in Rainfall Measurements
Despite the apparent simplicity, it is notoriously difficult to measure rainfall accurately because of the challenging environment within which it is measured. Systematic bias caused by wind is inherent in rainfall measurement and introduces an inconvenient unknown into hydrological science that is generally ignored. This paper examines the role of rain gauge shape and mounting height on catch efficiency (CE), where CE is defined as the ratio between nonreference and reference rainfall measurements. Using a pit gauge as a reference, we have demonstrated that rainfall measurements from an exposed upland site, recorded by an adjacent conventional cylinder rain gauge mounted at 0.5 m, were underestimated by more than 23% on average. At an exposed lowland site, with lower wind speeds on average, the equivalent mean undercatch was 9.4% for an equivalent gauge pairing. An improved‐aerodynamic gauge shape enhanced CE when compared to a conventional cylinder gauge shape. For an improved‐aerodynamic gauge mounted at 0.5 m above the ground, the mean undercatch was 11.2% at the upland site and 3.4% at the lowland site. The mounting height of a rain gauge above the ground also affected CE due to the vertical wind gradient near to the ground. Identical rain gauges mounted at 0.5 and 1.5 m were compared at an upland site, resulting in a mean undercatch of 11.2% and 17.5%, respectively. By selecting three large rainfall events and splitting them into shorter‐duration intervals, a relationship explaining 81% of the variance was established between CE and wind speed.
Introduction
Despite the apparent simplicity, it has proven notoriously difficult to measure rainfall accurately because of the challenging environment within which it is measured, and in particular because of wind. Long-term reference rainfall measurements that reflect best practice are of critical importance. Applications ranging from flood risk management and water resources planning to numerical weather prediction and urban sewer design rely heavily on accurate rainfall measurements. However, systematic bias caused by wind is inherent in rainfall measurement and introduces an unquantified error into hydrological science that is generally ignored.
Despite technological developments over the past 50 years, Tipping Bucket Rain Gauges (TBRs) remain the most widely used and trusted providers of rainfall data. Their specific shortcomings are well documented (Habib et al., 2001, 2008; La Barbera et al., 2002; Molini et al., 2005), yet there is little choice but to retain them. They provide useful information and are cost effective. However, TBRs could be utilized more effectively with the international adoption of standardization procedures by rain gauge manufacturers and data practitioners, a process initiated by BSI (2012) and CEN (2012).
Rain gauge errors may be categorized broadly as instrumental or environmental. The former relate to the ability of an instrument to report correctly the rainfall and are resolved by appropriate laboratory calibration. Their nature and extent depends upon the measuring principle. For TBRs, instrumental errors are addressed comprehensively by Lanza and Stagi (2009). Environmental errors occur when rain collected by an exposed gauge is less than that which would have fallen on the ground if the gauge was not there. These type of errors are generic to all catching-type rain gauges mounted above the ground regardless of measuring principle and are often more challenging to resolve. Splash in/out and evaporation are examples of environmental error but the most significant is caused by wind (Rodda & Dixon, 2012).
Wind-induced undercatch is a well-known but poorly quantified phenomenon. The term is commonly used to explain why a rain gauge mounted with the orifice at ground level collects more rainfall than an adjacent gauge mounted above the ground. This observation is documented extensively in the literature (Alter, 1937; Goodison et al., 1998; Jevons, 1861; Kurtyka, 1953; Pollock et al., 2014; Rodda, 1968; Sevruk & Hamon, 1984). A generally accepted theory is that it is due to a combination of accelerated wind and increased turbulence above the gauge orifice, causing rain to be deflected away from an exposed rain gauge, with site-specific turbulence complicating the relationship (Larson & Peck, 1974). The shape of the exposed gauge is reported to have a significant impact on the airflow above the gauge orifice (Folland, 1988). Computational Fluid Dynamics (CFD) simulations are consistent with this finding, showing also that improved-aerodynamic gauges reduce turbulence and produce recirculating airflow structures above the orifice, which is thought to improve Catch Efficiency (CE) (Colli et al., 2017). CE is defined as the ratio of a nonreference measurement to a reference measurement. However, evidence based on field observations quantifying the effect of gauge shape on CE is lacking. Sieck et al. (2007) reported that simple preventative measures using innovative gauge designs that are aerodynamically less intrusive result in improved data quality. Despite this, international uptake of designs such as those by Folland (1988), Chang and Flannery (2001), and Strangeways (2004) has been limited.
The validity and intercomparability of rainfall studies relies heavily upon the derivation of the reference rainfall measurement. A broad range of shielding configurations are used to shelter rain gauges from wind (Yang et al., 1999). However, rain gauge windshields can be cumbersome and expensive. The most widely accepted method of minimizing the wind effect on rainfall measurements is to adopt the use of a pit rain gauge (CEN, 2010). The most recent international field intercomparison on rainfall intensity used this standard to develop a field reference measurement (Lanza & Stagi, 2009). However, it is rarely practicable to install a pit gauge and uptake has been limited.
Using a pit gauge as a reference, efforts have been made to correct for wind-induced undercatch. Duchon and Essenberg (2001) report 4% undercatch between pit and above-ground gauges during typical rainfall events but were unable to develop a relationship with wind speed. Sieck et al. (2007) observed differences in catch between reference and nonreference measurements typically of the order of 2-10%, but found that Nešpor and Sevruk's (1999) numerical-simulation derived correction technique using drop-size distribution was less effective than a correction based on wind and rainfall rate only. Note that corrections have been developed for snow (Kochendorfer et al., 2017; Wolff et al., 2015; Yang, 2014). However, snow measurements are outside the scope of this study.
Hydrological modeling applications often assume rain gauge data are reference measurements, without understanding limitations, applying a suitable correction, or accounting for input-data uncertainty. The implications of a large unreported undercatch on applications such as real-time flood forecasting may be significant. In the UK and elsewhere, the windiest conditions often occur concurrently with large frontal rainfall events, such as those that caused devastation in the UK during the winter of 2015/2016, which are reported in this study. Archer et al. (2007) reported that during a large frontal event, the catchment water balance showed that the rainfall measured was less than the runoff generated. Globally, many floods originate in upland areas where rainfall measurements are sparse and subject to wind-induced undercatch. Understanding rainfall processes better in the uplands is critical to reducing flood risk.
This study's aim is to quantify the typical error in rainfall measured by TBRs in windy conditions, focusing on developing understanding of the wind-induced undercatch and identifying what practical measures can be taken to improve rainfall measurement accuracy. A data-recording scheme for TBR data along with datalogging equipment commonly used by National Meteorological and Hydrological Services (NMHSs) is adopted to help understand the extent and implications of wind-induced undercatch within their rain gauge networks. The impact of an improved-aerodynamic shape on rainfall CE is quantified by field observations using a pit gauge as a reference and a conventional cylinder shape gauge, used internationally, as a comparison. The impact of gauge mounting height above the ground surface is also considered, relevant due to the different mounting conventions used globally by NMHSs. This study will also attempt to develop a relationship between CE and wind speed at different time aggregations, presenting an estimate of the typical undercatch in rainfall measurements at different wind speeds, and a correction relationship.
Site Descriptions
The UK experiences frequent concurrent high wind and rain events enabling investigation of wind-induced undercatch. The climatology of the UK is such that prevailing winds from the west or southwest deliver more rainfall to the west, where it is orographically enhanced by mountainous terrain (Fairman et al., 2015).
Two sites, one in the lowlands and one in the uplands, were instrumented to investigate and quantify the wind effect on rainfall measurements. The lowland site is located at Nafferton Farm in the north-east of England, 20 km west of Newcastle upon Tyne (Figure 1, left plot). Its elevation is 110 m and it is categorized as a lowland site. The upland site is located at Talla Water in south-west Scotland (Figure 1, right plot); it is categorized as an upland site with an elevation of 440 m. These sites are henceforth referred to as lowland and upland, respectively. The annual number of rain-days greater than 10 mm is on average 20-25 at the lowland site, and more than 60 at the upland site. Long-term average annual rainfall is 700-1,000 mm at the lowland site, compared to 2,000-3,000 mm at the upland site. The two sites are only 110 km apart, but the large difference in heavy-rain-days is largely due to orographic enhancement of frontal rainfall in the west, and a rain shadow effect in the east. The average annual 2 m wind speed at the lowland site is estimated as 4-5 m/s, whereas at the upland site this figure is around 10 m/s (UK Met Office, 2017).
Near the center of the foreground in both plots in Figure 1 is a metal antisplash grid structure located at ground level. This is the reference pit containing rain gauges. The pit at the lowland site is large and contains two TBRs. The pit at the upland site is smaller and contains one TBR.
Rain Gauge Selection and Siting
Three types of TBR were used; they are included in Table 1 with the rationale for their selection. All three TBRs have a nominal bucket tip resolution of 0.2 mm. The ARG100 and the SBS500 have improved-aerodynamic profiles. The Casella has a conventional cylinder profile, which is the most widely used TBR gauge shape globally. At the lowland site, two SBS500 rain gauges were mounted in the pit with gauge orifice height at 0.0 m, and one adjacent to the pit, several meters away, on the ground, with its rim at 0.5 m. The Casella rain gauge was also mounted at 0.5 m several meters from the pit. At the upland site, three ARG100 rain gauges were mounted; in the pit at 0.0 m, at 0.5 m and at 1.5 m. The Casella rain gauge was mounted at 0.5 m, also adjacent to the pit. At both lowland and upland sites the gauges mounted above the ground adjacent to the pit were located perpendicular to or downwind of the prevailing wind, to reduce their interference with the pit. Henceforth, the ARG100 is referred to as the ''ARG,'' the SBS500 as the ''SBS'' and the Casella as the ''CAS.'' The two pit gauges at the lowland site are referred to as ''Pit SBS 1'' and ''Pit SBS 2.'' While the ARG and the SBS are both treated as improved aerodynamic gauges, there are some differences in their respective characteristics, with the former being more prone to outsplash during higher intensity rainfall events (Strangeways, 2004). It was not possible to compare the ARG and SBS directly due to the different gauge type at each site.
Rain gauges mounted with the rim at 0.5 and 1.5 m are standard practice for many NMHSs, for example, the UK Met Office (UKMO) and KNMI in the Netherlands (both approximately 0.5 m). By using adjacent rain gauges of the same model with the same performance characteristics, calibrated in the same way, any differences above a residual instrumental error between the measurements captured above ground and those in the pit are primarily attributable to the wind effect. This is an assumption also used by Sieck et al. (2007).

[Table 1 fragment: ARG, improved-aerodynamic profile (Folland, 1988), selected for its improved aerodynamic properties to reduce the extent of wind-induced undercatch; approximate numbers in use: UK 400, global 6,000.]
Laboratory Calibration
All TBRs included in the experiment were calibrated individually by volumetric calibration. This process involved carefully balancing the bucket mechanism, passing 1 L of water through each gauge, counting the number of tips and calculating the exact resolution of the tipping bucket. The calibration intensity selected was 16 mm/h, considered a representative intensity for UK rainfall. Industry standard practice is not to use the specific calibration factor, instead using the nominal value of the tipping bucket, in this case 0.2 mm. However, to reduce the instrumental error, each gauge-specific calibration factor was applied in this study.
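The volumetric-calibration arithmetic is simple enough to sketch; the orifice area below is a placeholder (gauge specific), and the example numbers are hypothetical, not figures from this study:

```python
def tip_resolution_mm(volume_litres: float, n_tips: int, orifice_area_cm2: float) -> float:
    """Depth of rain per bucket tip implied by a volumetric calibration:
    volume_litres poured through the funnel, n_tips counted,
    orifice_area_cm2 the collecting area of the gauge."""
    volume_cm3 = volume_litres * 1000.0          # 1 L = 1000 cm^3
    depth_cm = volume_cm3 / orifice_area_cm2     # equivalent rainfall depth
    return 10.0 * depth_cm / n_tips              # cm -> mm, per tip

# Hypothetical example: 1 L through a 500 cm^2 orifice producing 98 tips
# gives a gauge-specific factor near the 0.2 mm nominal resolution.
print(round(tip_resolution_mm(1.0, 98, 500.0), 4), "mm per tip")  # -> 0.2041
```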
Data Collection
At both sites, the number of tips occurring in each minute was recorded using the Campbell Scientific CR1000 datalogger. These are, respectively, the TBR data-recording scheme and the datalogger used by the UKMO. Maximum and average measurements of wind at 0.5 and 2 m were available at 1 min resolution. The devices used were the Vaisala WXT520 and the Gill Instruments WindSonic, which both use ultrasonic technology to measure wind speed and direction. Ancillary meteorological measurements were also available; temperature was used in this study to determine whether a rainfall event could contain solid or mixed precipitation. At the lowland site, data were available between October 2014 and June 2016, spanning approximately 20 months. At the upland site, 14 months of data were available, from May 2015 to July 2016.
Rain Gauge Errors
As already discussed, types of error can be categorized into two groups, but errors specifically related to TBRs are outlined as follows. Instrumental errors include: mechanical error at different intensities, repeatability of the tipping bucket mechanism, gauge blockage, and electronic and data logging errors. The instrumental errors were reduced in the laboratory by appropriate calibration and in the field by the use of quality equipment, maintained regularly. Moreover, the discrete sampling mechanism of the TBR results in local random quantization errors which are significant during light rainfall (Habib et al., 2013).
The TBR data collection strategy adopted in this study counted the number of tips that occurred within each minute. Local random errors are exacerbated by a discrete TBR data collection strategy, which limits analysis of low-intensity rainfall at short time scales (Ciach, 2003; Habib et al., 1999, 2001). However, it was adopted because it is commonly used in operational practice by NMHSs; the analysis that follows therefore also presents an appraisal of using this TBR data collection strategy. Environmental errors may include: evaporation of rainfall not yet accounted for (in the funnel or on the tipping bucket mechanism), splash in/out of rain drops, adhesion/wetting, and the wind-induced error, which is exacerbated by gauge shape and mounting height. For two adjacent gauges of the same model and mounting height, the environmental errors should be comparable in magnitude.
Rainfall Events Selection
Processing was carried out to retrieve rainfall events and remove periods of no rain. Rainfall events can be defined in many ways depending upon the purpose of a chosen application (Dunkerley, 2008a), but are defined in this study as periods of rainfall, detected by the pit gauge, prior to and after which there has been no rain for a specified period of time. This duration is known as the ''minimum inter-event time,'' MIT, and is usually between 0.25 and 24 h in most hydrological studies (Dunkerley, 2008b). An MIT of 4 h was selected as a compromise between too many events of inadequate size and too few events for meaningful statistics to be developed. A base data set was created for the lowland and upland sites comprising 52 and 83 events, respectively, with a minimum event threshold of 5 mm. Subsequent analyses splits these events into shorter-duration intervals so that the averaged wind speeds within each interval are more representative than the event-scale averages.
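A sketch of this event-extraction step (pure Python; times in minutes, rainfall in mm; the thresholds are the ones quoted above, an MIT of 4 h and a 5 mm minimum event total; the function and variable names are illustrative):

```python
def split_into_events(times_min, rain_mm, mit_min=240, min_total_mm=5.0):
    """Split a 1-minute rainfall series into events separated by at least
    mit_min minutes of no rain, keeping events totalling >= min_total_mm."""
    events, current = [], []
    last_wet = None
    for t, r in zip(times_min, rain_mm):
        if r > 0:
            if last_wet is not None and t - last_wet > mit_min and current:
                events.append(current)   # dry gap exceeded the MIT: close event
                current = []
            current.append((t, r))
            last_wet = t
    if current:
        events.append(current)
    return [ev for ev in events if sum(r for _, r in ev) >= min_total_mm]

# Example: two wet spells separated by a 5+ h dry gap -> two events of 12 mm each.
times = list(range(1000))
rain = [0.2 if (t < 60 or 400 <= t < 460) else 0.0 for t in times]
print(len(split_into_events(times, rain)))  # -> 2
```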
Rain Events
Plotting data for rain events at both sites provided empirical evidence of undercatch between the pit gauges and the gauges mounted above ground. Two example rain events from 2015, which occurred at the upland site, are displayed in Figure 2. The durations of the two frontal rain events shown are 42 and 28 h, respectively. The rain event in the bottom plot was named by the UKMO as Storm Frank. The duration of events selected for analysis in this study ranged between 1 and 72 h. The order of the rain gauges in terms of total accumulation remained relatively consistent, with the pit gauge (0.0 m ARG) recording the most rainfall.
Establishing the Magnitude of Differences Between Adjacent Gauges
First, the differences between two SBS-Pit gauge measurements for the lowland site were averaged over 35 concurrent rain events, and found to be 0.24 mm, or just over one tip. This shows the consistency of the SBS gauges and the calibration procedure employed. The pit gauge with the longer record of 53 storms was adopted as the reference gauge (the second SBS gauge was damaged in a bird attack, and had to be repaired and recalibrated, so was not used further as a reference gauge).
Next, we test the mean differences between the reference pit gauge and other gauges mounted above the ground for significance. The null hypothesis H₀: d̄ = 0 was tested against the alternative hypothesis H₁: d̄ ≠ 0, where d̄ = (Σ dᵢ)/N, dᵢ is the difference between the paired measurements, and N is the number of rain events. A paired sample t test was used to determine if the mean of the differences between the paired observations was significantly different from zero. If the null hypothesis was not rejected, there was no statistically significant bias between two gauges.
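The paired test is the textbook one; a sketch with scipy (the event totals are made-up illustrative numbers, not data from this study):

```python
import numpy as np
from scipy import stats

# Hypothetical paired event totals (mm): reference pit gauge vs 0.5 m gauge.
pit    = np.array([12.4, 33.0, 8.2, 21.7, 15.9, 40.3, 9.8])
raised = np.array([11.1, 29.5, 7.9, 19.2, 14.8, 35.6, 9.1])

t_stat, p_value = stats.ttest_rel(raised, pit)   # tests mean(raised - pit) = 0
print(f"mean difference = {np.mean(raised - pit):+.2f} mm, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```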
The results of the paired sample t tests are presented in Table 2. All tests, with the exception of the 0.5 m aerodynamic SBS at the lowland site, show that the mean of the differences is significantly different from zero at the 99.9% level. Therefore, there is strong statistical evidence that the mean of the differences between a gauge mounted above the ground and a pit gauge is different from zero. The pit gauge measurement was always subtracted from the nonpit gauge measurement, and in all cases, the mean of the differences between paired observations was less than zero. This was the expected result, because a pit rain gauge is designed to minimize the effect of wind and therefore catch more rainfall than gauges mounted above the ground, but it has been proven here through statistical significance testing. Figure 3 shows scatterplots of gauge comparisons with a simple linear regression fitted, in addition to a 1:1 line which represents complete agreement between the paired gauges. The two subplots on the top row are from the lowland site, and the three on the bottom row are from the upland site. All five subplots feature the reference pit gauge on the x axis. The regression takes the standard form Y_t = b₀ + b₁X_t + E_t, where Y_t and X_t are rainfall event totals for two gauges, b₀ and b₁ are intercept and slope coefficients, respectively, and E_t are the random errors. Two assumptions were made concerning E_t: that they were uncorrelated and had a Gaussian distribution with zero mean and unknown variance σ² (Duchon & Essenberg, 2001). Undercatch is expected by the regression model when the slope coefficient is less than one. These coefficients show that the 0.5 m mounted cylinder CAS performs least well compared to the pit gauge at both sites, followed by the 1.5 m mounted improved-aerodynamic ARG at the upland site. In the boxplots of CE in Figure 4, the Inter-Quartile Range (IQR) is shown by the shaded areas, and boxplots with identical shading represent rain gauges of the same model. The IQRs of the 0.5 m mounted improved-aerodynamic gauges at both sites are closer to 1.0 than the IQRs of the conventional cylinder gauges. At the upland site, the IQR of the 0.5 m mounted ARG is closer to 1.0 than the IQR of the ARG mounted at 1.5 m. Table 3 presents a summary of the differences between nonreference measurements paired with the pit rain gauge measurements at both sites. Where relevant, the 95% confidence intervals for the mean differences are included, and differences greater than 10% are marked in bold. The conventional cylinder gauge mounted at 0.5 m catches 9.4% and 23.8% less than the pit gauge on average, at the lowland and upland sites, respectively. The comparable figures for the 0.5 m mounted improved-aerodynamic gauge at both sites are 3.4% and 11.2%, respectively. The maximum percentage difference was 38.5%, exhibited by the 0.5 m mounted CAS at the upland site. The implications of the results presented in Figure 4 and Table 3 are that both the mounting height and gauge shape have a greater impact on the accuracy of rainfall data than is widely appreciated.
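The fitted model is ordinary least squares; with scipy it is one call (the arrays are the same illustrative numbers as in the t-test sketch above):

```python
import numpy as np
from scipy import stats

pit    = np.array([12.4, 33.0, 8.2, 21.7, 15.9, 40.3, 9.8])   # X_t (reference)
raised = np.array([11.1, 29.5, 7.9, 19.2, 14.8, 35.6, 9.1])   # Y_t

fit = stats.linregress(pit, raised)    # Y_t = b0 + b1 * X_t + E_t
print(f"b0 = {fit.intercept:.3f}, b1 = {fit.slope:.3f}, R^2 = {fit.rvalue**2:.3f}")
# A slope b1 < 1 indicates systematic undercatch by the raised gauge.
```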
Quantifying the Wind-Induced Error
The aim of this section is to visualize and quantify the relationship between wind speeds and CE, and investigate whether it is viable to apply a multiplier to rainfall recorded by the best performing nonreference rain gauge, i.e., that with a mean CE closest to 1.0. For both sites, Figure 4 in the previous section shows that the best performing gauges were the improved-aerodynamic rain gauges mounted at 0.5 m. Therefore, the analyses in this section use the 0.5 m SBS and the 0.5 m ARG.
At this stage, it is an unproven hypothesis that the undercatch is associated with wind. However, using the same data set of N = 52 and N = 83 events, where event durations ranged between 1 and 72 h, there was no obvious relationship between CE and event-averaged wind speeds. It was presumed that an event-averaged wind statistic did not adequately represent the variability of wind during a rain event. Therefore, there is a need to examine shorter-duration, more homogeneous periods. The 10 largest rain events for the upland and lowland sites were selected and split into uniform time periods, T, and the CE was calculated for each period. Due to the TBR local random errors mentioned in section 2.5, a minimum interval T of 0.5 h and a minimum rainfall threshold (MRT) of 1 mm in each interval were applied for the pit gauge. The CEs of the 0.5 m mounted improved-aerodynamic gauges for both sites were plotted against interval-averaged 1 min maximum wind speeds, also measured at 0.5 m. Figure 5 shows these results for T values of 1 and 2 h. The subplots for T = 0.5 h and an MRT of 1 mm are not presented in Figure 5 because a large amount of scatter was induced by local random errors, which could not be eliminated. However, it is important to emphasize that low CEs do occur at low rainfall rates and moderate wind speeds. Moreover, rain events also occur where CE > 1, which happens more frequently with shorter values of T. This supports the hypothesis whereby local random errors cause some of the differences at low rain rates, rather than the wind.
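The interval-splitting and thresholding just described is straightforward to implement; a sketch follows (function and variable names are illustrative, not from the study's processing chain, and the defaults match the stricter thresholds adopted two paragraphs below):

```python
import numpy as np

def interval_catch_efficiency(pit_mm, gauge_mm, wind_ms, T_h=2.0, mrt_mm=2.5):
    """Split aligned 1-minute series into consecutive windows of T_h hours,
    drop windows where the pit (reference) total is below mrt_mm, and return
    (window-mean wind, CE = gauge total / pit total) pairs."""
    w = int(T_h * 60)
    pairs = []
    for i in range(0, len(pit_mm) - w + 1, w):
        ref = float(np.sum(pit_mm[i:i + w]))
        if ref < mrt_mm:
            continue  # below the MRT, quantization noise dominates CE
        ce = float(np.sum(gauge_mm[i:i + w])) / ref
        pairs.append((float(np.mean(wind_ms[i:i + w])), ce))
    return pairs
```

The MRT filter is what suppresses the local random quantization errors discussed above.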
Clustering of circles of the same color are evident in Figure 5, particularly at the upland site where the largest rain events with the longest duration are identified in brown, black, and gray. However, no clear relationship between CE and averaged wind speeds is immediately evident and all subplots exhibit a large amount of scatter.
The subplots comprising Figure 5 indicate that the limits of MRT and T imposed may not be adequate to reduce sufficiently the local random quantization errors. Therefore, the MRT is increased to 2.5 mm, and the minimum value of T is set to 1 h. Moreover, a subset of the data comprising the three largest frontal rain events from the upland site, with total rainfall recorded by the pit gauge in excess of 300 mm, were selected for further analysis. Note that the largest of these three storms, Storm Frank, is plotted in section 3.1 and Figure 2 (bottom plot).
The four subplots comprising Figure 6 show the CEs for the subset plotted against the 1 min maximum wind speeds averaged over T, measured above the ground at heights of H = 0.5 m and H = 2 m. These correspond to the top and bottom rows of Figure 6, respectively. Wind speed at 2 m is plotted in order to provide a regression at the same height as most operational wind measurements, and also to examine whether a reduction in the coefficient of determination could be observed compared to the wind speed measured at 0.5 m. Also plotted are the linear regressions for T = 1 h (left column) and T = 2 h (right column). The number of subevents of duration T is given by N. The improvement in correlation and the reduction in scatter can be seen clearly in the subplots of Figure 6, compared to Figure 5. The regression model of CE for the 0.5 m improved-aerodynamic ARG on wind speeds at 0.5 m, using T = 2 h, explained 81% of the variance. When the interval-averaged wind speed was 6 m/s at 0.5 m, this model predicts an undercatch of 16.7%. When T is reduced to 1 h, the explained variance of the model is reduced to 58%. For the same gauge but using 2 m wind speed, the goodness of fit was comparable, reducing to 80% and 54% for 2 and 1 h, respectively. All four linear regressions demonstrate evidence for statistical significance, with P values < 0.0001. The attributes of this model are such that, when 2 h accumulations from the 0.5 m mounted improved-aerodynamic ARG during large midlatitude frontal events at the upland site were between 2.6 and 21.4 mm, a linear model using wind speeds at 0.5 m predicted the undercatch to within a residual CE standard error of 0.017. However, attributing the additional scatter exhibited in Figure 6 to wind speed is complicated by a lack of information. Analysis undertaken when MRT < 2.5 mm was compromised by local random errors, but other factors may have contributed to the additional scatter. The averaging carried out may have partly disguised the relationship with wind speed, because CE is determined by short-term wind turbulence and its characteristics. The arbitrary time-based method of sampling to 1 or 2 h may not be optimally representative of the variability of wind speeds. By identifying periods with low wind variability and splitting the rainfall into these intervals, while maintaining an appropriate MRT in each interval, the model fit may be improved. However, this would be a less practical approach. Furthermore, the drop-size distribution (DSD) affects CE, because smaller and lighter rain droplets are affected more by wind than larger, heavier droplets (Nešpor & Sevruk, 1999).
Next, the same subset of data was used to establish CEs for the 1.5 m improved-aerodynamic ARG and the 0.5 m mounted conventional cylinder CAS. The R² values for T = 2 h and wind speed height H = 0.5 m for the ARG at 1.5 m and the CAS at 0.5 m were 0.506 and 0.103, respectively. The results of these are not shown in Figure 6. For the ARG mounted at 1.5 m, it is hypothesized that the enhanced turbulence intensity created by the higher wind speeds at 1.5 m contributed to a reduced R². For the CAS mounted at 0.5 m, where the wind speeds are theoretically the same as for the 0.5 m mounted ARG, it is posited that the reduction in R² is due to the less-aerodynamic CAS shape creating more turbulence (Colli et al., 2017). In addition, it is theorized that the local random errors described by Habib et al. (1999, 2001) and Ciach (2003) contribute to the reduced goodness of fit, particularly as these random errors may be exacerbated by the effect of the turbulence component. Moreover, the CAS has an orifice area and tipping bucket mechanism that are different to those of the ARG. This means that the buckets are balanced to receive a different nominal quantity of water; therefore, tips occur at different moments in time compared to the ARG. The characteristics of the local random errors typically exhibited by the ARG may be different to those of the CAS. At the event scale this is not relevant. However, for low intensity rainfall over short durations, the local random errors between two different models of TBR are likely to be greater. Therefore, comparison of the two gauges at resolutions of T < 2 h may not be appropriate.
For rain event durations where T > 1 h, the rain gauge exposure problem mainly lies in systematic components of the distorted wind flow over the gauge. Horizontal acceleration and induced upward components together contribute to the losses of incident rainfall. Turbulence is likely to have nonlinear effects on raindrop losses, which are particularly important for short duration events where T < 1 h. Therefore, it is critical that the role of turbulence is investigated in applications where short duration (<1 h) rain events are important.
It was possible to improve the model fit by applying a multiple regression using rainfall intensity and temperature as additional variables. However, without further observations it was not possible to identify causes and effects. There was also the risk of parameter interaction through multicollinearity, and the loss of physical significance. Therefore, the model presented in Figure 6 using wind speed as the sole independent predictor variable over uniform time intervals was preferred because it is both simple and practical. This section demonstrates that it is viable to apply a multiplier to a 0.5 m mounted ARG during large frontal rainfall events for time intervals where the rainfall recorded by the gauge is at least 2.5 mm and the interval is at least 1 h.
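In practice the fitted model would be inverted and applied as a multiplier on raw gauge totals. The sketch below is one plausible implementation, with placeholder regression coefficients chosen only so that the predicted undercatch at 6 m/s roughly matches the value quoted above; the validity guard follows the stated conditions (interval at least 1 h, gauge total at least 2.5 mm).

```python
def correct_interval(rain_mm, wind_ms, slope, intercept, mrt_mm=2.5):
    """Wind-correct one >= 1 h accumulation from a 0.5 m aerodynamic gauge.

    Returns the raw value unchanged when below the minimum rainfall
    threshold, where local random errors dominate and the linear CE
    model is not considered valid.
    """
    if rain_mm < mrt_mm:
        return rain_mm
    ce = intercept + slope * wind_ms           # modelled catch efficiency
    return rain_mm / ce                        # multiplier correction 1/CE

# Placeholder coefficients: CE(6 m/s) ~ 0.834, i.e. ~16.6% undercatch
print(correct_interval(10.0, 6.0, slope=-0.031, intercept=1.02))
```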
Conclusions and Recommendations
Systematic bias caused by wind is inherent within rainfall measurements, and wind is therefore the most important variable required to understand the extent of undercatch in rainfall measurements. Using a pit gauge as a reference, this study demonstrated that rainfall measurements from an exposed upland site, recorded by an adjacent conventional cylinder rain gauge mounted at 0.5 m, were underestimated by more than 23% on average. At a well-exposed lowland site, where wind speeds were lower on average, the equivalent mean undercatch was 9.4% for the same commonly used conventional cylinder gauge.
An improved-aerodynamic shape rain gauge enhanced rainfall catch when compared to a conventional cylinder gauge shape. The mean undercatch for an improved-aerodynamic gauge mounted at 0.5 m above the ground was 11.2% at the upland site and 3.4% at the lowland site. Gauge mounting height above the ground also had a significant impact on rainfall catch, due to the vertical wind gradient. Identical improved-aerodynamic rain gauges mounted adjacent to one another at 0.5 and 1.5 m were compared at an upland site, resulting in mean undercatches of 11.2% and 17.5%, respectively.
By selecting three large rainfall events, splitting them into intervals of uniform time duration, T, and imposing a minimum rainfall threshold (MRT) within each interval, a statistically significant (P < 0.0001) relationship explaining 81% of the variance was established between CE and wind speed. However, reducing T and the MRT exposed local random quantization errors, which increased the scatter and thus reduced the R 2 value.
A discrete data-recording strategy based on counting the number of tips in each 1 min was adopted in this study because it is used operationally by many NMHSs. There is an increasing requirement for high-resolution rainfall data sets (Blenkinsop et al., 2017), for example in climate research into changes in rainfall extremes (Chan et al., 2016; Lenderink et al., 2017) and urban hydrology (Ochoa-Rodriguez et al., 2015). TBR local random errors are exacerbated by a discrete data-recording strategy for low intensity rainfall over short time scales (Habib et al., 2013). For coarser resolution data (>1 h), it may be justifiable to ignore these local random errors because they are averaged over a longer interval. However, in the context of increased demand for higher resolution rainfall products, quantification of these errors is critical. Moreover, to improve the resolution of subhourly rainfall measurements from TBRs, it is recommended that rain gauge network operators in midlatitude regions adapt their TBR data-recording strategy to record the time of each bucket tip. This maximizes the quantity of information which can be taken from TBRs, with the user able to decide which interpolation technique to implement.
Field research undertaken in this study supports the results of CFD simulations presented by Colli et al. (2017), where the turbulence component above the orifice of a rain gauge was reported to rise nonlinearly with increasing wind speeds. Three gauges used in that study were also used in this study, with the SBS improved-aerodynamic shape exhibiting the lowest increase in turbulence with increasing wind speeds.
A general conclusion from the work conducted here is a reinforcement of the point that using an aerodynamic rain gauge is the simplest and cheapest practical way to improve rainfall collection efficiency. Despite the clear benefits of using an improved-aerodynamic profile, uptake is relatively low globally among NMHSs. The UK Met Office and the Scottish Environment Protection Agency are exceptions. Using a pit gauge is the ideal solution for measuring rainfall in situ. However, mounting a rain gauge in a pit is not a practicable solution in most cases.
The results presented herein provide a preliminary set of corrections for a 0.5 m mounted improvedaerodynamic rain gauge at a temporal resolution of 1 or 2 h based on 0.5 or 2 m wind speed. These corrections should be tested at other sites with a pit gauge, preferably using the same equipment. The corrections were developed for a well-exposed midlatitude upland site during large frontal events where hourly rainfall totals are at least 2.5 mm.
A number of improvements could be made to continue the work undertaken in this study. These are listed below in order of decreasing priority.
4.1 Measurement of Drop-Size Distribution (DSD)
It is shown that wind speed is the most important variable to measure for a correction to be applied. However, a quantitative assessment of the DSD using a disdrometer would also be useful. Alternatively, using high-resolution rainfall intensity measurements and qualitative information of the rainfall type may form the basis of a practical proxy estimation of the DSD. Further research should be undertaken to assess whether this is viable.
4.2 Measuring Near-Instantaneous Rainfall Intensity and Reducing Local Random Errors
Improving the data acquisition procedures from TBRs to accurately record rainfall intensity would facilitate assessment of the wind-induced undercatch at time scales finer than those used in this study (<1 h). This would involve recording the time of each bucket tip, with rainfall measurements calculated from these using an interpolation technique, such as those presented by Wang et al. (2008), Fiser and Wilfert (2009), and Colli et al. (2013). Moreover, devices capable of measuring rainfall intensities precisely and accurately at a fine time resolution (<10 s), in particular during low rain rates, would be useful. For example, "drop-counter" rain gauges are known to demonstrate high accuracy at low rain rates and a fine time resolution (Colli et al., 2013; Norbury & White, 1971). With depth increments of these devices of the order of 0.005 mm, local random quantization errors may be significantly reduced. Furthermore, the introduction of near-instantaneous rainfall intensity (integration time < 10 s) as a variable affecting the wind-induced undercatch could then be investigated comprehensively.
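A simple version of this tip-time strategy is sketched below: cumulative depth is known exactly at each recorded tip, and intensity follows from interpolating and differentiating that cumulative curve. Linear interpolation is used purely for illustration, since the cited studies employ more sophisticated schemes, and the bucket size and tip times are invented values.

```python
import numpy as np

def intensity_from_tips(tip_times_s, bucket_mm, query_times_s):
    """Rain rate (mm/h) reconstructed from recorded bucket-tip times.

    The cumulative depth passes through k * bucket_mm at the k-th tip;
    interpolating that curve and differentiating yields an intensity
    estimate free of the quantization of fixed-interval tip counts.
    """
    tips = np.asarray(tip_times_s, dtype=float)
    cum = bucket_mm * np.arange(1, len(tips) + 1)
    depth = np.interp(query_times_s, tips, cum)
    return np.gradient(depth, query_times_s) * 3600.0   # mm/s -> mm/h

tips = [12.0, 95.0, 160.0, 201.0, 230.0]                # tip times (s)
t = np.linspace(0.0, 240.0, 25)
print(intensity_from_tips(tips, 0.2, t))                # 0.2 mm bucket
```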
4.3 Measurement of Wind Speed above the Rain Gauge Orifice
Recording wind speeds in 3-D above the rain gauge orifice would be an advance on the present work, while a practical compromise for further research could be to measure in 2-D. Measuring this wind speed and comparing it to concurrent wind speeds measured near the rain gauge at the same height above the orifice would also provide empirical validation for the Colli et al. (2017) study.
CFD modeling and wind tunnel tests carried out interactively with ambitious field experiments, incorporating points 4.1, 4.2 and 4.3 above, may be the optimal way to make vital progress toward improved corrections for wind-induced undercatch. The CFD modeling would include optimization of the aerodynamic profile and modeling of particle trajectories, the wind tunnel testing would include introducing and tracking water droplets, while concurrent field experiments would involve shapes similar to the SBS, with larger diameters. | 2018-11-22T23:10:18.516Z | 2018-06-01T00:00:00.000 | {
"year": 2018,
"sha1": "75def73167fa3e1e397d0d5b803cb55c321def14",
"oa_license": "CCBY",
"oa_url": "https://agupubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2017WR022421",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "fb65105883388e61eb48253c9b1609bd2a4db5eb",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
253098459 | pes2o/s2orc | v3-fos-license | An Opponent-Aware Reinforcement Learning Method for Team-to-Team Multi-Vehicle Pursuit via Maximizing Mutual Information Indicator
The pursuit-evasion game in the Smart City has a profound impact on the Multi-vehicle Pursuit (MVP) problem, in which police cars cooperatively pursue suspected vehicles. Existing studies on the MVP problem tend to set evading vehicles to move randomly or along a fixed prescribed route. The opponent modeling method has shown considerable promise in tackling the non-stationarity caused by the adversary agent. However, most such methods focus on two-player competitive games and easy scenarios without the interference of environments. This paper considers a Team-to-Team Multi-vehicle Pursuit (T2TMVP) problem in a complicated urban traffic scene where the evading vehicles adopt pre-trained dynamic strategies to execute decisions intelligently. To solve this problem, we propose an opponent-aware reinforcement learning via maximizing mutual information indicator (OARLM2I2) method to improve pursuit efficiency in the complicated environment. First, a sequential encoding-based opponents' joint strategy modeling (SEOJSM) mechanism is proposed to generate the evading vehicles' joint strategy model, which assists the multi-agent decision-making process based on deep Q-network (DQN). Then, we design a mutual information-united loss, simultaneously considering the reward fed back from the environment and the effectiveness of the opponents' joint strategy model, to update the pursuing vehicles' decision-making process. Extensive experiments based on SUMO demonstrate our method outperforms other baselines by 21.48% on average in reducing pursuit time. The code is available at \url{https://github.com/ANT-ITS/OARLM2I2}.
I. INTRODUCTION
With the development of the Smart City, Intelligent Transportation Systems (ITS) [1], effectively leveraging Internet of Vehicles (IoV) technology, have a profound impact on people's lives [2], [3]. Multi-vehicle pursuit (MVP), a special and realistically meaningful problem in ITS, has attracted wide attention. For example, the vehicle pursuit guideline [4] published by the New York police department details the tactical operations used to improve pursuit efficiency while cooperatively pursuing suspected vehicles.
Essentially, the MVP problem can be modeled as a pursuit-evasion game (PEG). In recent years, multi-agent reinforcement learning (MARL), showing significant advances in intelligent decision-making, has proven to be a fruitful method in PEG. Aiming at improving the cooperation between pursuers, [5], [6] separately introduced curriculum learning and cross-task transfer learning into PEG. [7] proposed attention-enhanced reinforcement learning to address communication issues for multi-agent cooperation. As for homogeneous agents in MVP, [8] proposed a transformer-based time and team reinforcement learning scheme. In addition to cooperation, some studies focus on the influence of opponents. [9] focused on predicting the future trajectory of the opponent to promote pursuit efficiency. However, these studies ignore the influence of the opponent's strategy, especially when the opponent is characterized by a dynamic strategy, which brings extreme non-stationarity to the pursuit and thus increases the difficulty as well as the randomness of a successful capture.
This work was supported by the National Natural Science Foundation of China (Grant No. 62071179) and project A02B01C01-201916D2.
The opponent modeling method is integrated into MARL as a promising solution [10] for building up cognition of the opponent's dynamic strategy and alleviating the non-stationarity during the pursuit. In self-play scenarios, [11] recursively reasons about the opponent's reactions to the protagonist's potential behaviors and finds the best response. Targeting the non-stationarity brought by an opponent's changing behaviors, [12] learned a general policy adaptive to changeable strategies. [13] used a policy distillation method to realize accurate policy detection and reuse in the face of non-stationary opponents. [14] learned the low-level latent dynamics of the opponent, and leveraged a stability reward to stabilize the opponent's strategy, reducing the non-stationarity in tasks. However, the aforementioned methods suffer from a lack of adaptation to the team-to-team multi-vehicle pursuit problem. On the one hand, state-of-the-art methods only focus on the two-player game and are difficult to adapt to team-to-team competitions, because both generating and modeling complex strategies of opponents are challenging. On the other hand, existing opponent modeling methods based on MARL are rarely applied to MVP scenarios with complicated road structures and traffic restrictions.
This paper considers a team-to-team multi-vehicle pursuit problem (T2TMVP) in a complicated urban traffic scene. The evading vehicles adopt a pre-trained policy to choose the optimal actions rather than moving randomly or along a fixed route, which is what we call dynamic strategies. The main target of this paper is alleviating the non-stationarity brought by the dynamic strategies of evading vehicles and further improving pursuit efficiency. For this purpose, an opponent-aware reinforcement learning via maximizing mutual information indicator (OARLM2I2) method is proposed to improve pursuit efficiency, as shown in Fig. 1.
Fig. 1. (a) Complicated urban traffic scene for the T2TMVP problem, consisting of traffic lights, background vehicles, evading vehicles, and pursuing vehicles. (b) SEOJSM mechanism: this mechanism models the dynamic strategies of opponents assisted by the mutual information-united loss. (c) Multi-agent reinforcement learning framework for pursuing agents: each pursuing agent adopts DQN to make decisions with the assistance of the opponents' joint strategy model. (d) State-sensitive joint dynamic strategy of opponents: each evading agent leverages Q-learning to select actions with the highest Q-values.
OARLM2I2 is equipped with the sequential encoding-based opponents' joint strategy modeling (SEOJSM) mechanism to extract the joint features of the dynamic strategies of the evading vehicles based on Q-learning. Meanwhile, the DQN-based pursuing vehicles implement efficient decision-making by leveraging the joint partial observation and the joint strategy model of the evading vehicles, and the mutual information between them serves as an indicator to update the SEOJSM mechanism. The main contributions of this paper are as follows: 1. This paper models the team-to-team multi-vehicle pursuit (T2TMVP) problem in a complicated urban traffic scene. Two competitive teams, the pursuing vehicle team and the evading vehicle team, separately make flexible decisions according to intelligent dynamic strategies.
2. This paper proposes an opponent-aware reinforcement learning via maximizing mutual information indicator (OARLM2I2) method to improve the pursuit efficiency for the T2TMVP problem. A sequential encoding-based opponents' joint strategy modeling (SEOJSM) mechanism is deliberately designed to assist in tackling the non-stationarity brought by the dynamic strategies of the evading vehicles.
3. This paper leverages the novel mutual information-united loss to train our OARLM2I2. The mutual information-united loss comprehensively considers the effectiveness of the decision-making network and the opponents' joint strategy model. The outline of this article is given as follows. Section II introduces the T2TMVP problem statement and problem instantiation. In Section III, the state-sensitive joint dynamic strategy of the evading vehicles is introduced, and the SEOJSM mechanism is proposed. Section IV details the deep Q-network for pursuing agents and the training process with the mutual information-united loss. Section V provides experiment settings and sufficient experiments to verify the effectiveness of the proposed OARLM2I2 method. Finally, conclusions and future work are presented in Section VI.
II. T2TMVP PROBLEM STATEMENT AND INSTANTIATION
In this section, we first state the T2TMVP problem. Then, we instantiate the T2TMVP problem as a partially observed Markov decision process (POMDP).
A. T2TMVP Problem Statement
This paper considers a team-to-team multi-vehicle pursuit (T2TMVP) problem in a complicated urban traffic scene, as shown in Fig. 2. Competition is the vital theme of T2TMVP, and two competitive teams of vehicles make intelligent decisions to separately accomplish their own goals. Different from traditional MVP, in the T2TMVP problem the evading vehicles adopt a pre-trained policy to choose the optimal actions rather than moving randomly or along a fixed route. As for the pursuing vehicles, the policy is constantly updated through interactions with the environment. I intersections and L lanes form the structured bidirectional traffic topology. For lane l ∈ {1, 2, . . . , L}, the adjacent lanes ahead are represented by l_rig, l_lef, l_str, whose subscripts denote their relative positions with respect to lane l. In the complicated urban traffic scene, B background vehicles are present, similar to a real traffic scenario, and all vehicles are restricted to obey the following traffic rules in our simulation.
(1) Vehicles should follow traffic lights and drive in a single lane; turning is not allowed before reaching an intersection.
(2) Vehicles cannot exceed the speed limit, so both pursuing vehicles and evading vehicles are assigned the same maximum acceleration ac_max and the same permitted maximum speed v_max.
(3) Collisions are considered when two vehicles get too close; the vehicle behind decelerates at the maximum deceleration de_max to prevent accidents.
In such a complicated urban traffic scene, an efficient pursuit is rather difficult. For one thing, the complex road structure and traffic regulations bring a lot of interference to the pursuit. For another, the non-stationarity caused by dynamic strategies of evading vehicles will impose extreme difficulties for pursuing vehicles learning optimal policies. Therefore, restricted by the complicated urban traffic scene, solving the non-stationarity issue caused by opponents is crucial for an efficient pursuit in the T2TMVP problem.
B. T2TMVP Problem Instantiation
In this paper, all vehicles, except for the background vehicles, make decisions based on current partial observations restricted by the urban traffic scene. Therefore, the decision-making process of both pursuing vehicles and evading vehicles can be formulated as a partially observed Markov decision process (POMDP) defined by a tuple ⟨S, O, A, P, R⟩, where s ∈ S and a ∈ A denote the global state and action from the state space and action space, and o ∈ O is the partial observation of each agent. During the interaction with the environment at time step t, each agent k ∈ {1, 2, . . . , K} chooses an action a_k^t based on the obtained partial observation o_k^t, forming the joint action a^t. The environment then generates the next state s^{t+1} according to the state transition function P(s^{t+1} | s^t, a^t), and a reward r_k^t is given by the environment as feedback on the action selection. The goal of each agent is to generate an optimal policy π_k^t maximizing the discounted return R^t = Σ_{i=0}^∞ γ^i r^{t+i}, where γ ∈ [0, 1] is the discount factor. In the T2TMVP problem, the positions of N pursuing vehicles and M evading vehicles are initialized randomly on the lanes. The goal of the pursuing vehicles is to capture all evading vehicles in the shortest time possible, and the evading vehicles intend to escape accordingly. We consider a capture successful if, at any time step t during the pursuit, the distance between an evading vehicle m and at least one pursuing vehicle n is less than a given collision radius: dis_{n,m}^t < dis_cap. To be more realistic, the observations of pursuing vehicles op_n^t ∈ OP, n ∈ {1, 2, . . . , N}, and those of evading vehicles oe_m^t ∈ OE, m ∈ {1, 2, . . . , M}, are all restricted to be partial, and observations are shared within the homogeneous vehicles, forming the joint observations op^t ∈ OP and oe^t ∈ OE. When a vehicle encounters an intersection Inter_i, i ∈ {1, 2, . . . , I}, decision-making is needed. The evading vehicle m adopts an action a_m^t ∈ A according to the pre-trained state-sensitive joint dynamic strategy π_e^t based on Q-learning, and the pursuing vehicle n executes an action a_n^t ∈ A through DQN against the non-stationarity brought by the opponents' dynamic strategies. Empirically, the observations encountered in the process of driving are generally limited, which is consistent with the finite state space of the Q-learning algorithm. We use the finite state-action pairs in Q-learning to simulate the adoption of corresponding strategies for different observations during driving, which is what we call dynamic strategies. The state space of DQN, by contrast, is infinite and cannot generate denumerable strategies for the evading vehicles. For pursuing vehicles, we use the DQN algorithm to make decisions: on the one hand, DQN can make more refined decisions for the current observations; on the other hand, it is easy to compare with state-of-the-art algorithms.
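As a concrete illustration, the discounted return that each agent maximizes can be evaluated from a finite reward trace with a standard backward recursion; the following sketch is generic and not tied to the paper's implementation.

```python
def discounted_return(rewards, gamma=0.99):
    """R^t = sum_i gamma^i * r_{t+i}, evaluated from the first step."""
    total = 0.0
    for r in reversed(rewards):
        total = r + gamma * total
    return total

# Small pursuit-style episode: time penalties followed by a capture bonus
print(discounted_return([-1.0, -1.0, -1.0, 10.0]))
```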
III. OPPONENT MODELING
This section first introduces the generating process of the evading vehicles' joint dynamic strategy based on Q-learning. Then, the SEOJSM mechanism is presented.
A. Joint Dynamic Strategy of Evading Vehicles
In traditional opponent modeling methods, an evading vehicle tends to choose a strategy from a few preset fixed strategies based on the current observation. Previous adversary modeling is set in two-agent scenarios where the decision-making state space is small, but the decision-making state space increases exponentially in the T2TMVP problem; therefore, preset strategies are not enough to make effective decisions. Moreover, due to the huge state space of evading vehicles in the continuous scene of complicated urban traffic, such preset evading strategies are not applicable: they focus on dealing with a few simple cases, and it is difficult for them to make cooperative decisions from the perspective of a single agent. To this end, this paper designs a novel strategy-generating approach for multiple evading vehicles in the T2TMVP problem, as shown in Fig. 3 (a), (b). Q-learning is one of the effective algorithms of reinforcement learning. It introduces a Q-table mapping between state-action pairs and the corresponding estimated future rewards into the action selection process of an agent. According to the current state s, an agent selects the action a following policy π. If the state-action pair is not contained in the Q-table, then the action-utility function Q(s, a) will be updated. Inspired by this, this paper leverages the Q-learning method to generate the joint dynamic strategy of the evading vehicles. Thus, the evading vehicles enable intelligent execution based on the current states.
This paper uses a cell state representation approach to model the state space of evading vehicles. We divide each lane in an agent's visual field into two cells, as shown in Fig. 3 (a). For an evading agent m on lane l, the lane l and the connected lane ahead l_str are fully observed, and the connected lanes l_rig, l_lef in the lateral visual field are restricted to half of the lane length for realism. Therefore, the lanes in the visual field are divided into six cells Cel_l^1, Cel_l^2, Cel_{l_str}^1, Cel_{l_str}^2, Cel_{rig}^1, Cel_{lef}^1, as shown in Fig. 3 (a). Moreover, we let the partial observation oe_m^t at time step t consist of the number of pursuing agents in every cell of the visual field, forming oe_m^t = [num^t(Cel_l^1), num^t(Cel_l^2), num^t(Cel_{l_str}^1), num^t(Cel_{l_str}^2), num^t(Cel_{rig}^1), num^t(Cel_{lef}^1)]. In the training process of the evading agents, in the same complicated urban traffic scene as mentioned before, the pursuing agents are set to move randomly, and the evading agents choose the optimal actions using Q-learning, as shown in Fig. 3 (b). Decision-making takes place only when vehicles reach an intersection. Therefore, the action space A consists of three actions: going straight, turning right, and turning left. At time step t, via cooperation among the evading vehicles, the partial observations {oe_1^t, oe_2^t, . . . , oe_M^t} form the joint observation oe^t. The evading agent m selects an action a_m^t ∈ A and performs it conditioned on the current joint observation oe^t. After receiving the environment reward r_m^t, the Q-value is updated based on the following Bellman equation:

Q(oe^t, a_m^t) ← Q(oe^t, a_m^t) + α [ r_m^t + γ max_{a'} Q(oe^{t+1}, a') − Q(oe^t, a_m^t) ],

where α is the learning rate and γ is the discount factor. During the training process, the Q-table is constantly updated with the new state-action pairs the evading vehicles encounter, and the Q-value corresponding to a state-action pair is replaced by a higher one. Each evading vehicle selects the action with the highest Q-value based on the current joint partial observation. That is exactly the state-sensitive joint dynamic strategy of evading vehicles which guides the competition with the pursuing vehicles. It not only provides a dynamic strategy based on the current observation for the evading vehicles but also accounts for team tactics.
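A minimal tabular sketch of this action selection and update, with the joint observation serialized into a hashable key, might look as follows. The hyperparameters, reward value, and observation encoding are placeholders rather than the paper's settings.

```python
import random
from collections import defaultdict

ACTIONS = ["straight", "left", "right"]
Q = defaultdict(float)                       # Q[(obs_key, action)] -> value

def choose_action(obs_key, eps=0.1):
    """Epsilon-greedy selection over the three intersection actions."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(obs_key, a)])

def q_update(obs_key, action, reward, next_obs_key, alpha=0.1, gamma=0.95):
    """Bellman update: Q <- Q + alpha * (r + gamma * max_a' Q' - Q)."""
    best_next = max(Q[(next_obs_key, a)] for a in ACTIONS)
    Q[(obs_key, action)] += alpha * (reward + gamma * best_next
                                     - Q[(obs_key, action)])

# obs key: counts of pursuers in the six visual-field cells (illustrative)
obs, nxt = (0, 1, 0, 0, 2, 0), (0, 0, 0, 1, 1, 0)
a = choose_action(obs)
q_update(obs, a, reward=1.0, next_obs_key=nxt)
```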
B. Sequential Encoding-Based Opponents' Joint Strategy Modeling Mechanism
This paper proposes the SEOJSM mechanism to learn the joint strategy model of the evading vehicles, as shown in Fig. 3 (c). We leverage a multi-layer perceptron (MLP), consisting of multiple fully connected layers with the ELU activation function, as an encoder to build up cognition of the evading vehicles' joint dynamic strategy.
In the T2TMVP problem, since cooperation exists in both the pursuing team and the evading team, a clear position representation is important for obtaining effective information and further improving pursuit efficiency. At time step t, the position representation of vehicle k is given by Loc_k = [Emb_l, Emb_{l_str}, Emb_{l_lef}, Emb_{l_rig}, dis_{k,l}^t]. Here, Emb_l is the one-hot encoding of the lane l on which the agent k is located; Emb_{l_str}, Emb_{l_lef}, Emb_{l_rig} represent the one-hot encodings of the lanes the agent k can access by going straight, turning left, and turning right, respectively; and dis_{k,l}^t is the distance between the position of agent k and the start of the located lane l. The partial observation op_n^t of pursuing agent n combines these position representations with its velocity v_n^t. In the SEOJSM mechanism, we feed it with the joint historical partial observations of all pursuing agents over the past h time steps, (op^{t−3}, op^{t−2}, op^{t−1}), and the joint strategy model of all evading vehicles π_e^t is output, realizing the building up of strategy cognition towards the evading vehicles.
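A plausible PyTorch sketch of such an encoder is given below. The three 128-unit fully connected layers with ELU activations follow the description in Section V, while the observation dimension, the strategy-model dimension, and the h = 3 stacking are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SEOJSMEncoder(nn.Module):
    """Encode the last h joint observations into an opponents'
    joint strategy model (a fixed-size embedding)."""

    def __init__(self, obs_dim, h=3, strategy_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(h * obs_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, strategy_dim),
        )

    def forward(self, obs_history):
        # obs_history: (batch, h, obs_dim) -> (batch, strategy_dim)
        return self.net(obs_history.flatten(start_dim=1))

enc = SEOJSMEncoder(obs_dim=64)
pi_e = enc(torch.randn(8, 3, 64))            # batch of joint strategy models
print(pi_e.shape)                            # torch.Size([8, 32])
```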
Our key insight is that building the evading vehicles' joint strategy model using only partial observations is a concise, realistic, and effective method. Knowing the likely strategy of the opponents influences a pursuing vehicle's beliefs over environmental states and thus informs its planning of future actions. The reason for generating a joint strategy model instead of separate strategy models for every evading agent is that the evading team also works collaboratively; thus the pursuing agents can not only infer a single evading agent's strategy but also recognize the tactics of the whole team from the joint strategy model.
IV. DEEP Q-NETWORK WITH UNITED LOSS FOR PURSUING VEHICLES
In this section, we first introduce the deliberately designed ingredients of deep Q-networks for pursuing vehicles in the T2TMVP problem. Then we illustrate the training process with the mutual information-united loss.
A. Opponent-Aware Deep Q-networks for Pursuing Vehicles
DQN, as an upgraded version of Q-learning, is widely used in intelligent decision-making with discrete action spaces. This paper leverages DQN to provide decision-making for each pursuing vehicle in the T2TMVP problem. Based on Q-learning, DQN sets up a neural network to estimate the current action-utility function Q and outputs the Q-value of each action conditioned on the current state. The DQN-based agent implements optimal decision-making by selecting the action with the highest Q-value. In this paper, we adapt DQN to the T2TMVP problem with the following paradigm settings of reinforcement learning, including the state representation, the action space, and the reward structure.
In the T2TMVP problem, the evading vehicles conduct flexible decisions according to the current state, making them elusive for the pursuing vehicles; thus the dynamic strategy of the evading vehicles brings extreme non-stationarity to the pursuit task. In this paper, we feed two parts of input into DQN for efficient decision-making: the joint partial observation of the pursuing vehicles and the joint strategy model of the evading vehicles. As described in Section III, the joint partial observation of the pursuing vehicles at time step t is represented as op^t = [Loc_N, Loc_M, v^t_total, adj]. The joint strategy model of the evading vehicles output by the SEOJSM mechanism is π_e^t. We concatenate the above two parts, forming the state s^t = [op^t, π_e^t], and then leverage the concatenation to jointly predict the Q-value.
In the decision-making process of DQN, the neural network eventually outputs the Q-value for each action, indicating the maximized future reward of implementing that action. In the T2TMVP problem, action execution takes place only when vehicles reach an intersection. Therefore, the action space follows the general intuition A = {a_1, a_2, a_3}, containing going straight (a_1), turning left (a_2), and turning right (a_3).
At each time step t, each pursuing vehicle individually receives a reward designed to incentivize the capture of evading vehicles. For pursuing vehicle n, the reward function r_n^t is formulated as

r_n^t = r_{n,dis}^t + r_{n,time}^t + r_{n,task}^t.

Here, the reward function is deliberately designed in three aspects. The distance-based reward r_{n,dis}^t = −λ (dis_{n,m}^t − dis_{n,m}^{t−1}) is responsible for impelling pursuing vehicle n to reduce the distance to the nearest evading vehicle m in its visual field and to move continuously towards the opponent; λ is the distance-based reward factor. To incentivize faster pursuit, a time-based reward r_{n,time}^t = −c imposes a negative reward c at every time step until a successful pursuit is completed. When an evading agent is captured, all pursuing agents are given a task-based reward r_{n,task}^t indicating the effectiveness of cooperation.
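A direct transcription of this reward might read as follows; the coefficients λ, c, and the task bonus are placeholders, since their values are not listed here.

```python
def pursuit_reward(dist_now, dist_prev, captured,
                   lam=0.1, c=0.05, task_bonus=10.0):
    """r = r_dis + r_time + r_task for one pursuing agent at one step.

    dist_now/dist_prev: distance to the nearest visible evader at the
    current and previous step; captured: whether any evader was caught.
    """
    r_dis = -lam * (dist_now - dist_prev)    # reward for closing the gap
    r_time = -c                              # constant per-step time penalty
    r_task = task_bonus if captured else 0.0
    return r_dis + r_time + r_task

print(pursuit_reward(dist_now=45.0, dist_prev=50.0, captured=False))
```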
B. Training with Mutual Information-United Loss
The training regime for OARLM2I2 is identical to that of the original DQN. DQN adopts a double-network structure: the online network with parameters θ^Q approximates Q(s_n^t, a_n^t) and updates θ^Q, while the target network with parameters θ^{Q'} calculates the Q-target y^{(t)} = r_n^t + γ max_{a_n^{t+1}} Q(s_n^{t+1}, a_n^{t+1} | θ^{Q'}) and updates its parameters θ^{Q'} from θ^Q at regular intervals. The double-network structure avoids the instability caused by updating the Q-function while obtaining the Q-value, thus making the update smooth and accelerating the convergence of the algorithm.
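The double-network target computation can be sketched as follows; the toy linear Q-networks, batch shapes, and hard synchronization are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def dqn_targets(online, target, batch, gamma=0.99):
    """y = r + gamma * max_a' Q_target(s', a') for a sampled batch."""
    s, a, r, s_next = batch                  # tensors from the replay buffer
    with torch.no_grad():
        y = r + gamma * target(s_next).max(dim=1).values
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    return q, y                              # inputs to the MSE objective

online = nn.Linear(16, 3)                    # toy Q-networks: 3 actions
target = nn.Linear(16, 3)
target.load_state_dict(online.state_dict()) # periodic hard synchronization

batch = (torch.randn(8, 16), torch.randint(0, 3, (8,)),
         torch.randn(8), torch.randn(8, 16))
q, y = dqn_targets(online, target, batch)
loss = torch.mean((y - q) ** 2)              # the MSE loss of original DQN
```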
In the original DQN, the Q-function can be learned by minimizing the following MSE loss between the Q-target y^{(t)} and Q(s_n^t, a_n^t). The expectation term is approximated by sampling a batch uniformly at random from a replay buffer containing past transition tuples. The original optimizing objective is as follows:

L_Q(θ^Q) = E[ (y^{(t)} − Q(s_n^t, a_n^t | θ^Q))^2 ].

Noting that the SEOJSM mechanism cannot by itself guarantee accurate anticipation of the opponents' strategy model, we introduce an explicit regularization to guide the modeling process. Mutual information (MI) measures the information shared by two variables, i.e. the degree to which the uncertainty of a variable X is reduced by obtaining a variable Y. The similarity of X and Y increases as the mutual information increases.
Concerning the T2TMVP problem, the pursuing vehicles are eager to know the evading vehicles' escaping strategies at the next intersection so as to improve pursuit efficiency. As such, the quality of the opponents' strategy model depends on whether it can accurately infer the opponents' next strategy. Taking this cue, this paper optimizes the opponent modeling model by maximizing the mutual information between the evading vehicles' joint strategy model π_e^t and the pursuing vehicles' joint observations op^t, calculated as

I(π_e^t ; op^t) = H(π_e^t) − H(π_e^t | op^t).

Algorithm 1: Training process of OARLM2I2
1: Initialize replay buffer R
2: Initialize action-value function Q with random weights θ^Q
3: Initialize target action-value function Q' with random weights θ^{Q'} ← θ^Q
4: for episode = 1 to Ep do
5:   Receive initial state s^1 = [op^1, π_e^1]
6:   Initialize and store π_e^1 in observation pool H
7:   for timestep t = 1 to T do
8:     for each pursuing agent n do
9:       Choose action a_n^t = π(op^t | θ^Q) with ε-greedy exploration
10:      Execute a_n^t; receive reward r_n^t and partial observation op_n^{t+1}
11:    end
12:    Form the joint observation op^{t+1} and store it in H
13:    Obtain π_e^{t+1} from the SEOJSM mechanism using the last h joint observations in H; set s^{t+1} = [op^{t+1}, π_e^{t+1}]
14:    Store the transition (s^t, a^t, r^t, s^{t+1}) in R
15:    Sample a random minibatch of transitions from R
16:    Update θ^Q by minimizing the loss in Equation (5)
17:    Update the parameters of the target action-value function θ^{Q'} = θ^Q with period C
18:    s^t = s^{t+1}, π_e^t = π_e^{t+1}
19:  end
20: end
If knowledge of the joint observations op^t does not reduce the entropy H(π_e^t), the mutual information becomes zero, which means the failure of the opponent modeling model. Therefore, the overall optimizing objective is

L(θ^Q) = L_Q(θ^Q) − I(π_e^t ; op^t). (5)

The overall training process of OARLM2I2 is demonstrated in Algorithm 1. At the beginning of each episode, the joint partial observation of all pursuing vehicles op^1 and the joint strategy model of the evading vehicles π_e^1 are initialized. At each time step t, each pursuing vehicle n selects an action a_n^t according to the current policy with probability 1 − ε, and makes a random choice with probability ε. The immediate reward r_n^t is provided by the environment, and the new partial observation op_n^{t+1} is received, forming the joint partial observation op^{t+1}, which is stored in the observation pool H. Then, the SEOJSM mechanism takes the h-step joint partial observations (op^{t−2}, op^{t−1}, op^t) as input and outputs the joint strategy model of the evading vehicles π_e^{t+1}, which forms the state s_n^{t+1} together with the joint partial observation op^{t+1}. The transition (s_n^t, a_n^t, r_n^t, s_n^{t+1}) is then stored in the replay buffer R. Finally, samples are drawn from the replay buffer R to update all networks.
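The paper does not spell out how the mutual information term is estimated in practice, so the sketch below unites the two terms using a simple histogram-based MI estimate on scalar summaries, purely to illustrate the structure of the objective. The weight lam and the choice of summaries are assumptions, not the authors' design.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in nats from paired scalar samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz]
                 / (px[:, None] * py[None, :])[nz])))

def united_loss(td_errors, strategy_summary, obs_summary, lam=0.1):
    """MSE term minus a weighted MI term (the MI is to be maximized)."""
    mse = float(np.mean(np.square(td_errors)))
    mi = mutual_information(strategy_summary, obs_summary)
    return mse - lam * mi

rng = np.random.default_rng(1)
z = rng.normal(size=512)                     # correlated summaries -> high MI
print(united_loss(rng.normal(size=512), z, z + 0.1 * rng.normal(size=512)))
```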
V. PERFORMANCE COMPARISON AND ANALYSIS
This section discusses the performance of our OARLM2I2 method in the complicated urban traffic scene. First, we introduce the experiment settings of the simulation. Then, we compare the proposed OARLM2I2 with the state-of-the-art RL methods DQN, PPO, and QMIX, as well as with OARLM2I2 without access to the road topology information, in four aspects: long-term reward, discounted reward, convergence, and optimal performance. The result analysis is detailed in the subsections.
A. Simulation Settings
To train and evaluate our OARLM2I2, we implement T2TMVP in the complicated urban traffic setting on SUMO. We construct an urban traffic scene with 3 × 3 intersections and 48 lanes, where background vehicle flows follow preset routes. Two evading vehicles and four pursuing vehicles compete in the scene. The parameters concerning the pursuit scenario are given in the upper part of Table I. The SEOJSM mechanism is designed with three fully-connected hidden layers of 128 units, with the ELU activation function used for each hidden layer. In the deep Q-network, the maximum number of steps per episode is restricted. The batch size is set to 32 and the learning rate to 0.001. The replay buffer capacity is 10000 and the Adam optimizer is used during training. The GPU used in training is an NVIDIA Tesla T4. The parameters of OARLM2I2 are shown in the lower part of Table I.
B. Ablation Analysis
This subsection analyzes the effectiveness of the SEOJSM mechanism and the mutual information-united loss from several perspectives. We first analyze the optimal performance, including the best undiscounted return R^t = Σ_{i=0}^∞ r^{t+i} and the minimum number of time steps to finish the pursuit, as shown in Table II. The best undiscounted reward represents the overall performance related to three aspects: time, distance, and task. The best time step is the embodiment of pursuit efficiency. The proposed OARLM2I2 outperforms DQN by 4.54% and 27.83% on the best undiscounted reward and the best pursuit time steps, respectively. These convincing results indicate that the SEOJSM mechanism and the mutual information-united loss jointly boost the pursuit performance from both the reward and efficiency perspectives. In particular, OARLM2I2 without road topology information surprisingly achieves an 8.86% faster pursuit than DQN, despite being inferior to OARLM2I2, which confirms that OARLM2I2, equipped with the joint strategy model output by the SEOJSM mechanism, has a better ability to deal with complex pursuit tasks.
The undiscounted reward enables an intuitive description of the direct feedback from the environment in training.
In this part, we analyze the ablation experiment based on the undiscounted return, as shown in Fig. 4. Compared with DQN, OARLM2I2 combines the SEOJSM mechanism and the mutual information-united loss, which are considered inseparable in this paper. As depicted in Fig. 4, OARLM2I2 outperforms DQN in general. Especially in the beginning and convergent periods of training, the superior performance indicates that the SEOJSM mechanism and the mutual information-united loss play an important role in the initial exploration and the final performance. Moreover, OARLM2I2-without-adj eventually achieves performance competitive with OARLM2I2, proving the effective assistance of OARLM2I2 in the pursuing vehicles' decision-making process despite the inaccessibility of road topology information.
Convergence is a vital factor in measuring the efficiency of an algorithm. As depicted in Fig. 5, all experimental methods present a good convergence trend in our scenario. It is obvious that the proposed OARLM2I2 method achieves remarkable superiority over DQN. This confirms that the mutual information-united loss assists convergence to a great extent. It is reasonable to infer that OARLM2I2 can generate effective policies for pursuing vehicles more quickly by inferring the intention of the opponents' strategy to decrease the non-stationarity. We also compare the loss of OARLM2I2 without road topology information with that of OARLM2I2 and DQN. In the conspicuous training period from around step 300*10 to 600*10, the loss of OARLM2I2-without-adj soars and loses its original advantage. This can be interpreted as a consequence of exploration under the greedy strategy: the sampling of the action space increases, resulting in large fluctuations in the descent. But as shown in the final result, the convergence of OARLM2I2-without-adj shows little difference from OARLM2I2. This indicates that OARLM2I2 without topology information needs time to constantly establish cognition of the environment, and eventually OARLM2I2 can effectively suppress the uncertainty of the complicated environment via competitive decision-making.
C. Comparison among Algorithms
This paper compares the optimal performance of each algorithm in terms of the best undiscounted return and the minimum number of time steps to finish the pursuit, as shown in Table II. Analyzing the specific data, the best undiscounted return is 2.03% higher than that of QMIX, the second-best method, and 17.81% higher than that of PPO, the worst-performing method. In the absence of road topology information, OARLM2I2 still realizes a 7.33% advantage over PPO. Thus, inference on the opponents' dynamic strategy is vital in addressing the non-stationary pursuit task. In terms of pursuit efficiency, our proposed OARLM2I2 makes significant advances, outperforming PPO and QMIX by 18.83% and 17.78%, respectively, and outperforming the other algorithms by 21.48% on average. In general, OARLM2I2 outperforms the other algorithms by achieving the highest discounted return and accomplishing the pursuit in the shortest time. From the above analysis, the superiority of the proposed OARLM2I2 indicates the effectiveness of alleviating the non-stationarity brought by the dynamic strategy of opponents through the SEOJSM mechanism.
More intuitively, this paper compares the discounted reward of the proposed OARLM2I2 and the state-of-the-art methods PPO and QMIX to analyze the long-term performance, as shown in Fig. 6. The discounted reward R^t = Σ_{i=0}^∞ γ^i r^{t+i}, γ ∈ [0, 1], which can keep the algorithm from falling into local optima and fosters a long-term policy, is a good index for comparing long-term performance. OARLM2I2 converges to the highest reward, followed by QMIX and PPO. It is worth noting that, although the reward setting is rather harsh due to the high proportion of the time-based reward, OARLM2I2 obviously presents an impressive advance at first. At the beginning of the pursuit, the pursuing vehicles have obtained little experience of the opponents and uncertainty is at its highest; thus, this competitive performance indicates that OARLM2I2 can cope with the non-stationarity of opponents to enhance decision-making and effective policy generation.
VI. CONCLUSION AND FUTURE WORKS
This paper focuses on the T2TMVP problem in a complicated traffic scene with background vehicle flows and traffic lights. We propose an opponent-aware reinforcement learning via maximizing mutual information indicator (OARLM2I2) method to improve pursuit efficiency by tackling the non-stationarity brought by the opponents' dynamic strategy. A SEOJSM mechanism is proposed to assist the decision-making process of the pursuing vehicles by building up cognition of the evading vehicles' dynamic strategy. Moreover, this paper proposes the mutual information-united loss to synchronously update the SEOJSM mechanism and the DQN-based multi-agent decision-making model, accelerating the convergence of OARLM2I2. Finally, we verify the OARLM2I2 method in a simulated complicated traffic scene based on SUMO. Extensive experiments demonstrate that our approach outperforms other baselines by 21.48% on average in reducing pursuit time and presents better convergence. Our future work will mainly focus on exploring more complex scenarios, such as larger traffic scenes and different ratios of pursuing vehicles to evading vehicles. Another interesting direction lies in adversarial reinforcement learning, i.e. training the pursuing vehicles and evading vehicles simultaneously, which will impose challenges by introducing more non-stationarity. | 2022-10-25T01:16:15.329Z | 2022-10-24T00:00:00.000 | {
"year": 2022,
"sha1": "a03a73e9380ba233d37c4ea511059781b0ad28b1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a03a73e9380ba233d37c4ea511059781b0ad28b1",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
119034245 | pes2o/s2orc | v3-fos-license | Strong cosmic censorship: taking the rough with the smooth
It has been argued that the strong cosmic censorship conjecture is violated by Reissner-Nordström-de Sitter black holes: for near-extremal black holes, generic scalar field perturbations arising from smooth initial data have finite energy at the Cauchy horizon even though they are not continuously differentiable there. In this paper, we consider the analogous problem for coupled gravitational and electromagnetic perturbations. We find that such perturbations exhibit a much worse violation of strong cosmic censorship: for a sufficiently large near-extremal black hole, perturbations arising from smooth initial data can be extended through the Cauchy horizon in an arbitrarily smooth way. This is in apparent contradiction with an old argument in favour of strong cosmic censorship. We resolve this contradiction by showing that this old argument is valid only for initial data that is not smooth. This is in agreement with the recent proposal that, to recover strong cosmic censorship, one must allow rough initial data.
Introduction
The strong cosmic censorship conjecture [1] asserts that, in some physically relevant class of initial data for Einstein's equation (e.g. smooth, complete, asymptotically flat), the maximal Cauchy development is, generically, inextendible. In other words, classical physics is predictable from the initial data. The Reissner-Nordström and Kerr solutions of the vacuum Einstein equation (with vanishing cosmological constant Λ) admit Cauchy horizons.
Consistency with the conjecture requires that such a Cauchy horizon is non-generic: it is expected that, if the initial data is perturbed, then generically the resulting perturbed spacetime will not admit a Cauchy horizon [2][3][4][5].
Making this conjecture precise is surprisingly subtle. 1 Various arguments indicate that, when the initial data is perturbed, the spacetime metric (and other fields) can be extended continuously across a Cauchy horizon [7][8][9][10]. For the Kerr solution, this has been proved recently [11]. So the "C^0 formulation" of the strong cosmic censorship conjecture (where "inextendible" means "inextendible with continuous metric") is false. However, it has also been argued that, generically, curvature invariants diverge at the Cauchy horizon, so the extended spacetime cannot be C^2 there [5]. Hence the C^2 formulation of strong cosmic censorship appears to be true. 2 This is not the end of the story because the total tidal distortion experienced by an observer crossing the Cauchy horizon can remain finite, so the divergence in curvature might not be strong enough to destroy a macroscopic observer [8]. Therefore demanding a C^2 metric appears to be too strong a requirement.
Ultimately, the question of whether or not an observer can cross the Cauchy horizon depends on the equations of motion for the matter that the observer is made of. And of course the observer will have an effect on the geometry determined by the Einstein equation. This motivates formulating the strong cosmic censorship conjecture as the statement that the maximal Cauchy development should be inextendible as a solution of the equations of motion. Since the equations of motion are second order, one might think this implies that the fields should be C^2. However, one can still make sense of the equations of motion with lower smoothness than this by considering weak solutions. 3 Weak solutions have important physical applications, e.g. they describe shocks in a compressible perfect fluid. For the vacuum Einstein equation, a weak solution must have locally square integrable Christoffel symbols in some chart. This leads to Christodoulou's formulation [13] of the strong cosmic censorship conjecture: that, generically, the maximal Cauchy development is inextendible as a spacetime with locally square integrable Christoffel symbols. If this is correct then, generically, one cannot extend beyond the Cauchy horizon consistently with the classical equations of motion.
A popular toy model for studying strong cosmic censorship is a linear massless scalar field. In this case, the analogue of the Christodoulou formulation of strong cosmic censorship is that, for generic smooth initial data, at the Cauchy horizon the scalar field will not belong to the Sobolev space H^1_loc of functions that are locally square integrable with a locally square integrable gradient. 4 More informally, the energy of the scalar field will diverge at the Cauchy horizon. Here "energy" refers to the energy on a spacelike surface intersecting the Cauchy horizon, according to an observer with velocity normal to the surface. This linear version of the Christodoulou formulation of the strong cosmic censorship conjecture has been proved to be true for Reissner-Nordström [14] and Kerr [15] black holes with Λ = 0.
The instability of the Cauchy horizon arises from a blue-shifting of perturbations entering the black hole at late time. It was observed long ago that this effect is weaker for Λ > 0 because there is a competing red-shifting of late time perturbations since such perturbations can disperse by falling across the cosmological horizon. 5 This led to the claim that the C^2 version of strong cosmic censorship is violated for near-extremal Reissner-Nordström-de Sitter (RNdS) [16] or Kerr-de Sitter (Kerr-dS) [17] black holes. However, subsequent work [18] argued that this conclusion is invalid because it neglects backscattering of outgoing radiation just inside the event horizon. It was argued that, in the presence of such outgoing radiation, the Cauchy horizon instability is still strong enough to ensure that the C^2 formulation of strong cosmic censorship is respected. 6 Nevertheless, it has been conjectured that the Christodoulou formulation would be violated for near-extremal RNdS and Kerr-dS black holes [6]. As we have discussed above, this formulation seems more relevant than the C^2 formulation.
Interest in this topic has been revived by recent work of Cardoso et al [19]. This work considered linear massless scalar field perturbations of a RNdS black hole. It was found that, for a near-extremal black hole, such perturbations have finite energy at the Cauchy horizon and therefore violate the toy model of strong cosmic censorship discussed above. Going beyond the toy model, one can consider the backreaction of the scalar field on the geometry using nonlinear results of Refs. [20][21][22]. Cardoso et al argued that, at the nonlinear level, such perturbations would respect the C^2 formulation of strong cosmic censorship but, for a near-extremal black hole, the Christodoulou formulation would be violated, in agreement with the conjecture of Ref. [6].
This raises the question of whether the same worrying behaviour is exhibited in the more physical case of Kerr-dS black holes. Surprisingly, the answer appears to be negative: Ref. [23] argued that the Christodoulou formulation of strong cosmic censorship is respected by gravitational (or massless scalar field) perturbations of such black holes, even close to extremality. Thus the evidence suggests that, for Λ > 0, the Christodoulou formulation of strong cosmic censorship is respected by the vacuum Einstein equation but not by the Einstein-Maxwell-massless scalar field equations! 7
Our discussion so far has concerned only perturbations arising from smooth initial data. Very recently, Dafermos and Shlapentokh-Rothman (DSR) [25] have suggested a way of rescuing strong cosmic censorship with Λ > 0, namely to consider initial perturbations which are not smooth. As discussed above, the equations of motion can be formulated even with low differentiability. For linear massless scalar field perturbations of RNdS, DSR proved that, generically, the solution at the Cauchy horizon is less regular (in the sense of Sobolev spaces) than the initial data. Now there will be some minimum level of regularity which is acceptable, either physically or mathematically, e.g. for finiteness of energy or (in the nonlinear context) for local well-posedness of the initial value problem. The DSR result suggests a "rough" (i.e. non-smooth) formulation of the strong cosmic censorship conjecture: if one has an initial perturbation with the minimum acceptable level of regularity then, generically, the perturbation at the Cauchy horizon will not have this minimum acceptable regularity [25].
5 For Λ < 0 one would expect the Cauchy horizon instability to be stronger than for Λ = 0 because perturbations outside a black hole decay very slowly. It has been suggested that the C^0 formulation of strong cosmic censorship might be valid for Λ < 0 [11].
6 There is a problem with this claim which we will discuss below.
7 For massless scalar field perturbations, it has been argued that a near-extremal Kerr-Newman-dS black hole respects strong cosmic censorship provided that it rotates sufficiently rapidly [24]. The latter condition cannot be relaxed because the zero rotation limit gives RNdS, for which strong cosmic censorship is violated.
A lack of smoothness of the initial perturbation was already present, although not noticed, in the earlier work of Ref. [18]. As we will show in section 2, the argument of Ref. [18] overlooks a subtlety which implies that this argument only works for initial data that is not C^1 at the event horizon. Thus the work of Ref. [18] does not establish that the C^2 formulation of strong cosmic censorship is respected, because the initial perturbation does not belong to C^2. Instead, as we will explain, the argument of Ref. [18] is evidence in favour of the rough version of strong cosmic censorship proposed by DSR.
In this paper, we will hammer a few more nails into the coffin of the smooth versions of strong cosmic censorship for RNdS. We will study linearized electromagnetic and gravitational perturbations of a RNdS black hole. Our results assume that the perturbation arises from smooth initial data. We will show that, near extremality, Christodoulou's formulation of strong cosmic censorship is violated by such perturbations. This is analogous to the massless scalar field results of Ref. [19]. However, in contrast with that case, our results show that, in pure Einstein-Maxwell theory, the C^2 version of strong cosmic censorship is also violated near extremality. In fact, generic perturbations arising from smooth initial data can be arbitrarily smooth at the Cauchy horizon. More precisely, if one desires that every perturbation arising from smooth initial data is C^r at the Cauchy horizon then this can be achieved by taking the black hole to be close enough to extremality and large enough. Hence, in pure Einstein-Maxwell theory with Λ > 0, not only are the Christodoulou and C^2 formulations of strong cosmic censorship violated (for smooth initial data), but so is the C^r formulation for any r ≥ 2! This paper is organized as follows. In section 2 we review the RNdS solution and discuss the arguments of Refs. [16,18]. We will explain the connection between strong cosmic censorship and quasinormal modes of the RNdS solution. In sections 3-6 we discuss linearized electromagnetic and gravitational perturbations of RNdS. We will study these perturbations using the Kodama-Ishibashi (KI) formalism [26]. In section 3 we determine the condition for a linearized gravitoelectromagnetic perturbation to be extendible across the Cauchy horizon as a weak solution of the equations of motion. In section 4 we give the KI master equations and boundary conditions that we later solve analytically and numerically. We also show that vector-type and scalar-type perturbations in RNdS are isospectral, i.e. they have the same frequency spectrum. In section 5, we show that RNdS gravitoelectromagnetic quasinormal modes fall into three families, as in the case of the quasinormal modes of a scalar field discussed in [19]. For all of them, there are regimes in the parameter space where we can derive analytical approximations. We compare these with the exact numerical data, which proves valuable for identifying and classifying the quasinormal mode families. Finally, in section 6 we present our main results for the spectral gap of gravitational and electromagnetic perturbations. Section 7 contains further discussion of the implications of our results.
where R is the Ricci scalar of the metric g and F = dA is the Maxwell field strength associated to the potential 1-form A. We define the de Sitter radius L by Λ = 3/L 2 . In static coordinates (t, r, θ, φ), the Reissner-Nordström de Sitter (RNdS) solution with mass and charge parameters M and Q is

ds 2 = −f (r) dt 2 + f (r) −1 dr 2 + r 2 dΩ 2 2 , (2.2)

with dΩ 2 2 being the line element of a unit radius S 2 (parametrized by θ and φ) and

f (r) = 1 − 2M/r + Q 2 /r 2 − r 2 /L 2 , E 0 = Q/r 2 . (2.3)

For an appropriate range of parameters the function f has 3 positive roots r − ≤ r + ≤ r c corresponding to the Cauchy horizon CH + , event horizon H + R and cosmological horizon H + C respectively. We will denote the (positive) surface gravities associated to each of these three horizons as κ − , κ + and κ c , respectively. For any non-extremal RNdS black hole it can be shown that [18]

κ − > κ + . (2.4)
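For reference, the surface gravities used throughout can be computed directly from the roots of f ; a standard expression consistent with the conventions above (our addition, not an equation of the original text) is:

```latex
\kappa_i = \frac{1}{2}\left| f'(r_i) \right| , \qquad i \in \{-, +, c\} .
```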
The extremal configuration occurs when κ + and κ − vanish. This happens at a critical charge Q = Q ext , determined by requiring that f has a double root at r = r − = r + . When presenting many of our results and associated plots we will parametrize the RNdS solution using the dimensionless parameters Q/Q ext and y + ≡ r + /r c . The causal structure of a non-extremal RNdS black hole is shown in Fig. 1. Region I is the region with r + < r < r c between the event horizon and cosmological horizon, i.e. the black hole exterior. Region II is the black hole interior, where r − < r < r + .
In region I we define the tortoise coordinate r * by dr * = dr/f (r), and we fix the constant of integration by imposing r * = 0 at r = (r + + r c )/2. We then define Eddington-Finkelstein coordinates in region I by u = t − r * and v = t + r * . In ingoing Eddington-Finkelstein coordinates (v, r, θ, φ) the metric takes the form

ds 2 = −f (r) dv 2 + 2 dv dr + r 2 dΩ 2 2 .

This metric can be analytically extended into region II so these coordinates cover regions I and II of Fig. 1. We will also make use of Kruskal coordinates near the event horizon. These are defined in region I by

U + = −e −κ + u , V + = e κ + v ,

and these coordinates also allow the metric to be analytically extended into region II (where U + > 0, V + > 0) as well as two further regions not shown in Fig. 1. The future event horizon H + R is the surface U + = 0. In region II, we have V + = e κ + v and we define the coordinate u in this region by U + = e −κ + u . Note that u → +∞ as we approach H + R in either region I or region II. In region II we define t and r * by u = t − r * and v = t + r * . The coordinate r * ranges from −∞ at the event horizons H + L and H + R to +∞ at the Cauchy horizons CH + L and CH + R (see Fig. 1). In region II, the ingoing Eddington-Finkelstein coordinates are smooth at the "left" component CH + L of the Cauchy horizon. We will be interested in the "right" component of the Cauchy horizon CH + R . To introduce coordinates regular there, we use outgoing Eddington-Finkelstein coordinates (u, r, θ, φ). The metric is

ds 2 = −f (r) du 2 − 2 du dr + r 2 dΩ 2 2 . (2.10)

The Cauchy horizon CH + R is the surface r = r − in these coordinates. In region II, we define Kruskal coordinates near the Cauchy horizon as

U − = e κ − u , V − = −e −κ − v . (2.11)

The Cauchy horizon CH + R is the surface V − = 0 in these coordinates. Finally, in region I, we define Kruskal coordinates at the cosmological horizon by

U c = −e κ c u , V c = e −κ c v . (2.12)

The future cosmological horizon H + C is the surface V c = 0.
The work of Moss et al.
Strong cosmic censorship for RNdS black holes was first studied by Moss and collaborators in a series of papers. In this section we will review the arguments of Moss et al presented in Refs. [16,27,18]. The analysis of [16] concluded that strong cosmic censorship is violated by some RNdS black holes. However, this conclusion was modified in Refs. [27,18], resulting in the revised conclusion of Ref. [18] that in fact (the C 2 version of) strong cosmic censorship is never violated by RNdS black holes. We will explain why this latter conclusion is valid only if one allows non-smooth initial data. We will consider perturbations by a scalar field Φ although the results of Moss et al apply also to the case of coupled electromagnetic and gravitational perturbations, which we will study later. One can prescribe initial data for the scalar field on the surface Σ of Fig. 1 since this is a Cauchy surface for regions I and II. Equivalently, one can prescribe (characteristic) initial data for the scalar field on the null surface H + L ∪ H − ∪ H − c . We will follow the latter approach. Given generic initial data for the scalar field, we want to know how the field behaves at the Cauchy horizon CH + R . This problem was first investigated by Mellor and Moss (MM) [16]. Their results (rederived below) indicate that the scalar field will fail to be C 1 at the Cauchy horizon if there exists a sufficiently slowly decaying quasinormal mode. More precisely, let α be the spectral gap, i.e. the distance from the real axis in frequency space to the lowest (slowest decaying) quasinormal frequency. Define

β ≡ α/κ − . (2.13)

MM showed that if β < 1 then the scalar field fails to be C 1 at CH + R . When gravitational backreaction is included, the blow-up of the derivatives of Φ at CH + R is expected to cause a blow up of curvature. Thus if β < 1 for all black holes then the C 2 version of strong cosmic censorship is expected to hold. However, by studying quasinormal modes, MM argued that RNdS black holes with |Q| ≈ M have β > 1, so the scalar field is C 1 at CH + R , which is evidence for a violation of the C 2 version of strong cosmic censorship.
MM modified this claim in Ref. [27]. They were motivated by earlier work on a toy model (null dust) [28] which suggested that the analysis of Ref. [16] missed an important effect arising from late-time ingoing radiation propagating along the cosmological horizon H + c . MM argued that, in the presence of such radiation, the scalar field will fail to be C 1 at CH + R if min(α, κ c )/κ − < 1. Subsequently, Brady, Moss and Myers (BMM) [18] argued that one must also include the effect of scattering of outgoing radiation propagating near H + R and in this case the scalar field will fail to be C 1 at CH + R if β < 1 where β = min(α, κ + , κ c )/κ − . In view of (2.4), this gives β < 1 for any non-extremal RNdS black hole and so BMM concluded that the C 2 version of strong cosmic censorship is always respected.
We will show that the arguments of Refs. [27,18] are valid only for initial data which is not smooth, in fact not even C 1 , at, respectively, the future cosmological horizon H + C or future event horizon H + R . Hence this work cannot be regarded as evidence in favour of the C 2 version of strong cosmic censorship because the initial data is not in C 2 . However, we will show that these arguments can be reinterpreted as evidence in favour of the rough version of strong cosmic censorship proposed in Ref. [25]. If one insists on smooth initial data then the original conclusion of MM is still valid: it is simply the quasinormal modes which determine whether or not strong cosmic censorship (in either the C 2 or Christodoulou formulation) is violated.
We will consider solutions which can be written as superpositions of mode solutions. A mode solution has the separable form

Φ = e −iωt R(ω, r) Y ℓm (θ, φ) ,

where Y ℓm is a spherical harmonic. Substituting this into the wave equation or Klein-Gordon equation (if the field is massive) one finds that the function R satisfies an equation of the form

d 2 R/dr 2 * + (ω 2 − V (r)) R = 0 , (2.15)

where the potential V (r) is independent of ω and vanishes exponentially fast as a function of r * as r * → ±∞ in either region I or II. We will start by considering solutions in region II. For real ω, by reformulating (2.15) as an integral equation, one can define two linearly independent solutions with the following behaviour as r * → −∞ in region II [4,16,29,25] 8 :

R out,+ ∼ e iωr * , R in,+ ∼ e −iωr * . (2.16)

R out,+ gives a scalar field solution Φ smooth on H + L and R in,+ gives a solution smooth on H + R (see Fig. 1). Similarly as r * → ∞ we can define two linearly independent solutions by R out,− ∼ e iωr * , R in,− ∼ e −iωr * , (2.17)

8 We use the notation of Ref. [25] although our mode functions differ from theirs by a factor of r. For us, R out,+ ∼ e iωr * means R out,+ = e iωr * R̂ out,+ where R̂ out,+ is a real analytic function of r for r − < r < r c with R̂ out,+ (r + ) = 1.

and these give scalar field solutions that are smooth at CH + R and CH + L , respectively. We can now write

R out,+ = A R out,− + B R in,− , R in,+ = Ã R in,− + B̃ R out,− ,

where A and B are the transmission and reflection coefficients for fixed frequency scattering of waves propagating out from H + L and Ã, B̃ are the transmission and reflection coefficients for scattering of waves propagating in from H + R . In region II, initial data can be specified on the characteristic hypersurface H + L ∪ H + R . We assume that the data on H + L is a wavepacket with Fourier transform Z(ω):

Φ| H + L = ∫ dω/(2π) Z(ω) e −iωu , (2.19)

and the data on H + R is a wavepacket with Fourier transform Z̃(ω):

Φ| H + R = ∫ dω/(2π) Z̃(ω) e −iωv . (2.20)

It follows that the solution in region II is

Φ = ∫ dω/(2π) e −iωt [ Z(ω) R out,+ (ω, r) + Z̃(ω) R in,+ (ω, r) ] Y ℓm (θ, φ) ,

which, via the expansion above, splits into a piece built from R out,− and a piece built from R in,− . These are, respectively, the parts of Φ that are outgoing and ingoing near the Cauchy horizon. The outgoing part is smooth at CH + R and the ingoing part is smooth at CH + L . We are interested in how smooth the ingoing part is at CH + R where r * → ∞ and we have

Φ in ≈ ∫ dω/(2π) F(ω) e −iωv , F(ω) ≡ Z(ω) B(ω) + Z̃(ω) Ã(ω) ,

and hence, taking a derivative w.r.t. the Kruskal coordinate V − that is smooth at CH + R ,

∂ V − Φ in ∝ e κ − v ∫ dω/(2π) (−iω) F(ω) e −iωv . (2.24)

We now want to examine whether this remains finite as v → ∞. To do this we need to determine whether or not the integral decays faster than e −κ − v as v → ∞. To determine the decay of the integral, we can deform the contour of integration into a line of constant Im(ω) in the lower half complex ω plane. How far we can deform the contour depends on the analyticity properties of the quantity F(ω), which we will now investigate, following [4,16,29].
First, to calculate B and Ã we proceed as follows. For functions f (r * ) and g(r * ) the Wronskian is (a prime denotes a derivative w.r.t. r * )

W [f, g] = f g′ − g f′ , (2.26)

and this is constant (in r) if f, g are solutions of (2.15). We now have

Ã = W [R in,+ , R out,− ] / W [R in,− , R out,− ] = W [R in,+ , R out,− ] / (2iω) ,

where the latter expression follows from evaluating the Wronskian in the denominator at r * → ∞. Similarly,

B = W [R out,+ , R out,− ] / (2iω) .

The analyticity properties of the solutions of the radial equation have been determined in Refs. [4,29,25]. The result is that R in,+ (ω, r) can be analytically continued to the complex ω plane, except for simple poles at negative integer multiples of iκ + . Similarly, R out,+ has simple poles at positive integer multiples of iκ + and R out,− has simple poles at negative integer multiples of iκ − . Using (2.4), it follows that, in the lower half-plane, the first pole of Ã is at −iκ + and the first pole of B is at −iκ − . Consider first the case in which the wavepackets on H + L and H + R are compactly supported in u and v respectively. Then Z(ω) and Z̃(ω) are entire functions. Using (2.4) it then follows that we can deform our contour of integration to a line of constant Im(ω) in the lower half-plane, until we hit a pole in Ã(ω) at ω = −iκ + . Now, if the wavepacket on H + R is generic then Z̃(−iκ + ) ≠ 0 and so this pole will also be a pole of F(ω) with residue proportional to Z̃(−iκ + ). Hence

∂ V − Φ ∼ Z̃(−iκ + ) e (κ − − κ + ) v . (2.29)

Using (2.4), the above quantity diverges at CH + R where v → ∞. Hence, for generic compactly supported smooth initial data prescribed on H + L ∪ H + R the solution will not be C 1 at CH + R , in apparent support of strong cosmic censorship. It turns out that this argument is too quick because, in the problem of interest, we are not free to prescribe the initial data on H + R . Instead, this data is determined by the solution outside the black hole, i.e. in region I. We will now review the argument of Ref. [16] that shows that in fact Z̃(−iκ + ) vanishes, invalidating the above argument. This analysis will reveal instead that the question of strong cosmic censorship depends on quasinormal modes of the black hole.
First we need to define the mode functions in region I. We define two linearly independent solutions R in,+ and R out,+ of equation (2.15) using exactly the same conditions (2.16) as before except that now these conditions are being applied in region I instead of region II. We define a second pair of linearly independent solutions R in,c and R out,c in region I in terms of their behaviour at the cosmological horizon r * → ∞:

R out,c ∼ e iωr * , R in,c ∼ e −iωr * . (2.30)

We can expand R in,+ in terms of these solutions as

T R in,+ = R in,c + R R out,c .

Here, T and R are the transmission and reflection coefficients for scattering of waves incident from H − c . Similarly, we can write

T̃ R out,c = R out,+ + R̃ R in,+ ,

where T̃ and R̃ are the transmission and reflection coefficients for waves propagating out of H − . In region I, initial data can be specified on the characteristic hypersurface H − ∪ H − c . We assume that the data on H − is a wavepacket with Fourier transform X(ω), and the data on H − c is a wavepacket with Fourier transform X̃(ω), defined in analogy with (2.19) and (2.20). It follows that the solution in region I is the corresponding superposition of mode solutions. We can now evaluate this on the event horizon H + R , where r * → −∞. The first term vanishes there provided our initial outgoing wavepacket on H − vanishes on the black hole bifurcation sphere, as it must for the Fourier transform to be well-defined. This leaves

Z̃(ω) = X̃(ω) W [R in,c , R out,c ] / W [R in,+ , R out,c ] = 2iω X̃(ω) / W [R in,+ , R out,c ] ,

where in the final step we evaluated the numerator at r * → ∞. A similar Wronskian expression holds for the remaining coefficients. Consider initial data which is compactly supported on H − and H − c (w.r.t. u, v respectively), so X(ω) and X̃(ω) are entire functions. Recall that the analytic continuation of R in,+ has simple poles at negative integer multiples of iκ + . It follows that the analytic continuations of T and R̃ have zeroes at these locations. Hence, for this initial data, Z̃(−iκ + ) = 0, as first explained by Mellor and Moss [16].
Recall that the behaviour of Φ near CH + R is determined by the analyticity properties of F(ω). From the above we have

F(ω) = Z(ω) B(ω) + Z̃(ω) Ã(ω) , (2.40)

with Z̃(ω) given by the Wronskian expression above. We start by considering the case in which the initial data on H + L and H − are compactly supported functions of u, and the initial data on H − c is a compactly supported function of v, so Z(ω), X(ω) and X̃(ω) are entire functions. In the above expression, the mode functions with poles in the lower half plane are R in,+ (at negative integer multiples of iκ + ), R out,− (at negative integer multiples of iκ − ) and R out,c (at negative integer multiples of iκ c ). However, in F(ω) the poles associated to R in,+ and R out,c will cancel out in the ratios of Wronskians. Therefore singularities of F(ω) in the lower half plane can only arise from the poles in R out,− and from frequencies where

W [R in,+ , R out,c ] = 0 .
This is the condition for R in,+ and R out,c to be linearly dependent, the defining condition of a quasinormal mode. The corresponding values of ω are called quasinormal frequencies.
We see that, for compactly supported initial data, F(ω) is analytic in the lower half-plane except for poles at quasinormal frequencies and at negative integer multiples of iκ − . As discussed above, the spectral gap α is defined as the infimum (smallest value) of −Im(ω) over all quasinormal modes. Deform the contour of integration in (2.24) to the line Im(ω) = −α + ε for arbitrarily small ε > 0. In other words, we push the contour of integration down until just before it hits the "lowest" (i.e. slowest decaying) quasinormal mode(s). In doing this we may pick up contributions from poles at multiples of −iκ − if these lie closer to the real axis than the lowest quasinormal mode. However, the contribution from such poles to the integral of (2.24) will have v-dependence e −nκ − v (for positive integer n), and the contribution to (2.24) will behave as e (1−n)κ − v = (−V − ) n−1 , which is smooth at CH + R . The non-smooth part of (2.24) arises from the integral along the new contour of integration. This integral decays as e −αv for large v. Hence the non-smooth part of (2.24) is proportional to

(−V − ) β−1 ,

where β is defined in (2.13). If β < 1 then the scalar field is not C 1 at the Cauchy horizon CH + R (where V − = 0). If β < 1/2 then it does not even have locally square integrable derivatives, i.e. it does not have locally finite energy. On the other hand, if β ≥ r for some positive integer r then the above result is consistent with the scalar field being C r at the Cauchy horizon. So, for compactly supported initial data, the question of strong cosmic censorship reduces to identifying the most slowly decaying quasinormal modes of the black hole [16].
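To make the various thresholds concrete, here is a minimal Python sketch (our own illustration; the function name and interface are ours, not from the original analysis) that classifies the expected regularity at CH + R given a spectral gap α and surface gravity κ − :

```python
import math

def regularity_at_cauchy_horizon(alpha: float, kappa_minus: float) -> dict:
    """Classify expected regularity at CH+_R from beta = alpha / kappa_-.

    beta < 1/2 : derivatives not locally square integrable (no finite local energy)
    beta < 1   : solution not C^1 at the Cauchy horizon
    beta >= r  : consistent with the solution being C^r (r a positive integer)
    """
    beta = alpha / kappa_minus
    return {
        "beta": beta,
        "locally_finite_energy": beta >= 0.5,
        "C1_at_cauchy_horizon": beta >= 1.0,
        # largest integer r with beta >= r (0 if beta < 1)
        "consistent_with_Cr_up_to": max(0, math.floor(beta)),
    }

# Example with made-up numbers: alpha = 0.3, kappa_- = 1.0 gives beta = 0.3,
# so the mode fails even to have locally finite energy at CH+_R.
print(regularity_at_cauchy_horizon(0.3, 1.0))
```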
We now investigate what happens when we relax the condition that the initial data on H − c has compact support. For now we continue to assume compact support on H − and H + L . Solutions arising from such initial data were first considered by Mellor and Moss [27]. They argued that late time ingoing radiation propagating along H + c will lead to an additional pole in F(ω) at ω = −iκ c . Their argument goes as follows. Assume that the wavepacket on H − c is smooth at the cosmological bifurcation sphere B c (U c = V c = 0). The wavepacket must vanish there (otherwise it cannot be built as a superposition of modes as assumed above). Demanding that it does so smoothly leads to the condition Φ ∝ V c as V c → 0 on H − c (i.e. on U c = 0), which implies Φ ∝ e −κ c v for large v on H − c . This implies that the Fourier transform X̃(ω) is analytic in the strip −iκ c < Im(ω) ≤ 0 but X̃(ω) generically has a simple pole at ω = −iκ c . This is the basis of the claim in Ref. [27] that F(ω) has a pole at ω = −iκ c . However, this claim is incorrect because, in (2.40), this pole in X̃(ω) is cancelled by a corresponding pole in W [R in,+ , R out,c ] arising from the pole in R out,c at ω = −iκ c . In other words, this pole is cancelled by a corresponding zero in the transmission coefficient T (ω). 9 We see that considering this data with non-compact support on H − c does not change our conclusions above: it is still the quasinormal modes which determine whether or not strong cosmic censorship is violated. However, in making this statement we have assumed that our initial data is smooth at the cosmological bifurcation sphere. If we allow non-smooth data, as advocated in Ref. [25], then late-time ingoing radiation does lead to a new effect. Consider Φ ∝ e −γκ c v for large v on H − c , i.e. Φ ∝ V γ c on H − c , with 0 < γ < 1. Clearly such data is not differentiable at V c = 0, but it has locally finite energy if γ > 1/2 since this is the condition for the gradient of Φ to be locally square integrable. For such data, X̃(ω) has a pole at ω = −iγκ c and, in the expression for F(ω), this is not cancelled by a zero of T (ω). Hence at CH + R we have

∂ V − Φ ∼ (−V − ) δ−1 , where δ = γκ c /κ − .

Locally finite energy at the Cauchy horizon requires δ > 1/2. If κ c < 2κ − then we can choose γ > 1/2 such that δ < 1/2. 10 In other words, for a RNdS black hole with κ c < 2κ − , ingoing wavepackets with locally finite energy on H − c give solutions whose energy is not locally finite at CH + R . However, we emphasize that such wavepackets are not smooth at the cosmological bifurcation sphere.

9 More generally, writing the initial data on H − c as Φ = f (V c ) = f (e −κ c v ), for smooth f , taking the Fourier transform and repeatedly integrating by parts one can see that X̃(ω) can have poles at negative integer multiples of iκ c . These are all cancelled by corresponding zeros in T (ω).

10 More mathematically, the initial data is such that the solution initially belongs to H 1 loc but the solution at CH + R does not belong to H 1 loc .
Next we consider relaxing the condition that the wavepacket on H + L has compact support. This was important in the argument of Ref. [18] asserting that (the C 2 version of) strong cosmic censorship is respected for any RNdS black hole. Once again, we will first
consider the case of smooth initial data. On H + L ∪ H − (i.e. the line V + = 0) we will assume that the data vanishes at the bifurcation sphere B, i.e. at U + = 0 (which is required for the Fourier transforms Z(ω) and X(ω) to exist as functions), but has non-vanishing derivative there, so for small U + we have, for some constant k,

Φ ≈ k U + . (2.43)

In region II this gives Φ| H + L ≈ k e −κ + u , while in region I it gives Φ| H − ≈ −k e −κ + u . It follows that Z(ω) and X(ω) both have poles at ω = −iκ + , with equal and opposite residues. Hence it appears that F(ω) will have a pole at ω = −iκ + [18]. But we will now show that the poles in Z and X cancel out in the expression for F(ω). First, note that if there is a pole at ω = −iκ + in F then its residue is proportional to a particular combination of Wronskians evaluated at ω = −iκ + (2.44), which we now analyse. Recall that R in,+ has a simple pole at ω = −iκ + , i.e.

R in,+ (ω, r) = h(r) / (ω + iκ + ) + g(ω, r) ,
where g(ω, r) is analytic at ω = −iκ + . The solution R in,+ is obtained by converting (2.15) to an integral equation, and solving by iteration [4,29]. Indeed this is how one sees that it has a simple pole at ω = −iκ + . One can also see from this procedure that the residue h(r) can be expressed as a series in e κ + r * , and is proportional to e κ + r * as r * → −∞. Now, h(r) must satisfy (2.15) with ω = −iκ + . But the solution of (2.15) with behaviour e κ + r * = e iωr * | ω=−iκ + as r * → −∞ is R out,+ (−iκ + , r). Hence we have 11

h(r) = c R out,+ (−iκ + , r)

for some constant of proportionality c. It turns out that c has opposite signs in regions I and II because of the way we defined R out,+ . To see this, note that e −iωt R in,+ is smooth at H + R hence e −κ + t h(r) should be smooth at H + R . But in region I near H + R we have e −κ + t R out,+ (−iκ + , r) ∼ e −κ + u = −U + whereas in region II we have e −κ + t R out,+ (−iκ + , r) ∼ e −κ + u = +U + . Hence smoothness implies that the constant c has equal magnitude but opposite sign in regions I and II. It follows that, since the numerator is evaluated in region II and the denominator in region I, we have c II = −c I and so the residue (2.44) vanishes. Hence F(ω) does not have a pole at ω = −iκ + . Similarly, it does not have a pole at any negative integer multiple of iκ + . 12 Hence, once

11 At the special values ω = −inκ + (n = 1, 2, 3, . . .), R out,+ gives mode solutions that can be smoothly extended through H + R , proportional to U n + near H + R , and the second linearly independent solution of (2.15) gives non-smooth mode solutions involving log U + .

12 One can argue as in footnote 9 that X(ω) and Z(ω) can have poles at ω = −inκ + for n = 1, 2, 3, . . ..
Their residues are related by a factor of (−1) n . This is cancelled by a corresponding factor of (−1) n relating the constant c in regions I and II. The residue in F(ω) then vanishes exactly as for the n = 1 case.
again, we find that for smooth initial data, relaxing the condition of compact support does not lead to anything new, in contrast to the claim of Ref. [18]. The reason that the argument of Ref. [18] fails is that the poles in Z and X at ω = −iκ + cancelled out in F(ω). This cancellation arose because we assumed that the first derivative of Φ was continuous at B, i.e. that the initial data is C 1 there. In order to avoid such a cancellation we have to consider initial data that is not C 1 , i.e. we have to consider rough initial data, as proposed in Ref. [25]. For example, consider initial data which vanishes on H − and H − c , i.e. X(ω) = X̃(ω) = 0. It follows that the resulting solution will vanish throughout region I. On H + L we take initial data Φ| H + L ∝ U γ + as U + → 0+, where 0 < γ ≤ 1. Clearly our initial data is continuous, but not C 1 , at U + = 0. The resulting solution will fail to be C 1 at U + = 0, i.e. along the event horizon H + R . In terms of u, our data behaves as e −γκ + u as u → ∞ on H + L so Z(ω) has a pole at ω = −iγκ + and hence, even for γ = 1, F(ω) has a pole at the same location. It then follows that at CH + R we have

∂ V − Φ ∼ (−V − ) δ−1 , δ = γκ + /κ − . (2.50)

Since (2.4) gives δ < γ, we see that the solution at CH + R is less smooth than the initial data. In particular, the condition for the initial data to have locally square integrable first derivatives (i.e. finite energy) is γ > 1/2 whereas the condition for the solution at CH + R to have locally square integrable first derivatives is δ > 1/2. For any non-extremal RNdS black hole, we can choose our initial data so that γ > 1/2 but δ < 1/2. Hence one can find an initial wavepacket with finite energy that has infinite energy at the Cauchy horizon. So if we allow such rough initial data then the Christodoulou version of strong cosmic censorship is respected, as argued in Ref. [25].
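As a quick illustration of why such data produces the pole (a worked computation we add here for clarity):

```latex
Z(\omega) \;\propto\; \int_0^{\infty} du \; e^{i\omega u}\, e^{-\gamma \kappa_+ u}
 \;=\; \frac{1}{\gamma \kappa_+ - i\omega}
 \;=\; \frac{i}{\omega + i\gamma \kappa_+} ,
```

which has a simple pole at ω = −iγκ + in the lower half-plane, as claimed.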
Once one is prepared to contemplate non-smooth initial data, there is no reason to work with wavepackets to show that this version of strong cosmic censorship is respected. One can work just as well with an outgoing mode solution in region II with complex frequency ω = ω 1 − iγκ + (as was done in Ref. [25] for ingoing mode solutions in region I). In region II, R out,+ can be analytically continued to complex ω, as long as ω is not a positive integer multiple of iκ + . These mode solutions behave as e −iωu near H + R . Now

e −iωu = e −iω 1 u e −γκ + u = e −iω 1 u U γ + ,

hence such modes vanish on H + R (i.e. U + = 0) if γ > 0. We extend the mode into region I simply by taking it to vanish in region I, i.e. we take vanishing initial data on H − and H − c . At the Cauchy horizon the reflected part of the mode behaves as (−V − ) δ with δ given by (2.50). As before, (2.4) implies that we can choose γ > 1/2 such that δ < 1/2. The initial data then has locally finite energy but the energy diverges at the Cauchy horizon. 13 In summary, we have seen that the argument of Ref. [18] does not support the strong cosmic censorship conjecture for smooth initial data. However, a modification of this argument can be viewed as supporting the strong cosmic censorship conjecture for rough initial data formulated in Ref. [25]: initial data with locally finite energy generically gives a solution whose energy is not locally finite at the Cauchy horizon.
Recent work on strong cosmic censorship with Λ > 0
For smooth initial data, we have explained why the conclusion of Ref. [16] remains valid and so whether or not strong cosmic censorship is respected can be decided by looking at quasinormal modes. However, one deficiency of the above analysis is the assumption that the initial data vanishes at the bifurcation spheres B and B c . This assumption is required so that the Fourier transforms Z(ω), X(ω) and X̃(ω) are functions, rather than distributions. This assumption has been eliminated by more recent work in the mathematics literature [30], which proves that, for any smooth initial data, if β > 1 then the scalar field is C 1 at the Cauchy horizon and if β > 1/2 then the scalar field has finite local energy at CH + R . The recent numerical study of Ref. [19] showed that massless scalar field perturbations of RNdS black holes always have β < 1 so generic scalar field perturbations are not C 1 at the Cauchy horizon, which supports the C 2 formulation of strong cosmic censorship for the Einstein-Maxwell-massless scalar field theory. However, it was also found that near-extremal RNdS black holes have β > 1/2 and so, for smooth initial data, the Christodoulou version of strong cosmic censorship is violated in this theory.
Surprisingly, this conclusion does not hold for Kerr-dS black holes. Indeed, Ref. [23] showed that Kerr-dS black holes always have β < 1/2 and so, for smooth initial data, the Christodoulou version of strong cosmic censorship is respected for such black holes in Einstein gravity coupled to a massless scalar field. In fact, it was shown that the same result holds for linearized gravitational perturbations so it was argued that the Christodoulou version of strong cosmic censorship, with smooth initial data, is satisfied by the vacuum Einstein equations.
Finally, we should mention the work of Ref. [22]. This studies spherically symmetric perturbations of RNdS in the nonlinear Einstein-Maxwell-scalar field system. For this system, it is proved that the smoothness at CH + R is determined by how fast perturbations decay at late time along the event horizon H + R . Since linear theory should be reliable for determining the latter, this work provides justification for believing that nonlinear effects will not invalidate the conclusions of a linear analysis of the behaviour near CH + R .
Bound for weak solutions of linearized Einstein-Maxwell equations
As discussed previously, the spectral gap α is defined as the infimum (smallest value) of −Im(ω) over all quasinormal frequencies ω. Defining β = α/κ − as in (2.13), we showed above that if β < 1/2 then generic scalar field perturbations arising from smooth initial data do not have locally square integrable derivatives (i.e. locally finite energy) at the Cauchy horizon. What about gravitoelectromagnetic modes? What condition yields a linearized gravitoelectromagnetic perturbation that constitutes a weak solution of the equations of motion at the Cauchy horizon? Is the critical value still β = 1/2? In this section we will show that the answer to the latter question is positive. The analysis is rather technical so the reader may wish to skip to the summary in subsection 3.3.
Coupled linear gravitational and electromagnetic perturbations of RNdS can be studied using the Kodama-Ishibashi (KI) formalism [26]. This formalism divides linearized gravitoelectromagnetic perturbations into perturbations arising from vector spherical harmonics and those arising from scalar spherical harmonics (there are no tensor spherical harmonics in 4d). We will consider the vector sector first (subsection 3.1) and then the scalar sector (subsection 3.2). The main conclusions are summarized in subsection 3.3.
Vector-type gravitoelectromagnetic perturbations of RNdS
Vector perturbations of the background (2.2) are described by the harmonic expansion (3.1) of [26], where f a , H T and A are functions of {x a } = {t, r}. Additionally, D j is the covariant derivative with respect to the unit S 2 metric γ ij and V i is a vector spherical harmonic, i.e. a regular solution of

(△ + k 2 V ) V i = 0 , D i V i = 0 , (3.2)

where △ ≡ γ ij D i D j . Regularity requires that the eigenvalues k 2 V are quantized as

k 2 V = ℓ V (ℓ V + 1) − 1 , ℓ V = 1, 2, 3, . . . . (3.3)

The case ℓ V = 1 (k 2 V = 1) is special since in this case V i is a Killing vector on the S 2 and thus D (i V j) = 0. Consequently, from (3.1) it follows that the metric components δg ij on S 2 are not perturbed.
For ℓ V > 1, all the information about the perturbations can be encoded in two gauge invariant variables Ω and A. The latter was introduced in (3.1) while the former is defined in terms of f a , H T via (3.4), where ε ab denotes the anti-symmetric tensor on the 2-dimensional orbit spacetime. These two gauge invariant variables obey a coupled system of two master equations (3.5) [26]. 14 Once we have solved (3.5), we can reconstruct the original metric perturbations (3.1a) using the map (3.6) of [26]. This determines δg µν up to a gauge transformation (infinitesimal diffeomorphism) corresponding to H T . A convenient choice of gauge is H T = 0. Note that the Maxwell perturbation is gauge invariant in the vector sector [26]. We will be interested in quasinormal modes, for which we have

Ω = e −iωt Ω ω (r) , A = e −iωt A ω (r) , (3.7)

where ℓ ≡ ℓ V and the frequency ω is determined in terms of ℓ V and a radial "overtone" number n = 0, 1, 2, . . .. The quantized spectrum of frequencies is determined by requiring that the perturbations are ingoing at the future event horizon H + R and outgoing at the future cosmological horizon H + c (see Fig. 1). In general, quasinormal frequencies are complex, ω = ω R + iω I , with ω I < 0 so that quasinormal modes decay exponentially with time outside the black hole.
For the regularity analysis at H + R it is convenient to work in ingoing coordinates (v, r, θ, φ) since they are regular both in regions I and II of Fig. 1. Then, a quasinormal mode is an analytic function of these coordinates in region I and can be analytically continued into region II. In these ingoing coordinates, a quasinormal mode has time dependence e −iωv , and thus it diverges as v → −∞, i.e. along the red line on Fig. 1. We will determine the frequency spectrum of vector quasinormal modes in Section 4.
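Explicitly, with ω = ω R + iω I and ω I < 0, the divergence follows from a one-line estimate (added here for clarity):

```latex
\left| e^{-i\omega v} \right| = e^{\omega_I v} \to \infty
\quad \text{as } v \to -\infty \qquad (\omega_I < 0).
```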
As reviewed above, the behaviour at the Cauchy horizon CH + R of a generic perturbation arising from smooth initial data is determined by the lowest quasinormal mode [16]. Therefore we need to determine the smoothness at CH + R of the metric and Maxwell perturbations of our quasinormal modes. To do this, it is convenient to use outgoing coordinates in the black hole interior. Converting (3.7) to these outgoing coordinates in region II yields

Ω = e −iωu Ω̃ ω (r) , A = e −iωu Ã ω (r) , (3.8)

for some functions Ω̃ ω and Ã ω . A Frobenius analysis of (3.5) about the right Cauchy horizon CH + R dictates that there is a pair {Ω (1) , Ω (2) } of linearly independent solutions for

14 In our conventions the parameter κ of [26] is equal to √2.
Ω and another pair {A (1) , A (2) } for A. These two pairs of linearly independent solutions behave as

Ω (1) = Ω̂ (1) (r) , Ω (2) = (r − r − ) iω/κ − Ω̂ (2) (r) , (3.9a)
A (1) = Â (1) (r) , A (2) = (r − r − ) iω/κ − Â (2) (r) , (3.9b)

where Ω̂ (1,2) and Â (1,2) denote non-vanishing smooth functions at r = r − . The solutions labelled (1) are outgoing at CH + R . These are smooth at CH + R . The solutions labelled (2) are ingoing at CH + R . These are not smooth at CH + R . Our quasinormal mode will be a superposition of the ingoing and outgoing solutions at CH + R . Given the behaviours (3.9) for the master variables, what is the corresponding behaviour of the metric and Maxwell perturbations at the Cauchy horizon? Again, we work in outgoing coordinates {x µ } = {u, r, θ, φ} and write the metric perturbation in these coordinates as δg µν . The KI formalism maintains covariance w.r.t. diffeomorphisms on S 2 and on the transverse 2d orbit space. Hence δg µν takes the same form as in (3.1a) with f a replaced by the quantity f̃ a obtained from f a by the 2d coordinate transformation (t, r) → (u, r), and H̃ T = H T . Choosing the gauge H̃ T = 0, we find that the two linearly independent solutions for f̃ a have the following behaviour near the Cauchy horizon:

f̃ (2) a = Σ k≥0 f̃ (2;k) a (r − r − ) k−1+iω/κ − , (3.10)

together with a corresponding expansion for f̃ (1) a . The behaviour at the Cauchy horizon of the Maxwell perturbation δF µν follows straightforwardly from (3.1b) and (3.9b).
Note that even the outgoing solutions f̃ a are not smooth there. This holds in the gauge H̃ T = 0. We will now determine how much smoother we can make the solution using a gauge transformation.
In the vector sector, an infinitesimal gauge vector ξ has a harmonic decomposition (3.11), parametrized by a single function L(t, r). Under such a gauge transformation the metric perturbation transforms according to (3.12), and the Maxwell perturbation δF is invariant: see (3.1b) and recall that A is, by construction, a gauge invariant variable. We now assume an expansion (3.13) of L in powers of (r − r − ) with coefficients L (k) , and we want to choose the coefficients L (k) to make (3.10) as smooth as possible at r = r − . We find that L (0) can be chosen to set f̃ (2;0) r = 0 in (3.10). We can then choose L (1) to set f̃ (2;1) r = 0. But this choice then dictates that f̃ (2) u and H̃ (2) T behave as (r − r − ) iω/κ − because the gauge parameters L (k) with k ≥ 2 do not appear at this order. Altogether, we can find a gauge where the two linearly independent gravitoelectromagnetic solutions at the Cauchy horizon have the leading behaviour

f̃ (1) a smooth , f̃ (2) a ∼ (r − r − ) α a +iω/κ − , H̃ (2) T ∼ (r − r − ) iω/κ − , δ F̃ (2) ai ∼ (r − r − ) iω/κ − −1 , (3.14)

where α a = {0, 1} for a = {u, r}, respectively, and the omitted coefficient functions in f̃ a , H̃ T and δ F̃ ai , δ F̃ ij are smooth at r = r − (recall that the δ F ab components are not excited in the vector sector; see (3.1)). At CH + R , our gravitoelectromagnetic quasinormal mode is some linear combination of the smooth outgoing solution (1) and the non-smooth ingoing solution (2). There is no reason for the coefficients in this linear combination to vanish. Therefore, the regularity of the quasinormal mode is determined by the ingoing solutions.
For the vacuum Einstein equation, the regularity of the metric required for a weak solution is that the Christoffel symbols should be square integrable in some chart [13]. By linearizing this condition, or by considering second order perturbation theory [23], the corresponding condition for a linearized metric perturbation to constitute a weak solution is that, in some gauge, the perturbation and its first derivatives should be locally square integrable, i.e. the perturbation should belong to the Sobolev space H 1 loc . In Einstein-Maxwell theory, the corresponding statement is that, in some gauge, the metric perturbation should belong to H 1 loc and the Maxwell field strength perturbation should be locally square integrable (i.e. belong to L 2 loc ). From (3.14) we see that we can reach a gauge for which the least smooth components of the metric perturbation behave as δ g (2) ∼ (r − r − ) p with p = iω/κ − . The first derivative then behaves as (r − r − ) p−1 , which is locally square integrable if, and only if, 2(γ − 1) > −1, where γ ≡ Re(p) = −Im(ω)/κ − . Similarly, (3.14) shows that the least smooth components of the Maxwell field strength perturbation behave as δ F (2) ∼ (r − r − ) p−1 (again with p = iω/κ − ). Once again this is locally square integrable if, and only if, 2(γ − 1) > −1 (again with γ = Re(p)). Hence, the condition for a vector-type gravitoelectromagnetic quasinormal mode to constitute a weak solution at the Cauchy horizon is γ > 1/2, i.e.

−Im(ω)/κ − > 1/2 . (3.15)
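As a quick check of the square-integrability criterion used here (our own verification):

```latex
\int_{r_-}^{r_- + \epsilon} \left| (r - r_-)^{\,p-1} \right|^2 dr
 = \int_0^{\epsilon} x^{\,2(\gamma - 1)}\, dx < \infty
 \;\Longleftrightarrow\; 2(\gamma - 1) > -1
 \;\Longleftrightarrow\; \gamma > \tfrac{1}{2} ,
\qquad \gamma \equiv \operatorname{Re}(p) = -\operatorname{Im}(\omega)/\kappa_- .
```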
The above analysis shows that this condition is sufficient for the mode to constitute a weak solution at the Cauchy horizon. We believe it is also a necessary condition, and this can probably be proved along similar lines to the argument in Ref. [23], exploiting gauge invariance of the KI variables. However, since we are mainly interested in violation of strong cosmic censorship, we will not perform such an analysis here. The above analysis was for the case ℓ V > 1. For the special case ℓ V = 1, the field H T is not defined since D (i V j) = 0. It follows that the two quantities defined by the RHS of (3.4) are no longer gauge invariant (and thus, neither is Ω). There is a single gauge invariant quantity (denoted by F (1) = ε ab r D a (f b /r) in (4.8) of [26]) and the map that reconstructs δg µν and δF µν from the gauge invariant quantity is (necessarily) different from the one described above for the ℓ V > 1 case: at the end of the day, A is the only dynamical field, although it still obeys the wave equation (3.5b) (with k 2 V = 1) [26]. We have done this analysis and gravitoelectromagnetic field reconstruction 15 and we find that the condition for an ℓ V = 1 vector-type gravitoelectromagnetic quasinormal mode to constitute a weak solution at the Cauchy horizon is still given by (3.15).
Scalar-type gravitoelectromagnetic perturbations of RNdS
Scalar perturbations of the background (2.2) take the form (3.16) of [26], with f ab , f a , H T , H L , E and E b being functions of {x a } = {t, r} and ε ab the anti-symmetric unit tensor. Moreover, E 0 = Q/r 2 was introduced in (2.3) and the remaining quantities appearing there are defined in (3.17). The scalar spherical harmonics S, and the associated scalar-type vector harmonic S i and traceless scalar-type tensor harmonic S ij , are defined by (note that S i i = 0)

(△ + k 2 S ) S = 0 , S i = −(1/k S ) D i S , S ij = (1/k 2 S ) D i D j S + (1/2) γ ij S . (3.18)

The eigenvalues are quantized as

k 2 S = ℓ S (ℓ S + 1) , ℓ S = 0, 1, 2, . . . . (3.19)

Harmonics with ℓ S = 0 are non-dynamical: they correspond to variations of the black hole parameters M, Q. Harmonics with ℓ S = 1 are special because S ij vanishes for these harmonics. For now we assume ℓ S > 1 and comment on the case ℓ S = 1 at the end of this section. Gauge invariant variables for the scalar perturbations are E, E a (already introduced in (3.16)) and, for ℓ S > 1, F and F ab , defined as in (3.20) of [26]. The Bianchi identity requires that F a b is traceless,

F a a = 0 . (3.21)

The reader can find the full details in the discussions (4.8)-(4.15) and (4.31)-(4.33) of [26].
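As a consistency check of the tracelessness stated above (a short verification we add here):

```latex
\gamma^{ij} S_{ij}
 = \frac{1}{k_S^2}\, \gamma^{ij} D_i D_j S + \frac{1}{2}\, \gamma^{ij}\gamma_{ij}\, S
 = \frac{1}{k_S^2}\, \bigl(-k_S^2\, S\bigr) + S = 0 .
```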
The equations of motion imply that the gauge invariant quantities E and E a can be expressed in terms of a single KI master variable A, as in (3.22) of [26]. On the other hand, one introduces the auxiliary variables X, Y and Z, where here and henceforward we assume that all perturbed quantities Q(t, r) have the Fourier decomposition Q(t, r) = e −iωt Q(r) with ω being the associated frequency. The KI master variables Φ and A obey the coupled system of equations (3.25) of [26], where f is defined in (2.3). The potential V S and source term S Φ (Φ, A) are lengthy expressions given in equations (5.42)-(5.44) of [26]. The auxiliary quantity P Z is given in (C.8) of [26]. Given a solution of the above equations we will need to reconstruct the metric and Maxwell field perturbations in terms of the master variables Φ and A. For that, we first write the variables X, Y and Z in terms of Φ and A and their derivatives, as in (3.26) of [26], where the coefficients P X0 , P X1 , P XA , P Y 0 , P Y 1 , P Y A and P Z are functions of r that can be found in equations (C.4)-(C.10) and (C.11)-(C.16) of [26]. It follows from the equations of motion, including the Bianchi identity (3.21), that f ab and H L can be written as a function of X, Y, Z (i.e. of Φ, A, their radial first derivative and ω) and of f a , H T and their radial derivatives. To simplify our task (and without prejudice since we will consider gauge transformations later) we can fix the gauge as

f a = 0 , H T = 0 . (3.27)

Then, the metric functions f ab and H L depend only on X, Y, Z. That is to say, via (3.26) and the Bianchi identity (3.21) they can be written solely in terms of the master variables Φ, A and their radial derivative (3.28). We will find the frequency spectrum of scalar quasinormal modes in Section 4. But first we must discuss the behaviour of the scalar-type perturbations at the Cauchy horizon CH + R . The discussion of regularity at this null hypersurface proceeds very similarly to the vector sector case. Namely, the master variables Φ and A for the scalar quasinormal modes also admit the Fourier decomposition (3.7) and, when analytically continued into region II and converted to outgoing coordinates, these master variables also behave as (3.8). Moreover, a Frobenius analysis of (3.25) around CH + R dictates that there is a pair {Φ (1) , Φ (2) } of linearly independent solutions for Φ and another pair {A (1) , A (2) } for A. These two pairs of linearly independent solutions still behave as described in (3.9) (with the identification ℓ ≡ ℓ S ).
Given these behaviours for the KI scalar master variables Φ and A, we can now find the behaviour of the metric and Maxwell perturbations for the outgoing and ingoing modes near CH + R . Just as we did for vector-type perturbations, in region II we transform to outgoing coordinates {x µ } = {u, r, θ, φ} in which the metric perturbation δg µν takes the same form as in (3.16), with f a replaced by the quantity f̃ a obtained from f a via the coordinate transformation from (t, r) to (u, r), f ab similarly replaced by f̃ ab , but H̃ L = H L and H̃ T = H T unchanged. Similarly, the Maxwell perturbation δF µν is written in terms of Ẽ a and Ẽ = E.
Choosing the gauge (3.27) (which translates into f̃ a = 0, H̃ T = 0), we find that the outgoing (smooth) and ingoing (non-smooth) solutions for f̃ ab , H̃ L , Ẽ and Ẽ a have, respectively, expansions about the Cauchy horizon of the form (3.29), in powers of (r − r − ), with the ingoing solutions weighted by the characteristic factor (r − r − ) iω/κ − and, for some components, (r − r − ) iω/κ − −1 . We will now show that we can make the outgoing solution smooth, and the ingoing solutions smoother, at the Cauchy horizon with a gauge transformation. In the scalar sector, an infinitesimal gauge vector ξ has the harmonic decomposition (3.30). Under such a gauge transformation the metric and Maxwell perturbations transform according to (3.31). We assume expansions (3.32) for the functions appearing in the gauge transformation, and we now try to choose the constants {N a , L (k) } to eliminate as much as we can the leading terms in (3.29) that are responsible for the lack of smoothness at the Cauchy horizon. Consider first the ingoing solution (labelled by superscript (2) ). We find that a choice of the leading gauge parameters allows us to eliminate the leading term, proportional to (r − r − ) iω/κ − −1 , in f̃ (2) r (some components become non-zero as a result of the gauge transformation; the term (r − r − ) iω/κ − −1 in f̃ (2) r has a contribution due to P (0) r and another due to L (0) ). We can then choose the next-order parameters to eliminate the term proportional to (r − r − ) iω/κ − in f̃ (2) r . But this choice then dictates that the leading term of f̃ (2) uu , f̃ (2) u and H̃ (2) L,T is (r − r − ) iω/κ − , because these terms in these quantities do not depend on the higher order gauge parameters and we have no more gauge freedom to avoid such powers.
Consider now the outgoing solution (labelled by superscript (1) ) in (3.29). With a choice of gauge parameters {N a , L (k) } we can eliminate all the terms (r − r − ) 0 log(r − r − ) that typically appear in the fields f̃ (1) a , H̃ (1) L,T and δ F̃ (1) (these fields become non-zero as a result of the gauge transformation). But with this choice it follows that the leading term of f̃ (1) uu and f̃ (1) u is (r − r − ) 0 , since these terms do not depend on the higher order gauge parameters, i.e. we have no further gauge freedom to eliminate such terms. After these gauge transformations, the electromagnetic fields δ F̃ (1) ab , δ F̃ (1) ai also behave as (r − r − ) 0 .
Altogether, our analysis shows that we can find a gauge where the two linearly independent gravitoelectromagnetic solutions at the Cauchy horizon have the leading behaviour (3.33), where α a = {0, 1} for a = {u, r} (respectively) and α ab = {0, 1, 1} for ab = {uu, ur, rr} (respectively), and the coefficient functions in f̃ a , f̃ ab , H̃ L,T and δ F̃ ab , δ F̃ ai are smooth functions that depend on ω and ℓ S (recall that δ F ij is not excited in the scalar sector; see (3.16)). Note that the outgoing solution is manifestly smooth at the Cauchy horizon. As explained above, for a weak solution we need the metric perturbation and its first derivative to be locally square integrable, and the Maxwell field strength perturbation to be locally square integrable. Using the above results, we can repeat the argument we used for vector-type perturbations to see that the condition for a scalar-type quasinormal mode to be extendible as a weak solution across the Cauchy horizon is exactly the same condition (3.15) that we obtained for vector-type perturbations.
Finally, in this section we have so far assumed ℓ S > 1. Harmonics with ℓ S = 1 are special because S ij vanishes for these harmonics; as a consequence, the field H T is not defined. It follows that, for ℓ S = 1, the fields F and F ab defined in (3.20) are no longer gauge invariant [26]. Additionally, the Bianchi identity no longer implies (3.21) and it turns out that only the electromagnetic field is dynamical [26]. For our purposes, a pragmatic way to deal with this ℓ S = 1 case, as suggested in [26], is to impose (3.21) as a gauge condition and then fix the residual gauge freedom at our convenience. 16 We can then reconstruct the gravitoelectromagnetic fields δg µν and δF µν in this particular gauge following steps similar to those described above for ℓ S > 1. Finally, we again add gauge transformations to make our solutions smoother. At the end of the day, we find that the condition for an ℓ S = 1 scalar-type gravitoelectromagnetic quasinormal mode to constitute a weak solution at the Cauchy horizon is still given by (3.15).
Conclusions
We have shown that the condition for a linearized gravitoelectromagnetic mode solution to be extendible as a weak solution across the Cauchy horizon is (3.15). We define β in terms of the spectral gap α as in (2.13). If β < 1/2 then there exists a quasinormal mode which violates (3.15). One can add an arbitrary multiple of this quasinormal mode to any other linear perturbation. Hence if β < 1/2 then a generic linear perturbation cannot be extended as a weak solution across the Cauchy horizon. So if β < 1/2 then the Christodoulou formulation of strong cosmic censorship is respected.
Conversely, if β > 1/2 then all quasinormal modes respect (3.15). Since the behaviour at the Cauchy horizon is determined by the slowest decaying quasinormal mode, in this case any linearized gravitoelectromagnetic perturbation arising from smooth initial data can be extended across CH + R as a weak solution of the equations of motion, so the Christodoulou version of strong cosmic censorship is violated for smooth initial data.
Finally, we can consider extendibility in C r . By this we mean that there exists a gauge so that, at CH + R , the metric is C r and the Maxwell field strength is C r−1 (so the Maxwell potential is C r in some gauge). It is easy to see from the above analysis that a quasinormal mode is extendible in C r across CH + R if −Im(ω)/κ − ≥ r. Thus, in Einstein-Maxwell theory, the C r version of strong cosmic censorship is respected if β < r and violated if β > r.
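For quick reference, the criteria established in this section can be collected as follows (a recap we add; the statements are those derived above):

```latex
\begin{aligned}
\beta < \tfrac{1}{2} &: \text{generic perturbations do not extend as weak solutions across } \mathcal{CH}^+_R
 \quad (\text{Christodoulou SCC respected}),\\
\beta > \tfrac{1}{2} &: \text{all modes extend as weak solutions}
 \quad (\text{Christodoulou SCC violated}),\\
\beta > r &: \text{extendibility in } C^r
 \quad (C^r \text{ SCC violated}).
\end{aligned}
```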
Computing the gravitoelectromagnetic quasinormal modes
In this section, we first discuss (subsection 4.1) the Kodama-Ishibashi (KI) master equations [26] and boundary conditions of the quasinormal mode problem that we later solve analytically and numerically. We will also prove that vector-type and scalar-type modes of RNdS have the same frequency spectrum, i.e. they are isospectral (subsection 4.2).
Vector-type modes
The vector equations (3.5) describe a pair of coupled ODEs for the gauge invariant variables Ω and A. They can be rewritten as a pair of two decoupled ODEs for a pair of master variables Φ ± . These are linear combinations of the original gauge invariant variables, namely Φ ± = a ± r −1 Ω + b ± A (4.1) where a ± and b ± are functions of M, Q, given in equations (4.35)-(4.36) of [26]. Under (4.1), (3.5) transforms into the KI vector master equations (4.2), where the potentials V ± are given in (4.3). When Q = 0, Φ − and Φ + are simply proportional to Ω and A, respectively. Thus, in the neutral limit, Φ − and Φ + represent, respectively, the gravitational and electromagnetic modes of the Schwarzschild black hole. Note that Φ + modes have ℓ V = 1, 2, 3, . . . whereas Φ − modes have ℓ V = 2, 3, 4, . . .. Vector quasinormal modes are solutions of (4.2) that obey ingoing boundary conditions at the black hole horizon and outgoing boundary conditions at the cosmological horizon. More concretely, at the black hole horizon r = r + a Frobenius analysis yields the two possible behaviours

Φ ∼ (r − r + ) ±iω/(2κ + ) , (4.4)

where Φ is either Φ + or Φ − . Regularity at the event horizon, which follows from demanding a smooth expansion in ingoing coordinates (v, r, θ, φ) around H + R , requires that we discard the solution with the positive sign. Similarly, a Frobenius expansion at the cosmological horizon r = r c yields the two possible solutions

Φ ∼ (r c − r) ±iω/(2κ c ) , (4.5)

and imposing outgoing boundary conditions at the cosmological horizon H + c requires that we discard the irregular solution with the plus sign. We are thus led to introduce the field redefinition

Φ ± (r) = (r − r + ) −iω/(2κ + ) (r c − r) −iω/(2κ c ) Φ̂ ± (r) , (4.6)

where Φ̂ ± (r) is a smooth function at r = r + and at r = r c . This effectively imposes the desired boundary conditions since our numerical method can only search for smooth functions Φ̂ ± (r). Inserting (4.6) into (4.2) we get a pair of decoupled ODEs for Φ̂ ± . Each of these ODEs is quadratic in the frequency ω. That is to say, for each ℓ we have to solve a quadratic eigenvalue problem to find the eigenvalue ω and the associated eigenfunction Φ̂ − (or ω and Φ̂ + ). The boundary conditions for Φ̂ ± (r) follow directly from doing a Taylor expansion of the master equation about the black hole and cosmological horizons. This reveals that at both horizons we have a Robin boundary condition, i.e. of the type

Q +,1 ∂ r Φ̂ ± + Q +,0 Φ̂ ± = 0 at r = r + , Q c,1 ∂ r Φ̂ ± + Q c,0 Φ̂ ± = 0 at r = r c , (4.7)

where Q +,1 , Q +,0 , Q c,1 and Q c,0 are known functions which are at most second order polynomials in ω.
It is also convenient to use a radial coordinate whose range is independent of the black hole parameters. We define

y = (r − r + ) / (r c − r + ) , (4.8)

such that y ∈ [0, 1] with y = 0 (y = 1) corresponding to the event (cosmological) horizon. 17 The resulting equation for Φ̂ − (or Φ̂ + ) can now be solved using a pseudospectral grid discretization (with the methods reviewed in [31]) as a standard quadratic eigenvalue problem or employing a Newton-Raphson algorithm. In the former method one writes the equation as a quadratic eigenvalue problem for the frequency ω, which is then solved using Mathematica's built-in routine Eigensystem. More details of this method and the discretization scheme can be found e.g. in [32]. The second method is based on an application of the Newton-Raphson root-finding algorithm, and is detailed in [33,31]. The advantage of the first method is that it gives all modes simultaneously. The second method computes a single mode at a time, and only when a seed is known that is sufficiently close to the true answer. However, this method is much quicker as both the size of the grid and the numerical precision increase, and can be used to push the numerics to extreme regions of the parameter space.
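To illustrate the first method, here is a schematic Python sketch (our own illustration; the paper itself uses Mathematica's Eigensystem) of the standard companion linearization that turns a quadratic eigenvalue problem (M2 ω² + M1 ω + M0) Φ̂ = 0, obtained from a pseudospectral discretization, into a generalized linear eigenvalue problem. The matrices M0, M1, M2 are placeholders standing in for the discretized ODE plus boundary conditions:

```python
import numpy as np
from scipy.linalg import eig

def quadratic_eigenvalues(M0, M1, M2):
    """Solve the quadratic eigenvalue problem (M2 w^2 + M1 w + M0) x = 0
    by companion linearization: with y = w x,
        [ 0    I ] [x]       [ I   0  ] [x]
        [-M0 -M1 ] [y] = w   [ 0   M2 ] [y].
    Returns the eigenvalues w."""
    n = M0.shape[0]
    I = np.eye(n)
    Z = np.zeros((n, n))
    A = np.block([[Z, I], [-M0, -M1]])
    B = np.block([[I, Z], [Z, M2]])
    w, _ = eig(A, B)
    return w

# Toy usage with random matrices standing in for the discretized operator:
rng = np.random.default_rng(0)
n = 4
M0, M1, M2 = (rng.standard_normal((n, n)) for _ in range(3))
print(np.sort_complex(quadratic_eigenvalues(M0, M1, M2)))
```

This approach returns all 2n eigenvalues at once, which is what makes it useful for an initial scan of the spectrum before refining individual modes with Newton-Raphson.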
Scalar-type modes
The pair of coupled ODEs (3.25) for the scalar gauge invariant variables Φ and A can be rewritten as a pair of two decoupled ODEs for a pair of scalar master variables Φ ± . The latter are given by the linear combinations

Φ ± = a ± Φ + b ± A , (4.9)

where a ± and b ± are functions of M, Q, given in equations (5.57)-(5.58) of [26]. Inserting (4.9) into (3.25) yields the KI scalar master equations (4.10), where the potentials V s± are given by equations (5.60)-(5.63) of [26]. When Q = 0, Φ − is proportional to Φ and Φ + is proportional to A. Hence, in the neutral limit, Φ − and Φ + represent, respectively, the gravitational and electromagnetic scalar modes of the Schwarzschild black hole. Note that Φ + modes have ℓ S = 1, 2, 3, . . . whereas Φ − modes have ℓ S = 2, 3, 4, . . .. Scalar quasinormal modes are solutions of (4.10) that obey ingoing boundary conditions at the black hole horizon and outgoing boundary conditions at the cosmological horizon. The analysis of these boundary conditions is very similar to the one performed for the KI vector sector. In fact equations (4.4) to (4.7) and the subsequent discussion apply without change to the scalar sector of perturbations.
Isospectrality
As discussed in previous sections, gravitoelectromagnetic perturbations of RNdS black holes come in two classes: vector-type and scalar-type. Although they obey two seemingly distinct equations of motion, it turns out they have the same quasinormal mode spectra. For this reason, the spectrum of quasinormal modes of RNdS black holes is said to be isospectral. This is a classical result in the context of asymptotically flat RN black holes, which was first uncovered by Chandrasekhar in [34]. It turns out the same result applies in the context of RNdS black holes, but with more involved algebra.
Just as in [34], we start by noting that the scalar potential V s± (r), introduced in (4.10), can be written in a compact form (4.11) in terms of a few auxiliary quantities (4.12), with f (r) given in (2.3). Rather remarkably, the vector potential (4.3) takes a similar form, with the same quantities defined in (4.12).
Because of this simple relation between the scalar and vector potentials, one can relate solutions of the vector equation to solutions of the scalar equation (and vice versa) via the differential map (4.14),
where, momentarily, we added the subscripts s and v to distinguish between scalar and vector perturbations. Maps between solutions might not take physical solutions into physical solutions, since one has to check that the maps preserve the relevant boundary conditions. This is the case (i.e. the map (4.14) preserves the boundary conditions) for asymptotically flat RN black holes and RNdS black holes, but it is not the case for RN black holes with anti-de Sitter boundary conditions [35]. For this reason isospectrality occurs in the former two cases, but not in the latter. Note that the differential map (4.14) alone is not enough to guarantee that the critical β bound (3.15) found for vector-type modes also holds for scalar-type perturbations, since the two types of metric perturbations are orthogonal to each other. For this reason, in Section 3 we had to carry out the analysis that establishes the bound (3.15) for the vector-type and scalar-type perturbations independently; we found that (3.15) indeed holds for both sectors.
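Although we do not reproduce the explicit map (4.14) here, its general structure is the standard Darboux (supersymmetric-partner) relation; schematically, for superpartner potentials built from a superpotential W (our illustrative notation, with the precise W for RNdS determined by the quantities in (4.12)):

```latex
V_\pm = W^2 \pm \frac{dW}{dr_*} + \text{const}
\quad\Longrightarrow\quad
\Phi_+ \propto \left( \frac{d}{dr_*} + W \right) \Phi_- ,
\qquad
\Phi_- \propto \left( -\frac{d}{dr_*} + W \right) \Phi_+ .
```

Both maps preserve the frequency ω, which is the mechanism behind isospectrality whenever the boundary conditions are also preserved.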
Classifying the families of quasinormal modes and analytical results
Cardoso et al found that massless scalar field quasinormal modes of RNdS can be classified into three families [19]. We find that the same is true for gravitoelectromagnetic quasinormal modes. The three families are 1) "photon sphere" modes, 2) "de Sitter" modes and 3) "near-extremal" modes. The "photon sphere" modes are identified in the geometric optics limit, ℓ ≫ 1, and are related to the properties of the unstable circular photon orbits in the equatorial plane of the black hole background (subsection 5.1).
The de Sitter modes reduce, when M and Q vanish, to the gravitational and electromagnetic quasinormal modes of de Sitter spacetime (subsection 5.2). Finally, the "near-extremal" modes have their wavefunction peaked near the horizon and an approximate expression for these modes (strictly valid in the extremal limit) can be obtained by analysing the perturbations in the near-horizon geometry of a near-extremal RNdS black hole (subsection 5.3).
In the previous Section 4.2 we found that the vector-type and scalar-type quasinormal modes are isospectral. It follows that for each family of modes we just have to consider two sectors (not four) of perturbations, corresponding to perturbations of each of the gauge invariant variables Φ − and Φ + . As a test of our numerical code, we checked for several different black holes that the frequency eigenvalues of the vector-type equation of motion are indeed the same as those that solve the scalar-type equation of motion.
In this section we will obtain approximate analytical expressions for the three families of modes (that are valid at least in a certain region of the RNdS parameter space). Then we compare these analytical results with the exact data that results from our numerical search of the frequency spectra in the full RNdS parameter space 0 ≤ y + ≤ 1 and 0 < Q/Q ext ≤ 1.
Photon sphere family of modes and its geometric optics limit
In this subsection we will find an analytical expression for the photon sphere quasinormal modes in the geometric optics limit, i.e. in the WKB limit ℓ → ∞. We find that this analytical expression gives an imaginary part of the frequency that matches the numerical results very well even for ℓ = 1 (the real part is not such a good approximation for low ℓ). Our geometric optics results are independent of the spin of the perturbing field and so they should agree with the geometric optics results for massless scalar field photon sphere modes in Ref. [19]. Consider a null geodesic x µ (τ ) of a RNdS black hole. By spherical symmetry there is no loss of generality in assuming that the geodesic is confined to the equatorial plane θ = π/2. There are conserved quantities associated to the Killing fields K = ∂/∂t and χ = ∂/∂φ: e ≡ −K µ ẋ µ and j ≡ χ µ ẋ µ , where the dot represents a derivative with respect to the affine parameter τ . This gives

ṫ = e/f (r) , φ̇ = j/r 2 .

The radial motion is governed by

ṙ 2 = e 2 − j 2 f (r)/r 2 , (5.3)

so unstable circular photon orbits sit at the radius r = r s where the effective potential f (r)/r 2 is extremised, i.e. 2f (r s ) = r s f ′ (r s ), with critical impact parameter b s ≡ j/e = r s /√f (r s ) (5.7), where we can check that r + ≤ r s ≤ r c .
The orbital angular velocity (Kepler frequency) of our null circular photon orbit can now be computed using (5.2), (5.5) and (5.7), yielding Ω c = √(f(r s))/r s = 1/b s. We now have to compute the largest Lyapunov exponent λ L , measured in units of t, associated with perturbations of an unstable circular photon orbit r(τ) = r s . This is done by considering perturbations r(τ) = r s + δr(τ) of the radial geodesic equation (5.3). Small deviations obey a linearized equation whose growing solution is δr(t) = C e^{λ L t}, with λ L being the desired (largest) Lyapunov exponent. Note that C is an integration constant and that the unstable photon orbit parameters r s and b s are given in terms of the RNdS parameters {L, M, Q} by (5.7). Finally, one can reconstruct the spectrum of the photon sphere family of quasinormal modes with ℓ ≫ 1 using [36–44]

ω WKB ≈ ℓ Ω c − i (n + 1/2) λ L , (5.11)

where n = 0, 1, 2, . . . is the radial overtone. Note that this geometric optics/WKB approximation is universal in the sense that it is blind to the particular sector of perturbations we look at. That is, it is expected to be a good approximation to the photon sphere modes of both Φ ± (or of a massless scalar field [19]). Note that, at this order, Im(ω WKB ) is independent of ℓ (assuming ℓ ≫ 1) while Re(ω WKB ) does depend on ℓ. One might wonder whether next-to-leading order corrections might change (5.11) significantly, especially near extremality. However, the corrections to Im(ω) are of order 1/ℓ, so, for any fixed background, the corrections to Im(ω) can be made arbitrarily small by taking ℓ sufficiently large. So the WKB results for Im(ω) should be reliable for sufficiently large ℓ.
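This construction is easy to implement numerically. The sketch below assumes the standard RNdS metric function f(r) = 1 − 2M/r + Q²/r² − r²/L² and the closed-form Lyapunov exponent familiar from the circular-null-geodesic literature; the parameter values M, Q, L are hypothetical, chosen only so that three positive horizons exist:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical RNdS parameters (illustrative only)
L, M, Q = 1.0, 0.08, 0.05

f   = lambda r: 1 - 2*M/r + Q**2/r**2 - r**2/L**2
fp  = lambda r: 2*M/r**2 - 2*Q**2/r**3 - 2*r/L**2
fpp = lambda r: -4*M/r**3 + 6*Q**2/r**4 - 2/L**2

# Horizons are the positive roots of r^2 f(r) = -r^4/L^2 + r^2 - 2 M r + Q^2
roots = np.roots([-1/L**2, 0.0, 1.0, -2*M, Q**2])
rs = roots[np.abs(roots.imag) < 1e-9].real
r_m, r_p, r_c = np.sort(rs[rs > 0])  # inner, event, cosmological horizons

# Photon sphere: 2 f(r_s) = r_s f'(r_s), with r_+ < r_s < r_c
r_s = brentq(lambda r: 2*f(r) - r*fp(r), 1.001*r_p, 0.999*r_c)

Omega_c = np.sqrt(f(r_s))/r_s                                          # Kepler frequency
lam_L = np.sqrt(f(r_s)*(2*f(r_s) - r_s**2*fpp(r_s)))/(np.sqrt(2)*r_s)  # Lyapunov exponent

for ell in (1, 2, 10):
    w = ell*Omega_c - 1j*0.5*lam_L   # n = 0 mode of eq. (5.11)
    print(f"ell = {ell:2d}: omega*r_c ~ {w*r_c:.4f}")
```

The formula for λ L used here reduces to the known Schwarzschild value 1/(3√3 M), which is a useful sanity check on the sketch.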
We can now analyse −Im(ω WKB )/κ − . In the left panel of Fig. 2 we plot this quantity for n = 0 (which yields the smallest value) as a function of the horizon radii ratio y + = r + /r c and the charge ratio Q/Q ext . Over most of the RNdS moduli space we have −Im(ω WKB )/κ − < 1/2. Since we expect our result for Im(ω) to be exact as ℓ → ∞, we must therefore have β < 1/2 over most of the RNdS moduli space [19]. Thus the Christodoulou version of strong cosmic censorship is respected by most RNdS black holes. However, for any fixed y + , there is always a critical value of Q/Q ext (close to extremality) above which −Im(ω WKB )/κ − > 1/2. So there is the possibility of a violation of strong cosmic censorship by near-extremal RNdS black holes. We will now compare the WKB prediction with our numerical results for the quasinormal frequencies of photon sphere modes. In the right panel of Fig. 2 we compare the n = 0 WKB result with our numerical results for −Im(ω)/κ − for the Φ − photon sphere quasinormal modes (with n = 0). From the plot we see that, when ℓ = 10, the WKB prediction is in excellent agreement with our numerical results. In fact, even for ℓ = 2 the plot shows that the WKB prediction is in very good agreement with our numerical results. This agreement extends to other values of y + not shown in the plot. Note that, as expected, the agreement is very good for the imaginary part of the frequency but not so good for the real part (not shown in the plot). As a check of our numerical computations we have also confirmed that we reproduce some of the quasinormal frequencies listed in [16] (the ones we searched for in our tests; note that this reference only computed what we call photon sphere modes).
Recall that to compute β defined in (2.13) we need to determine the spectral gap α. To determine α we need to find the slowest decaying quasinormal mode, i.e. the one with the smallest value of −Im(ω). We will now discuss which of the photon sphere modes has the smallest value of −Im(ω). There are two types of photon sphere modes: one corresponding to Φ − and another to Φ + . Our numerical results indicate that, for each type, the lowest ℓ and n modes dominate. Therefore the slowest decaying photon sphere mode must be one of the following (with n = 0): (1) Φ − with ℓ = 2, or (2) Φ + with ℓ = 1.
Which of these two modes decays most slowly? For most of the black hole parameter space we find that the Φ + modes with ℓ = 1 decay most slowly. To illustrate this, in the left panel of Fig. 3 we plot −Im(ω)/κ − vs Q/Q ext at fixed y + for Φ + modes with ℓ = 1 (and n = 0) and Φ − modes with ℓ = 2 (and n = 0). We see that Φ + modes with ℓ = 1 typically have lower −Im(ω)/κ − (for fixed background parameters) than Φ − modes with ℓ = 2. However, there are small islands in the parameter space where the opposite occurs: see the curve y + = 0.1 (red disks/circles) for Q/Q ext ≳ 0.9. A similar conclusion is reached from the right panel of Fig. 3. Here we plot the same modes, but this time for RNdS with fixed Q and varying y + . We see that typically the Φ + , ℓ = 1 modes dominate over the Φ − , ℓ = 2 modes. However, for small y + there is a crossover and the ℓ = 2 modes become dominant.
These crossovers will not be a problem for our purposes. For each RNdS black hole we will compute numerically the two types (Φ ± ) of photon sphere quasinormal mode and then pick the one with lowest −Im(ω). This can then be compared with the results from the other families (dS and near-extremal) of quasinormal modes in order to calculate the spectral gap.
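In code, this bookkeeping is just a minimum over the computed fundamental modes. A minimal sketch, with hypothetical placeholder frequencies and surface gravity (not data from the paper):

```python
import numpy as np

# hypothetical fundamental (n = 0) frequencies, in units of 1/r_c, for one
# RNdS background; placeholder values only
modes = {
    ("photon sphere", "Phi-", 2): 0.45 - 0.21j,
    ("photon sphere", "Phi+", 1): 0.30 - 0.19j,
    ("de Sitter",     "Phi+", 1): -2.00j,
    ("near-extremal", "Phi-", 2): -0.35j,
}
kappa_minus = 0.25  # hypothetical Cauchy-horizon surface gravity (units of 1/r_c)

alpha = min(-w.imag for w in modes.values())        # spectral gap: slowest decay rate
beta = alpha / kappa_minus                          # eq. (2.13)
dominant = min(modes, key=lambda k: -modes[k].imag) # slowest decaying mode
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}, dominant mode: {dominant}")
```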
de Sitter family of modes
In the de Sitter limit, M = 0, Q = 0, the master equations for Φ + and Φ − are the same. To find the spectrum, we just need to take (4.2) or (4.10) and set M = 0, Q = 0. Using the radial coordinate (4.8), this yields the master equation (5.12), where we have introduced the dimensionless frequency ω̃ = ω r c (with r c = L for the dS solution). Note that ℓ = 1, 2, 3, . . . for electromagnetic modes Φ + and ℓ = 2, 3, 4, . . . for gravitational modes Φ − . The general solution of (5.12) is (5.13), for arbitrary amplitudes A and B, with 2 F 1 (a, b, c; z) being the Gaussian hypergeometric function. At the origin this solution behaves as Φ ± | y=0 ≈ A y^{ℓ+1} + B y^{−ℓ}, and regularity at y = 0 thus requires that we set B = 0. On the other hand, a Taylor expansion about the cosmological horizon y = 1 yields the two branches (1 − y)^{±iω̃/2} (5.14). Requiring outgoing boundary conditions demands that we discard the (1 − y)^{iω̃/2} solution. This can be done using the property Γ(−n) = ∞, n ∈ N 0 , i.e. requiring that a Gamma function in the hypergeometric connection coefficient sits at a pole, which quantizes the spectrum as (5.15), with ℓ = 1, 2, 3, · · · for Φ + modes and ℓ = 2, 3, · · · for Φ − modes. So far we have restricted our attention to the dS limit (M = 0 = Q) of the RNdS solution. Naturally, RNdS has quasinormal modes Φ ± that in the dS limit reduce to (5.15). These are what we call the dS family of RNdS quasinormal modes. Numerically we find that these modes have purely imaginary frequencies and that their wavefunctions are localized near the cosmological horizon.

Fig. 4 shows some numerical results for the dS family of modes. For concreteness we do this illustration for modes Φ − with {ℓ, n} = {2, 0}. In the left panel we fix Q/Q ext and we plot the imaginary part of the frequency Im(ω r c ) as a function of the dimensionless ratio y + = r + /r c . By definition, dS quasinormal frequencies must approach (5.15) as y + → 0, and this is indeed the case (see the red diamond). Note that the frequency changes substantially with y + . However, if we instead fix y + and vary Q, then we find that the frequency does not change much as Q/Q ext increases from 0 up to 1. This is illustrated in the right panel of Fig. 4, which shows the imaginary part of the frequency as a function of Q/Q ext for fixed y + = 0.01 (green squares), y + = 0.05 (brown diamonds) and y + = 0.1 (black disks). This is similar to what was found for massless scalar field quasinormal modes in Ref. [19]. In particular, note that the result (5.15) works well for any small (y + ≪ 1) black hole, independently of Q.

Figure 5. Left panel: de Sitter gravitoelectromagnetic mode Φ − with ℓ = 2 and n = 0: −Im(ω)/κ − as a function of Q/Q ext for fixed y + = 0.01 (green squares), y + = 0.05 (brown diamonds) and y + = 0.1 (black disks). Right panel: the ratio between the frequency Im(ω dS ) of the de Sitter mode of the left panel with y + = 0.01 and the imaginary part of the geometric optics WKB frequency prediction (5.11) for the photon sphere modes of the same black holes. We see that, for a small black hole, −Im(ω dS ) is smaller than −Im(ω WKB ) for the full range of Q/Q ext .
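A hedged reconstruction of the quantized spectrum (5.15), assuming the solution branches described above (the regular branch y^{ℓ+1} at the origin and the outgoing branch (1 − y)^{−iω̃/2} at y = 1):

```latex
% Assumed reconstruction of (5.15). It reproduces the quoted limits
% Im(\omega r_c) = -2 for {\Phi_+,\,\ell=1,\,n=0} and -3 for {\Phi_-,\,\ell=2,\,n=0}.
\omega\, r_c = -\, i \left( \ell + 1 + n \right), \qquad n \in \mathbb{N}_0 ,
```

with ℓ = 1, 2, 3, … for Φ + and ℓ = 2, 3, … for Φ − .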
Ultimately we will be interested in the ratio −Im(ω)/κ − . In the left panel of Fig. 5, we plot this quantity for the modes displayed in the right panel of Fig. 4. This plot illustrates that for the dS family, −Im(ω)/κ − can attain large values well above 1/2 or 1. The reason we choose to display data with small y + is because this is the region where the slowest decaying quasinormal modes belong to the dS family (as will be clear later, in Fig. 8).
For a small black hole, we can compare our analytical formula (5.15) for the slowest decaying (ℓ = 1, n = 0) de Sitter modes with our WKB prediction (5.11) for the photon sphere modes. The latter is strictly valid for ℓ ≫ 1, but we found it worked well even for small ℓ. We find that in this small black hole limit the de Sitter modes always decay more slowly than the WKB prediction for the photon sphere modes. This is illustrated in the right panel of Fig. 5 for the black hole family with y + = 0.01 (the same green square solutions shown in the left panel of the same figure). Thus for small black holes the ℓ = 1, n = 0 de Sitter mode is the slowest decaying mode belonging to either the de Sitter or photon sphere families.
Near-extremal family of modes and its near-horizon limit
The third family of quasinormal modes for RNdS black holes is called the near-extremal family since these modes are continuously connected to modes that can be identified in the near-horizon limit of the (near-)extremal RNdS solution, i.e. as r − → r + . The analytical analysis of the near-extremal modes of this subsection (and the near-Nariai modes of the next one) is very much inspired by ideas from Appendix A of [45] and [46,47]. This family of near-extremal modes is also present in the case of massless scalar field perturbations of a RNdS black hole [19].
In this subsection we will first perform an approximate analytical calculation of the near-extremal quasinormal modes using the near-horizon limit. We will then compare this to numerical results for these modes.
It is convenient to define a dimensionless near-extremality parameter σ ≥ 0, which vanishes at extremality, together with a rescaled radial coordinate x. The idea is to use the manifest SL(2, R) symmetry of the AdS 2 × S 2 near-horizon geometry of an extremal RNdS black hole to simplify our calculation. The modes we seek are, in the near-extremal limit, supported near the black hole horizon. So the limit we want to take has to accomplish two things: approach extremality, and zoom in near the black hole horizon. This can be achieved by sending σ → 0 while keeping z = x/σ fixed. We can anticipate that ω will vanish linearly with σ, so we define ω r c = δω σ and solve for δω in what follows.
We expand (4.2) − or (4.10), since the vector-type and scalar-type modes are isospectral − to leading order in σ. The resulting equation takes the simple form (5.18), written in terms of a frequency parameter φ̂ = y + Ξ δω and of constants η̂ ± defined in (5.19). Note that φ̂ depends on δω, but η̂ ± does not. The expression for η̂ ± is easily shown to be real, and it is then manifestly positive. This will play an important role in what follows. Equation (5.18) can be readily solved in terms of Gaussian hypergeometric functions 2 F 1 via the combination (5.20), where Ĉ (1) ± and Ĉ (2) ± are integration constants to be fixed via boundary conditions. We want to impose ingoing boundary conditions at the event horizon, i.e. regularity in ingoing Eddington-Finkelstein coordinates. This is equivalent to setting Ĉ (2) ± = 0. Next we need to impose a boundary condition at large −z. In principle this should be done by matching to a solution that is outgoing at the cosmological horizon. But we will follow the simpler approach of demanding that the solution vanishes at large −z. This can be motivated by the observation that near-extremal modes are highly localized near the event horizon and are therefore very small at large −z. Ultimately the justification for this boundary condition is that it gives quasinormal frequencies that match our numerical results very well.
At large negative values of z, we obtain the expansion (5.22). This expansion diverges at large values of −z because of a term proportional to a growing power of −z. This can be avoided if we require one of the Gamma functions in the denominator to sit at a pole, which occurs for Γ(−n), with n ∈ N 0 = {0, 1, 2, . . .}. In particular, we quantize the frequency by demanding b (2) ± = −n, with n ∈ N 0 . This condition can be readily solved for δω, and hence for ω, giving (5.25), which simplifies considerably when written in terms of κ − as (5.26), where η̂ ± is defined in (5.19). Note that these quasinormal frequencies are purely imaginary and that they all have −Im(ω)/κ − > 1/2. Which of these modes decays most slowly?
The imaginary part of the frequency increases with overtone number n, so consider the fundamental (n = 0) modes. For given ℓ, we have η̂ − < η̂ + , so the Φ − modes decay more slowly than the Φ + modes. It can also be checked that, for any y + , η̂ ± is an increasing function of ℓ. It follows that the slowest decaying modes covered by the above analysis are either the Φ − modes with ℓ = 2 or the Φ + modes with ℓ = 1 (as there are no Φ − modes with ℓ = 1). It is easy to show from (5.26) that it is always the Φ − modes with ℓ = 2 which decay the most slowly. The above calculation is, at best, valid only in the near-extremal limit, σ ≪ 1, and for small frequencies, |ω r c | ≪ 1. In the derivation of (5.26) we used only the properties of the RNdS near-horizon geometry; no use of the full geometry or its far region was made. So we might question the validity of this approximation. To address this question, in Fig. 6 we compare (5.26) with the exact numerical data for the quasinormal mode family (with purely imaginary frequency) that we henceforth call the near-extremal modes. For illustrative purposes, we do this for the Φ − mode with ℓ = 2 and radial overtone n = 0. In the left panel of Fig. 6, we fix Q/Q ext = 0.999 and we plot −Im(ω)/κ − as a function of y + . Since we are very close to extremality we expect that (5.26) should be a good approximation. This is indeed what we find: the red dots representing the numerical data agree very well with the green curve corresponding to (5.26). On the other hand, as expected, the analytical approximation (5.26) becomes increasingly poor as we move away from extremality, i.e. as Q/Q ext moves further away from unity. This is illustrated in the right panel of Fig. 6, where we fix y + = 0.5 and see that the prediction (5.25) (green dashed curve) is an excellent approximation when Q ≈ Q ext but quickly becomes a bad approximation as Q decreases.
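A schematic form of the quantized spectrum (5.26), consistent with the properties just listed (purely imaginary frequencies, −Im(ω)/κ − > 1/2, monotonic growth with n and with η̂ ± ), is the familiar AdS 2 -type quantization; this is offered as a plausible reconstruction, not a verbatim restoration:

```latex
% Assumed form of (5.26); \hat\eta_\pm is the real, positive constant
% defined in (5.19), which is not reproduced here.
\omega \simeq -\, i\, \kappa_- \left( n + \frac{1}{2} + \sqrt{\frac{1}{4} + \hat\eta_\pm}\, \right),
\qquad n \in \mathbb{N}_0 .
```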
The validity of the approximation that leads to (5.26) was also tested in the following way. The fact that we use just the near-horizon geometry to get (5.26) suggests that these quasinormal modes have to be localized near the event horizon and to decay away from it very quickly. Our numerical results confirm that this is the case: the numerical near-extremal mode wavefunctions are indeed localized near the event horizon, r = r + , becoming more localized as extremality is approached.

Figure 6. Near-extremal modes for the gravitoelectromagnetic mode Φ − with ℓ = 2 and n = 0. In both plots, the dashed green line is the analytical prediction (5.26), or (5.25), and the red dots are our numerical results. Left panel: −Im(ω)/κ − as a function of y + for near-extremal modes at fixed Q/Q ext = 0.999. The dashed blue curve is the WKB prediction (5.11) for the photon sphere modes (also with Q/Q ext = 0.999). This WKB blue curve continues to increase monotonically as y + decreases. Right panel: Im(ω r c ) as a function of Q/Q ext for near-extremal modes at fixed y + = 1/2. The dashed blue curve is again the WKB prediction (5.11) for the photon sphere modes. We see that for a wide range of charge Q the photon sphere modes decay more slowly than the near-extremal modes but, above a critical charge ratio of Q/Q ext ∼ 0.98, the opposite happens.
In summary, we find that the analytical prediction (5.25) works very well for near-extremal modes of near-extremal black holes. It is interesting to compare this analytical prediction, for the dominant Φ − , ℓ = 2 modes, to the extremal limit of our WKB prediction (5.11) for the photon sphere modes. This comparison is shown in the left panel of Fig. 6 for Q/Q ext = 0.999. If we go even closer to extremality then the blue curve moves to the right, and −Im(ω WKB )/κ − diverges in the extremal limit. Thus we see that, sufficiently close to extremality, the near-extremal modes always decay more slowly than the WKB prediction for the photon sphere modes. Thus, to the extent that the WKB prediction is valid at small ℓ (and, as we have seen, it seems to work well), our analytical results predict that, in a neighbourhood of extremality, the Φ − , ℓ = 2 near-extremal modes should be the slowest decaying modes belonging to either the near-extremal or photon sphere families. This is further illustrated in the right panel of Fig. 6, where we are at fixed y + = 0.5 and vary Q/Q ext : as we approach extremality, there is a critical value of the charge ratio above which the near-extremal modes indeed become more slowly decaying than the WKB photon sphere modes.
We can also compare the near-extremal family of modes to the de Sitter family. For the slowest decaying de Sitter modes, we see from Fig. 4 (right panel) that Im(ω r c ) does not vary much as we approach extremality. It follows that −Im(ω)/κ − diverges for the de Sitter modes as we approach extremality. This ratio remains finite for the near-extremal modes, hence the near-extremal modes decay more slowly than the de Sitter modes in a neighbourhood of extremality.
In summary, a combination of analytical and numerical calculations indicates that, in a neighbourhood of extremality, the slowest decaying quasinormal mode across all families is the Φ − near-extremal mode with ℓ = 2 and n = 0. Furthermore, we have an analytical prediction from (5.25) for the frequency of this mode. Hence (5.25) gives us an analytical prediction for the behaviour of β as we approach extremality. This is the green curve in the left panel of Fig. 6. We will discuss the implications of this below.
Nariai modes
RNdS black holes have three horizons, r − , r + and r c . In the previous subsection we considered the extremal limit where r − → r + . There is however another interesting limit − the Nariai limit − which occurs when r + → r c . The surface gravity remains non-zero in this limit. It is natural to wonder whether there is a fourth family of RNdS quasinormal modes that reduce to Nariai quasinormal modes in this limit.
For massless scalar field perturbations, the results of Ref. [19] suggest that these "Nariai modes" are a subset of the photon sphere modes, rather than constituting a distinct fourth family of modes. In the Appendix, we will show that this is indeed the case for gravitoelectromagnetic modes. Therefore we do not need to consider the Nariai modes as a distinct family.
Results
As explained above, for each type of perturbation (Φ + or Φ − ) we expect quasinormal modes to fall into three families (dS, photon sphere and near-extremal). Furthermore, from the discussion above, we expect the slowest decaying quasinormal modes for each family and each type of perturbation to be given by the modes with the lowest allowed value of ℓ for that type of perturbation (this will be illustrated later in Table 1 for a particular black hole). Therefore our numerical calculations of quasinormal modes have focused on the two gravitoelectromagnetic sectors {Φ − , ℓ = 2} and {Φ + , ℓ = 1}, since other sectors are expected to give more rapidly decaying modes.
As an example of how we classify the quasinormal modes emerging from our numerical calculations, we will consider the family of "lukewarm" RNdS black holes [48,49]. This is the 1-parameter subfamily of RNdS black holes that are in thermal equilibrium, since the temperatures of the event and cosmological horizons are the same, i.e. κ + = κ c . It turns out that this is equivalent to M = |Q| [48]. For a lukewarm hole,

Q/Q ext = [1/(1 + y + )] √[(3 y +² + 2 y + + 1)/(1 + 2 y + )] , (6.1)

with Q/Q ext = 1/√2 ≈ 0.707 for y + = 1 and Q/Q ext = 1 for y + = 0. We have discretized the lukewarm RNdS family with a numerical grid of 100 points for 0 ≤ y + ≤ 1, and we searched for the full spectra of frequencies by solving each of the two relevant perturbation equations as a quadratic eigenvalue problem in ω. To evaluate the numerical convergence of our results we then took the frequency spectrum of each lukewarm solution and inserted it as a seed in a Newton-Raphson code, and we progressively increased the number of grid points along the radial direction 0 ≤ y ≤ 1 − see (4.8) − until we reached the desired accuracy for the quasinormal frequency.

Figure 7. Results for the Φ − quasinormal modes with ℓ = 2 for lukewarm RNdS black holes.
Left panel: the filled marks identify the fundamental (n = 0) modes of the three families, namely photon sphere (black disks), near-extremal (red diamonds), and de Sitter (blue squares). The black circles represent the next 15 photon sphere overtones (n = 1, · · · , 15) and the 16 blue dotted lines represent the WKB approximation Im(ω WKB )(n), n = 0, · · · , 15, for the photon sphere modes (valid for ℓ ≫ 1). The red diamond (in the de Sitter curve) represents the n = 0 pure de Sitter frequency Im(ω r c )| dS = −3. The green triangle (in the near-extremal curve) represents the n = 0 analytical approximation Im(ω r c )| NE = −2 in the limit where Q = Q ext , which for lukewarm RNdS occurs when y + → 0. Right panel: the three families of fundamental (n = 0) quasinormal modes. Here we plot −Im(ω)/κ − against Q/Q ext . The colour code is the same as for the left panel.
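The quadratic eigenvalue problems described above can be linearized into a generalized linear eigenvalue problem of twice the size. Below is a minimal, generic sketch using the standard companion linearization; the matrices A, B, C stand for the discretized operator blocks, which are not reproduced from the paper:

```python
import numpy as np
from scipy.linalg import eig

def qnm_frequencies(A, B, C):
    """All eigenvalues w of the quadratic pencil (A + w B + w^2 C) u = 0,
    obtained by companion linearization to a generalized eigenproblem."""
    N = A.shape[0]
    I, Z = np.eye(N), np.zeros((N, N))
    L1 = np.block([[Z, I], [-A, -B]])   # acts on the stacked vector (u, w u)
    L2 = np.block([[I, Z], [Z, C]])
    w, _ = eig(L1, L2)
    return w

# toy usage with random matrices standing in for the discretized operators
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
print(np.sort_complex(qnm_frequencies(A, B, C)))
```

Defining v = w u, the second block row of L1 x = w L2 x reads −A u − B v = w C v, which is exactly (A + w B + w² C) u = 0.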
As an example, in the left panel of Fig. 7 we give our results for the imaginary part of the frequency for the Φ − modes with ℓ = 2. The black disks are the fundamental (n = 0) photon sphere modes. This identification emerges from the fact that they match the geometric optics/WKB approximation (5.11) for Im(ω WKB ) (blue dotted line). These modes also have Re(ω r c ) ≠ 0, which distinguishes them from the purely imaginary dS and near-extremal modes. In the same figure, below this n = 0 photon sphere curve, we identify a total of 15 more curves with black circles. From the left/top to the right/bottom these are the photon sphere overtones n = 1, 2, · · · , 15. This identification follows from: 1) the fact that they match the geometric optics/WKB approximation (5.11) (see the associated 15 blue dashed curves; it is remarkable that the ℓ ≫ 1 approximation (5.11) is so accurate for ℓ = 2), and 2) the fact that the number of radial zeros in the real and imaginary parts of the associated eigenvectors increases by one unit as n increases by one unit. For clarity of presentation we decided not to plot the photon sphere modes with n ≥ 16. From the figure the reader can however see that these accumulate on the right side of the plot (without increasing the resolution beyond the value required for the desired accuracy of the leading overtones, we were able to capture the first ∼40 photon sphere overtones). Also on the left panel of Fig. 7 we see a line of red diamonds. This is the fundamental (n = 0) near-extremal mode of the lukewarm RNdS family; the higher, n ≥ 1, near-extremal overtones have lower Im(ω r c ) and are not shown. This identification emerges from the fact that: 1) these frequencies are purely imaginary, 2) they converge to Im(ω r c )| NE = −2 in the lukewarm extremal limit y + → 0 (see the green triangle), as dictated by the analytical analysis (5.26), and 3) the eigenvectors of these modes (real functions) are strongly localized near the event horizon.
Also on the left panel of Fig. 7 there is a curve of blue squares. This is the n = 0 de Sitter family of modes because: 1) these modes are purely imaginary, 2) they converge to Im(ω r c )| dS = −3 as y + → 0 (see the red diamond), in agreement with the analytical analysis (5.15), and 3) the eigenvectors of these modes (real functions) are strongly localized near the cosmological horizon; the higher, n ≥ 1, de Sitter overtones have lower Im(ω r c ) and are not shown. To conclude our analysis of the left panel of Fig. 7: the numerical solution of the quadratic eigenvalue problem gives the full spectrum of eigenfrequencies and associated eigenvectors, and we have identified each family of modes that appears in the spectrum using the information discussed in section 5. All the numerical data fit into one of the three classes of modes (de Sitter, photon sphere or near-extremal). Still within the lukewarm family of RNdS, we did a similar analysis for the other relevant sector of perturbations, {Φ + , ℓ = 1}, with similar results.
Recall that we are studying the quasinormal spectra of RNdS to find the spectral gap α in order to calculate β defined by (2.13). To calculate α we need to determine the slowest decaying quasinormal mode across the two types of perturbation (i.e. Φ + and Φ − ) in all three families of quasinormal modes. We can illustrate this with the lukewarm family of RNdS black holes. Focus first on the sector of perturbations {Φ − , ℓ = 2} already studied in the left panel of Fig. 7. Clearly, for our purposes, it is enough to compare the leading (n = 0) overtone −Im(ω)/κ − for the three families of modes. This is done in the right panel of Fig. 7. We see that for lukewarm black holes in the {Φ − , ℓ = 2} sector, photon sphere modes have the lowest −Im(ω)/κ − for Q/Q ext ≲ 0.955. However, for 0.955 ≲ Q/Q ext ≤ 1 the slowest decaying modes are the near-extremal ones. The de Sitter modes are irrelevant for the spectral gap discussion of lukewarm black holes. This analysis still does not identify β for the lukewarm family. For that, we have to repeat the analogue of the right panel of Fig. 7 for the other sector {Φ + , ℓ = 1} of perturbations; β is then the minimum of −Im(ω)/κ − over the two sectors of quasinormal modes.
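A quick numerical check of the lukewarm locus (6.1), as reconstructed above, confirming the quoted endpoints Q/Q ext = 1/√2 at y + = 1 and Q/Q ext = 1 at y + = 0:

```python
import numpy as np

def lukewarm_Q_ratio(y):
    """Q/Q_ext along the lukewarm (kappa_+ = kappa_c) family, eq. (6.1)
    as reconstructed above."""
    return np.sqrt((3*y**2 + 2*y + 1)/(1 + 2*y))/(1 + y)

print(lukewarm_Q_ratio(1.0))   # 0.7071... = 1/sqrt(2)
print(lukewarm_Q_ratio(0.0))   # 1.0
```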
Moving away from the lukewarm family, we will now describe our results for the full moduli space of RNdS black holes. We have spanned the full parameter space 0 ≤ y + ≤ 1 and 0 ≤ Q/Q ext ≤ 1 using a numerical grid with 100 points along y + and another 100 points along Q/Q ext . That is to say, we have computed the {Φ + , ℓ = 1} and {Φ − , ℓ = 2} quasinormal modes for a total of 10 4 RNdS black holes. Where necessary we further zoomed in on particular regions of parameter space, e.g. near Q/Q ext ∼ 1 and/or y + ∼ 0 or y + ∼ 1. Again, all the numerical modes were identified as belonging to one of the three families of modes (de Sitter, photon sphere or near-extremal). It is in this sense that we are confident that, for each of the 10 4 RNdS black holes that we studied, the frequency spectrum of quasinormal modes belongs to one of the three families discussed in section 5 and that no fourth family exists.

Figure 8 (caption, in part). The discontinuities in the derivatives of these curves occur across the boundaries of the different regions A, B, C. Note that β > 2 sufficiently close to extremality, and large near-extremal black holes can have arbitrarily large β. The black curve is the analytical prediction for near-extremal modes and the black disks correspond to lukewarm black holes.
Our main results for the spectral gap are presented in Fig. 8. In the left panel we show a density plot of β = α/κ − as a function of the horizon ratio y + = r + /r c and the charge ratio Q/Q ext . We identify three regions A, B, C separated by three black curves. In region A the spectral gap is dominated by the de Sitter modes; that is, in this region the slowest decaying quasinormal mode is a de Sitter mode. Region A extends all the way down to Q → 0, i.e. de Sitter modes dominate the region of parameter space described by very small values of y + . In region B it is the photon sphere modes that dominate. Finally, in region C, i.e. in a band of parameter space around extremality, Q/Q ext ∼ 1, it is the near-extremal modes that dominate.
The left panel of Fig. 8 also shows a red dashed curve. This curve identifies solutions with β = 1/2 and, above it, we have a region of parameter space near extremality where the solutions have β > 1/2 (see also the density plot legend). It follows from the discussion of section 3.3 that, in this region, the Christodoulou version of strong cosmic censorship is violated (for smooth initial data) by gravitoelectromagnetic perturbations. These results are similar to the results for massless scalar field perturbations presented in Ref. [19]. However, there is an important qualitative difference between our results and the results for massless scalar field perturbations. In the massless scalar case one always has β < 1 [19]. But in our case we can have β > 1. This is apparent in the right panel of Fig. 8, which plots β against y + for different values of Q/Q ext . The black curve corresponds to the analytical prediction (5.26) for near-extremal modes Φ − with n = 0 and ℓ = 2. From the discussion at the end of section 5.3 we expect this analytical prediction to be reliable as we approach extremality. Our plot shows that this analytical result does indeed give an accurate prediction for the value of β close to extremality. From the plot we see that not only do near-extremal black holes have β > 1/2, but in fact they have β > 2, which implies (section 3.3) that the C 2 version of strong cosmic censorship is violated for smooth initial data. In fact, for any r, by taking y + large enough we can find a near-extremal black hole for which β > r (the appropriate value of y + can be determined from (5.26)). Hence, for any r, the C r version of strong cosmic censorship is violated for smooth initial data. Finally, in Table 1 we present detailed numerical results for a particular near-extremal black hole which violates the C 2 version of strong cosmic censorship. For this particular example we have computed not just the {Φ + , ℓ = 1} and the {Φ − , ℓ = 2} modes but also the {Φ + , ℓ = 2} modes and both types of mode with ℓ = 3, 4. From the table we see that the slowest decaying mode for this particular black hole is the {Φ − , ℓ = 2} mode, in agreement with the discussion at the end of section 5.3. This black hole has κ − r c = 0.098005 and so β = 0.200374/0.098005 = 2.04.
Taking the rough with the smooth
We have reviewed the reason why quasinormal modes determine the behaviour at the Cauchy horizon of linear perturbations arising from smooth initial data. By calculating the gravitoelectromagnetic quasinormal modes of RNdS black holes we have shown that the Christodoulou and C 2 formulations of strong cosmic censorship are always violated close to extremality and that, for any r, the C r formulation is violated close to extremality for a sufficiently large black hole. Thus gravitoelectromagnetic perturbations exhibit a much worse violation of strong cosmic censorship than the massless scalar field perturbations considered in Ref. [19].
We emphasize that this violation of strong cosmic censorship in Einstein-Maxwell theory does not occur in pure Einstein gravity. Ref. [23] showed that any non-extremal Kerr-dS black hole has slowly decaying photon sphere gravitational quasinormal modes which ensure that the Christodoulou version of strong cosmic censorship is respected for smooth initial data.
As we have discussed above, Dafermos and Shlapentokh-Rothman (DSR) have shown that one can rescue strong cosmic censorship for RNdS black holes at the expense of considering rough initial data [25]. We have explained how a lack of smoothness of the initial data is also required to make sense of the older argument of Ref. [18] in favour of strong cosmic censorship.
What are we to make of this? Should we allow rough initial data? In physics we often assume that it is sufficient to work with smooth initial data. However, in some theories, smooth initial data can lead to a rough solution. For example, a shock can form in a compressible perfect fluid. Once we accept the existence of shocks, it is natural to weaken the regularity of our initial data to allow for shocks present initially. So for a fluid it is natural to allow rough initial data. However, in Einstein-Maxwell (-scalar field) theory, if we start with smooth initial data then the solution will remain smooth throughout the domain of dependence of this data. Shocks do not form dynamically. So we are not forced to consider rough initial data.
On the other hand, rough initial data can be approximated by a sequence of smooth initial data labelled by an integer n, and all with the same energy as the rough data. The sequence of smooth solutions arising from such data will be close to the rough solution in a region of spacetime that becomes larger as n → ∞, and approaches the Cauchy horizon in this limit (this follows from Cauchy stability of the equations of motion). DSR's rough version of strong cosmic censorship indicates that one can find a sequence such that the energy at the Cauchy horizon diverges as n → ∞. Hence, even for smooth perturbations, the energy at the Cauchy horizon is not bounded by the initial energy. Even if the energy of a smooth perturbation does not diverge at the Cauchy horizon, it can still become arbitrarily large there.
One might, for some reason, want the initial data not just to have finite energy but also to have square integrable first k derivatives, i.e. finite H k norm. For example, such a condition might arise from the requirement that the leading higher derivative corrections to the equations of motion are negligible initially. DSR's rough version of strong cosmic censorship implies that there exist smooth initial data whose H k norm on a spacelike surface intersecting the Cauchy horizon is not bounded by the H k norm of the initial data. This suggests that generic smooth initial data for which the leading higher derivative corrections are negligible will give a solution for which the leading higher derivative corrections become large near the Cauchy horizon. This does seem to capture the physics of the strong cosmic censorship hypothesis, namely that there is always a breakdown of effective field theory at a Cauchy horizon.
Comments on quantum effects
The analysis of this paper has been entirely classical. In this section we will discuss the role of Hawking radiation [50] in enforcing strong cosmic censorship. Recall that the behaviour at the Cauchy horizon is determined by the late-time behaviour of the black hole solution. So we need to discuss the effects of Hawking radiation on this late time behaviour. In de Sitter spacetime, we have to account for Hawking radiation both from the black hole horizon and from the cosmological horizon [51].
Consider first pure Einstein-Maxwell theory. In this case there are no charged particles and so Hawking radiation cannot change the charge of the black hole. If the black hole has a higher temperature than the cosmological horizon then it will radiate photons and gravitons and its temperature will decrease. If it has a lower temperature than the cosmological horizon then it will absorb photons and gravitons emitted by the cosmological horizon and the black hole temperature will increase. Thus Hawking radiation will drive the black hole towards a lukewarm solution for which the black hole and the cosmological horizon have equal temperatures, i.e. κ + = κ c [48].
We can approximate the late time solution as a (slightly perturbed) lukewarm solution and the behaviour near the Cauchy horizon will be determined by the behaviour near the Cauchy horizon of a lukewarm black hole. Fig. 8 (right panel) shows that small lukewarm black holes have 1/2 < β < 2 and so (in pure Einstein-Maxwell theory) they violate the Christodoulou formulation of strong cosmic censorship (for smooth initial data) but not the C 2 formulation. Thus it appears that Hawking radiation does not rescue the Christodoulou version of strong cosmic censorship in pure Einstein-Maxwell theory.
However, there is another way in which quantum effects can influence the geometry, namely via vacuum polarization. At late time, one would expect the quantum state of fields outside the black hole to approach the Hartle-Hawking state in the lukewarm black hole background. In this state, the results of calculations in a 2d toy model [52] (with conformally coupled quantum fields) indicate that ⟨T µν ⟩ diverges at the Cauchy horizon. This divergence is proportional to (−V − ) −2 , which is not locally integrable at the Cauchy horizon; hence one cannot make sense of the semi-classical Einstein equation G µν = 8π ⟨T µν ⟩ there, even in the sense of weak solutions. This suggests that quantum effects may rescue strong cosmic censorship. It would be interesting to confirm this with a calculation of ⟨T µν ⟩ in the Hartle-Hawking state near the Cauchy horizon of a lukewarm black hole.
Of course, in the real world there exist charged particles e.g. electrons, that an electrically charged RNdS black hole can emit as Hawking radiation, and thereby decrease its charge. If the radiation of charged particles is rapid compared to the radiation of uncharged particles then the black hole will first lose most of its charge, and then evaporate away completely. If the radiation of charged particles is slow compared to the radiation of uncharged particles then the latter would tend to push the black hole onto the lukewarm family of solutions as above. The emission of charged particles would then cause the charge gradually to decrease whilst remaining within the lukewarm family. But ultimately the black hole would evaporate away completely. Note that this conclusion does not depend on the mass of the charged particles. This is because, unlike in flat spacetime, particles of any mass can escape the black hole by tunnelling through the potential barrier separating the event horizon from the cosmological horizon. In other words, the mass of the particle is redshifted away at the cosmological horizon.
It seems that Hawking radiation of charged particles will ensure that strong cosmic censorship is respected. However, one could also imagine a magnetically charged RNdS hole, perhaps formed by pair creation in de Sitter spacetime [48]. By performing an electromagnetic duality rotation, our results on gravitoelectromagnetic perturbations of electrically charged RNdS holes map to identical results for magnetically charged holes. If there are no magnetically charged particles then such black holes will evolve via Hawking radiation to lukewarm holes, which will behave as discussed above.
Appendix: Nariai limit of the quasinormal modes

The procedure described in section 5.3 also applies to the current Nariai analysis, as long as we make the identifications x → X and σ → δ in those formulas. We want modes that are regular at X = 0 (which corresponds to having outgoing boundary conditions at the cosmological horizon in the full geometry), and the condition that the solutions should decay at large X quantizes the frequencies. The latter condition is poorly motivated, but it gives results that agree well with our numerics.
The frequencies (A.2) of the near-Nariai modes have a real and an imaginary part. This analytical approximation is strictly valid in the near-Nariai limit, δ ≪ 1 (i.e. y + → 1), for Q ≪ Q ext and for small frequencies, |ω r c | ≪ 1. So what are these modes? Do they represent a fourth family of modes in RNdS?
To answer this question we attempted different strategies. In one of them, we fix the black hole parameters and the quantum number ℓ and solve the perturbation master equation as an eigenvalue problem to find the frequencies that are allowed in the background. After identifying the frequencies − including the first few overtones n ≥ 0 − that describe 1) the de Sitter, 2) the photon sphere and 3) the near-extremal modes, we do not find evidence of a new fourth family of modes. In a second approach, we use a Newton-Raphson algorithm in which we directly give (A.2) as a seed (in a region of parameter space, i.e. y + ∼ 1, where it is a good approximation). Again, this code does not converge to a new fourth family of modes. Instead, the Newton-Raphson code always converges to a family of modes that we have already identified as photon sphere modes. Moreover, this happens not only when we search with the leading radial overtone, n = 0, in the seed (A.2), but also for the first few other overtones that we attempted (n = 1, 2, 3).
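The seeding strategy just described can be sketched generically. Here D(ω) is a hypothetical complex "quantization function" whose zeros are the quasinormal frequencies (for instance, the determinant of the discretized boundary-value problem), and the analytical expression (A.2) would supply the initial guess w0:

```python
def polish_mode(D, w0, h=1e-8, tol=1e-12, maxit=50):
    """Newton-Raphson refinement of a quasinormal-frequency seed w0;
    dD/dw is approximated by a central finite difference."""
    w = w0
    for _ in range(maxit):
        dD = (D(w + h) - D(w - h)) / (2*h)
        step = D(w) / dD
        w = w - step
        if abs(step) < tol:
            break
    return w

# toy check: polish a crude seed towards a known root of D(w) = w^2 + 4i w - 3
# (roots at w = -i and w = -3i, standing in for purely imaginary modes)
root = polish_mode(lambda w: w**2 + 4j*w - 3, w0=-0.8j)
print(root)  # ~ -1j
```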
We consider that our experiments give good evidence to support the claim that there is no fourth family of quasinormal modes that can be associated with a Nariai origin. Instead, the Nariai frequencies simply give a good approximate description of photon sphere modes in the limit where y + → 1. These conclusions are best illustrated in Fig. 9. In the left panel, we fix y + = 0.99; the dashed orange curve describes the analytical Nariai prediction (A.2), whereas the dashed blue line is the WKB prediction (5.11) for the photon sphere modes. The black diamonds represent the outcome of our Newton-Raphson search when we give the Nariai frequency (A.2) as a seed. Both the near-Nariai and WKB photon sphere predictions agree very well with the numerical data although, as expected, the near-Nariai result works less well at large Q/Q ext . In the right panel of Fig. 9, we fix Q/Q ext and vary y + . Again, the black diamonds represent the outcome of our Newton-Raphson search when we give the Nariai frequency (A.2) as a seed. As expected, the near-Nariai prediction (A.2) is very good for 1 − y + ≪ 1 but quickly gets worse as y + decreases. The black diamonds turn out to be exactly the photon sphere modes that we had already found in an independent analysis. This is confirmed by the agreement with the WKB prediction (5.11). The results presented in this plot are qualitatively the same for any other value of the charge ratio Q/Q ext (and we did a finely resolved search which spanned the full interval 0 < Q/Q ext < 1).

Figure 9. Photon sphere family of modes Φ − with ℓ = 2, n = 0 and the Nariai limit. In both plots, the dashed orange line refers to the Nariai analytical prediction (A.2) for −Im(ω Nariai )/κ − (with ℓ = 2, n = 0), while the dotted blue curve is the analytical photon sphere prediction (5.11) for −Im(ω WKB )/κ − (with n = 0). Left panel: −Im(ω)/κ − as a function of Q/Q ext at fixed y + = 0.99, i.e. r + = 0.99 r c . Right panel: −Im(ω)/κ − as a function of y + at fixed Q/Q ext = 0.4995. Note that, as expected, the Nariai analytical prediction is a good approximation only near y + ∼ 1. It seems to describe the y + ∼ 1 limit of the photon sphere modes (black diamonds and WKB dotted blue line).
Figure 10. Photon sphere family of modes Φ − with ℓ = 2, n = 0 and the Nariai limit for RNdS black holes with Q/Q ext = 0.0999. In both plots, the dashed orange line refers to the Nariai analytical prediction (A.2) for ℓ = 2, n = 0, while the dotted blue curve is the analytical geometric optics/WKB photon sphere prediction (5.11) for ℓ = 2, n = 0. Left panel: Im(ω r c ) as a function of y + close to the Nariai limit y + ∼ 1. Right panel: Re(ω r c ) as a function of y + close to the Nariai limit y + ∼ 1.

Analyses similar to the one displayed in Fig. 10 further reinforce our conclusion. In this figure, we take Q/Q ext = 0.0999 and we focus our attention on the interval 0.98 < y + < 1, i.e. very close to the Nariai limit y + → 1. We display the modes we obtain with a Newton-Raphson search when we give the analytical Nariai expression (A.2) as a seed. In the left panel we plot the imaginary part of the frequency, while in the right panel we plot the real part of the frequency. The left panel exemplifies again what we already know: as discussed in the previous cases, both the WKB expression (blue dotted curve) and the Nariai expression (orange dashed curve) give good approximations for Im(ω r c ) when y + ∼ 1 and, as we move away from the Nariai limit, the analytical expression (A.2) becomes a less good approximation. On the other hand, the right panel of Fig. 10 shows that the analytical Nariai expression (A.2) yields an approximation for the real part of the frequency that is actually even better than the WKB approximation (5.11), as long as 1 − y + ≪ 1. However, we would expect that including higher order terms in 1/ℓ would improve the accuracy of
the WKB prediction, which is already remarkably accurate given that we are working with ℓ = 2. To conclude, we have shown that the Nariai result (A.2) simply describes the photon sphere family of quasinormal modes in the y + → 1 limit. In the case of a massless scalar field perturbation of RNdS, the analysis of [19] reached the same conclusion.
"year": 2018,
"sha1": "08e33b5a834cdd88a0d057be56e5471bac334aeb",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP10(2018)001.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "08e33b5a834cdd88a0d057be56e5471bac334aeb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Policy-Driven Sustainable Saline Drainage Disposal and Forage Production in the Western San Joaquin Valley of California
Environmental policies to address water quality impairments in the San Joaquin River of California have focused on the reduction of salinity and selenium-contaminated subsurface agricultural drainage loads from westside sources. On 31 December 2019, all of the agricultural drainage from a 44,000 ha subarea on the western side of the San Joaquin River basin was curtailed. This policy requires the on-site disposal of all of the agricultural drainage water in perpetuity, except during flooding events, when emergency drainage to the River is sanctioned. The reuse of this saline agricultural drainage water to irrigate forage crops, such as 'Jose' tall wheatgrass and alfalfa, in a 2428 ha reuse facility provides an economic return on this pollutant disposal option. Irrigation with brackish water requires careful management to prevent salt accumulation in the crop root zone, which can impact forage yields. The objective of this study was to optimize the sustainability of this reuse facility by maximizing the evaporation potential while achieving cost recovery. This was achieved by assessing the spatial and temporal distribution of the root zone salinity in selected fields of 'Jose' tall wheatgrass and alfalfa in the drainage reuse facility, some of which have been irrigated with brackish subsurface drainage water for over fifteen years. Electromagnetic soil surveys using an EM-38 instrument were used to measure the spatial variability of the salinity in the soil profile. The tall wheatgrass fields were irrigated with higher salinity water (1.2–9.3 dS m−1) compared to the fields of alfalfa (0.5–6.5 dS m−1). Correspondingly, the soil salinity in the tall wheatgrass fields was higher (12.5–19.3 dS m−1) compared to the alfalfa fields (8.97–14.4 dS m−1) for the years 2016 and 2017. Better leaching of salts was observed in the fields with a subsurface drainage system installed (13–1 and 13–2). The depth-averaged root zone salinity data sets are being used for the calibration of the transient hydro-salinity computer model CSUID-1D (a one-dimensional version of the Colorado State University Irrigation Drainage Model). This user-friendly decision support tool currently provides a useful framework for the data collection needed to make credible, field-scale salinity budgets. In time, it will provide guidance for appropriate leaching requirements and potential blending decisions for sustainable forage production. This paper shows the tie between environmental drainage policy and the role of local governance in the development of sustainable irrigation practices, and how well-directed collaborative field research can guide future resource management.
Introduction
This Special Issue of Sustainability focuses on environmental policy and governance issues related to sustainable salinity management. In this paper, we strive to show how a unique selenium contamination problem impacting irrigated agriculture resulted in State and federal environmental policy that fundamentally changed irrigation management in the San Joaquin Valley of California. We describe the agricultural stakeholder response to the new policy and the development of a dedicated reuse facility that has given irrigators time to develop sustainable management practices while maintaining local governance. We describe a research project geared to improve and optimize these practices, which involves the use of electromagnetic instrument technology and associated techniques to map the salinity of selected forage fields, and we show how the data provided by these techniques can be interpreted and used to further develop and calibrate a vadose zone simulation model for future decision support. We conclude with our future goal of bridging the gap between complex sensing techniques and sensor-informed transient model application on the one hand, and, on the other, the irrigator who relies primarily on his or her experience to gain knowledge. Our aim is to show the connectivity between environmental protection and irrigation sustainability policy and irrigation practice, and how research can provide decision support and lead to better resource management outcomes.
Background
Worldwide, it is estimated that 20% of the total farmlands and 33% of the irrigated lands are affected by soil salinity, and that by 2050, half of the farmlands will be salinized [1]. The western San Joaquin Valley (SJV) in the central part of California, USA, is a highly productive agricultural area affected by shallow water tables and soil salinity, as well as high concentrations of selenium and boron in subsurface tile drainage. Soil salinity arises in part due to the marine nature of the native soils and the importation of irrigation water from the Bay-Delta estuary, which contains salts [2,3]. A recent salinity assessment of the western SJV based on remote sensing data and analysis indicated that 0.32 million hectares of lands are salt-affected (i.e., soil electrical conductivity, EC e > 4 dS m −1 ) which represents 45% of the region [3,4]. The high agricultural productivity of the western SJV mostly stems from its practice of irrigation, and from the State's extensive network of canals, which convey irrigation water to its place of use. The SJV also relies on the winter snowpack of the eastern Sierra Nevada mountains, and a network of state, federal and locally owned reservoirs, to help overcome the impact of droughts and the regulation-induced water shortages that can reduce agricultural water supplies. The prospect of future climate change impacts, coupled with an increasing population, has created incentives for local water districts to look beyond traditional sources of water and consider supplies previously deemed too marginal or saline for their use as irrigation water.
Alfalfa is more sensitive to salinity than 'Jose' tall wheatgrass, but it produces higher yields and forage quality, and it is more profitable. Elevated yields were reported for improved varieties of alfalfa in a field study by [7] at soil salinities as high as 7 dS m−1 EC e , and more recently, ref. [9] evaluated different cultivars of alfalfa in a sand tank study and suggested that irrigation waters resulting in soil salinities of up to 6 dS m−1 EC e could be used throughout the production cycle without any significant yield loss. In a three-year field trial, ref. [10] reported an average yield loss of only 11% for 21 improved varieties of alfalfa irrigated with very high EC waters (8–10 dS m−1), which resulted in soil salinities of 10–15 dS m−1 EC e for the 0–150 cm soil depth in the last two years. Ref. [11] also found that alfalfa had much higher salt tolerance than previously established, based on the performance of three salt tolerant varieties grown in large pots and irrigated with saline water for 18 months. Likewise, the field study by [7] indicated that 'Jose' tall wheatgrass had a very high level of salinity tolerance: after five years of saline drainage water application and soil salinities reaching 18 dS m−1 EC e , the forage was still producing 6–7 metric t ha−1, albeit with a lower dry matter yield.
However, irrigation with saline waters, particularly those high in sodium, can negatively impact soil's physical and chemical properties, and crop yields. Careful management is therefore needed to minimize salt accumulation in the root zone and sustain forage production [2,5,12,13]. Drainage or well waters that are saline-sodic are more problematic due to the negative effect of sodium on soil structure and consequent reductions in water infiltration [14]. Being a conservative constituent, salts tend to accumulate in the crop root zone over time if the water supply is insufficient to provide adequate leaching, or if the drainage disposal is inadequate to provide the long-term removal of salts from the crop root zone. Ref. [12] developed a regional groundwater and hydro-salinity model to conduct long-term (57 year) simulations of soil salinity in western Fresno county in the SJV to replicate historic changes in soil salinity. The model showed that, although long-term irrigation helped to reduce root zone salinity across the study area throughout the second half of the 20th century, there were concerns for the continued leaching of dissolved salts and the salinization of deeper groundwater which could compromise the sustainability of irrigation practices that conjunctively use groundwater [12].
Vadose zone simulation models can help to minimize the salinity hazard in agricultural systems and reduce the environmental impact of salinity. Initial guidelines for managing saline irrigation waters were based on steady state analysis, which assumed that (a) irrigation water infiltrated at a constant rate, irrespective of the irrigation frequency, (b) evapotranspiration stayed constant over the growing season and (c) the salt concentration of the soil solution was constant at all times [15]. These steady-state models provide conservative estimates that over-predict the negative consequences of saline water irrigation and suggest higher leaching requirements than would be recommended using transient-state models [15][16][17]. Hence, a transient modeling approach was chosen for this study. However, this approach requires sufficient data to both calibrate and verify the model, and it also serves as a useful framework for experimental design and for the design of sensor networks to provide a complete set of essential model input data. Although the CSUID model selected for this study is not elaborated in this paper, it will serve as an essential tool for achieving the goal of SJRIP (San Joaquin River Improvement Project) system sustainability through the optimization of existing and future practices. One of the first steps towards estimating leaching requirements is to know the current state of salinity in the field. Soil salinity is a dynamic soil property that varies spatially as well as temporally. It is important to determine the spatial distribution of salts in the field in all three dimensions (across the field and downward in the profile). Information on the salt distribution in the soil profile can be used to determine whether irrigation volumes are appropriate, or to infer the net movement of salts in different parts of a field, which can be helpful in assessing the functionality of a subsurface drainage system (if one is installed).
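For orientation, the classical steady-state leaching requirement (the Rhoades formula used in FAO Irrigation and Drainage Paper 29) illustrates the kind of conservative estimate that transient models such as CSUID-1D are intended to refine. The crop threshold salinities below are illustrative textbook-style values, not measurements from this study:

```python
def leaching_requirement(ec_iw, ec_e_threshold):
    """Steady-state leaching requirement, LR = ECiw / (5*ECe - ECiw),
    with ECs in dS/m (Rhoades 1974; FAO 29). LR is the fraction of
    applied water that must pass below the root zone to hold soil
    salinity at the crop threshold ECe."""
    return ec_iw / (5.0 * ec_e_threshold - ec_iw)

# illustrative thresholds and irrigation-water salinities within the
# ranges quoted in the abstract; placeholder values only
for crop, ec_e, ec_iw in [("alfalfa", 2.0, 4.0), ("'Jose' tall wheatgrass", 7.5, 8.0)]:
    print(f"{crop}: LR ~ {leaching_requirement(ec_iw, ec_e):.2f}")
```

The very large LR this formula returns for alfalfa under moderately saline water is exactly the kind of over-prediction, relative to transient analyses, noted in the paragraph above.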
Environmental Policy to Control Salt and Selenium Pollutant Loading
The selenium ecotoxicity disaster at Kesterson Reservoir in 1985, which caused reproductive failure in overwintering waterfowl, became a landmark in time, signifying a change in attitude on the topic of agricultural return flows in California and throughout the USA (Quinn, 2020; [18][19][20]). Up until that time, the only major constituents of concern in agricultural return flows had been nitrate, because of its health impacts on newborn children and its potential for water body eutrophication, and salinity, for its slow erosion of crop yields when the applied water salinity exceeded a certain threshold. What followed was a comprehensive research effort led by the University of California and research divisions within State and Federal resource agencies, and the rapid closure of the Kesterson Reservoir and the San Luis Drain (the conveyance that supplied the drainage storage ponds with subsurface agricultural drainage water from a 2360 ha tile drained area within the Westlands Water District) [18]. This research tapped past and ongoing research in Australia, Egypt and Israel, and brought in collaborating scientists from these countries and around the world in search of solutions to this unique environmental crisis impacting California's agriculture. The State Water Resources Control Board (SWRCB), the regulatory agency responsible for water resource and water quality policy in the State of California, embarked on a comprehensive model development effort to guide control actions to minimize environmental contamination due to selenium, boron and salt loading from agriculture. The international literature on river basin water quality modeling yielded little in the way of software that could be directly applied to the San Joaquin River Basin (SJRB). Since the San Joaquin River was the main conduit for the export of these contaminants to the Sacramento-San Joaquin Delta and the San Francisco Bay, the SWRCB focused its effort on the San Joaquin River as a driver of environmental policy [21].
The San Joaquin River Input-Output model (SJRIO) [22] was the first attempt at developing water and contaminant mass balances in the Basin, and served as the conceptual basis for the San Joaquin Valley Drainage Program's final policy report, which attempted to lay out a balanced and equitable fifty-year plan providing for irrigation sustainability while protecting the fish and wildlife resources of the Basin and minimizing socioeconomic impacts [18,20]. Any significant loss of agriculture in the western San Joaquin Valley was predicted to have significant negative impacts on the disadvantaged communities on the westside of the SJV. This major five-year research and policy planning process was followed in 1996 by the Grassland Bypass Project [23], a negotiated pact between agricultural and environmental interests that was extended from 1996 to 2019 and provided a period of adjustment for agricultural entities to achieve zero selenium load discharge in the drainage return flows to the San Joaquin River [23]. This pact was followed by the U.S. Environmental Protection Agency (EPA) mandated salinity and boron Total Maximum Daily Load (TMDL) for the lower San Joaquin River Basin [21], a policy-directed action mandated under US federal law to address polluted and impaired public water bodies. The U.S. TMDL approach has been a particularly effective policy-driven tool for pollution abatement, although it has primarily been applied to pollutants and sectors other than salinity and agriculture.
Grassland Bypass Project
The Grassland Bypass Project [23] was conceived as a potential recipe for long-term irrigated agriculture sustainability in the Grasslands subarea of the San Joaquin River Basin, in response to the policy-driven moratorium on selenium-contaminated tile drainage export from the Westlands Water District, which threatened to curtail agricultural production. The 44,000-hectare Grasslands subarea (Figure 1) had a long history of drainage export to the San Joaquin River through approximately 160 km of earthen channels that ran through an area dedicated to seasonal waterfowl habitat: private duck clubs and State and Federal wildlife refuges. The approximately 160 private duck clubs and beef cattle operations had made use of the agricultural drainage return flows, especially in the seasonal wetland areas to the south of the city of Los Banos, oblivious to the potential hazards associated with selenium teratogenicity and bioaccumulation in invertebrates and other biota. While a replacement water supply was being negotiated with the US Bureau of Reclamation for the approximately 64,000 ha of combined seasonally managed wetlands within the Grasslands Ecological Area, agricultural entities looked at short-term solutions to the immediate plumbing problem of sustained drainage relief, and at longer-term solutions for sustainable irrigated agricultural production on some of the most fertile and productive agricultural soils in the San Joaquin Valley. After six years of negotiation, lasting from October 1990 until September 1996, the Grassland Bypass Project was finalized, allowing the agricultural entities temporary use of the northern 45 km portion of the federally-owned San Luis Drain in order to remove this selenium-contaminated drainage (water with greater than 2 ppb Se) from the wetland channels. The Use Agreement signed with the US Bureau of Reclamation [23] recognized the policy goal of the long-term reduction of selenium export to the San Joaquin River by mandating the eventual elimination of all selenium export to the River, except as a result of major precipitation events causing uncontrollable flooding. In response, the agricultural draining entities established a reuse facility on several hundred hectares of low-value, salt-impacted agricultural land, which has expanded over the past two decades to its current footprint of 2428 hectares. There was confidence that, during the term of the Use Agreement, affordable selenium treatment technologies would be developed that would allow environmentally safe selenium export to the San Joaquin River, meeting selenium concentration objectives (5 ppb) in the River and its west-side drainage tributaries [18].
In the USA in general, and California in particular, stakeholder and agency-initiated actions in response to major environmental policy mandates can take an inordinate amount of time, given the complex and sometimes contradictory mandates of existing environmental laws and regulations, and the desire for consensus. The process of setting appropriate water quality objectives, even for constituents as simple as salinity, requires hearings to sort through the relevant underlying science and the potential impacts for a myriad of stakeholder entities, and often lasts 5 to 10 years [21]. Although the Grassland Bypass Project took six years of negotiation to gain final approval, it is regarded as one of the most successful policy-driven consensus environmental planning projects in the Basin's history.
San Joaquin River Improvement Project (SJRIP)
The drainage reuse facility was appropriately named the San Joaquin River Improvement Project (SJRIP), and has been the recipient of significant State and federal grants over the past twenty years, which have allowed the acquisition of contiguous land from willing sellers. An internal policy agreement was struck with the agricultural water districts that only subsurface agricultural drainage could be exported to the facility, which helped to minimize the volume of drainage requiring disposal. The subsurface drainage exported from each water district was combined in a central drain and conveyed to the SJRIP facility [3,15]. All of the surface drainage return flows and operational tailwater spill were collected in tailwater sumps, where this water could be recycled locally on the same field or on-farm without co-mingling with subsurface drainage water. An innovative float-operated riser, colored blue, yellow and red from top to bottom, was devised for deployment in field drainage sumps. These risers protruded from the ground and were visible from the road to local passers-by. A sump riser showing red would indicate high water tables in a field, and likely poor water conservation practices, whereas a blue coloration would indicate relatively low water tables and less deep percolation. This and other internal policy directives allowed close to real-time control of selenium drainage export, and close to 100% compliance with both the monthly and annual selenium load export limits mandated by the oversight committee for the Grassland Bypass Project.
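The riser logic amounts to a simple threshold classification. A toy sketch in Python, with hypothetical depth thresholds (the actual calibration of the float is not reported in the sources cited here):

```python
def riser_color(depth_to_water_m):
    """Map the depth to the water table in a drainage sump to a riser color.

    Thresholds are hypothetical -- the project documentation cited in this
    paper does not report the actual calibration of the float.
    """
    if depth_to_water_m < 0.6:      # shallow water table: likely poor water conservation
        return "red"
    elif depth_to_water_m < 1.2:    # intermediate condition
        return "yellow"
    else:                           # deep water table: little deep percolation
        return "blue"

for d in (0.4, 0.9, 1.8):
    print(d, "m ->", riser_color(d))
```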
The Grassland Bypass Project [23] has been successful in meeting the program objectives and all selenium load targets, except during the first two years of the project, in 1997 and 1998, when two back-to-back El Niño years resulted in exceedances of the nine-year mean monthly selenium loads that were established for the first three years of the Project. By 2012, the project had reduced the drainage discharge to the river by 82%, and salt, boron and selenium loads by 84%, 72% and 92%, respectively, as compared to the discharge in 1995 [16]. Monitoring was largely focused on selenium loading at the Site B compliance monitoring site (Figure 1), and at Site A immediately downstream, where the selenium drainage entered the San Luis Drain. In addition, discrete and continuous monitoring of water quality, sediments and biota was conducted in the Grasslands watershed, both to track any secondary impacts of the Project and to ensure that Se-contaminated agricultural drainage remained excluded from wetland water supply delivery channels. Funding limitations, and general confidence in an eventual low-cost technological solution for selenium bioremediation and treatment, meant that the monitoring program was not extended to the SJRIP.
In recent years, evidence of declining crop yields in several alfalfa fields in the SJRIP has led to cropping changes, in which the more profitable alfalfa has been replaced with the less profitable, but more salt-tolerant, 'Jose' tall wheatgrass. Our study, described below, is the first comprehensive analysis in the SJRIP that attempts to address the salt mass balance on alfalfa and 'Jose' tall wheatgrass fields, as well as drainage reuse sustainability issues in the San Joaquin River Basin. The objectives were: (a) to collect essential irrigation water and soil data in selected forage fields in the SJRIP, (b) to assess the spatial variability of salinity in the soil profile, and (c) to assess the sustainability of this forage production system for saline drainage disposal. The study also introduced the application of a simplified one-dimensional salinity simulation model, based on the Colorado State University Irrigation and Drainage Model (CSUID-1D), primarily as an initial framework to inform data collection. This model can eventually be used to guide water supply blending, leaching requirements and drainage investment decisions. This decision support tool, with a simple graphical user interface, was designed for flexibility, in order to allow SJRIP facility personnel to fine-tune management practices for sustained forage yields under saline irrigation while achieving the prime purpose of the facility, which is drainage volume reduction and disposal.
Study Sites
The SJRIP (San Joaquin River Improvement Project) facility is located in western Fresno County, near the city of Firebaugh, California (USA) (Figure 1). It is bounded by the Delta Mendota Canal and the Central California Irrigation District's Main Canal to the south and north, respectively (Figure 1). The 2428-hectare facility is operated by the Panoche Water District (PWD) and provides drainage service to the Grasslands Drainage Area (GDA), located south of the city of Los Banos, between the San Joaquin River and Interstate 5. Less than 30% of the fields within the facility (i.e., 688 hectares) are installed with tile drains to help protect crops from waterlogging, and soils from accumulating salt through upward capillary flow. Over the past 20 years, several salt-tolerant crops have been cultivated within the SJRIP and irrigated with subsurface drainage that, in the case of several alfalfa fields, has been blended with pumped groundwater. The most successful crops have been 'Jose' tall wheatgrass, hereafter referred to as tall wheatgrass (TWG), and alfalfa hay (ALF), which now dominate the facility with 1518 hectares and 384 hectares, respectively. Most of the salt-tolerant crops are located on 1657 hectares referred to as SJRIP 1 (Figure 2). An additional 753 hectares, acquired in 2008, included 1478 acres (about 598 hectares) planted with salt-tolerant crops, referred to as SJRIP 2 in Figure 2. However, we will not use this terminology henceforth, but rather 'SJRIP', referring to the entire facility.
Selected Fields
Four forage fields were selected for the study. The fields were chosen based on the availability of historical data on irrigation diversions and the forage yield collected at each cutting. As shown in Figure 2, fields 13-2 and 13-6 were planted with alfalfa (ALF), and fields 13-1 and 10-6 were planted with 'Jose' tall wheatgrass (TWG). Fields 13-1 and 13-2 had subsurface drains, whereas 10-6 and 13-6 had no subsurface drainage system. TWG field 10-6 was one of the original fields in the SJRIP; its saline irrigation began in 2001. Fields 13-1, 13-2 and 13-6 were developed in 2004; thus, all of the fields in this study had been irrigated with saline drainage water, blended with less saline water in the case of the ALF fields, for more than twelve years. In the county soil survey, all four fields were mapped as clays, belonging to the Chateau (10-6), Deldota (13-1), Tranquillity and Deldota (13-2), and Tranquillity (13-6) soil series.
Irrigation Data
The salinity of the canal diversions into the four selected forage fields was measured using In-Situ electrical conductivity (EC) sondes, which were installed to provide hourly measurements of the salinity of the applied irrigation water (Figure 3) and of the depth of the water in the irrigation supply ditches. The water depth indicates that irrigation is most likely taking place when levels rise to the point where the siphon tubes that divert water into each field can be operated; it also provides a check on the accuracy of the written records provided by the water district. A limited number of grab samples were also collected for chemical composition, and were sent to the California Department of Water Resources' designated laboratory for the analysis of their chemical constituents. The samples were filtered through a 0.22 µm pore-size nylon filter (Fisherbrand 25 mm syringe filter; Fisher Scientific, Tustin, CA, USA) prior to chemical analysis, and the portion used for the analysis of Na+, Ca2+, Mg2+ and B was acid-fixed using 1 mL of 70% nitric acid. Chloride and SO42− were measured using a Dionex DX-500 ion chromatography instrument (IC; Sunnyvale, CA, USA) according to EPA method 300.0 [17]. Sodium, Ca2+, Mg2+ and B were measured using inductively coupled plasma atomic emission spectrometry (ICP-AES) according to EPA method 200.7.
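As an illustration of how the stage record can be screened for irrigation events, the sketch below flags hours when the ditch stage exceeds an assumed 0.3 m siphon-operation depth (the actual operating depth is not reported here) and summarizes each event's mean ECiw; all column names and values are hypothetical:

```python
import pandas as pd

# Hypothetical hourly sonde record for one supply ditch.
idx = pd.date_range("2017-06-01", periods=72, freq="h")
sonde = pd.DataFrame({
    "depth_m": [0.10] * 24 + [0.45] * 30 + [0.10] * 18,  # ditch stage (m)
    "ec_dS_m": [4.0] * 24 + [5.2] * 30 + [4.1] * 18,     # salinity of water
}, index=idx)

# Irrigation is assumed to occur when the stage is high enough
# to operate the siphon tubes (0.3 m is an assumed threshold).
sonde["irrigating"] = sonde["depth_m"] > 0.3

# Label contiguous irrigation periods and report each event's mean EC_iw.
event_id = (sonde["irrigating"] != sonde["irrigating"].shift()).cumsum()
events = (sonde[sonde["irrigating"]]
          .groupby(event_id)
          .agg(start=("ec_dS_m", lambda s: s.index[0]),
               hours=("ec_dS_m", "size"),
               mean_ec=("ec_dS_m", "mean")))
print(events)
```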
Soil Salinity Surveys Using the EM38-MK2
In this study, soil salinity surveys were performed to determine the levels and the spatial and temporal variability of the salinity in the four forage fields. The surveys were carried out with a Geonics Ltd. (Mississauga, ON, Canada) EM38-MK2 electromagnetic induction sensor. The electromagnetic induction (EM) technique behind this sensor has been widely employed by soil scientists to better understand the spatial variability of soil properties at the field and farm scales. It is a reliable, quick and easily mechanized technique for collecting salinity data, as compared to the more traditional sampling method using a hand auger. EM instruments have been used to map soil moisture content [24], soil texture [25], clay content [26] and soil salinity [27,28]. The EM38-MK2 sensor provides simultaneous measurements of the soil's apparent electrical conductivity (ECa) at two profile depths: 0.75 m and 1.5 m. The EM38-MK2 and a GPS unit (Trimble, Sunnyvale, CA, USA) were connected to the serial ports of an Allegro-CX portable field device (Juniper Systems; Logan, UT, USA) for downloading the EM and GPS measurements. Custom software for the Geonics EM38-MK2 was installed on the Allegro-CX to facilitate the data logging. The EM38-MK2 was mounted on a non-conductive PVC sled and dragged behind an all-terrain vehicle (ATV) to perform the salinity surveys (Figure 4). The GPS unit was placed on the ATV to record the geographical coordinates of the EM measurements.
Figure 4. General setup for the soil salinity surveys, using the GPS unit and the Geonics EM38-MK2 mounted on a PVC sled and dragged behind an ATV. The gantry that connected the ATV to the sled was made entirely of fiberglass, in order to avoid electromagnetic interference.

Salinity surveys were performed for each field during the spring and fall seasons of 2016 and 2017, following the methods described in [29,30]. Before beginning each survey, the EM38-MK2 was mounted at a height of approximately 1.5 m above the ground using a PVC stand, and was calibrated following the manufacturer's guidelines. The ATV was navigated along transects marked with flags placed 30 m apart. The speed of travel varied from 8 to 9.7 km h−1, and the average distances between two consecutive survey sites are given in Table 1. All salinity surveys were conducted 3 to 5 days after the irrigations were completed, when the soil moisture contents were close to field capacity. For each field, survey measurements were initiated 5 to 10 m into the field on all sides, to avoid any edge effects.

Table 1. EM38 soil survey information for surveys in the spring and fall of 2016 and 2017 in two tall wheatgrass (TWG) fields (10-6 and 13-1) and two alfalfa (ALF) fields (13-2 and 13-6). The fields were 35.6 ha (10-6), 70 acres (about 28 ha) (13-1), and 30 ha each (13-2 and 13-6).
Soil Sampling Locations (Ground-Truthing)
After completing the EM38 motorized surveys, the ESAP-RSSD program was used to determine the soil sampling locations, following a statistical sampling design that selects sites uniformly across the sample frequency distribution, based on the range and variability of the ECa data collected [31,32]. In our study, twelve sampling locations were selected across each surveyed field. Soil samples were collected either immediately after the soil surveys were completed or the next morning, in order to ensure that the soil conditions had not changed. Any dry or loose soil (if present at the soil surface) was removed, since its low moisture content meant it would not be reflected in the ECa measurements. At each sampling location, soil was taken in 30 cm depth increments across 0-120 cm with a hand auger. The soil samples were labeled and stored in zip-lock bags. For each survey, 48 samples were collected for lab analyses.
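ESAP-RSSD implements a model-based response-surface design; as a simplified stand-in that captures the idea of sampling uniformly across the signal's frequency distribution, the sketch below picks twelve survey points whose ECa values fall at evenly spaced quantiles (all data synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey: ECa readings (mS/m) with x, y coordinates (m).
n = 5000
eca = rng.lognormal(mean=4.0, sigma=0.5, size=n)
xy = rng.uniform(0, 600, size=(n, 2))

# Target twelve values spread uniformly over the ECa distribution,
# then take the survey point closest to each target value.
targets = np.quantile(eca, np.linspace(0.04, 0.96, 12))
sites = [int(np.argmin(np.abs(eca - t))) for t in targets]

for i in sites:
    print(f"site {i:5d}  ECa = {eca[i]:7.1f} mS/m  at {xy[i].round(1)}")
```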
Soil Analysis
One portion (50-70 g) of each ground-truth soil sample was dried in an oven at 105 °C for 3-4 days in order to calculate its gravimetric water content. The other portion was dried in a 55 °C oven and ground using a mechanical pulverizer to pass through a 2 mm sieve. Saturated soil pastes were prepared with deionized water using 200 g of the 55 °C dried soil, and were allowed to stand overnight prior to vacuum filtration [33]. The saturation percentage (SP) was calculated as the weight of the water required to saturate the soil divided by the weight of the dry soil used to prepare the saturation paste, with the decimal fraction converted to a percentage. The soil salinity (ECe) was measured from the paste extracts using an EC meter (Accumet Basic AB30 conductivity meter; Fisher Scientific, Leicestershire, England). The pH of the saturated soil paste extracts was measured using a pH/conductivity meter. In fall 2017, the ground-truth soil samples collected at the 0-30 cm depth were also analyzed for boron (B), calcium (Ca2+), magnesium (Mg2+), sodium (Na+), chloride (Cl−) and sulfate (SO42−). These additional analyses, performed on the saturated paste extracts of the samples collected in the top soil layer, provided a good representation of the important chemical properties in the crop root zone. Sodium adsorption ratios (SAR) were then calculated from the Ca2+, Mg2+ and Na+ values.
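The laboratory quantities above reduce to simple ratios. A brief sketch with illustrative numbers (the SAR formula is the standard definition, with concentrations in meq/L):

```python
import math

def gravimetric_water_content(wet_g, dry_g):
    """Field water content W_f on a dry-mass basis (g water / g dry soil)."""
    return (wet_g - dry_g) / dry_g

def saturation_percentage(water_added_g, dry_soil_g):
    """SP: water needed to saturate the soil, as % of dry soil mass."""
    return 100.0 * water_added_g / dry_soil_g

def sar(na, ca, mg):
    """Sodium adsorption ratio; na, ca, mg in meq/L (mmol_c/L)."""
    return na / math.sqrt((ca + mg) / 2.0)

# Illustrative numbers only, not measured values from this study.
print(f"W_f = {gravimetric_water_content(260.0, 200.0):.2f} g/g")
print(f"SP  = {saturation_percentage(110.0, 200.0):.0f} %")
print(f"SAR = {sar(na=40.0, ca=6.0, mg=8.0):.1f}")
```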
EC a to EC e Calibration and Spatial Maps
For each survey, the ESAP-Calibrate program was used to convert the ECa readings into ECe estimates using spatially referenced multiple linear regression models [31]. A DPPC (dual pathway parallel conductance) correlation analysis was performed, in which a set of ECa readings (referred to as CalcECa) was estimated from the measured salinity (ECe), SP and water content values using Rhoades' equation [31,34]. This analysis provided a theoretical value for the ECa reading at each sampling point, and served as a quality control check. Correlations between the calculated ECa, the measured ECa, ECe and the other soil variables collected were also computed. Finally, a spatially referenced regression model was generated to predict the logarithm of the salinity level (lnECe) at each sampling site and depth within the surveyed area.
Maps depicting the spatial distribution of the salts within each field were developed using ESRI's ArcGIS Pro 2.3.1. Maps were created for each sampled depth, as well as for the average salinity across the soil profile (0-120 cm), using satellite imagery as the base-maps. The Inverse Distance Weighting (IDW) technique, a deterministic geostatistical interpolation method provided by the Spatial Analyst toolbox within ArcMap, was used to interpolate the data. This method was selected instead of commonly used geostatistical methods such as kriging because of the high spatial resolution of the survey data collected. A fixed radius setting of 40 m was used to generate the interpolated data, with a minimum of 25 sample points. The output cell size that determined the map grid was 5 m.
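For readers without ArcGIS, a compact sketch of fixed-radius IDW onto a regular grid, using the settings reported above (40 m radius, at least 25 neighbours, 5 m cells); the ArcMap implementation differs in detail, e.g., in how it handles cells with too few neighbours:

```python
import numpy as np

def idw_grid(px, py, values, cell=5.0, radius=40.0, min_pts=25, power=2.0):
    """Inverse-distance-weighted interpolation onto a regular grid.

    Cells with fewer than min_pts points inside the search radius are
    left as NaN (ArcGIS would instead enlarge the search radius).
    """
    xs = np.arange(px.min(), px.max() + cell, cell)
    ys = np.arange(py.min(), py.max() + cell, cell)
    grid = np.full((ys.size, xs.size), np.nan)
    for i, gy in enumerate(ys):
        for j, gx in enumerate(xs):
            d = np.hypot(px - gx, py - gy)
            near = d < radius
            if near.sum() < min_pts:
                continue
            w = 1.0 / np.maximum(d[near], 1e-6) ** power
            grid[i, j] = np.sum(w * values[near]) / w.sum()
    return xs, ys, grid

# Illustrative use with synthetic survey points.
rng = np.random.default_rng(2)
px, py = rng.uniform(0, 300, size=(2, 4000))
vals = 10 + 5 * np.sin(px / 60) + rng.normal(0, 1, px.size)
xs, ys, grid = idw_grid(px, py, vals)
print(grid.shape, np.nanmean(grid).round(2))
```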
Leaching Fraction Estimation
The leaching fraction (LF) can be estimated by assuming steady-state conditions and good drainage as follows:

LF = Cliw / Cldw = ECiw / ECdw

where Cliw or ECiw represents the chloride or salinity value of the irrigation water, and the denominators (Cldw, ECdw) represent the depth- and location-specific predictions of the chloride or EC of the water (drainage) moving below the root zone. For our LF calculation, the average irrigation water salinity for the irrigation season (2016 or 2017), obtained from the EC sondes installed in each field, was used for the numerator. For the denominator, the EC of the soil water (ECsw) was taken as the best estimate of the EC of the drainage. Rather than using the standard multiplication factor of two to convert ECe to ECsw, the ECe of the ground-truth samples was multiplied by the water content ratio (Wsp/Wf), using the saturated paste water content (Wsp) and the field water content of the ground-truth samples (Wf) on a gravimetric basis. For each field and soil survey, LFs were estimated for each 30 cm soil layer and for the entire soil measurement zone (0-120 cm).
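Under these assumptions, the LF calculation reduces to a few lines. The sketch below follows the paper's conversion of ECe to ECsw via the water-content ratio Wsp/Wf rather than the standard factor of two; the numbers are illustrative only:

```python
def ec_sw(ec_e, w_sp, w_f):
    """Estimate the EC of the soil water from the saturated-paste ECe.

    w_sp -- gravimetric water content of the saturated paste (g/g)
    w_f  -- gravimetric field water content of the sample (g/g)
    """
    return ec_e * (w_sp / w_f)

def leaching_fraction(ec_iw, ec_e, w_sp, w_f):
    """LF = EC_iw / EC_sw, with EC_sw standing in for the drainage EC."""
    return ec_iw / ec_sw(ec_e, w_sp, w_f)

# Illustrative: a seasonal-mean EC_iw of 5.6 dS/m over a 90-120 cm layer
# with ECe = 21 dS/m, paste water content 0.55 g/g, field water 0.28 g/g.
print(f"LF = {leaching_fraction(5.6, 21.0, 0.55, 0.28):.3f}")  # ~0.136
```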
Forage Sampling and Analysis
Twelve forage tissue samples were collected from 1 m2 areas in each field prior to harvest, during the period of April to July 2017. The herbage was cut at the top of the crown at the sites where the soil samples had been collected during the spring EM38 surveys. Field samples were taken to the laboratory, where the fresh weight of the biomass was measured. The samples were rinsed with deionized water to remove any surface salt and dust, and then dried for 2-3 days in a forced-air oven at 50 °C to obtain the dry weight. The dried samples were then ground in a mechanical grinder to pass a 40-mesh screen for the subsequent analyses of potassium and sodium in the shoots. The K+ and Na+ contents in the forage shoot tissues were determined using an Agilent 240AA atomic absorption and emission spectrophotometer (Agilent; Santa Clara, CA, USA), with K+ and Na+ analyzed in the absorption and emission modes, respectively. The tissue extraction consisted of 0.5 g of dried and ground shoot sample mixed with 30 mL of 2% acetic acid. The extracts were filtered through a #1 filter paper to remove any particulates.
Irrigation Water
In general, the irrigation waters for the forage fields, as analyzed from the grab samples, were alkaline, with average pH values of 7.7 to 8.0, and relatively high in bicarbonate (72-150 mg L−1 averages for the four fields). The salinity was more sulfate-dominated, compared to the concentrations of chloride and sodium ions in solution. As the salinity increased, the sodium adsorption ratio (SAR) and boron concentrations also increased (Table 2). The ECiw data for the grab samples are shown in Table 2, but they represent a very limited number of samples; thus, the discussion of the salinity of the irrigation water applied to the forage fields will focus on the EC sonde (continuous monitoring) data described below. (Table 2 footnotes: 1 ECiw = electrical conductivity (salinity) of the applied irrigation water; 2 SAR = sodium adsorption ratio, unitless.)
The sonde data provided EC values from the continuously monitored diversion sites, and gave a good representation of the salinity of the drainage waters applied as irrigation to the forage fields (Figure 5). The TWG fields were irrigated with higher salinity water (1.2-9.3 dS m−1) than the ALF fields (0.5-6.9 dS m−1), reflecting the lower salt tolerance of alfalfa compared to tall wheatgrass. In Figure 5, the salinity of the irrigation water (ECiw) applied to each field between 1 July 2016 and 25 October 2017 is reported as daily means, which averaged 5.6 and 4.8 dS m−1 for TWG fields 10-6 and 13-1, and 2.0 and 3.7 dS m−1 for ALF fields 13-2 and 13-6, respectively. For the ALF fields, the data were more limited in 2016 than in 2017. Alfalfa field 13-2 received high quality irrigation water (ECiw < 1 dS m−1) for most of 2017, whereas between August and November 2016 the irrigation water salinity was often in the 3-4 dS m−1 range.

Figure 5. Mean daily irrigation water salinity (ECiw) of the saline drainage water used to irrigate tall wheatgrass fields 10-6 and 13-1 (upper two graphs), and alfalfa fields 13-2 and 13-6 (lower two graphs) in 2016 and 2017. In-Situ Series 200 sondes were used to monitor the EC and water depth in each supply ditch.
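The daily means plotted in Figure 5 are a straightforward aggregation of the hourly sonde record; a self-contained pandas sketch with synthetic values (not the District's actual records):

```python
import numpy as np
import pandas as pd

# Hypothetical hourly EC_iw record for one field (dS/m).
idx = pd.date_range("2016-07-01", "2017-10-25", freq="h")
rng = np.random.default_rng(3)
ec = pd.Series(5.6 + rng.normal(0, 0.8, idx.size), index=idx).clip(lower=0.5)

daily = ec.resample("D").mean()            # the daily means, as in Figure 5
by_year = ec.groupby(ec.index.year).mean() # season/period averages
print(daily.head(3).round(2))
print(by_year.round(2))
```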
Soil Chemistry
The chemical analyses performed on the saturation paste extracts of the ground-truth soil samples collected at the 0-30 cm depth during the fall 2017 surveys indicated that the soils in the TWG fields had sodium adsorption ratios (SAR) roughly twice those observed in the ALF fields (Table 3). The high mean SAR values of 20.8-22.1 obtained in the TWG fields would normally suggest poor water infiltration into the soil; however, this was not observed in the two TWG fields, possibly due to the fibrous root system of the forage, which helps improve infiltration. The mean soil boron concentrations were also much higher in the TWG fields (18.9-21.9 mg L−1) than in the ALF fields (4.0-11.4 mg L−1). Although such elevated levels would be detrimental to the growth of most other crops, growth hindrance was not observed in the TWG fields. Thus, these findings reflect the high salt and boron tolerance of 'Jose' tall wheatgrass when grown using saline drainage water, as observed by [7]. Na+ was the predominant cation, in particular in the more saline TWG fields (10-6 and 13-1). The soil salinity in this area had a high sulfate component, as evidenced by the soil sulfate concentrations, which were similar to the chloride concentrations.

Table 3. Soil chemical properties of the saturated soil paste extracts for the EM38 ground-truth samples (0-30 cm depth) taken in the tall wheatgrass fields (10-6 and 13-1) and alfalfa fields (13-2 and 13-6) in fall 2017. Fields 13-1 and 13-2 were drained, and fields 10-6 and 13-6 were undrained.
Soil Survey Quality Checks and Calibration of EC a to EC e
The analysis of the soil salinity survey data using the ESAP-Calibrate program revealed correlations of >0.90 between the EMh (horizontal) and EMv (vertical) measurements, suggesting that there were no moisture or textural irregularities in the soil profile. The soil water content relative to field capacity was estimated by ESAP based on Rhoades' equations [31,34], and it was observed that the surveys were conducted when the volumetric water content of the soil was at least 70% of field capacity (data not shown). Results from the data quality check performed on the acquired ECa measurements for each salinity survey are presented as DPPC correlations in Table 4. These correlations show the relationship between the log of CalcECa (calculated ECa) and the z1 signal (EM data), averaged over the entire soil profile [31]. Poor correlations were observed for only one field, TWG 13-1, during the fall seasons. Such results could be explained by the large size of the field and/or the high soil moisture variability across the field and with profile depth. An extended period (up to 5 days) was required to complete one full irrigation cycle; therefore, the most recently irrigated portion of the field may have been above field capacity when the survey was conducted, whereas the first irrigated section was drier. After the data quality checks, linear regression models of the general form lnECe = b0 + b1·z1 + b2·z2 + b3·x + b4·y were developed, and those that produced the best predictions of the (log) salinity level at each surveyed point were selected. The best-fit regression models developed for each survey are shown in Table 5, in which:
• b0, b1, b2, b3 and b4 are the regression parameters;
• z1 and z2 are the transformed and de-correlated EM signal readings (i.e., vertical 1.5 m and horizontal 0.75 m);
• x and y are the centered and scaled location coordinates.
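A minimal sketch of fitting a regression of this general form by ordinary least squares on synthetic data; ESAP-Calibrate's actual procedure includes signal transformation, de-correlation and model selection steps that are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 48  # e.g., 12 sites x 4 depths

# Synthetic predictors: transformed EM signals and scaled coordinates.
z1, z2 = rng.normal(size=(2, n))
x, y = rng.uniform(-1, 1, size=(2, n))
ln_ece = 2.0 + 0.8 * z1 + 0.3 * z2 + 0.1 * x - 0.05 * y + rng.normal(0, 0.1, n)

# Design matrix for ln(ECe) = b0 + b1*z1 + b2*z2 + b3*x + b4*y.
X = np.column_stack([np.ones(n), z1, z2, x, y])
b, *_ = np.linalg.lstsq(X, ln_ece, rcond=None)
print("b0..b4 =", b.round(3))
print("predicted ECe (dS/m) =", np.exp(X @ b)[:5].round(2))
```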
Soil Salinity Derived from the ESAP Calibration Software and Leaching Fraction (LF)
The ground-truth soil salinity (ECe) data for all fields, sampling times and profile depths are shown in Table 6. With the exception of TWG field 10-6, there was little or no increase in soil salinity in the forage fields between spring and fall in 2016. This could be explained by the lack of rainfall in the winter of 2016, such that the irrigation applications during the summer helped to leach some of the salts below the 120 cm soil profile depth. In 2017, there was also relatively little increase in soil salinity from spring to fall. The field with the highest soil salinity was TWG field 13-1, which exhibited mean levels between 16.4 and 19.3 dS m−1 ECe across the 0-120 cm soil profile. The highest salinity levels were observed at the lower sampled depths (90-120 cm), with mean ECe values ranging from 19 to 23 dS m−1. Field 13-1 was drained, and there was evidence of leaching, given that, in all four sampling periods, the soil salinity was lowest in the 0-30 cm depth interval and highest in the 90-120 cm depth interval. The leaching fraction (LF) data (Table 7) also show that leaching was greatest in the surface layer (15.2-19.7%) and lowest in the 90-120 cm soil layer (9.4-11.5%) for this field. TWG field 10-6 was less saline, with the average measured salinity for the 0-120 cm soil profile ranging from 12.5 to 16.8 dS m−1 ECe over the two-year period (Table 6). Although it was an undrained field, the salinity was relatively uniform with depth in this profile, and the estimated LFs were the highest of any field, at 21 to 36% over the two-year period (Table 7). Field 10-6 was one of the earliest fields brought under saline irrigation in the SJRIP, and thus, after more than fifteen years of saline irrigation, it had likely reached equilibrium conditions with respect to salt dissolution and precipitation within the soil profile. The ALF fields (13-2 and 13-6) had lower soil salinity than the TWG fields (Table 6), which is consistent with the application of less saline water to the alfalfa fields. However, when comparing the alfalfa fields, the soil salinity was higher (12.0 to 14.4 dS m−1 ECe) across the 0-120 cm profile in field 13-2, which was irrigated with less saline water, especially in 2017 (Figure 5). Field 13-6, which was irrigated with more saline water, had lower soil salinity (9.0 to 10.4 dS m−1 ECe) in the 0-120 cm layer. This discrepancy may be explained by the fact that field 13-2 had the lowest leaching fractions of all four fields (Table 7).
In the case of field 13-2, which was drained, the soil salinity was lowest near the surface (0-30 cm depth) and increased with each depth increment in the soil profile. The leaching fraction data support this observation, with a higher estimated LF for the surface layer (7-14%) compared to the 90-120 cm soil layer (3-7%). Field 13-6 was not drained, but the soil salinity was again lowest in the top 30 cm, increasing in the lower soil depth intervals between 60 and 120 cm. This is indicative of soil leaching and is, again, supported by the LF data, which show greater leaching in the surface layer (22-39%) compared to the 60-120 cm soil layer (14-21%).
Leaching Fraction and Drainage through the Profile
As mentioned previously, the soil salinities (ECe) were consistently lowest, and the leaching fractions highest, in the surface layer (0-30 cm), while for the 60-120 cm soil depths the soil salinities were higher and the leaching fractions lower (Tables 6 and 7). This indicates downward salt displacement from the soil surface, and is consistent with the relatively high volumes of water applied at this saline drainage water reuse site. Over the two-year period, the undrained fields (10-6 and 13-6) had much higher LFs (18-36%), and the two drained fields (13-1 and 13-2) had lower LFs (3.8-14.1%), for the 0-120 cm soil profile (Table 7). Generally, it would be expected that drained fields would have the higher LFs, but many factors influence the overall LF in a soil profile, including the irrigation volume and frequency, soil structure and texture, rooting depth and density, the uptake of water from non-stressed portions of the crop root zone, salt precipitation and dissolution, and preferential flow [35-37]. For the four fields examined, the applied water volume may have been as important a factor influencing the extent of leaching in the 0-120 cm soil layer as the presence or absence of a drainage system. Alternatively, there is published evidence [36,37] that, in undrained fields, crop water uptake may be reduced due to poor soil aeration, especially at salinities limiting crop growth. This would result in more water movement through the profile, and could explain the higher LFs measured for our undrained fields as compared to the drained fields.

Spatial Variability in Soil Salinity

Figure 6 shows the salinity distributions across the soil profile (0-120 cm) at each ground-truthing location for all of the surveys. The salt distribution in field 10-6 (TWG) was highly variable among the twelve sampling locations, with ECe values ranging from 1 to 32 dS m−1. In addition, Figure 6 reveals that most of the salts were accumulating at the 30-60 cm and 60-90 cm soil depths, indicating a lack of adequate drainage, which was expected, as this field had no subsurface drainage system installed.
Spatial maps depicting the soil salinity distribution in field 10-6 are shown in Figure 7 for spring and fall 2017. The green and yellow areas represent lower soil salinity levels, and the orange, dull pink and white areas represent higher soil salinity. The maps illustrate that, throughout the study period, the western edge of field 10-6 had relatively lower ECe values (<8 dS m−1) compared to the central (8-18.5 dS m−1) and eastern (>18.5 dS m−1) parts of the field. This could be attributed to the textural variability of the soil within the field, as the western area comprised lighter-textured soil (as indicated by its lower saturation percentage values). The maps show an increase in salinity from spring to fall 2017, with salt accumulating primarily in the 30-60 cm and 60-90 cm depth ranges in both seasons; however, in fall 2017, salt accumulation was also high in the surface 30 cm. Also shown on the upper maps are the transects/rows where the ECa (mS/m) data were collected during the EM38-MK2 surveys, with the blue points representing the soil sampling sites. The ECa data represent the averages of the vertical and horizontal ECa measurements.
The soil salinity profiles for field 13-6 (ALF) for the two-year period are shown in Figure 8. Field 13-6 had the lowest salinity levels of the four fields, with soil salinity below 18 dS m−1 for almost all of the sampling locations and surveys. The most salt accumulation occurred within the 30-60 cm and 60-90 cm soil layers in 2016, as was observed in the other undrained field, 10-6. In the spring of 2017, two ground-truthing locations exhibited a soil salinity of 21-23 dS m−1; however, these higher levels were no longer observed during the fall 2017 survey.
The spatial maps presented in Figure 9 show the lower salinity levels characteristic of ALF field 13-6 in 2017. Most of the surveyed field exhibited soil salinity lower than 13 dS m−1. The salinity levels tended to increase from the spring to the fall, and most of the salts accumulated at the 60-90 cm and 90-120 cm depths. The maps also illustrate the lower variability in salinity across the field, as compared to 10-6. For the other two fields, 13-1 (TWG) and 13-2 (ALF), the salinity distribution profiles and spatial salinity maps are provided in the Supplementary Materials. For field 13-1 (TWG), leaching was greatest at the soil surface (Figure S1), and in 2017 the salinity was consistently higher in the western part of the field and in the 90-120 cm soil layer (Figure S2). It should also be noted that only part of field 13-1 was surveyed in spring 2016, due to the high water content on the western portion of the field. The irrigation with siphon tubes progressed from east to west across each field, and the bank of siphon tubes deployed last was on the western side of the field. If inadequate time elapsed after the last irrigation event, the surface soils sometimes became waterlogged, which prevented the use of the ATV and risked damage to the crop along the tire tracks and the path of the sled carrying the EM sensor. As previously reported, the fall 2016 salinity survey of field 13-1 showed poor DPPC correlations, which compromised the estimation of ECe from ECa.
Field 13-2 (ALF) consistently received good quality irrigation water, which produced a salinity profile indicative of relatively good leaching (Figure S3). This effect was evident during the fall of 2017, which showed a uniformly leached surface layer (Figure S4), in spite of the low leaching fraction (7-8%) estimated for this field (Table 7). Since this was a drained field, most of the salt accumulation was observed at the 90-120 cm soil depth, close to the subsurface tile drains. Only a portion of the field was surveyed in 2016, because of the high water content in the western section of the field at the time of the survey.
Forage Analysis
Figure 10 shows the results of the correlations between the forage dry weight and the soil salinity (ECe), and between the forage dry weight and the Na concentrations in the shoots of each forage. The data from the two fields corresponding to each forage were combined, and the R-squared (R2) and p values are also provided. In no case was the forage dry weight strongly correlated with the soil salinity (ECe) or with shoot Na; however, the correlation between forage dry weight and soil salinity was stronger for the alfalfa fields, reflecting alfalfa's lower salt tolerance compared to tall wheatgrass.

The soil salinity was very high in the TWG fields, but even in the 15-20 dS m−1 ECe range, where most of the data points fell, the soil salinity did not appear to be the main factor influencing the forage dry weight. However, the tall wheatgrass yields measured for the entire fields were low (4.78 t ha−1 average for fields 10-6 and 13-1; data not shown) compared to another saline-irrigated site where tall wheatgrass was grown at similarly high soil salinities [7]; thus, it is possible that, within this range of low yield, other site-specific factors such as soil moisture (waterlogging) or weed pressure were influencing the forage dry weight. The main goal of forage production at the SJRIP is not high yield, but rather adequate growth to maintain high evapotranspiration (ET) for the maximum consumption (disposal) of saline drainage water.
Likewise, the Na concentration in the tall wheatgrass shoots, although high (6-8 g kg−1), did not exert a strong influence on the forage dry weight. It should be pointed out that the tall wheatgrass yields obtained in these fields, although low, are remarkable given the very high soil salinity (15-20 dS m−1 ECe) and soil boron concentrations (18-22 mg L−1). Tall wheatgrass has been the forage of choice for this saline drainage water reuse site, as evidenced by its continued planting over the past 20 years as the site has increased in size to 2600 ha.
Summary
This paper highlighted the actions taken by stakeholders in the western San Joaquin Valley of California to sustain irrigated agriculture in light of policy-driven environmental regulation that initially focused on controlling selenium contamination in the Grasslands Basin wetlands and selenium loading to the San Joaquin River. It argued that irrigation sustainability will require a greater understanding of local and regional salt balances, and the development of a new suite of science-driven decision support tools and practices to maintain crop root zone soil salinity within salt tolerance guidelines. The paper also recommended a greater effort to bridge the information and technology gaps between, on the one hand, the complexity of the EM38-MK2 instrument and its reliance on statistically-based ground-truthing and the laboratory analysis of soil samples, and, on the other, the annual planning and day-to-day decision making of irrigators. The paper detailed the steps involved in making a typical EM38-MK2 survey that would underpin any longer-term 1-D transient salinity modeling effort, using the Panoche Water District SJRIP facility as an example. The same protocols and interpretative analysis could apply to any salinity-impacted agricultural drainage reuse system worldwide.
In the current study, four fields planted with 'Jose' tall wheatgrass (TWG) and alfalfa (ALF) were surveyed with an EM38-MK2 instrument to determine the spatial and temporal variability of the soil salinity at the SJRIP. The TWG fields were irrigated with higher salinity water than the ALF fields, and their soil salinities averaged 12 to 19 dS m−1 ECe for the 0-120 cm profile, with boron concentrations of 19-22 mg L−1 in the top 30 cm, over the two-year period. The ability of 'Jose' tall wheatgrass to grow and consume saline drainage water through evapotranspiration under these high-salinity, high-boron conditions makes this forage a very suitable candidate for saline drainage water reuse systems.
Field 13-2 (ALF) received relatively good quality irrigation water throughout the study period. The tile-drained fields 13-1 (TWG) and 13-2 (ALF) had improved leaching, as most salt accumulation was found in the lower portion of the soil profile (60-90 and 90-120 cm soil depths). In comparison, field 10-6, which was not drained, had high salinity in the 30-60 cm layer, in addition to the 60-90 and 90-120 cm soil layers. Field 10-6 had the largest variability in areal salt accumulation, which could be attributed to the variability in its soil texture. Field 13-2 showed evidence of salt leaching, with the fall 2017 survey showing a uniformly leached surface soil layer, most likely the result of irrigation applications of good quality water. Generally, for all of the fields except field 13-2, the soil salinity measured during the fall survey was higher than during the spring survey, which was expected, because winter rains provide additional leaching of salts ahead of the spring survey. The seasonal decrease in salinity was smaller in 2017, notwithstanding the relatively heavy precipitation during that year. Additional years of data will show whether the same salinity trends repeat over time, and will help improve the calibration of the CSUID model, which benefits from large perturbations in the soil salinity signal.
The estimation of the soil salinity (ECe) was compromised during the fall season of both years for fields 13-1 (TWG) and 13-2 (ALF), as suggested by the poor DPPC correlations and the poor R-squared values of the regression models used to convert the ECa data to ECe. These results were partially explained by the high clay content (indicated by the higher SP values) and the high spatial variability of soil texture in these fields, which likely affected the ECa readings [29]. Field 13-1 was also particularly difficult to survey at optimal soil moisture conditions, due to the irrigation schedule practiced by the water district, which, on some surveys, left a portion of the field with standing water. In general, the model R-squared values were high and resulted in a good model fit for the majority of cases, allowing a realistic estimation of the average salinity of the crop root zone (0-120 cm). The ability to discern good data from bad is very important in order to maintain the utility of these soil salinity surveys, and to maintain the confidence of the water managers in the SJRIP. The more complex the technology, the greater the need for quality control and transparency. Methods that connect more directly to metrics the irrigators understand will help to bridge this gap.
Conclusions
Areal maps delineating the areas of high and low salinity in the fields chosen for this study have proven to be useful to the managers of the SJRIP facility in guiding future irrigation practices. These maps have been shared and discussed with the Panoche Water District in two data meetings and in a poster. These data, in combination with the record of declining yields in the alfalfa fields, led to a decision to fallow fields 13-2 and 13-6 during 2018. The other significant product of this study was the establishment of a ground-truthing dataset for potential soil salinity assessment using remote sensing techniques [4]; high-resolution ground-truthing data are often hard to obtain for such efforts. There is also a possibility that hyperspectral sensors may become a better platform for the assessment of soil salinity, given their capability to detect and map saline soils in greater detail. Moreover, the 12 sampling locations established in each field could serve as representative monitoring sites for tracking changes in salinity over time, given that the selected sites depict the full range of variability across the surveyed area. However, it should be noted that the main purpose of the sampling design was to optimize the parameter selection for the regression model for accurate salinity (ECe) estimation, not to provide a statistical analysis of the data collected as part of the salinity survey [32,38].
As demonstrated by the project, the use of the EM38-MK2 instrument could be part of a long-term monitoring strategy where the soil surveys conducted with this instrument could be used in combination with less time-intensive and easier-to-automate techniques like remote sensing as part of a long-term salinity management strategy. The project has shown that root zone salinity can change seasonally and between years, thus requiring that the salinity of subsurface drainage water used for irrigation be monitored, along with precipitation, in order to ensure that crop salinity thresholds are not exceeded, with consequent declines in forage crop yield and profitability. Strategic reclamation may be required to restore soil quality should rapid salinization occur.
A state-of-the-art pilot treatment facility located onsite, which uses reverse osmosis and microfiltration to remove salt from the subsurface drainage entering the SJRIP facility, could play a role in the improved management of the salinity of the water applied to the alfalfa and tall wheatgrass crops within the SJRIP facility. However, the cost-effectiveness of this approach would need to be weighed against the profits generated from the forage sales. At present, the exorbitant cost of water treatment and the resulting high cost per m³ of product water limit the further development of this strategy. The major benefit currently realized by the SJRIP reuse facility is the disposal of drainage return flows and the associated salt load through crop evapotranspiration and direct evaporation. The successful optimization of the management deployed at the reuse facility will be essential for sustainable long-term operation and the facility's ability to serve the 43,000 ha Grasslands Drainage Area while meeting the zero drainage export requirements now in place. We suggest that this mandate be met with the newly designed rapid EM38-MK2-based soil salinity surveys, a better suite of remote sensing tools to improve the automation of soil salinity mapping, and skilled technical support. Machine learning techniques may play a role in further streamlining this process. The Panoche Water District, in the meantime, is also expanding the SJRIP acreage (primarily with 'Jose' tall wheatgrass plantings) to meet the zero-drainage discharge mandate.
The goal of providing a user-friendly computer simulation model with an interactive graphical user interface (Figure 11) as a framework for the collection of relevant data for the development of water and salinity mass balances was fulfilled in this project. Given the dearth of data available at the beginning of the project and the difficulties interpreting the data that had been collected by the District, we were under no illusions that the model would be sufficiently calibrated to be used for prediction purposes. However, the CSUID-1D model was able to show its potential as a decision support tool to guide future management decisions and allow the SJRIP to achieve its drainage disposal function while providing an economic return through sustainable forage production. One significant oversight that was realized after the analysis of the 2016 data was the failure to include tile drains, drain depth and drainage yield among the CSUID-1D model input parameters that were selected at the beginning of the simulation. The original 3-D CSUID simulation code has significant capability for the depiction of tile drainage systems at the field and farm scale, but our initial thinking was to keep the model as simple as possible in order to keep run times short and not intimidate our targeted users. The EM38-MK2 results made clear the beneficial effect of tile drains in redistributing salts within the soil profile, with the highest concentration of salt in the lowest soil layer. Overall, the salt concentration was highest in the undrained fields. This oversight can be readily addressed in a new version of the CSUID-1D model user interface.
Figure 11. Graphical user interface for the CSUID-1D model that is being developed as a decision support tool for the estimation of the optimal leaching rates and the guidance of future irrigation blending decisions with lower EC water supply. The model has provided a useful framework for the assimilation of the required data for salinity mass balance assessments.
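To make the salt mass-balance bookkeeping discussed above concrete, the toy Python sketch below tracks seasonal root-zone salt additions from irrigation and removals by drainage. The conversion factor and all input values are illustrative assumptions; this is bookkeeping only, not the CSUID-1D numerics.

```python
# Toy single-season, single-layer salt mass balance of the kind the CSUID-1D
# framework is meant to assimilate data for. The 1 dS/m ~ 640 mg/L TDS
# conversion and all input values below are illustrative assumptions.
DS_M_TO_MG_L = 640.0  # approximate TDS (mg/L) per dS/m of EC

def seasonal_salt_balance(irrig_mm, ec_irrig, drain_mm, ec_drain):
    """Return net salt added to the root zone (t/ha) for one season.

    Rain is treated as salt-free, so it affects leaching volume but not
    the salt mass terms computed here.
    """
    # 1 mm of water over 1 ha = 10 m^3; mg/L * m^3 = g; / 1e6 -> tonnes.
    salt_in = irrig_mm * 10 * ec_irrig * DS_M_TO_MG_L / 1e6    # t/ha applied
    salt_out = drain_mm * 10 * ec_drain * DS_M_TO_MG_L / 1e6   # t/ha drained
    return salt_in - salt_out

net = seasonal_salt_balance(irrig_mm=900, ec_irrig=4.0,
                            drain_mm=150, ec_drain=12.0)
print(f"net root-zone salt change: {net:.1f} t/ha")
```

With these assumed numbers the field gains roughly 11.5 t/ha of salt over the season, which illustrates why periodic leaching and blending with lower-EC water are central to the management questions the model is intended to answer.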
Future Work
The EM38-MK2 soil surveys were important for the characterization of the spatial variability of soil salinity in the fields included in this study and will have utility for the calibration and validation of the CSUID-1D computer model. Three years of field data are insufficient to credibly calibrate the model. However, EM38 surveys require significant expertise, time, and effort to conduct, and are unlikely to be continued by the District alone, given its resource limitations. Future collaborations with universities and project funding may allow the continuation of the EM38 mapping program. In the interim, the installation of representative cluster wells within each of the experimental fields may allow the District to track the salinity trends at two intervals within the soil profile using currently available resources and monitoring equipment. These wells will help assess the adequacy of current salt leaching practices, as well as providing data for the further calibration of computer-based model simulation tools that can serve as decision support systems. By developing a proxy relationship between each well and the average salinity at shallow (1.5-2.5 m) and deep (4.3-5.2 m) depths, the wells' EC data can be useful in showing trends in field salinization. Remote sensing using multispectral satellite imagery has shown some potential for salinity assessments when compared to field data; drone imagery avoids the problems associated with cloud cover, and it allows for image collection when conditions are closer to optimal. We are optimistic that higher resolution hyperspectral imagery may allow new spectral indices to be developed, which can assess vegetation health and potential crop yield. These relationships could provide a more cost-effective means of tracking the soil salinity and preventing the onset of yield declines when the root zone salinity exceeds the yield response threshold. The regression models developed using soil salinity data and vegetation indices (NDVI, SAVI, RVI) yielded reasonable R-squared values (exceeding 0.70). The best agreement was found to occur on the alfalfa field sites for the spring EM38-MK2 survey. We believe that we can achieve better results by moving from satellite to drone-based imagery, in addition to increasing the palette of spectral bands available to us by moving from multispectral to hyperspectral imagery.
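As an illustration of the index-based regressions described above, the sketch below computes NDVI and SAVI from red/NIR reflectances and fits a linear model against ECe. The reflectance and salinity values are hypothetical, and the linear form is an assumption; the project's actual fitted models are not reproduced here.

```python
import numpy as np

# Hypothetical per-site band reflectances and ground-truth ECe (dS/m).
red = np.array([0.08, 0.10, 0.12, 0.15, 0.18, 0.21, 0.24, 0.27])
nir = np.array([0.55, 0.52, 0.48, 0.43, 0.38, 0.33, 0.29, 0.25])
ece = np.array([2.5, 3.4, 4.8, 6.0, 7.7, 9.1, 10.6, 12.2])

ndvi = (nir - red) / (nir + red)
L = 0.5  # standard soil-adjustment factor used in SAVI
savi = (1 + L) * (nir - red) / (nir + red + L)

for name, idx in [("NDVI", ndvi), ("SAVI", savi)]:
    slope, intercept = np.polyfit(idx, ece, 1)   # linear index-ECe model
    pred = slope * idx + intercept
    r2 = 1 - np.sum((ece - pred) ** 2) / np.sum((ece - ece.mean()) ** 2)
    print(f"{name}: ECe = {slope:.1f}*idx + {intercept:.1f}, R^2 = {r2:.2f}")
```

An R-squared above roughly 0.70, as reported for the study's regressions, would indicate that the vegetation index tracks enough of the salinity signal to be useful for screening-level mapping between ground surveys.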
The long-term aim is to have a credible, reliable and easy-to-use decision support tool that can guide future irrigation water quality management practices at the SJRIP, i.e., customizing the blend of subsurface drainage water and R.O. treatment plant product water to allow sustainable forage production in both alfalfa and 'Jose' tall wheatgrass fields.
Author Contributions: Conceptualization and funding acquisition for the research were performed by co-PI's S.E.B. and N.W.T.Q. S.E.B. and N.W.T.Q. were also responsible for project administration and the supervision of CSUF and LBNL staff who provided assistance on the project. Monitoring station design and the installation of the sensor network was performed by N.W.T.Q., together with the acquisition of irrigation application and field data from the Panoche Water District. S.E.B. was responsible for the agronomic and forage yield estimation. F.C. was responsible for the design of the EM surveys and soil sampling. The majority of the EM38 field surveys was performed by A.S. and various field staff assigned to this aspect of the work, including N.W.T.Q. and F.C. for the first field surveys. A.S. was responsible for the GIS mapping and laboratory salinity analysis, supervised and aided by S.E.B. This research is drawn largely from an M.S. thesis by A.S. that has been adapted with contributions from N.W.T.Q., S.E.B. and F.C. to conform with the guidelines for the Special Issue on Agricultural Sustainability and Policy. All authors have read and agreed to the published version of the manuscript.
Critical Transitions in Early Embryonic Aortic Arch Patterning and Hemodynamics
Transformation from the bilaterally symmetric embryonic aortic arches to the mature great vessels is a complex morphogenetic process, requiring both vasculogenic and angiogenic mechanisms. Early aortic arch development occurs simultaneously with rapid changes in pulsatile blood flow, ventricular function, and downstream impedance in both invertebrate and vertebrate species. These dynamic biomechanical environmental landscapes provide critical epigenetic cues for vascular growth and remodeling. In our previous work, we examined hemodynamic loading and aortic arch growth in the chick embryo at Hamburger-Hamilton stages 18 and 24. We provided the first quantitative correlation between wall shear stress (WSS) and aortic arch diameter in the developing embryo, and observed that these two stages contained different aortic arch patterns with no inter-embryo variation. In the present study, we investigate these biomechanical events in the intermediate stage 21 to determine insights into this critical transition. We performed fluorescent dye microinjections to identify aortic arch patterns and measured diameters using both injection recordings and high-resolution optical coherence tomography. Flow and WSS were quantified with 3D computational fluid dynamics (CFD). Dye injections revealed that the transition in aortic arch pattern is not a uniform process and multiple configurations were documented at stage 21. CFD analysis showed that WSS is substantially elevated compared to both the previous (stage 18) and subsequent (stage 24) developmental time-points. These results demonstrate that acute increases in WSS are followed by a period of vascular remodeling to restore normative hemodynamic loading. Fluctuations in blood flow are one possible mechanism that impacts the timing of events such as aortic arch regression and generation, leading to the variable configurations at stage 21. Aortic arch variations noted during normal rapid vascular remodeling at stage 21 identify a temporal window of increased vulnerability to aberrant aortic arch morphogenesis with the potential for profound effects on subsequent cardiovascular morphogenesis.
Introduction
Congenital heart disease (CHD) has the highest incidence and mortality rate of all birth defects in the U.S., occurring in at least 8 of every 1000 live births, and accounting for more than 24% of birth defect related infant deaths [1]. Due to the prevalence and severity of CHD, the cardiovascular (CV) system has become one of the most widely researched areas in developmental biology. Compared to other organ systems, the CV system is the first to form and the only one which is required to function successfully for survival [2]. Much of what we know about vertebrate CV development originated using the chick embryo, which undergoes cardiogenesis similar to humans and is amenable to both acute and chronic imaging and instrumentation [3]. Appearing at Hamburger-Hamilton stage 10 (33 h) in the chick, the heart is initially an open-ended tube located at the ventral midline of the embryo and aligned parallel to the cranio-caudal axis [4,5]. Cardiac looping transforms the heart into a looped tube by stage 24 (4 days) [6] and subsequent septation events produce the four-chambered heart and two great arteries, with all major CV structures formed by stage 36 (10 days) [7].
Unlike the adult circulation, where the right and left ventricles eject through semilunar valves into the pulmonary and systemic arterial circulations, respectively, the embryonic ventricle ejects blood through multiple, bilaterally paired aortic arches (AA). In the chick embryo, a total of six AA pairs (numbered I-VI) emerge consecutively in a cranio-caudal fashion, with three pairs generally co-existing at early embryonic time-points ( Figure 1). This network of parallel vessels is selectively reduced and remodeled into the mature asymmetric aortic arch and pulmonary arteries by stage 36 (10 days) [8,9]. Only three of the six AA pairs persist (III, IV, VI). Cranial-most AA I and II remodel into capillary beds and AA V exists only as a transient segment of AA VI [8]. AA III forms portions of the brachiocephalic and common carotid arteries. The right lateral AA IV forms a segment of the transverse adult aortic arch while the left lateral AA IV regresses. This asymmetric AA IV pattern is different in mammals: the left lateral AA IV contributes to the adult aortic arch and the right lateral AA IV forms a short segment of the proximal right subclavian artery. The caudal-most AA VI contributes to segments of the central pulmonary arteries and ductus arteriosus. This sequence of growth and remodeling events is vulnerable to genetic and epigenetic insults, and errors in AA morphogenesis occur in more than 20% of all CHD [1].
The AA transformation patterns detailed above were initially described through India ink injection and serial section experiments in the chick embryo [10,11,12,13]. The first complete 3D analysis of AA morphogenesis used corrosion casts and scanning electron microscopy to generate detailed morphology from chick embryo stage 12 to hatching, providing a contemporary timeline of AA development ( Figure 1) [8]. Advances in imaging technology, including micro-computed tomography (micro-CT) [14], magnetic resonance microscopy (MRM) [15], and optical coherence tomography (OCT) [16] now support comprehensive high-resolution studies of the 3D morphology of chick embryonic vasculature. Despite these advances, work related to AA morphogenesis has been predominantly descriptive and a lack of morphometric data persists. Studies that report measures of AA diameter often apply dehydration and fixation methods prior to acquiring measurements, which can distort vascular geometry [12,17].
Multiple studies on the relationship between blood flow and vessel geometry have demonstrated what is referred to clinically as the "flow-dependency principle" [18,19,20,21], establishing hemodynamics as a major epigenetic factor in vascular growth and remodeling. Wall shear stress (WSS), which is sensed by the endothelial cells, functions as a major extrinsic mechanical stimulus for vascular remodeling [22,23], and prolonged exposure to altered blood flow results in the normalization of WSS via an increase or decrease in vessel caliber [18,24,25]. The role of hemodynamics in regulating vascular growth has been validated in a variety of species, including the zebrafish [26,27,28] and chick [29,30,31,32,33,34,35]. This relationship has been demonstrated for the global (organ scale) growth of the AA, where epigenetic perturbations in blood flow lead to congenital defects affecting the great vessels [35,36,37,38].
While these studies provide evidence for the role of hemodynamics in AA growth and remodeling, limited quantitative spatial and temporal data on AA morphometry and flow have been available for biologists and bioengineers. In our previous work [39], we quantified changes in AA geometry, blood flow, and WSS between stages 18 and 24 in the chick embryo (3 and 4 days, respectively). Our in vivo measurements demonstrated that both AA III reduce in diameter while both AA IV increase in diameter.
Using composite three dimensional (3D) AA models reconstructed from micro-CT scanning, we conducted stage-specific computational fluid dynamics (CFD) simulations to quantify AA blood flow and spatial variations in WSS. The results revealed a significant shift in the distribution of cardiac output to the individual AA; in particular, the AA that received the largest amount of flow changed from AA III at stage 18 to AA IV at stage 24. WSS values in the AA increased from stage 18 to 24, with the largest increase occurring in AA IV. This change in WSS was correlated with the enlargement of AA IV diameter, providing the first quantitative evidence for flow-dependent growth in the embryonic AA.
Our previous study also demonstrated that during the 24 hour period between stages 18 and 24, cranial AA II degenerates to a capillary bed and the caudal-most AA VI emerges. AA II, III, and IV were present in all stage 18 embryos while AA III, IV, and VI were present in all stage 24 embryos. While this stage-specific lack of inter-embryo variation in AA configuration seems to indicate a controlled developmental process, there have been no investigations of the intermediate stages to determine how this transition occurs. Should the intermediate stages present with uniform AA configurations, then it is possible that this transition is a tightly controlled and prescribed developmental program regulated by inherent temporal genetic activation. However, significant intra-stage variations in AA architectures would support the alternate hypothesis that epigenetic and environmental factors such as fluctuations in AA hemodynamics could be involved in determining final AA fates.
In the current study, we investigated the transitional stage 21 (3.5 days), applying the multimodal 3D quantitative approach established in our previous work. In vivo imaging of stage 21 AA was performed using fluorescent dye microinjections and OCT to acquire quantitative structural data without disrupting the morphology of the embryo. Representative 3D models of the stage 21 AA were reconstructed from micro-CT scans and used for CFD analysis. We demonstrate that multiple AA configurations exist in "normal" stage 21 embryos, suggesting a possible vulnerable window in the transition in AA configuration from stage 18 to stage 24. CFD analyses indicate that variations in cardiac output distribution may be a critical factor in AA growth and selection by disrupting the timing of events such as AA regression, generation, and asymmetric growth. Thus, defining critical windows of developmental plasticity and the role of epigenetic, environmental factors that impact these developmental trajectories will help identify the origins of CV malformations and provide insights into the optimal timing for fetal intervention strategies to restore normal biomechanical loading, growth, and adaptation.
Figure 1. Timeline of AA development in the chick [40]. The timeline depicts the duration of the connection between the heart and descending dorsal aorta, where line width is an approximation of the frequency that the AA is present. The time axis is skewed to highlight the early stages investigated in this study. Though AA V is included here, it never truly connects to the dorsal aorta. The gray line designates stage 21, where up to four AA can be present. The schematic on the right depicts the mature avian great vessel pattern, where gray represents embryonic AA sections that disappear prior to the final arch configuration.
In vivo aortic arch diameter measurement
Fertilized white Leghorn chick eggs were incubated at 37 °C and 60-70% relative humidity to stage 21 (3.5 days). We windowed the shell and removed the overlying membranes to expose the embryo and gain optical access. Using our fluid microinjection technique, we injected embryos with approximately 0.5 µl of Rhodamine B diluted in PBS [41]. We recorded time-lapse movies of each injection and extracted still frames for further analysis. The left and right lateral AA were identified using anatomical landmarks and AA midpoint diameters were then measured (Figure 2). A total of three measurements were made on each vessel and then averaged to obtain the midpoint diameter. AA identity and diameter measurements were performed by three independent observers, and inter-observer agreement was assessed using Bland-Altman analysis [42]. A total of 32 right-lateral and 18 left-lateral stage 21 embryos were injected and analyzed.
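For readers unfamiliar with Bland-Altman analysis, the minimal Python sketch below computes the bias and 95% limits of agreement between two observers' measurements. The diameter values are illustrative, not the study data, and only two of the three observers are shown for brevity.

```python
import numpy as np

# Hypothetical paired AA diameter measurements (um) from two observers.
obs1 = np.array([110.0, 118.0, 95.0, 130.0, 102.0, 121.0, 99.0, 115.0])
obs2 = np.array([108.0, 121.0, 97.0, 126.0, 104.0, 118.0, 103.0, 112.0])

diff = obs1 - obs2
bias = diff.mean()                 # systematic offset between observers
half_width = 1.96 * diff.std(ddof=1)  # 95% limits of agreement half-width
print(f"bias = {bias:.1f} um, limits of agreement = "
      f"[{bias - half_width:.1f}, {bias + half_width:.1f}] um")
```

Agreement is judged by whether the bias is near zero and the limits of agreement are narrow relative to the quantity being measured, which is how the inter-observer comparison reported in the Results should be read.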
Dye injection data was confirmed using a spectral domain OCT (SDOCT) system (Thorlabs Spectral Domain Ganymede, Thorlabs, Inc., NJ) to acquire noninvasive, in vivo images of the right and left lateral AA (4.3 µm resolution). OCT is an echo-based modality, which uses low-coherence interferometry to measure the axial distance of back-reflected light [43]. We have previously applied and validated our OCT system in the development of a novel velocimetry technique for live embryos [44]. The OCT light source consists of a 930 nm center wavelength (λ) superluminescent diode with a spectral bandwidth (Δλ, FWHM) of 100 nm. The 930 nm source is within the 600-1300 nm "therapeutic window" for optical radiation, while the total optical power on the sample was 1.5 mW, producing no thermal damage [45,46,47]. Sensitivity of the OCT system defines the minimum detectable change in index of refraction and was measured experimentally to be 91 dB (manufacturer specification). The theoretical axial resolution depends on the coherence length of the OCT light source, and is expressed as 2ln(2)λ²/(πnΔλ), where n is the refractive index of the sample medium [48]. In SDOCT, the design of the spectrometer and signal processing both affect the actual resolution. The spectrometer used in our OCT system can image a spectral range (δ) of 150 nm, while the OCT software applied Hann-windowing of the spectrum to give smooth axial point-spread functions. The actual axial resolution of our system was λ²/(δn), equivalent to 5.8 µm in air and 4.3 µm in water. Lateral resolution is set by the minimum waist radius of the focused OCT beam, which in our system was 15 µm. The spectrometer used in our OCT system consisted of a 12 bit high-sensitivity CCD camera with 2.0 µm pixel spacing. Data was transferred in real-time over a GigE connection to a PC with a 3.3 GHz processor. The maximum A-scan rate of our OCT system was 29 kHz (equivalent to 38.3 fps for 757 A-lines per frame). The rate of data transfer and live streaming of the 2D OCT scan produced an actual recorded frame rate of 17.1 fps for a 757 A-line image (12.9 kHz). The sample refractive index is defined for the medium surrounding the sample and was considered 1.33 for in ovo embryo imaging. Eggs were windowed as described above and placed in a temperature and humidity controlled imaging chamber. We acquired time-resolved 757×757 (1.5×1.5 mm) 2D transverse image sequences, each lasting approximately five cardiac cycles. As the AA curves around the foregut to connect to the dorsal aorta, its proximal portion has a significant lateral orientation while the mid to distal region is oriented predominantly dorso-ventral. We acquired transverse sections approximately halfway between the start of the dorso-ventral orientation and the connection to the dorsal aorta (Figure 2). Sequential 2D images were averaged in order to identify the AA lumen by negative contrast (Figure 2). Red blood cells produce a transient reflection as they pass through the scanning beam, and applying an intensity-average removes these areas while maintaining the constant signal from the surrounding tissue. We applied an ad hoc image processing code to quantitatively measure AA diameter from OCT scans. Approximately 10 discrete points marking the boundary of the AA lumen were manually selected from an intensity-averaged 2D transverse image. The centroid and radius were then computed by fitting the circle equation to the selected points.
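One plausible reading of the circle fit described above is an algebraic least-squares (Kasa-type) formulation, sketched below. The boundary points are hypothetical; this is an illustration of the technique, not the authors' exact code.

```python
import numpy as np

# ~10 manually picked lumen boundary points (pixel coordinates, illustrative).
pts = np.array([[60, 10], [75, 14], [88, 25], [93, 42], [88, 59],
                [75, 70], [60, 74], [45, 70], [32, 59], [27, 42]], float)
x, y = pts[:, 0], pts[:, 1]

# Rewrite (x - a)^2 + (y - b)^2 = r^2 as the linear system
# x^2 + y^2 = 2*a*x + 2*b*y + c, where c = r^2 - a^2 - b^2.
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
r = np.sqrt(c + a**2 + b**2)
print(f"center = ({a:.1f}, {b:.1f}) px, diameter = {2 * r:.1f} px")
```

Because the system is solved in a least-squares sense, the fit remains well posed when the points cover only part of the circumference, which is consistent with the truncated-arc validation described in the next paragraph.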
A total of 17 right lateral and 10 left lateral embryos were analyzed using OCT, with one measurement per AA per embryo. We performed two-tailed, unpaired t-tests assuming equal variance to determine significant differences (p < 0.05) in AA diameters at stage 21 (right vs. left lateral of the same AA pair, right laterals compared against each other, left laterals compared against each other).
For some of the stage 21 embryos, the full AA lumen could not be visualized with OCT due to excessive light scattering at the air-egg interface and through the pharyngeal arch tissue (Figure 2). During the manual identification of the lumen, we only selected points where the boundary was clearly visible, normally encompassing an arc length of 1/2 to 2/3 of the total circumference. To test whether this truncated arc affected our measurements, we applied our technique to a phantom vessel of known diameter. Our phantom consisted of a clear nylon fiber (Stren Original 4 lb monofilament fishing line, Pure Fishing, Inc., SC) submerged in water. The expected fiber diameter, measured with a micrometer, was 203 µm. We acquired a 757×757 transverse image of the phantom using our OCT system and selected 14 points marking the fiber boundary, including 9 points distributed along the top 1/2 of the circumference and 5 points along the bottom 1/3. We then computed four diameter measurements, each using different subsets of the total 14 points selected to determine if the circular arc length contained by the selected boundary affected our measurement (Figure S1). The OCT measured fiber diameter was 240 µm and only varied by 2 µm when the number and circumferential distribution of the selected points changed. This test demonstrated that the truncated arc selected during AA measurements will produce a valid result and that the entire cross section does not need to be visible. The OCT measured diameter was larger than the expected fiber diameter (18% error), which may indicate over-estimation when applying our measurement technique to the AA. Considering the average AA diameter of 113 µm (Table 1), this error indicates our measurements are accurate to within 20 µm and is within the standard deviation (SD) of the AA diameters. However, the discrepancy in the fiber measurement may also be due to the tolerance of the micrometer.
To further assess the diameters computed from the circle fitting method, we acquired longitudinal AA sections for those embryos where the inner-most wall of the AA was visible under OCT (Figure 2; note that the inner-most wall is towards the bottom). A total of four longitudinal AA images (2 right, 2 left) were acquired for this comparison, and we obtained three diameter measurements per AA at different points along the vessel length. The measurement from the longitudinal section taken at approximately the same position as the transverse section agreed well with the diameter computed from the circle fitting method in all cases (within ±5 µm; we did not perform statistical t-tests due to the insufficient sample sizes). The diameters reported in this manuscript refer only to those from the transverse images, as the longitudinal sections were used only to check the validity of those measurements. The longitudinal measurements, together with the agreement between the OCT and fluorescent dye measurements, suggest that our OCT data are an accurate measurement of AA lumen diameters.
3D aortic arch imaging and reconstruction
We injected a rapidly polymerizing resin (diluted MICROFIL Silicone Rubber Injection Compound MV-blue, Flow Tech Inc., Carver, MA) into stage 21 embryos to obtain 3D casts of the AA, as previously described by our group [39]. Casts were scanned using micro-CT (Scanco Inc.) and we reconstructed 3D models using our established protocols [39]. We acquired micro-CT scans of 20 embryos. Several selected scans were imported into computer-aided modeling software (Geomagic, Inc., Durham, NC) and combined to create a representative AA geometry with smooth inflow/outflow boundaries required for CFD. This baseline model contained AA III and IV, as they are the two vessels present across stage 21 (see Results below). Using fluorescent injection and OCT imaging as a guide, we transformed the baseline configuration into the other three patterns observed at stage 21 by adding AA vessels using our sketch-based 3D anatomical editing tool [49]. The 3D models compared well with experimental measurements (Tables S1 and S2), supporting realistic data from CFD analysis.
Computational fluid dynamics simulation and analysis
We performed 3D CFD simulations as previously described [39]. A pulsatile second-order CFD solver (Fluent 6.3.26, ANSYS Inc.) simulated blood flow through the AA models, applying rigid, no-slip walls and Newtonian assumptions (ρ = 1060 kg/m³, µ = 3.716 × 10⁻³ Pa·s) [50]. We prescribed time-dependent flow waveforms as plug-flow inflow boundary conditions, based on our previously published outflow tract velocity measurements (Figure S2) [51]. The distribution of cardiac output to the trunk and cranial vessels was set at a ratio of 90/10 using flow-split boundary conditions [52]. Steady-state solutions were used to initialize the flow field prior to transient solutions. Convergence was enforced by reducing the residual of the continuity equation by 10⁻⁶ at all time steps. Flow variables were monitored in real-time at the aortic inlet and descending aorta outlet during the course of each solution to ensure that nonlinear start-up effects were eliminated. A mesh sensitivity study at three refinement levels was performed to assure grid independence. Six cardiac cycles were simulated and required approximately 48 h on a Linux workstation with two Quad Core Intel Xeon processors (8 nodes each 2.66 GHz) with 8 GB of shared parallel memory.
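As a rough plausibility check on the WSS magnitudes such simulations produce, a steady Poiseuille estimate, τ = 4µQ/(πr³), can be computed from an assumed per-arch flow and the mean AA diameter. This back-of-envelope sketch uses the simulation's fluid properties but an assumed flow value; it is not a substitute for the 3D pulsatile CFD.

```python
import math

# Poiseuille wall shear stress estimate for one AA: tau = 4*mu*Q/(pi*r^3).
mu = 3.716e-3   # blood viscosity, Pa*s (as in the simulations)
Q = 0.2e-9      # assumed mean flow through one AA, m^3/s (illustrative)
d = 113e-6      # mean stage 21 AA diameter, m (Table 1)
r = d / 2.0

tau = 4.0 * mu * Q / (math.pi * r**3)
print(f"Poiseuille WSS estimate: {tau:.1f} Pa")
```

With these assumptions the estimate lands near 5 Pa, which is the same order as the 3-7 Pa stage 21 range reported below; the strong r⁻³ dependence also shows why modest diameter changes can restore WSS after a flow increase.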
Variation of aortic arch number and type at stage 21
In our previous study of stage 18 and 24 embryos, we observed no inter-embryo variation in AA configurations; stage 18 contained AA II, III, and IV while stage 24 contained AA III, IV, and VI [39]. In the present investigation of stage 21 embryos, however, we identified four different AA configurations based on visual inspections of fluorescent dye injections, demonstrating significant variability (Figure 3). Anatomical landmarks, such as pharyngeal arch 2, were used to identify the AA present. Two three-AA patterns were observed, which we refer to as 3AA-cranial (AA II, III and IV present) and 3AA-caudal (AA III, IV, and VI present). A two AA configuration, 2AA, was found with only AA III and IV present. A four AA configuration, 4AA, displayed AA II, III, IV, and VI. The 3AA-cranial contains the same AA as stage 18 while the 3AA-caudal includes the same AA as stage 24 [39]. The 2AA and 4AA patterns are unique to stage 21. All four configurations were observed for both laterals in at least two embryos. The 3AA-caudal configuration appeared the most frequently (n = 20 right, n = 8 left), followed by 4AA (n = 5 right, n = 5 left), 3AA-cranial (n = 4 right, n = 3 left), and finally 2AA (n = 3 right, n = 2 left). Although lacking simultaneous left and right lateral measurements in the same embryo, our analysis confirmed bilateral symmetry of all AA configurations by observing the flipped embryos after injection and cessation of heart beat.
We did not observe AA V in any of our fluorescent dye injections. Polymeric casts suggest that AA V branches from and then reconnects to AA VI prior to anastomosis with the dorsal aorta and that it is significantly smaller than the other AA [8]. The large size of pharyngeal arch 2 allowed us to identify which AA vessels were present with little difficulty and there were no disagreements when comparing independent observer classifications. AA V may not be fully formed by stage 21 or does not receive significant flow to produce a fluorescent signal; therefore, we cannot confirm the presence of AA V at stage 21.
Dimensions of the stage 21 aortic arches
Average mid-point AA diameters were measured from fluorescent dye injections after classification of the AA configuration. The inter-observer bias in fluorescent dye measurements was 2 µm and the limits of agreement were −26 to 29 µm, demonstrating reasonable agreement. AA measured with OCT were not classified, and data from OCT was combined with all fluorescent dye data to generate the average stage 21 diameter measurements (Table 1). Fluorescent dye injection and OCT measurements were unmatched, though the average difference between mean diameters obtained using the two methods was 5 µm, suggesting close concordance. Further, a two-tailed, unpaired t-test did not show significant differences between the two methods (p > 0.05). Statistical comparison of the stage 21 AA diameters revealed that the left lateral AA VI was smaller than the left lateral AA IV and III (p < 0.05, Table 1). Using only the classified fluorescent dye data, we performed further analysis to determine if significant differences in AA diameter existed between the AA configurations. For the vast majority, no significant differences were found; however, the right lateral AA II was larger in the 3AA-cranial vs. 4AA configuration, and the left lateral AA III was larger and the left lateral AA IV smaller in the 3AA-cranial vs. 3AA-caudal configuration (Figure 4). These data indicate that although AA configurations vary at stage 21, AA diameters are fairly uniform throughout the stage 21 time period. However, we also recognize that the small sample size of certain AA configurations (i.e. 2AA) may limit statistical comparisons. We also compared the average stage 21 diameter data with our previous measurements at stages 18 and 24 [39] and found significant differences (p < 0.05) between both AA IV laterals from stage 18 to 21, and between the right lateral AA IV and both AA VI laterals from stage 21 to 24 (Figure S3, Table S3). In all cases, the diameter was larger at the later stage. Flow through the AA manifold was laminar, with a maximal Reynolds number of less than 20 at the junction between the outflow tract and aortic sac [36]. Womersley numbers were less than 1 in all AA.
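The Reynolds and Womersley numbers quoted above follow from their standard definitions, as sketched below. The velocity, diameter, and heart-rate values are assumptions chosen only to show the calculation, not values reported in the study.

```python
import math

rho = 1060.0   # blood density, kg/m^3 (as in the simulations)
mu = 3.716e-3  # blood viscosity, Pa*s
d = 250e-6     # assumed junction diameter, m (illustrative)
v = 0.2        # assumed peak velocity, m/s (illustrative)
f = 2.5        # assumed heart rate, Hz (~150 bpm)

# Reynolds number: ratio of inertial to viscous forces.
Re = rho * v * d / mu
# Womersley number: ratio of pulsatile inertia to viscous effects.
alpha = (d / 2.0) * math.sqrt(2.0 * math.pi * f * rho / mu)
print(f"Re = {Re:.1f}, Womersley alpha = {alpha:.2f}")
```

With these assumed values Re is around 14 and alpha is around 0.26, consistent with the laminar, quasi-steady regime stated above: viscous forces dominate and the velocity profile can follow the cardiac waveform nearly instantaneously.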
Distribution of cardiac output in the stage 21 aortic arches
For all four configurations, AA III and IV were the most perfused, comprising 45% and 38% of the total cardiac output, respectively. AA II and VI received the least flow in the 3AA-cranial and 3AA-caudal cases, respectively, and were also the least perfused AA in the 4AA case (13% to AA II, 18% to AA VI of total cardiac output). Flow in the 2AA configuration was split almost evenly, with AA III receiving 55% of the total cardiac output compared to 45% for AA IV. For all but the 4AA configuration, cardiac output was split evenly between the right and left laterals; in the 4AA case, the right laterals received 60% of the total cardiac output. In the configurations in which they appear, AA II and AA VI received considerably less flow, though this disparity was less pronounced in the left laterals of the 4AA case. For the left laterals, AA III received the most flow in all configurations, though in the 4AA case flow was more evenly distributed among the left lateral AA. For the right laterals, AA IV received the greatest amount of flow for all but the 4AA configuration, in which AA III and AA IV were nearly equally perfused. Examining each AA pair individually, the right lateral of AA II received 57% of all AA II flow in the 3AA-cranial configuration, a distribution that was reversed in the 4AA case (left lateral AA II received 59% of AA II flow). AA III flow was consistently split 63% to 37%, between the right and left lateral, with the left lateral receiving the larger share for all but the 4AA configuration, in which the right lateral received 63% of the flow. In all cases, the right lateral AA IV received 65% of the cardiac output directed to AA IV. This was the largest difference (65% vs. 35%) between right and left lateral flow distribution for any AA pair. Flow to AA VI was split nearly evenly between its laterals, 52% right vs. 48% left for both configurations in which it appeared.
Distribution of WSS patterns in the stage 21 aortic arches
With respect to our previous data [39], WSS was elevated in all AA at stage 21 compared to the previous (stage 18) and later (stage 24) time-points (Figure S3). WSS levels at stages 18 and 24 were between 1 and 3 Pa, compared to the 3-7 Pa range at stage 21. Spatial distribution of WSS, including the acceleration, peak, and deceleration phases of the cardiac cycle, is depicted in Figure 5. The highest WSS zones were located at the junction between the outflow tract and aortic sac, and in the narrow segments of AA III. WSS levels were similar in all AA pairs, though AA VI levels were relatively lower in the configurations where it appears (Figure 4). Examining each AA pair, the WSS levels in the right lateral of AA II were always higher than its left lateral (0.5 Pa higher on average). This situation was reversed for AA III, where the left lateral was exposed to higher WSS levels (average of 0.9 Pa higher). As in the flow distribution, AA IV WSS levels were consistently higher in the right lateral, though this difference was less dramatic than the AA II and III WSS (less than 0.1 Pa on average). WSS levels in AA VI were also similar in both laterals, with an average difference of 0.3 Pa.
The increased WSS at stage 21 can induce significant changes in AA growth through shear-mediated genetic and signaling pathways. That WSS levels at stage 24 are similar to those at stage 18 suggests mechanical restoration, a theory introduced for biomechanically-regulated growth [53,54]. This theory, in which tissues are expected to grow and remodel in an attempt to restore homeostatic or optimal loading conditions, has been demonstrated in limited embryonic applications [55,56]. Additional research is required to determine target stress states in the embryo, which may change over the course of development.
Correlation between WSS variation and diameter change
In an attempt to further define the relationship between WSS and vascular growth, we determined if a correlation exists between an incremental change in vessel diameter and a change in WSS. We performed a regression analysis on the differences between average left and right lateral diameter and WSS values at stages 18 and 21 and stages 21 and 24 for AA III and IV. For six of the eight AA vessels, a second-order polynomial function strongly correlated variation in WSS with change in diameter (p = 0.002, Figure 6), consistent with hyper-restoration theory. Outliers to this trend included both the right and left lateral of AA III during growth from stage 18 to 21. It is noteworthy that the change in WSS must exceed some threshold to produce a significant change in AA diameter, and increases in WSS have a greater effect than decreases. Deviations from this trend (i.e. WSS decreasing and diameter increasing) were observed when comparing each stage 21 configuration separately to the stage 18 and 24 data (Figure S3) and may be related to cellular heterogeneity within the AA, causing different responses to WSS levels.
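A second-order polynomial regression of this kind is straightforward to reproduce, for example with numpy.polyfit, as sketched below. The paired (change in WSS, change in diameter) values are invented for illustration and do not reproduce Figure 6.

```python
import numpy as np

# Hypothetical paired data: change in WSS (Pa) vs. change in diameter (um)
# between consecutive stages for several AA vessels.
d_wss = np.array([-2.5, -1.8, -0.6, 0.4, 1.5, 2.8, 3.4, 4.1])
d_diam = np.array([-12.0, -8.0, -1.0, 0.5, 6.0, 18.0, 26.0, 35.0])

coef = np.polyfit(d_wss, d_diam, 2)   # second-order polynomial fit
pred = np.polyval(coef, d_wss)
r2 = 1 - np.sum((d_diam - pred) ** 2) / np.sum((d_diam - d_diam.mean()) ** 2)
print(f"fit coefficients (a2, a1, a0): {coef}, R^2 = {r2:.3f}")
```

The second-order form lets the fit capture the asymmetry noted above, with diameter responding more strongly to WSS increases than to decreases of equal magnitude.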
Inter-embryo variability in aortic arch patterns coincides with increased wall shear stress
Our fluorescent dye injections demonstrate significant variations in AA patterns at stage 21, which were not observed at previous (stage 18) or later (stage 24) developmental time-points. This inter-embryo variability was also noted by Pexieder, who documented similar observations at four hours prior to and after stage 21 [12]. Pexieder's observations underscored the importance of using developmental staging landmarks rather than duration of incubation time in assigning developmental stage to maturing avian embryos. CFD models of all four stage 21 AA configurations show a concomitant acute increase in WSS. Compared to our previous study of stage 18 AA [39], we found an average increase of 3.1 Pa (nearly 2-fold) per AA at stage 21; WSS then decreased by 2.5 Pa (0.5-fold) by stage 24. This escalation in WSS is likely due, in part, to the exponential rise in cardiac output that occurs during development [31,51,52,57,58,59,60,61], though more research is needed to determine these effects (Figure 7). Based on the coincidence between AA pattern variability and the transient sharp increase in WSS, we hypothesize that stage 21 represents a period of dynamic AA growth, regression, and generation events, which attempt to restore normal loading. While many experiments indicate the importance of flow distribution in AA growth and morphogenesis [35,36,38], the WSS after intervention remains unknown. Future work to characterize the biomechanical environment following such interventions is needed.
Asymmetric cardiac output distribution to the aortic arches
Our CFD results demonstrated clear differences in cardiac output distribution to AA pairs, as well as asymmetric perfusion between the laterals of distinct pairs (i.e. AA IV). Our group recently developed an optimization-based model for AA growth, where the individual AA diameters were free to alter in response to a global objective function that minimizes the total energy expenditure while maximizing diffusive capacity [62]. This model demonstrated that there was always one dominant (larger diameter) AA, the selection of which was strongly related to the orientation of the outflow tract. The outflow tract orientation acted to preferentially direct flow to one of the AA vessels, which became the dominant AA. This model showed similarities to the classic problem of competing collateral vessels, where small perturbations in the distribution of blood flow cause one vessel to dilate due to increased WSS while the others constrict due to a decrease in WSS, eventually leading to reduction to a single vessel [63,64,65]. Based on this work, the asymmetry observed in AA IV flow at stage 21 (right-lateral dominant, Figure 4) may explain the asymmetric growth of this AA pair, where the left lateral disappears and the right lateral forms a section of the mature arch of aorta. If we consider AA IV as two vessels competing for flow, then this flow asymmetry would predict degeneration of the left lateral. The low flow to AA II may also explain its eventual remodeling to a capillary bed by a similar principle. Asymmetric flow distribution was shown to affect platelet-derived growth factor-A and vascular endothelial growth factor receptor-2 signaling in the asymmetric remodeling of AA VI in the mouse, providing further support for this theory [66]. To date, inherent asymmetry in vascular growth-related gene expression among the AA has not been documented, suggesting that environmental, epigenetic factors such as WSS may play an important role in this process.
Multiple studies using intervention methods to disrupt normal flow in embryos near stage 21 have reported significant subsequent abnormalities in AA growth. Rychter and Lemez [38] tracked the distribution of blood from the vitelline veins in stage 13, 15, and 18 chick embryos, demonstrating clear patterns in AA perfusion. Exclusion of these veins by transection or ligation subsequently rerouted flow to AA not normally perfused from the tested location. Using India ink injections, Hogers et al. [35] extended this vitelline ligation model to demonstrate that intracardiac flow patterns were also disrupted. Further, embryos were examined through hatching, revealing multiple defects in AA development, including hypoplastic right brachiocephalic artery, interrupted aortic arch, double aortic arch, and hypoplastic pulmonary artery. Using only video microscopy, Hu et al. [36] reported similar anomalies in AA perfusion patterns in the left atrial ligated (LAL) chick embryo. Individual AA flow rates were quantified with laser Doppler velocimetry, and demonstrated a significant reduction of flow in all AA in the LAL embryos, although the flow ratios remained similar to the control group. Examination of LAL embryos to stage 27 and 34 revealed defects such as absent AA III and IV and AA hypoplasia.
Figure 7 caption (excerpt): [60] reported velocity data only, which we converted to flow rate using dorsal aorta diameter data from Hu and Clark [52]. The equation of the exponential trend is given in the lower right corner, where Q is the flow rate and t is time, in hours.
It is possible to test the effects of altered outflow tract flow patterns using our current CFD models. As described in our CFD methods, the velocity profile at the outflow tract was plug shaped. The plug flow profile at the outlet of the beating ventricle is an established and valid assumption of cardiovascular fluid dynamics modeling [67], and is therefore employed in this study as well. To examine the effects of other profile shapes, we used the 3AA-cranial model and altered the inlet boundary condition. We prescribed two conditions: 1) a normal parabolic profile, with the maximum velocity occurring at the centroid of the inlet surface, and 2) a skewed parabolic profile, where the maximum velocity is offset from center. We modeled pulsatile flow, using the same stage 21 waveform (Figure S2), and kept the remaining boundary conditions and model parameters unchanged. Neither profile significantly altered the flow distribution, with an average difference of 1% (Table 2, Figure S4). This small effect is likely because the profile rapidly becomes fully developed (parabolic) by the time it reaches the aortic sac, even though we prescribe a plug-flow profile at the inlet. This flow development occurs particularly in the embryonic outflow tract since it has a narrow constriction upstream of our main area of interest, the AA vessels. Furthermore, we expect that the narrowing of the outflow tract and the low Reynolds number attenuates (as a function of the constriction diameter) any flow skewness that may exist due to the looping of the heart. These results seem to suggest that, even if the altered intracardiac flow patterns resulted in a skewed profile at the outflow tract, its effects on AA flow distribution would be small and possibly insufficient to cause morphogenetic abnormalities. However, the outflow tract of the early embryo through stage 32 is contractile and changes its shape during the cardiac cycle [68]. As our models do not incorporate this wall motion, the effects of the skewed profile may be underestimated. An AA flow model that incorporates the wall motion of the outflow tract would provide further evidence; however it is not within the scope of the current study.
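For concreteness, the sketch below defines the three inlet profile shapes compared above (plug, centered parabolic, skewed parabolic) on a normalized radius. The skew offset and the clipping used to keep the profile inside the lumen are our illustrative choices, not the study's boundary-condition code.

```python
import numpy as np

r = np.linspace(-1.0, 1.0, 101)  # normalized radial coordinate

def plug(r, v_mean=1.0):
    # Uniform velocity across the inlet.
    return np.full_like(r, v_mean)

def parabolic(r, v_mean=1.0):
    # Fully developed Poiseuille shape; peak of 2*v_mean at the centerline.
    return 2.0 * v_mean * (1.0 - r**2)

def skewed_parabolic(r, v_mean=1.0, offset=0.3):
    # Parabolic shape with the peak shifted off-center; clipped to the lumen
    # and not renormalized, so the mean is only approximately preserved.
    shifted = np.clip((r - offset) / (1.0 - abs(offset)), -1.0, 1.0)
    return 2.0 * v_mean * (1.0 - shifted**2)

for name, prof in [("plug", plug(r)), ("parabolic", parabolic(r)),
                   ("skewed", skewed_parabolic(r))]:
    print(f"{name}: peak = {prof.max():.2f} x mean velocity")
```

In a full CFD run each profile would be imposed as the inlet boundary condition and scaled by the pulsatile waveform; the finding above is that the downstream AA flow split is nearly insensitive to which shape is chosen.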
Hypothetical model for transitions in aortic arch patterns at stage 21
The multiple AA configurations at stage 21 led us to propose two pathways by which the transition from the stage 18 to stage 24 AA patterns occurs (Figure 8). Each of the four AA configurations observed at stage 21 can represent a discrete snapshot occurring during the disappearance of AA II and emergence of AA VI. The 3AA-cranial configuration maintains the stage 18 II, III, IV AA, and can be considered as the immature stage 21 AA pattern. Similarly, as the 3AA-caudal configuration contains the same AA as the stage 24 embryo, it can be referred to as the mature stage 21. The remaining configurations, 2AA and 4AA, represent two distinct intermediate stage 21 configurations, which, in turn, demonstrate the two possible growth pathways by which AA II degenerates and AA VI becomes patent. To achieve the 2AA configuration, AA II must degenerate before AA VI emerges; for the 4AA configuration to occur, AA VI must emerge before AA II disappears. While these two pathways can be logically deduced from our fluorescent injection data, the factors governing whether AA morphogenesis proceeds through the 2AA or 4AA pattern are unclear. Based on previous experiments demonstrating the importance of blood flow in AA growth, it is reasonable to propose that hemodynamic loading, such as WSS, has a role in this process. Mechanical restoration theory [53,54] may offer some insight: prolonged exposure to WSS above some critical value (WSS_crit) may lead to AA generation, while WSS far below the normative level (WSS_eq) may lead to AA regression in order to restore loading to WSS_eq (Figure 8). Variations in the trend of cardiac output increase are one possible explanation for the selection of either the 2AA or 4AA pathway, where a sharp increase would lead to generation of AA VI and a slow increase in cardiac output would lead to regression of AA II. Simultaneous hemodynamic and structural measurements, which are not currently available, are needed to investigate this theory.
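The hypothesized restoration logic can be caricatured as a threshold growth rule, as in the toy sketch below. The thresholds, gain, and functional form are illustrative assumptions made for this sketch; the paper proposes no calibrated law.

```python
# Toy threshold growth rule for the hypothesized WSS-mediated arch selection:
# sustained WSS above WSS_CRIT promotes generation/growth, WSS well below
# WSS_EQ promotes regression. All constants below are assumptions.
WSS_EQ = 2.0    # normative WSS, Pa (order of the stage 18/24 values)
WSS_CRIT = 5.0  # hypothetical generation threshold, Pa
K = 4.0         # growth gain, um per Pa per day (assumed)

def diameter_rate(wss_pa):
    """Signed diameter growth rate (um/day) under the toy restoration rule."""
    if wss_pa >= WSS_CRIT:
        return K * (wss_pa - WSS_EQ)       # strong outgrowth / generation
    if wss_pa < 0.25 * WSS_EQ:
        return -K * (WSS_EQ - wss_pa)      # regression toward closure
    return 0.2 * K * (wss_pa - WSS_EQ)     # mild adaptation near WSS_EQ

for w in (0.3, 2.0, 6.0):
    print(f"WSS = {w:.1f} Pa -> dD/dt = {diameter_rate(w):+.1f} um/day")
```

Under such a rule, the rate at which cardiac output rises would determine whether AA VI crosses the generation threshold before AA II falls below the regression threshold, selecting the 4AA or 2AA pathway respectively.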
Relating wall shear stress to biologic events
Several published studies related to the biomechanical regulation of genetic, signaling, and cellular events involved in the normal growth and remodeling of the embryonic AA place the results from our CFD models into context with AA biology. Groenendijk et al. found that levels of high WSS were associated with Krüppel-like factor-2 (KLF-2) and endothelial nitric oxide synthase (NOS-3) expression, while low WSS areas expressed endothelin-1 (ET-1) [69]. Their 3D reconstructions of these expression patterns qualitatively overlap with the WSS magnitudes predicted by our current and previous CFD models [39]. Egorova et al. demonstrated that chick endothelial cells have a dose-dependent relationship between WSS and Tgfb/Alk5 signaling activity [70]. Defective Alk5 signaling in mouse neural crest cells led to AA hypoplasia and uncharacteristic regression [71]. The asymmetric WSS in AA pair IV may result in asymmetric Alk5 signaling, leading to persistence of the right lateral and regression of the left lateral. Molin et al. examined Tgfb2−/− mice and found significant defects in AA IV, though some mice had normal AA [72]. This study indicated that SMAD2 signaling was critical for the development of AA IV and the authors hypothesized that the WSS levels in the unaffected Tgfb2−/− mice were high enough to maintain Tgfb1/Alk5 signaling for sufficient SMAD2 levels. More experimental studies are required to link WSS with these molecular mechanisms and our CFD modeling techniques are well suited to determine 3D WSS distributions.
Modeling 3D blood flow in the embryonic aortic arches
As previously described, our 3D AA models were constructed from a library of micro-CT scans and therefore represent an average embryo. We verify the 3D geometries by comparing AA diameter (Table S1) and length (Table S2), both of which were quantitatively similar. Due to the smaller sample size of experimental length measurements, we further examined the influence of AA length using our previously published numerical parametric 2D hemodynamic model of the right lateral AA [62]. Briefly, this model uses parametrically defined third-order Bézier curves to describe the centerlines of the AA and then generates a lumen of uniform diameter by extruding in the normal directions. An outflow tract and dorsal aorta are incorporated at the proximal and distal ends, respectively. We modified this 2D model to represent the 3AA-cranial stage 21 configuration and applied the stage 21 cycle-average flow rate to model a steady-state simulation (only one half of the total flow was used as we only model the right lateral). As in the 3D models, we employed a rigid wall assumption and 90/10 trunk/cranial flow split at the dorsal aorta. Blood properties remained the same as the 3D models. The parametric geometry allowed us to easily modify AA length and curvature. We performed simulations for 12 distinct geometry cases, varying the lengths and curvatures of each AA individually (Figure S5). We found that when the vessel length varied by 50%, the flow distribution and WSS were maintained within 20% of their original values (Figure S5). Furthermore, when the curvature of the AA was changed such that AA tortuosity increased by 10%, flow distribution and WSS were maintained within 10% of their original values (Figure S5). Therefore, we expect that the small difference between the AA lengths of the 3D models and the experimental measurements (<3%) does not have a significant effect.
While a larger number of experimental length measurements would provide additional evidence, our current data and 2D simulations indicate that even a difference up to 20% in AA length would have little effect on the flow distribution and WSS.
We further qualitatively compared our 3D AA models with previous descriptions of the AA at comparable stages including scanning electron micrographs [8], schematic illustrations [73], reconstructions from serial registered histological sections [74,75], and MRM [15]. Although 3D information is very limited in these reports, all of these studies are in qualitative agreement with the topology of composite 3D reconstructions in the current study. The integrity of the present 3D quantitative morphology of the AA was further verified 1) by overlapping the 3D reconstructions with the large set of 2D fluorescence dye injection recordings at several views, 2) through multiple snap-shots used for vessel diameter measurements, and 3) by the auxiliary micro-CT scans and 3D reconstructions with parametric segmentation and smoothing settings. These checks were previously applied to our stage 18 and 24 models [39]. It is clear from the longitudinal OCT sections that AA diameter is not constant along the vessel length (Figure 2), and this variation is captured by our 3D models. That the smallest diameter appears at the midpoint is consistent with previous reports that show formations of the AA lumens begin at the aortic sac and dorsal aorta and gradually progress to the midpoint [76,77]. The strong quantitative and qualitative agreement suggests that our models provide good estimation of the flow and WSS distribution within the embryonic AA at stage 21.
The CFD models used in this study are subject to several assumptions related to boundary conditions. We specify rigid walls, which may over-estimate the WSS values. As we compare these values to our previous stage 18 and 24 models [39], which also employed a rigid wall assumption, the results related to these comparisons remain valid. While the AA wall is distensible, our experience with the chick embryo indicates that the expansion is small during systole (see Movie S1 for a time-lapse OCT sequence). Our rigid wall models provide a good estimation of the biomechanical forces acting on the embryonic AA during this critical stage in development. Measuring flow in these vessels using direct experimental techniques such as Doppler ultrasound or micro particle image velocimetry is difficult and prone to errors given their small size and limited access due to their position within the pharyngeal arches. A full fluid-structure interaction model would be necessary to capture the effects of wall compliance and the surrounding tissue. The outlet boundary conditions specify a 90/10 flow split between the trunk and cranial vessels. As the flow split is enforced by the CFD model, it is independent of the AA morphology. This distribution is based on Doppler ultrasound studies in the chick embryo, and is consistent across the investigated timeframe [52]. The ratio of the cranial and trunk peripheral resistances set this distribution in vivo. Though the trunk peripheral resistance decreases geometrically from stages 12 to 29, the ratio likely remains constant since the 90/10 flow split is maintained [52]. Alterations in the peripheral resistance may change this flow split, leading to variations in AA perfusion and therefore WSS. We examined the effects of a 60/40 flow split using our stage 18 model [39] and found that the larger cranial perfusion shifted approximately 5% of the cardiac output from the caudal-most AA pair IV to the cranial-most AA pair II (Table 3). Flow to AA pair III remained similar. Thus, peripheral resistance can have an effect on the AA flow distribution and the biomechanical environment. Indeed, increasing the downstream arterial resistance by ligating the right vitelline artery reduced dorsal aortic flow by 38% after one hour, though cranial flow and AA flow was not measured [31]. Future research is required to determine the effects of altered peripheral resistance on AA flow.
Limitations
Although limited published data exist on AA dimensions, our data are consistent with these previous studies [17,36]. The 2D diameter measurements acquired from fluorescent dye images were influenced by both reflected fluorescent light and the volume of injected dye, necessitating the large sample numbers. Due to the large size of pharyngeal arch 2, identifying the boundaries of AA II was sometimes difficult. The imaging depth of OCT is limited by the amount of light scattering caused by the sample and is typically 1.5 mm for biological tissue. At stage 21, pharyngeal arch 2 causes excessive light scattering due to the thickness of the tissue and obstructs imaging of AA II, and occasionally AA III, when using OCT. Therefore, we did not measure AA II under OCT and did not attempt to classify AA configurations, as errors were likely to result from the obscured cranial AA. The obstructed imaging of AA III is the reason for the different n-numbers between AA III and IV in Table 1. Measuring AA diameter with OCT assumes a circular cross-section, which we feel is valid. This technique is limited by observer identification of the AA lumen, which is considerably improved by averaging several B-scans. The four stage 21 AA geometries were created by adding additional AA to the baseline 2AA (III, IV) configuration. We used the same AA vessel geometries in each configuration (i.e., left lateral AA III in 3AA-cranial is the same as left lateral AA III in 3AA-caudal, etc.). This strategy removes any differences in AA diameter or curvature that may exist among the four stage 21 configurations; consequently, midpoint AA diameters in the models do not always exactly match those measured experimentally (Table S1). Neglecting these differences when constructing the models may result in an inconsistency between the flow rates and WSS levels obtained from CFD and their actual, in vivo values. This possible discrepancy may be a reason for deviations in the trend between the change in WSS and the change in diameter (Figure 6). A comparison of in vivo velocity data for each stage 21 configuration would be necessary to determine the degree of this difference; however, no such data currently exist. Nevertheless, given that AA diameters are similar for all the stage 21 configurations (see above), we expect that our method of creating the 3D models introduced little error when comparing flow and WSS values.
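The best-fit-circle diameter measurement used here (validated on a nylon filament in Figure S1) can be sketched as an algebraic least-squares circle fit. The snippet below is a minimal illustration assuming boundary points have already been picked from an averaged B-scan; it is not the authors' actual analysis code.

```python
import numpy as np

def fit_circle_diameter(points: np.ndarray) -> float:
    """Algebraic (Kasa) least-squares circle fit.

    points: (n, 2) array of (x, y) coordinates picked on the lumen boundary.
    Solves x^2 + y^2 + D*x + E*y + F = 0 for D, E, F and returns the
    diameter of the best-fit circle.
    """
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx ** 2 + cy ** 2 - F)
    return 2.0 * radius

# Synthetic check: points sampled from only a ~120-degree arc of a
# 100-um-diameter circle, mimicking a lumen whose boundary is partly
# obscured, still recover the diameter.
theta = np.linspace(0.2, 2.3, 12)
arc = 50.0 * np.column_stack([np.cos(theta), np.sin(theta)])  # um
print(f"recovered diameter: {fit_circle_diameter(arc):.2f} um")
```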
Conclusions
Our study provides the first comparison between quantitative in vivo data and CFD-predicted flow and WSS patterns of the stage 21 embryonic AA. We have shown a transient variability in the number and identity of AA present at stage 21, creating four possible configurations. We applied multimodal imaging strategies to provide the first quantitative data on AA diameter at stage 21, which revealed significant growth in key AA vessels (IV, VI) when compared with our previous data at stage 18, and asymmetric growth of AA IV when compared with our stage 24 data (right lateral grew significantly, left lateral remained the same). CFD analysis of all four stage 21 configurations demonstrated changes in cardiac output distribution and elevated WSS levels compared with stages 18 and 24. Our data revealed that changes in WSS and AA diameter are closely correlated, providing further evidence for flow dependency in embryonic vascular growth. In particular, flow asymmetry in AA IV may relate to its asymmetric growth patterns through shear-mediated gene expression and signaling activation. The timing of events such as the cardiac output increase and outflow tract migration may have additional roles in the progression of AA growth and remodeling. Understanding the relationship between hemodynamics and the growth of the AA can provide insight into the progression of great vessel defects and other forms of CHD.
Supporting Information
Figure S1 Diameter measurement of a nylon filament using OCT. Each panel (A-D) represents the diameter computed based on the selected points (green dots). The best-fit circle is shown in yellow and the diameter is given at the center. The distribution of the selected points around the circumference of the fiber did not significantly affect the calculated diameter. This method is sufficient to measure AA diameters from transverse sections where the entire lumen boundary is not visible. (TIF)
Figure S2 The pulsatile flow waveform used to represent a single cardiac cycle at the outflow tract for the CFD model was interpolated from the data published by Yoshigi et al. [51]. (TIF)
Figure S3 Graphical comparison of average AA midpoint diameter (±SD), cardiac cycle-averaged flow, and spatially averaged (±SD) cycle-average WSS levels for each of the four configurations at stage 21 with the preceding (stage 18) and succeeding (stage 24) data from our previous work [39]. Widths of bars are scaled, with values provided for stage 21. Gray boundaries give the SD. The rate of change of diameter, flow, and WSS is dependent on the stage 21 AA configuration. Significant differences (p < 0.05) between stage 21 diameters are designated with *, where superscripts delineate the statistical pairs. (TIF)
Table S1 Experimentally measured average (±SD) AA diameters compared with those in the 3D models used for CFD simulations. See Table 1 for experimental sample sizes. (DOC)
Table S2 Experimentally measured AA lengths compared with those in the 3D models used for CFD simulations. Experimental measurements were taken for a single AA sample (n = 1) and one measurement was made per sample. (DOC)
Movie S1 Time-lapse OCT sequence through a transverse section of the right lateral AA at stage 21. The movie is obtained in vivo with no embryonic intervention and during the regular cardiac cycle of the beating ventricle. Sections of AA III and IV are visible, where AA IV is toward the left. AA II can also be seen furthest to the right, within the large pharyngeal arch 2. Cranial is toward the right and dorsal is into the page (same as Figure 2C). Frame dimensions are 1.5 × 1.5 mm. (AVI)
Author Contributions
Conceived and designed the experiments: WJK OD YW BBK KP. Performed the experiments: WJK OD YW JPT. Analyzed the data: WJK | 2015-09-23T00:31:53.000Z | 2013-03-21T00:00:00.000 | {
"year": 2013,
"sha1": "dba01495d3354a7ed996e4213e6fe9aadfb85d64",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0060271&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dba01495d3354a7ed996e4213e6fe9aadfb85d64",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
227015151 | pes2o/s2orc | v3-fos-license | Wash-In Leptogenesis
We present a leptogenesis mechanism based on the standard type-I seesaw model that successfully operates at right-handed-neutrino masses as low as a few hundred TeV. This mechanism, which we dub wash-in leptogenesis, does not require any CP violation in the neutrino sector and can be implemented even in the regime of strong wash-out. The key idea behind wash-in leptogenesis is to generalize standard freeze-out leptogenesis to a nonminimal cosmological background in which the chemical potentials of all particles not in chemical equilibrium at the temperature of leptogenesis are allowed to take arbitrary values. This sets the stage for building a plethora of new baryogenesis models where chemical potentials generated at high temperatures are reprocessed to generate a nonvanishing B − L asymmetry at low temperatures. As concrete examples, we discuss wash-in leptogenesis after axion inflation and in the context of grand unification.
Introduction.-The cosmic imbalance between matter and antimatter [1,2] represents clear evidence for new physics beyond the standard model (SM). Early attempts to explain the baryon asymmetry of the Universe (BAU) related its origin to the CP-violating decays of heavy GUT particles in grand unified theories (GUTs) [3][4][5][6][7]. It was, however, soon realized that electroweak sphaleron processes [8] spoil this explanation. In the early Universe, sphalerons nonperturbatively wash out the baryon-plus-lepton number B + L, which is exactly the linear combination of charges generated during standard GUT baryogenesis. This observation subsequently led to the proposal of leptogenesis [9], which links the BAU to neutrino physics in the type-I seesaw extension of the SM [10][11][12][13][14] and which exploits the fact that sphalerons do not violate the baryon-minus-lepton number B − L. Indeed, during leptogenesis, the CP-violating decays of right-handed neutrinos (RHNs) N_I (I = 1, 2, …) first create a lepton asymmetry (and, hence, nonzero B − L), which is then converted by the SM interactions in the thermal bath, including sphalerons, to a baryon asymmetry.
Standard thermal leptogenesis requires very large RHN masses, M_I ≳ 10^9 GeV, in order to achieve sufficient CP violation during RHN freeze-out [15,16]. This makes it hard to directly probe the RHN sector in experiments and leads to large radiative corrections to the mass of the SM Higgs boson, which aggravates the SM hierarchy problem for RHN masses above the Vissani bound M_I ≲ 10^7 GeV [17,18]. In addition, standard leptogenesis is vulnerable to strong asymmetry wash-out if the RHN Yukawa interactions with the SM lepton-Higgs pairs l_α ϕ are too strong [19][20][21][22].
In this Letter, we will present a mechanism to generate nonzero B − L charge in the type-I seesaw model that avoids most of these shortcomings; for alternative routes to low-scale leptogenesis, see [23][24][25][26][27][28][29][30]. The key idea behind our proposal is to generalize standard freeze-out leptogenesis to a nonminimal cosmological background in which all conserved charges C at the time of leptogenesis (see Table I) are allowed to take arbitrary values. In such a background, the lepton-number-violating (LNV) RHN interactions then result in a new equilibrium attractor for the chemical potentials in the plasma that generically features nonzero B − L, even if B − L = 0 initially. The RHN interactions also actively drive the plasma toward this new attractor solution, which is why we dub our mechanism wash-in leptogenesis.
As we will show, wash-in leptogenesis can successfully operate down to RHN masses of a few hundred TeV, i.e., masses slightly above the equilibration temperature of the electron Yukawa interaction [31]. The mechanism therefore allows one to satisfy the Vissani bound; in particular, it is compatible with the neutrino option, which denotes the idea that RHNs with masses of a few PeV are responsible for radiatively generating the electroweak scale in the SM [32][33][34][35][36]. Wash-in leptogenesis is also independent of the amount of CP violation in the RHN sector, which liberates it from the Davidson-Ibarra bound M_I ≳ 10^9 GeV; and its success is not jeopardized by large Yukawa couplings. In fact, in the presence of additional conserved charges, strong asymmetry wash-out turns into efficient asymmetry wash-in.
Our proposal builds on earlier work, which already partly considered some of the ideas presented here [37][38][39][40][41] (see also [42]). The essential new elements of our analysis are the following: (i) We provide a systematic discussion spanning ten orders of magnitude in temperature, T ∈ (10^5, 10^15) GeV. In doing so, we account for all possible unconstrained charges in each temperature regime, which allows us to develop a general toolkit for constructing new baryogenesis models; see our main results in Table II. (ii) We pay particular attention to flavor. That is, we allow for an arbitrary flavor composition of the primordial charge asymmetries, and we take into account charged-lepton flavor effects in our analysis of wash-in leptogenesis. This especially includes effects related to flavor coherence or decoherence. (iii) We go beyond LNV two-to-two scattering processes mediated by the dimension-5 Weinberg operator, considering also the ordinary decays and inverse decays of dynamical RHNs.
While wash-in leptogenesis can provide the basis for numerous new baryogenesis models, it does not represent a complete model by itself. It should rather be regarded as a general mechanism that describes how RHN interactions reprocess primordial charge asymmetries that were generated at higher temperatures. This includes the intriguing possibility of creating a nonvanishing B − L asymmetry from B − L-symmetric initial conditions. But it is agnostic about the ultraviolet (UV) physics that is responsible for setting these initial conditions. This is an advantage, as it allows us to perform a model-independent analysis from a bottom-up perspective. The remainder of this Letter is therefore organized as follows: First, we will study wash-in leptogenesis in the spirit of an effective field theory that describes the evolution of its input parameters (i.e., the primordial charge asymmetries) from some high-energy matching scale down to low energies. Then, we will turn to concrete UV completions that illustrate how wash-in leptogenesis can successfully create the BAU, even if B − L = 0 initially. Specifically, we will consider the generation of nonzero B + L charge during GUT baryogenesis and axion inflation [44][45][46][47]. A lesson from these examples is that wash-in leptogenesis is able to resurrect baryogenesis scenarios that would otherwise suffer from strong asymmetry wash-out, in a way that is more complex than simply resorting to standard leptogenesis.
TABLE II. Numerical coefficients x_C that describe the composition of μ^eq_(B−L) = q^eq_(B−L) · 6/T^2 in terms of the conserved charges μ_C = q_C · 6/T^2 in different temperature regimes; see Eq. (15). The ✗ symbol marks the absence of the corresponding μ_C due to an efficient SM interaction. The second column indicates the active flavors l_α with respect to N_1 interactions; see the discussion around Eq. (13). The last column contains n_Δ⊥, which vanishes in the case of B − L-symmetric initial conditions. P and P_τ are model dependent and encode the flavor composition of the primordial q_e,μ,τ asymmetries with respect to the N_1 wash-out direction [see the text for examples and Eqs. (S41) and (S56) [43]]. In this table and throughout the Letter, we assume vanishing global hypercharge, μ_Y = 0. For more details, see Supplemental Material [43].
Wash-in leptogenesis.-We begin by considering a particularly interesting and simple scenario: N_1-dominated wash-in leptogenesis at temperatures of a few hundred TeV. In this temperature regime, all SM interactions are equilibrated-except for the electron Yukawa interaction, which renders the comoving charge asymmetry of right-handed electrons a classically conserved quantity, q_e/s = const, with entropy density s. Its anomalous violation via the chiral plasma instability is negligibly slow for the q_e/s values of interest [48][49][50]. At the same time, all charged-lepton flavors α = e, μ, τ are fully decohered, which allows us to work with the standard Boltzmann equations (Eq. (1)) for the three lepton flavor asymmetries Δ_α = B/3 − L_α in the type-I seesaw model [24,26], which are valid in the nonrelativistic regime T ≲ M_1, where any N_1 chemical potential is clearly negligible because of the N_1 Majorana mass, μ_N1 ≃ 0. The negative sign on the left-hand side follows from Δ_α ⊃ −L_α.
The charge asymmetry q_i for a particle species i is defined as the difference of its particle and antiparticle number densities, q_i = g_i μ_i T^2/6, with chemical potential μ_i and multiplicity g_i, while q_C = μ_C T^2/6 for all conserved charges C, with μ_C in Eq. (5). The first term on the right-hand side in Eq. (1) is the standard source term describing the asymmetry production from RHN decays, while the second term is the standard wash-out term, with a total wash-out rate per unit volume that encompasses RHN inverse decays, γ^id_αβ = γ_1α δ_αβ, as well as ΔL = 2 and lepton-flavor-violating ΔL = 0 two-to-two scattering processes (see [24,26] for more details).
Before we are able to solve the coupled system of equations in Eq. (1), we have to specify the relation among the chemical potentials μ_lα, μ_ϕ, and μ_Δα. In standard leptogenesis, this relation is encoded in the flavor coupling matrix (C)_αβ = C_αβ [51][52][53][54][55][56], whose structure is determined by SM spectator processes [57][58][59]. The crucial difference between standard leptogenesis and our scenario is that, in a nontrivial chemical background, the standard linear relation μ_lα + μ_ϕ = −Σ_β C_αβ μ_Δβ turns into an affine relation (Eq. (3)), where, at temperatures of a few hundred TeV, the translation by the constant shift vector μ^0_α is solely induced by the conserved chemical potential of the right-handed electrons (Eq. (4)). Equations (3) and (4) follow from analyzing all 16 SM chemical potentials μ_i (i = e, μ, τ, l_e, l_μ, l_τ, u, c, t, d, s, b, Q_1, Q_2, Q_3, ϕ): In any given temperature regime, the number of linearly independent conserved charges C and the number of SM interactions in equilibrium always add up to 16; see Table I. This results in 16 constraint equations in each temperature regime that allow one to express the chemical potentials μ_i of all SM species as linear combinations of the conserved chemical potentials μ_C (C = Δ_α, …). In general, we therefore obtain a constant shift vector μ^0_α in Eq. (3) of the form given in Eq. (5), with charge vectors n^C_i and multiplicities g_i; see [60] for details. We provide explicit expressions for n^C_i, g_i, the flavor coupling matrices C_αβ, and source matrices S_αC in all temperature regimes of interest in Supplemental Material [43].
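The bookkeeping behind Eqs. (3)-(5), namely 16 constraints built from equilibrated interactions plus conserved-charge definitions, is a plain linear solve. A drastically reduced toy version (three species, made-up charge vectors) is sketched below to show the structure; it is not the actual SM system.

```python
import numpy as np

# Toy version of the 16-equation constraint system: each row is either an
# equilibrium condition for an efficient SM interaction (a linear
# combination of chemical potentials equals zero) or the definition of a
# conserved charge fixed to its primordial value.  The 3-species system
# below uses invented charge vectors purely for illustration.
#             mu_l   mu_phi  mu_e
A = np.array([
    [1.0,     1.0,   -1.0],   # interaction l + phi <-> e in equilibrium
    [0.0,     1.0,    0.0],   # proxy for vanishing hypercharge: mu_phi = 0
    [0.0,     0.0,    1.0],   # conserved charge: mu_e fixed
])
b = np.array([0.0, 0.0, 2.0e-8])   # assumed primordial mu_e / T

mu_l, mu_phi, mu_e = np.linalg.solve(A, b)
print(f"mu_l = {mu_l:.2e}, mu_phi = {mu_phi:.2e}, mu_e = {mu_e:.2e}")
```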
Equations (1) and (3) tell us that the Boltzmann equations are linear in the lepton flavor asymmetries Δ_α. This allows us to split q_Δα into contributions from thermal and wash-in leptogenesis, respectively, where Γ^w_αβ = 6/T^3 γ^w_αβ. Equation (6) is reminiscent of spontaneous baryogenesis [61,62], specifically, spontaneous leptogenesis [63,64], where the rolling of a (pseudo)scalar field φ induces effective chemical potentials μ^0_α ∝ q^0_α [60] (see also [65,66]). The difference between spontaneous leptogenesis and our scenario is that we assume nonzero primordial asymmetries stored in a set of conserved charges, whereas spontaneous leptogenesis involves time-dependent asymmetries-controlled by the interaction Lagrangian of the field φ and not necessarily related to conserved charges-that are present only when φ is in motion. This requires that LNV processes must be efficient exactly at the time when φ is rolling. In our scenario, such a temporal coincidence is not needed. Still, it is straightforward to generalize the following analysis to time-dependent charges q^0_α [67]. At any given temperature, the total wash-out rate is typically dominated by a single process, such that it factorizes into Γ^w_αβ = P_αβ Γ^w, where the temperature dependence is contained in the flavor-blind wash-out rate Γ^w and where the matrix (P)_αβ = P_αβ encodes the flavor structure. In this case, it is then possible to write down an exact solution of Eq. (6). For arbitrary initial conditions q^ini_Δβ, the solution relaxes toward q^eq_Δα, the equilibrium attractor in the presence of RHNs, which can also be derived from Eq. (3) by requiring all RHN interactions to be in equilibrium, μ_lα + μ_ϕ = μ_N1 = 0. The matrix (E)_αβ = E_αβ describes how the RHN interactions actively drive the plasma exponentially close to this solution (Eq. (8)), where K_1 denotes the standard N_1 decay parameter (Eq. (9)). At temperatures of a few hundred TeV, the total wash-out rate is dominated by inverse decays, such that P_αβ = p_1α δ_αβ, where w ≈ 3π/4 assuming Maxwell-Boltzmann statistics for all particles [68]. For strong wash-in, K_1 ≫ 1, and a generic flavor structure, p_1α ≪ 1, all entries of E are exponentially suppressed. The total washed-in B − L asymmetry then reads as in Eq. (12), which also immediately follows from Eq. (4). Any UV mechanism that results in q_e ≠ 0 at high temperatures thus induces nonzero B − L at temperatures of a few hundred TeV.
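The relaxation toward the equilibrium attractor described by Eqs. (6)-(9) can be illustrated with a minimal numerical sketch: the flavor asymmetries obey a linear system dq/dt = −Γ_w P (q − q_eq), so even from B − L-symmetric initial conditions the plasma is driven toward a nonzero total asymmetry. All numerical values below are hypothetical placeholders, not the actual rates.

```python
import numpy as np

# Sketch of the wash-in relaxation implied by Eq. (7): the flavor
# asymmetries q_Delta relax exponentially toward the equilibrium attractor
# q_eq at a rate set by the flavor projector P and a flavor-blind wash-out
# rate Gamma_w.  All numbers are illustrative assumptions.
P = np.diag([0.5, 0.3, 0.2])         # assumed flavor projections p_1alpha
Gamma_w = 5.0                        # assumed wash-out rate (1/time units)
q_eq = np.array([0.4, -0.1, 0.2])    # assumed attractor (arbitrary units)
q = np.zeros(3)                      # B-L-symmetric initial condition

dt, steps = 1e-3, 2000
for _ in range(steps):
    q += -Gamma_w * P @ (q - q_eq) * dt   # dq/dt = -Gamma_w P (q - q_eq)

print("q_Delta(t_final) =", np.round(q, 4))
print("total washed-in B-L =", round(q.sum(), 4))
```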
Flavor effects.-Next, let us generalize the above discussion to arbitrary temperatures T ∈ (10^5, 10^15) GeV. Equations (1)-(9), except for Eq. (4), remain valid in this case, the only difference being that the meaning of the flavor index α is now different. At T ∈ (10^9, 10^{11-12}) GeV, electrons and muons propagate as coherent states, which means α = ∥τ, τ, while at temperatures T ∈ (10^{11-12}, 10^15) GeV, all three charged leptons propagate in coherent superpositions, such that α = ∥. Here, l_∥ represents the coherent single-flavor field that can be created and destroyed by N_1 interactions, and l_∥τ is the same field after projecting out its τ component. Denoting the N_1 Yukawa couplings by h_e1, h_μ1, and h_τ1, we can write h^2_∥ = |h_e1|^2 + |h_μ1|^2 + |h_τ1|^2 and h^2_∥τ = |h_e1|^2 + |h_μ1|^2. Flavor coherence at higher temperatures also implies that some flavor asymmetry Δ_⊥ can escape wash-in leptogenesis, where l_⊥ is perpendicular to l_τ and l_∥τ and where l_⊥1 and l_⊥2 span the two-dimensional flavor space perpendicular to l_∥. Making use of these definitions and assuming again strong wash-in and generic RHN couplings, Eq. (12) now turns into Eq. (15), where the numerical coefficients x_C are listed in Table II. This asymmetry remains conserved as soon as the RHN interactions become inefficient at some high temperature T_B−L [22]. We therefore obtain the present-day BAU via Eq. (16), where c_sph ≃ 12/37 [69]. Note that the standard contribution from thermal leptogenesis may be suppressed because of strong wash-out or insufficient CP violation. Equation (15) and Table II are our main results, which serve as a general toolkit to construct new baryogenesis models by implementing the following algorithm: (i) Conceive a UV model that leads to primordial chemical potentials μ_i for some particle species i. (ii) Determine the corresponding conserved charges μ_C. (iii) Specify the N_1 mass and, hence, the relevant temperature scale for leptogenesis, T_B−L. (iv) Compute the final BAU according to Eqs. (15) and (16).
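A schematic numerical version of this four-step algorithm is sketched below for the lowest temperature regime, using the relation between μ_e and μ_B−L quoted later in the text (which corresponds to x_e = −3/10). The helicity input and the rough value of α_Y are assumptions for illustration, and the final conversion to the observed BAU via Eq. (16) is represented here only by the sphaleron factor.

```python
import math

C_SPH = 12.0 / 37.0      # sphaleron conversion factor, c_sph ~ 12/37
ALPHA_Y = 0.01           # hypercharge fine-structure constant (approximate)
h_over_T3 = 1.0e-6       # helicity density at reheating, h_Y/T^3 (assumed)

# For T in (10^5, 10^6) GeV the text gives mu_e/T = -3 alpha_Y/pi (h_Y/T^3)_rh
# and mu_(B-L)/T = 9/10 alpha_Y/pi (h_Y/T^3)_rh, i.e. x_e = -3/10.
mu_e = -3.0 * ALPHA_Y / math.pi * h_over_T3
x_e = -3.0 / 10.0
mu_bl = x_e * mu_e
print(f"mu_(B-L)/T = {mu_bl:.3e}")          # = 9/10 alpha_Y/pi (h_Y/T^3)_rh
print(f"baryon-asymmetry proxy: c_sph * mu_(B-L)/T = {C_SPH * mu_bl:.3e}")
```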
Possible UV completions.-Let us now showcase two possibilities for generating primordial charge asymmetries prior to wash-in leptogenesis. Both scenarios result in B + L ≠ 0 but preserve B − L. First, we consider SU(5) unification, where the decay of the heavy colored Higgs field H_c ⊂ 5 mainly proceeds via the third-generation Yukawa coupling, H_c → Q_3 Q_3, tτ, Q_3 l_τ, tb [7,70,71]. The production and decay of H_c bosons after inflation in the SU(5)-broken phase (see, e.g., Refs. [72,73] for a viable scenario) then results in nonzero chemical potentials for these species (Eq. (17)), while all other chemical potentials vanish. Here, μ_0 is determined by the decay rate, CP violation, and production mechanism of the colored Higgs field. This scenario sets the stage for wash-in leptogenesis above the equilibration temperature of the tau Yukawa interaction, T ≳ 10^{11-12} GeV. Similarly, one can construct models where extra Higgs scalars also generate primordial asymmetries in the first two fermion generations. The initial q_e,μ,τ asymmetries are then encoded in the generalized fields ē = c_e e + c_μ μ + c_τ τ or ē_τ = c^τ_e e + c^τ_μ μ, such that, in Table II, a_e,μ,τ = h_(e,μ,τ)1/h_∥ and b_e,μ = h_(e,μ)1/h_∥τ. Our second example is axion inflation featuring a coupling of the axion-inflaton field φ to the Chern-Simons term of the hypercharge gauge field, φ/(4Λ) Y_μν Ỹ^μν [74]. This coupling sources a nonvanishing ⟨Y_μν Ỹ^μν⟩ during inflation [75][76][77], which induces primordial chemical potentials for all SM fermion species via the SM chiral anomaly [78,79] (Eq. (18)) [46,47], with hypercharge fine-structure constant α_Y, hypercharges n^Y_i, and ± for left- and right-handed fermions. Here h_Y = ⟨A_Y · B_Y⟩/a^3 is the physical hypermagnetic helicity density, which is defined in terms of the comoving vector potential A_Y, comoving flux density B_Y, and cosmic scale factor a. In the parameter region where h_Y/T^3 is approximately conserved [47,[80][81][82]], its value at reheating after inflation dictates the magnitude of the conserved charges in each temperature regime. For T ∈ (10^5, 10^6) GeV, e.g., we have μ_e/T = −3 α_Y/π (h_Y/T^3)_rh and, hence, μ_B−L/T = 9/10 · α_Y/π (h_Y/T^3)_rh. Axion inflation with a Hubble rate of H_inf ∼ 10^10 GeV can therefore readily give rise to the observed baryon asymmetry [47]. The evolution of B and L in this scenario is schematically shown in Fig. 1. Axion inflation produces all lepton flavors in a symmetric way, meaning P = 1/3 and P_τ = 1/2 in Table II.
Conclusions.-In this Letter, we presented a systematic discussion of wash-in leptogenesis, a mechanism to generate nonzero B − L in the type-I seesaw model. Our mechanism successfully operates at low RHN masses, strong wash-out, negligible CP violation in RHN decays, and B − L-symmetric initial conditions. We focused on N_1-dominated wash-in leptogenesis; however, the inclusion of heavy-neutrino flavor effects [83], or even the generalization to a density-matrix formalism [84][85][86], is straightforward. Similarly, one may generalize our mechanism to other sources of LNV in the early Universe. The general concept of wash-in leptogenesis opens the door to a plethora of possibilities.
We thank Apostolos Pilaftsis, Mikhail Shaposhnikov, and Daniele Teresi for helpful comments. K. | 2020-11-19T02:00:50.629Z | 2020-11-18T00:00:00.000 | {
"year": 2020,
"sha1": "9ac112b283271e73878686d22bb065f285a2f52f",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevLett.126.201802",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "5522a2c67cc4b784aafd146a0bcb0aec71de24fe",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
256934577 | pes2o/s2orc | v3-fos-license | Scintillating Organic–Inorganic Layered Perovskite-type Compounds and the Gamma-ray Detection Capabilities
We investigated the scintillation properties of organic-inorganic layered perovskite-type compounds under gamma-ray and X-ray irradiation. A crystal of the hybrid compound with phenethylamine (17 × 23 × 4 mm) was successfully fabricated by the poor-solvent diffusion method. The bulk sample showed superior scintillation properties, with a notably high light yield (14,000 photons per MeV) under gamma-rays and a very fast decay time (11 ns). The light yield was about 1.4 times higher than that of a common inorganic material (GSO:Ce), as confirmed under 137Cs and 57Co gamma-rays. In fact, the scintillation light yield was the highest among organic-inorganic hybrid scintillators. Moreover, it is suggested that the light yield of the crystal was proportional to the gamma-ray energy across 122-662 keV. In addition, the scintillation from the crystal had a lifetime of 11 ns, which was much faster than that of GSO:Ce (48 ns) under X-ray irradiation. These results suggest that organic-inorganic layered perovskite-type compounds are promising scintillators for gamma-ray detection.
between the organic barrier layers. Excitons in the inorganic layer possess large oscillator strength and exciton binding energy due to the quantum confinement effect and the image-charge effect [15,16]. In addition to unique optical properties such as electroluminescence [17] and distinguished optical nonlinearities [18], the scintillation properties of the hybrid compounds have been investigated under various types of radiation. Efficient scintillation owing to exciton recombination in the inorganic layer was observed under proton, electron, and X-ray irradiation [19][20][21].
Such optical properties under optical and ionizing irradiation are governed by exciton properties in the inorganic layer. In our previous studies, we investigated the correlation between the electronic structure of the inorganic layer and the optical properties under various types of radiation [22][23][24]. Based on structure analysis and photoluminescence spectroscopy, it has been demonstrated that the luminescence properties of the hybrid compounds are governed by structural distortions in the Pb-Br-Pb bonds between adjoining PbBr6^2− octahedra and in the Br-Pb-Br bonds inside the PbBr6^2− octahedra of the inorganic layer [22]. In addition to the luminescence properties, it has been shown that the scintillation properties are also governed by the exciton properties in the inorganic layer under synchrotron X-ray irradiation (67.4 keV), because the effect of energy transfer from the organic to the inorganic layer on the scintillation properties is negligible, owing to the much lower energy deposited in the organic layer than in the inorganic layer [23,24]. Therefore, (C6H5C2H4NH3)2PbBr4, which has both distortions in the inorganic layer, is a promising candidate as a gamma-ray scintillator material.
In this study, we investigated the photoluminescence (PL) and scintillation properties of organic-inorganic layered perovskite-type compounds under gamma-ray and X-ray irradiation. A hybrid compound crystal (phenethylamine incorporated into the organic layer) was fabricated by the poor-solvent diffusion method. Further, the scintillation spectra, scintillation decay profiles, and pulse-height spectra of the (C6H5C2H4NH3)2PbBr4 (or Phe) crystal were characterized. Figure 1 illustrates a photograph of a prepared Phe crystal. The size of the crystal was approximately 17 × 23 × 4 mm^3. The transparency was limited, such that the line patterns on the back of the sample were not clearly visible. Visual observation indicated that some cracks and defects such as grain boundaries were included. Figure 2 shows the XRD patterns of the Phe crystal, C6H5C2H4NH3Br, and PbBr2. Some of the diffraction peaks of the Phe crystal and all the diffraction peaks of PbBr2 were identified, while the diffraction peaks of C6H5C2H4NH3Br could not be identified due to the absence of crystalline data in the database. The observation of (0 0 2l) diffraction patterns of the Phe crystal, where l = 1-7, indicates that a two-dimensional quantum structure was formed in the Phe crystal. The lattice constant of the c-axis was estimated to be about 16.4 Å. Each (0 0 2l) diffraction peak was a single peak, and no peak separation was observed. Hence, the obtained Phe crystal had no phase separation. However, a small amount of undesirable impurities other than the precursors C6H5C2H4NH3Br and PbBr2 may be included in the Phe crystal. Figure 3 exhibits PL spectra of the Phe crystal under excitation at 280 nm. A PL peak was observed at 410 nm in the Phe crystal. The emission wavelength of the Phe crystal agreed well with the reported value for a Phe spin-coated film [25]. Therefore, the PL peak can be ascribed to exciton emission from the inorganic layer [22,25]. According to ref. [25], the Stokes shift in the Phe spin-coated film was less than 10 meV, so self-absorption and reemission of excitons can occur in the Phe crystal. Moreover, the quantum efficiency of the Phe crystal under excitation at 300 nm was 0.25, which was almost equivalent to the value of the Phe single crystal reported in our previous study [22]. Figure 4 exhibits X-ray-induced scintillation spectra of the Phe crystal and GSO:Ce as a reference. A scintillation peak at 437 nm was observed for the Phe crystal. This sharp peak was attributed to exciton emission from the inorganic layer according to the PL spectrum (Fig. 3) and previous studies [21,26]. In addition, the emission from GSO:Ce at 434 nm was due to the 5d-4f transitions of Ce^3+, as is typical and reported earlier [27,28]. Figure 5 represents the scintillation decay profiles of the Phe crystal and GSO:Ce measured under X-ray irradiation. The profile was fitted with multi-exponential decay curves. The fit yielded components with three different lifetimes: 11 ns (81%), 36 ns (18%), and 236 ns (1%). The first component can be attributed to the recombination of excitons in the inorganic layer, based on our previous studies [24]. In addition, this lifetime (11 ns) was much faster than that of GSO:Ce (48 ns), in which the emission is due to the 5d-4f transitions of Ce^3+. Therefore, the Phe crystal exhibits greater scintillation properties: a significantly shorter decay in addition to a higher light yield compared with GSO:Ce.
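A multi-exponential decay fit of the kind used for Figure 5 can be reproduced with a standard nonlinear least-squares routine. The sketch below fits a three-component model to synthetic data generated with the reported lifetimes; it is illustrative only and not the analysis pipeline actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def triple_exp(t, a1, t1, a2, t2, a3, t3):
    """Sum of three exponential decay components."""
    return a1*np.exp(-t/t1) + a2*np.exp(-t/t2) + a3*np.exp(-t/t3)

# Synthetic decay mimicking the reported components (11 ns / 36 ns / 236 ns
# at roughly 81/18/1 relative amplitude) plus noise; a real analysis would
# use the measured X-ray-excited decay profile instead.
rng = np.random.default_rng(0)
t = np.linspace(0, 600, 1200)  # ns
y = triple_exp(t, 0.81, 11, 0.18, 36, 0.01, 236) + rng.normal(0, 5e-3, t.size)

p0 = (0.7, 10, 0.2, 40, 0.05, 200)  # initial guesses
popt, _ = curve_fit(triple_exp, t, y, p0=p0)
for i in range(3):
    print(f"component {i+1}: amplitude {popt[2*i]:.3f}, tau {popt[2*i+1]:.1f} ns")
```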
Figure 6 compares the X-ray-induced afterglow profiles of the Phe crystal and GSO:Ce. The afterglow level (A) was defined as A (%) = 100 × (I_2 − I_BG)/(I_1 − I_BG), where I_BG is the background signal, I_1 is the averaged signal intensity during X-ray irradiation, and I_2 is the signal intensity at 20 ms after X-ray irradiation. The afterglow levels of the Phe crystal and GSO:Ce were 5 ppm and 15 ppm, respectively. The afterglow level of the Phe crystal was almost equivalent to those of the commercial scintillators CdWO4 and BGO [29]. According to ref. [30], the activation energies of trapping sites in organic-inorganic layered perovskite-type compounds are very shallow. The compounds are formed in a self-organized manner, so the creation of lattice defects and lattice mismatch between the organic and inorganic layers is inhibited. This should be the reason why the afterglow level of the Phe crystal was very low. Figure 7 shows pulse-height spectra measured using the Phe crystal and, for comparison, GSO:Ce. The gamma-ray sources used were 137Cs (662 keV) and 57Co (122 keV). The pulse-height channels were 247 ± 10 (Phe, 137Cs), 107 ± 5 (Phe, 57Co), 182 ± 5 (GSO:Ce, 137Cs), and 77 ± 3 (GSO:Ce, 57Co). The scintillation light yield of the Phe crystal was about 1.4 times higher than that of GSO:Ce under gamma-ray irradiation from both 137Cs and 57Co, on the assumption that the pulse height was proportional to the scintillation output. In our previous studies, the scintillation light yield of GSO:Ce was evaluated using reverse-type avalanche photodiodes (APDs) as scintillation detectors [31]. According to the evaluation system calibrated using 55Fe (5.9 keV), the scintillation light yield of GSO:Ce was estimated to be 10,000 photons/MeV. Based on the registered channel numbers and the light yield of GSO:Ce, the scintillation light yield of the Phe crystal was estimated to be 14,000 photons per MeV. Regarding the energy resolution ΔE(FWHM)/E, the Phe crystal showed 29 ± 6% (137Cs) and 43 ± 7% (57Co), while that of GSO:Ce was 9 ± 2% (137Cs) and 19 ± 2% (57Co), approximately consistent with previous reports [32,33]. The poor energy resolution of the Phe crystal can be ascribed to crystal nonuniformity and self-absorption of excitons in the inorganic layer. Figure 8 exhibits pulse-height spectra of 57Co (122 keV), 133Ba (356 keV), 22Na (511 keV), and 137Cs (662 keV) gamma-ray sources measured using the Phe crystal sample. Pulse-height peaks were successfully observed for each gamma-ray source. The registered channels were 52 ± 2 (57Co), 136 ± 5 (133Ba), 213 ± 5 (22Na), and 292 ± 5 (137Cs). In addition, the energy resolution ΔE(FWHM)/E of the Phe crystal for each gamma-ray source was measured to be 49 ± 6% (57Co), 58 ± 7% (133Ba), 55 ± 5% (22Na), and 35 ± 5% (137Cs), respectively. Figure 9 represents the correlation between gamma-ray energy and the corresponding pulse-height channel. It is suggested that the pulse-height channel (and in turn the scintillation light yield) is proportional to the gamma-ray energy. These results suggest that organic-inorganic layered perovskite-type compounds are potential materials for determining the energy of detected radiation in the 122-662 keV range.
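The light-yield estimate follows directly from the ratio of pulse-height channels, anchored to the GSO:Ce reference value of 10,000 photons/MeV. A minimal sketch of that arithmetic, using the channel numbers reported above:

```python
# Relative light-yield estimate from pulse-height channels, assuming the
# registered channel is proportional to scintillation output (as the text
# does).  Reference: GSO:Ce at 10,000 photons/MeV.
GSO_LY = 10_000  # photons/MeV

channels = {                      # (Phe channel, GSO:Ce channel)
    "137Cs (662 keV)": (247, 182),
    "57Co (122 keV)": (107, 77),
}

for source, (phe_ch, gso_ch) in channels.items():
    ratio = phe_ch / gso_ch
    print(f"{source}: Phe/GSO:Ce = {ratio:.2f} "
          f"-> ~{ratio * GSO_LY:,.0f} photons/MeV")
```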
Discussion
According to Figs. 5 and 7, the Phe crystal exhibited a significantly higher scintillation light yield and faster decay than GSO:Ce. In our previous study, the effect of organic moieties on the scintillation properties of organic-inorganic layered perovskite-type compounds was investigated. In the case of scintillation, both the organic and inorganic layers are excited, because the excitation energy of benzene is about 4.7 eV [34]. Recently, we demonstrated that the scintillation properties of organic-inorganic layered perovskite-type compounds mainly depend on the exciton properties of the inorganic layer, because the energy deposited in the inorganic layer is much higher than that in the organic layer when the compounds are irradiated by gamma-rays or X-rays. Therefore, the significant scintillation shown in Figs. 5 and 7 can be attributed to the excitons confined in the quantum well layer (the inorganic layer). In addition, the scintillation light yield of the Phe crystal was 14,000 photons per MeV, which was higher than that of GSO:Ce (10,000 photons per MeV) and commercial organic-inorganic hybrid scintillators such as BC-452. Our structural analyses and photoluminescence spectroscopy suggested that structural distortion in the inorganic layer affects the intensity of luminescence, because these distortions lead to a decrease in the Bohr radius of the excitons [35,36]. This is the reason why the Phe crystal, which has distortions both in the adjoining PbBr6^2− octahedra and inside the PbBr6^2− octahedron, exhibited a high scintillation light yield. In addition, it is suggested that the scintillation light yield of the Phe crystal was proportional to the gamma-ray energy, as illustrated in Fig. 9. The correlation between gamma-ray energy and scintillation light yield for GSO:Ce has been investigated experimentally and theoretically [33,37]. The scintillation light yield of GSO:Ce was non-proportional to the gamma-ray energy in the 10-1000 keV range, due to the effects of K- and L-edge absorption and non-radiative processes [33,37]. On the other hand, it is suggested that the scintillation light yield of the Phe crystal was proportional to the gamma-ray energy in the 122-662 keV range. The proportionality can be attributed to the following reasons. The K-edge absorption energy of Pb is about 85 keV, which is outside the 122-662 keV range [38,39]. In addition, organic-inorganic layered perovskite-type compounds form a self-organized multiple quantum well structure; therefore, size distributions and the creation of lattice defects related to non-radiative processes are avoided. Further investigation of the correlation between gamma-ray energy and scintillation light yield is required, owing to the poor energy resolution. These results suggest that organic-inorganic layered perovskite-type compounds are promising scintillator materials for gamma-ray detection.
Methods
Synthesis. Stoichiometric quantities of phenethylamine (C6H5C2H4NH2) and hydrobromic acid were reacted in water for 0.5 h. After evaporation of the solvent, C6H5C2H4NH3Br powder was obtained; it was subsequently dissolved in N,N-dimethylformamide (DMF) with PbBr2 at a molar ratio of 2:1 and then stirred for 3 h under dry argon flow. Powder of (C6H5C2H4NH3)2PbBr4 was then obtained by evaporating the solvent. Furthermore, the obtained powder was processed by the poor-solvent diffusion method in order to grow a single crystal of (C6H5C2H4NH3)2PbBr4, as follows. The obtained (C6H5C2H4NH3)2PbBr4 was dissolved in DMF, as a strong solvent, in a glass bottle (50 ml), and then nitromethane, as a poor solvent, was dropped into the solution until just before precipitation. Next, the bottle was loaded into a shaded desiccator with the poor solvent poured at the bottom. The vapor of the poor solvent gradually diffused into the solution to reduce the solubility. It took a month to obtain single crystals of Phe, which grew at the bottom of the bottle. Furthermore, a piece of the obtained single crystals was used as a seed crystal and loaded into a new bottle containing Phe powder, DMF, and nitromethane solution. Over another month of crystal growth, a larger Phe crystal was obtained. By repeating the above procedure several times, a crystal with a thickness of 4 mm was obtained.
Evaluation of the sample. The crystal structure was investigated by X-ray diffraction (XRD) over a 2θ range of 3° to 40° at room temperature using Cu Kα radiation. The quantum efficiency was measured using a Quantaurus QY (C11347, Hamamatsu). The PL decay curve was measured using a Quantaurus τ (C11367, Hamamatsu). In these measurements, the excitation wavelength was 280 nm, which was the shortest excitation wavelength available in the instrument, and the monitoring wavelength was 410 nm. X-ray-induced scintillation spectra were measured with our original setup [40]. A conventional X-ray tube equipped with a W anode target (XRB80P & N200 × 4550, Spellman) and a Be window was used as the excitation source. During operation, the tube voltage and current were set to 40 kV and 5.2 mA, respectively. The scintillation photons from the sample were led to a spectrometer (an assembly of an Andor DU-420-BU2 CCD and a Shamrock 163 monochromator) through a 2.0-m optical fiber. Here, the spectrometer was placed off the irradiation geometry axis to avoid X-ray photons directly striking the CCD. In order to reduce thermal noise, the CCD element was cooled to 193 K by a Peltier module. Pulse-height spectrum measurements were performed to estimate scintillation light yields. A crystal sample was placed on the window of a photomultiplier tube (PMT; R7600-2000, Hamamatsu) with optical grease. The sample was covered with several layers of Teflon tape to guide all the scintillation photons toward the PMT. A high voltage of −700 V was supplied (ORTEC 556), and the signals were read out from the anode of the PMT. In order to separate the signal from background gamma-rays, the detector assembly (PMT with scintillator sample) was placed inside Pb walls with a thickness of 5 cm. Once a gamma-ray was detected, the signals were fed into the preamplifier (ORTEC 113) and then to the shaping amplifier (ORTEC 572) with a 1-µs shaping time. After conversion to digital signals by a multichannel analyzer (Amptek Pocket MCA 8000A), they were recorded on a computer. The X-ray-induced scintillation decay profiles and afterglow profiles were measured using an afterglow characterization system [29] equipped with a pulsed X-ray tube. The repetition frequency was 200 kHz for scintillation decay measurements and 10 Hz for afterglow measurements. The X-ray source was supplied with a voltage of 30 kV during the measurements. The system integrates the emission signal over the wavelength range of approximately 160-650 nm. A commercial GSO:Ce scintillator was used as a standard in order to compare the scintillation properties.
"year": 2017,
"sha1": "50f9a38f449086ed099a210e6bf20d84b2df3162",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-15268-x.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "50f9a38f449086ed099a210e6bf20d84b2df3162",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": []
} |
24279359 | pes2o/s2orc | v3-fos-license | Pyruvate Formate-lyase and Its Activation by Pyruvate Formate-lyase Activating Enzyme*
Background: PFL is a glycyl radical enzyme (GRE) activated by a radical AdoMet-activating enzyme (PFL-AE). Results: Equilibrium constants for PFL-AE binding to PFL and AdoMet are determined, and the effects of substrates on activation are quantified. Conclusion: In vivo, PFL-AE exists largely in complex with PFL and AdoMet. Significance: GREs play key roles in anaerobic metabolism, but their activation is poorly understood. The activation of pyruvate formate-lyase (PFL) by pyruvate formate-lyase activating enzyme (PFL-AE) involves formation of a specific glycyl radical on PFL by the PFL-AE in a reaction requiring S-adenosylmethionine (AdoMet). Surface plasmon resonance experiments were performed under anaerobic conditions on the oxygen-sensitive PFL-AE to determine the kinetics and equilibrium constant for its interaction with PFL. These experiments show that the interaction is very slow and rate-limited by large conformational changes. A novel AdoMet binding assay was used to accurately determine the equilibrium constants for AdoMet binding to PFL-AE alone and in complex with PFL. The PFL-AE bound AdoMet with the same affinity (∼6 μM) regardless of the presence or absence of PFL. Activation of PFL in the presence of its substrate pyruvate or the analog oxamate resulted in stoichiometric conversion of the [4Fe-4S]1+ cluster to the glycyl radical on PFL; however, 3.7-fold less activation was achieved in the absence of these small molecules, demonstrating that pyruvate or oxamate is required for optimal activation. Finally, in vivo concentrations of the entire PFL system were calculated to estimate the amount of bound protein in the cell. PFL, PFL-AE, and AdoMet are essentially fully bound in vivo, whereas electron donor proteins are partially bound.
Pyruvate formate-lyase (PFL) supplies the citric acid cycle with acetyl-CoA during anaerobic glycolysis by catalyzing the reaction pyruvate + CoA ⇌ acetyl-CoA + formate and is a central enzyme in the anaerobic metabolism of Escherichia coli and other facultative anaerobes. PFL is among the growing list of glycyl radical enzymes (1), which play key roles in anaerobic metabolism in microbes, including the reduction of ribonucleotides to deoxyribonucleotides (2), the synthesis of benzylsuccinate (3), and the conversion of choline to trimethylamine (4). The defining feature of a glycyl radical enzyme is the presence of a stable and catalytically essential glycyl radical in the active site. The glycyl radical is generated by an activating enzyme that belongs to the radical S-adenosylmethionine (AdoMet) superfamily; these radical AdoMet activases utilize a [4Fe-4S] cluster and AdoMet to generate the glycyl radical by direct H-atom abstraction. These glycyl radical enzymes and their activating enzymes are notoriously difficult to study due to the oxygen sensitivity of both the glycyl radical in the glycyl radical enzyme and the [4Fe-4S] cluster in the activating enzymes.
PFL is constitutively expressed in E. coli; however, its expression increases 10-12-fold under anaerobic conditions (5, 6). The enzyme is produced in an inactive state and must be activated by an activating enzyme (PFL-AE) under anaerobic conditions before catalysis can occur (6-8). PFL exists as a dimer with one active site per subunit (6, 9, 10) and has been shown to exhibit half-site reactivity (5, 11-13). X-ray crystal structures of PFL have revealed that each active site is buried ~8 Å from the surface of the enzyme (9, 10). These data, together with the evidence that activation requires direct H-atom abstraction from an active site glycine residue (PFL Gly-734) by a deoxyadenosyl radical generated in the PFL-AE active site (14-18), suggest that significant conformational changes of one or both proteins are required during the activation process. Recent biophysical and biochemical studies indeed support a two-state model for PFL, in which the closed state that has been structurally characterized can be converted to an open state in which the glycyl radical loop of PFL is more solvent-exposed (11). This conversion to the open state is favored in the presence of PFL-AE (11).
The activation of PFL by PFL-AE involves intriguing issues of protein-protein interactions, associated protein conformational changes, and protected generation and transfer of highly reactive carbon radical species. In this publication, we provide biophysical insight into the interactions between PFL and PFL-AE using surface plasmon resonance under anaerobic conditions, and we explore the roles of AdoMet and PFL substrates in this interaction. Our own data, together with some previously published work, have allowed us to estimate the degree to which the PFL system components are bound in complexes in vivo and to provide a more complete understanding of the conditions under which PFL activation occurs.
EXPERIMENTAL PROCEDURES
Protein Preparation and Small Molecules-PFL-AE and PFL were expressed and purified as published previously (12, 20, 27, 28). PFL-AE was quantified using ε280 nm = 39.4 mM^-1 cm^-1, which was in agreement with the Bradford assay (29) using a correction factor of 0.65 (19). Two batches of PFL-AE were prepared, for the AdoMet binding assays and the PFL activation assays. Iron assays were performed on both batches, and the iron content was determined to be 2.83 ± 0.03 irons/protein for the AdoMet binding assays and 3.96 ± 0.02 irons/protein for the PFL activation assays by a previously published method (30). PFL was quantified using either the Bradford assay or ε280 nm = 178 mM^-1 cm^-1, with both techniques giving identical values (29). The PFL-AE and PFL extinction coefficients were obtained using the ExPASy ProtParam tool. S-Adenosylmethionine was synthesized using AdoMet synthetase and purified as described previously (22). The small-molecule substrates pyruvate, oxamate, and coenzyme A used in the PFL activation assays were obtained from Sigma-Aldrich, were of the highest commercially available quality, and were used without further purification.
Thiol coupling was performed at a flow rate of 5 μl/min, and all injections lasted 400 s. A 1:1 (v/v) mixture of 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide·HCl (EDC) and N-hydroxysuccinimide (200 mM EDC and 50 mM N-hydroxysuccinimide) was injected on the CM5 sensor chip to activate the carboxylic acid moieties using the Biacore X-100. 2-(2-Pyridinyldithio)ethaneamine hydrochloride was freshly prepared at 80 mM in 0.1 M borate buffer at pH 8.5. The ligand protein PFL-AE was injected at a concentration of 22.7 mg/ml, resulting in a baseline increase of 557 resonance units. The unreacted 2-(2-pyridinyldithio)ethaneamine hydrochloride was blocked by injecting 50 mM L-cysteine in 20 mM HEPES, 10 mM NaCl, pH 5.5, and the signal decreased by 224 resonance units. We estimate that 360 RU of PFL-AE was coupled to the CM5 biosensor. Experiments were run at 25°C in 20 mM HEPES, 10 mM NaCl, pH 7.4. Analytes were injected at a flow rate of 30 μl/min with a contact time of 180 s and a dissociation time of 60 s. Experimental sensorgrams were corrected by subtracting the response from the control flow cell. After each experiment, immobilized PFL-AE was regenerated using 20 mM HEPES, 500 mM KCl, 0.005% polysorbate 20, 200 mM imidazole, pH 7.4, with a regeneration time of 180 s, which completely removed the PFL and restored the preinjection baseline. Typical experiments included two blank cycles with buffer followed by three trials, each separated by one blank cycle. Five experimental PFL concentrations were prepared for each trial by making 3-fold dilutions from a maximum concentration of 10 μM PFL dimer. Triplicate experiments resulted in similar Rmax values, indicating that PFL-AE was not damaged during regeneration. Sensorgrams for PFL-AE and PFL binding in single-cycle kinetics mode were fit to a Langmuir 1:1 interaction model using the BIAevaluation software (GE Healthcare) available in the Biacore X-100 plus package.
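For reference, an ideal 1:1 Langmuir sensorgram of the kind fitted by the BIAevaluation software can be written in closed form. The sketch below evaluates such a trace using the rate constants reported under "Results"; Rmax and the analyte concentration are illustrative assumptions.

```python
import numpy as np

def langmuir_response(t, conc, ka, kd, rmax, t_stop):
    """Ideal 1:1 Langmuir SPR trace: exponential association while analyte
    flows (t <= t_stop), exponential dissociation afterwards."""
    kobs = ka * conc + kd
    r_eq = rmax * ka * conc / kobs
    assoc = r_eq * (1.0 - np.exp(-kobs * np.minimum(t, t_stop)))
    r_end = r_eq * (1.0 - np.exp(-kobs * t_stop))
    dissoc = r_end * np.exp(-kd * np.maximum(t - t_stop, 0.0))
    return np.where(t <= t_stop, assoc, dissoc)

# Illustrative trace using the constants reported in this work:
# ka ~ 1028 1/(M s), kd ~ 1.17e-3 1/s, 180-s contact time, 1 uM PFL dimer.
t = np.linspace(0, 240, 481)
r = langmuir_response(t, conc=1e-6, ka=1028.0, kd=1.17e-3,
                      rmax=100.0, t_stop=180.0)
print(f"KD = kd/ka = {1.17e-3/1028.0*1e6:.2f} uM; "
      f"response at 180 s: {r[360]:.1f} RU")
```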
AdoMet Binding Studies-CD experiments were run in triplicate under anaerobic conditions using a Jasco-710 spectropolarimeter at room temperature. Visible-region measurements were collected using a 1-cm path length cuvette, and far-UV spectra were run using a 0.1-mm path length cuvette. For visible-region scans, the sensitivity of the Jasco-710 was set to 100 millidegrees, with a data pitch of 0.1 nm, in continuous scan mode at a speed of 100 nm/min, with a response of 1 s, a bandwidth of 1.0 nm, and an accumulation of three scans. The parameters for far-UV scans were exactly the same as those used in the visible region, except that a scan rate of 50 nm/min was used from 195 to 260 nm. The buffer used for all CD experiments was 20 mM HEPES, 250 mM NaCl, 1 mM DTT, pH 7.4, and PFL-AE concentrations were in the range of 50-120 μM in the visible region and 30 μM in the far-UV region. During AdoMet binding experiments, small volumes of concentrated AdoMet were titrated into the cuvette, and CD data were collected from 300 to 800 nm. A control experiment was also performed in which buffer was titrated in place of AdoMet to show that AdoMet binding was responsible for the changes in the CD spectrum and to provide data for dilution correction of the AdoMet binding experiments. The AdoMet binding results are an average of triplicate data analyzed using the change in ellipticity at 400 nm, which was divided by the maximum change in ellipticity and fit to Equation 1.
The total PFL-AE concentration is represented by the variable Et in Equation 1, Lt represents the total AdoMet concentration titrated during the assay, and KD is the equilibrium dissociation constant. PFL Activation Studies-EPR spectra were measured on a Bruker ER-200D-SRC spectrometer at 12 and 60 K for PFL-AE and PFL, respectively, with a frequency of 9.37 GHz. The EPR microwave power was set to 0.06 milliwatt (for examining the glycyl radical of PFL) and 1.59 milliwatts (for PFL-AE) with a modulation frequency of 100 kHz and a 5-gauss modulation amplitude for all samples; all spectra were the sum of four scans. PFL activation reactions were carried out under anaerobic conditions in an MBraun box with <1 ppm O2. PFL-AE was added to an EPR tube at 100 μM in a volume of 350 μl in 100 mM Tris, 250 mM NaCl, 10 mM DTT, 100 μM 5-deazariboflavin, pH 7.4, and photoreduced for a time course of 0, 5, 10, 20, 30, and 60 min with a 500-watt halogen bulb. Photoreduced PFL-AE was then added to PFL at a 1:1 ratio at a final concentration of 50 μM each of PFL-AE and PFL in the presence of 500 μM AdoMet. One PFL substrate was then added to each EPR sample (10 mM pyruvate, 10 mM oxamate, 100 μM CoA, or no substrate), and the components were mixed. Samples were pipetted into a clean EPR tube and wrapped in foil before being incubated for 20 min to allow the reaction to go to completion. EPR samples were then flash-frozen in liquid N2 and stored in a liquid N2 Dewar until the EPR spectrum could be measured. The concentration of the PFL glycyl radical was determined using a K2(SO3)2NO standard according to previously described methods (11, 12, 34). The Km values for the PFL substrates have been determined previously to be 2 mM for pyruvate and 7 μM for CoA (5). Equilibrium constants have also been determined for PFL small-molecule binding, yielding a KD of 2 mM for oxamate and 100 μM for pyruvate (10). Under the conditions employed during these experiments, we can therefore confidently say that the substrates are at sufficiently high concentrations to interact with PFL and should be close to fully bound.
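Returning to the AdoMet titration analysis above: the fitting can be sketched with the standard tight-binding quadratic, which is presumably the form of Equation 1 (the equation itself is not reproduced in this text). The fraction bound depends on total enzyme Et, total ligand Lt, and KD. The snippet fits synthetic titration data generated at the reported Et and KD; it is an illustration, not the original analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def quadratic_binding(Lt, Kd, Et):
    """Fraction of enzyme with ligand bound for total enzyme Et, total
    ligand Lt, and dissociation constant Kd (standard tight-binding
    quadratic; presumably the form of Equation 1)."""
    s = Et + Lt + Kd
    return (s - np.sqrt(s**2 - 4.0 * Et * Lt)) / (2.0 * Et)

# Synthetic titration at Et = 100 uM PFL-AE with Kd = 7.6 uM, mimicking
# the normalized ellipticity change at 400 nm.
Et = 100.0                                  # uM
Lt = np.linspace(0, 500, 26)                # uM AdoMet titrated
rng = np.random.default_rng(1)
theta = quadratic_binding(Lt, 7.6, Et) + rng.normal(0, 0.01, Lt.size)

popt, _ = curve_fit(lambda L, Kd: quadratic_binding(L, Kd, Et),
                    Lt, theta, p0=[10.0])
print(f"fitted KD = {popt[0]:.1f} uM")
```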
In Vivo Concentrations of the PFL System-Protein purifications and two-dimensional gel electrophoresis studies provide information on the number of protein copies per cell for the PFL system in E. coli grown under anaerobic conditions in minimal medium supplemented with glucose (5, 6, 24, 26, 35). A more recent study using cell microscopy determined the cell volume of E. coli under some of the most commonly used growth conditions (36). We selected the cell volume corresponding to growth under anaerobic conditions in minimal medium supplemented with glucose to determine the in vivo concentrations of the proteins of the PFL system.
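Converting copies per cell and cell volume into molar concentrations is a one-line calculation. The sketch below shows the arithmetic with purely illustrative copy numbers and a roughly E. coli-sized volume; the actual values used in this work come from Refs. 5, 6, 24, 26, 35, and 36.

```python
AVOGADRO = 6.022e23

def cellular_concentration(copies_per_cell: float, cell_volume_l: float) -> float:
    """Convert protein copies per cell to a molar concentration."""
    return copies_per_cell / (AVOGADRO * cell_volume_l)

# Illustrative numbers only: a 1-fL volume (order of magnitude for E. coli
# under these growth conditions) and hypothetical copy numbers.
V = 1.0e-15  # L (assumed)
for protein, copies in [("PFL", 30_000), ("PFL-AE", 3_000)]:
    c = cellular_concentration(copies, V)
    print(f"{protein}: {copies} copies/cell -> {c*1e6:.1f} uM")
```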
PFL-AE Binding Interactions with PFL-Surface plasmon resonance binding experiments were performed under anaerobic conditions to investigate the interaction between PFL and the oxygen-sensitive PFL-AE. We determined the KD for this interaction to be 1.1 ± 0.2 μM at 25°C (Fig. 1). The association rate for complex formation was determined to be 1028 ± 34 M^-1 s^-1. When compared with other biological systems, this rate is very slow and on the low end for protein-protein interactions; this indicates that the association rate is limited by large conformational changes rather than by diffusion (37). Indeed, conformational changes are evident in the crystal structure of PFL-AE upon binding of AdoMet and the 7-mer peptide analog of the PFL active site (32). Conformational changes have also been detected in PFL upon binding of PFL-AE, where the active site loop of PFL must unfold to interact with the binding site in PFL-AE (11).
Electrostatic interactions between proteins lead to association rates that are much faster than the rate of diffusion; given the slow rate of association of PFL-AE with PFL, it is therefore reasonable to assume that electrostatic interactions do not play a significant role in PFL-AE and PFL binding (37). These data are further corroborated by activity assay data showing that ionic strength does not affect PFL activity in the range of 0.1-1.6 M KCl (38). The dissociation rate for the PFL-AE·PFL complex was determined to be (1.17 ± 0.16) × 10^-3 s^-1, indicating that the complex exhibits reasonable stability. When the same PFL-AE and PFL binding data were examined using affinity analysis, a KD of 3.4 ± 2.2 μM was determined for the interaction, which is within error of the equilibrium constant based on the association and dissociation rates.
AdoMet Binding Studies with PFL-AE—The CD spectrum of PFL-AE exhibits maxima at 305 and 430 nm with shoulders at 345 and 630 nm, and minima at 380 and 550 nm. When AdoMet is titrated into a solution of PFL-AE, there are dramatic changes in the CD spectrum, with multiple isosbestic points (Fig. 2A). The CD spectrum of PFL-AE with AdoMet bound has maxima at 305, 410, 495, and 690 nm with shoulders at 365 and 630 nm, and minima at 345 and 560 nm. The spectral changes upon titration with AdoMet allowed us to determine that as-isolated PFL-AE binds AdoMet with a K_D of 7.6 ± 1.9 μM (Fig. 2B). Results from the Knappe lab using reconstituted PFL-AE with similar iron content show that only holo-PFL-AE binds AdoMet, with an equilibrium constant of 3 μM, in close agreement with our data (33). The K_m for AdoMet has been determined previously as 2.8–7 μM (25, 38).
By using our experimentally determined affinity of PFL-AE for PFL, we were able to set up binding experiments in which PFL-AE was essentially fully bound to PFL prior to titrating in AdoMet; in this way we were able to monitor binding of AdoMet to the PFL-AE·PFL complex (Fig. 2). The PFL-AE·PFL complex exhibited essentially the same affinity for AdoMet as PFL-AE alone, with a K_D of 5.7 ± 1.7 μM.
We used far-UV circular dichroism to determine whether changes in secondary structure occur upon AdoMet binding. Interestingly, there was no difference in PFL-AE secondary structure in the presence or absence of AdoMet (Fig. 3); in either case the protein appears to be well folded. The aggregate data suggest that AdoMet binding alters the environment of the iron-sulfur cluster without inducing changes in secondary structure.
PFL Activation Studies—Although nearly all reports of in vitro PFL activation include the PFL substrate pyruvate or its analog oxamate in the activation mixtures, the roles of these molecules in activation have remained unclear. Previous work has shown that photoreduction of PFL-AE results in time-dependent conversion to the EPR-active [4Fe-4S]¹⁺ cluster state, and that in the presence of PFL, AdoMet, and oxamate, there is stoichiometric generation of the glycyl radical on PFL concomitant with cluster oxidation (12). [Fig. 2 legend: The data were analyzed using the change in ellipticity at 400 nm divided by the total change in ellipticity, plotted as a function of AdoMet concentration and fit to the quadratic binding equation. CD parameters: sensitivity 100 millidegrees, data pitch 0.1 nm, continuous scan mode at 100 nm/min over a 300–800 nm range, response 1 s, bandwidth 1.0 nm, accumulation of three scans; all measurements used a 1-cm path length anaerobic cuvette.] We reproduced these assays to examine the roles, if any, of PFL substrates in the activation process. Activation assays were carried out by photoreducing PFL-AE for set amounts of time using deazariboflavin and exposure to an intense halogen lamp. PFL-AE reduction was quantified by EPR spectroscopy, as the amount of the catalytically active [4Fe-4S]¹⁺ state can be determined by comparison with a Cu(II)(EDTA) standard; all activation samples described below and illustrated in Fig. 4 had, for a given time point, the same starting amount of [4Fe-4S]¹⁺ cluster. PFL, with or without added PFL substrates, was added to the reduced PFL-AE, and the amount of glycyl radical generated was quantified by EPR spectroscopy. In these assays, samples containing either pyruvate or oxamate exhibited stoichiometric conversion of the [4Fe-4S]¹⁺ cluster of PFL-AE to the glycyl radical on PFL (Fig. 4). After PFL-AE was photoreduced for 60 min and mixed with PFL, the glycyl radical concentration was 44 ± 5 μM in the presence of pyruvate and 46 ± 5 μM in the presence of oxamate. A similar activation of PFL in the presence of the PFL substrate CoA yielded only 12 ± 3 μM glycyl radical, despite the same starting amount of PFL-AE [4Fe-4S]¹⁺ cluster as in the samples activated with pyruvate or oxamate. Activation of PFL in the absence of PFL substrates yielded 13 ± 3 μM glycyl radical.
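A simplified sketch of EPR spin quantification by double integration against a standard of known concentration (the Cu(II)(EDTA) and K₂(SO₃)₂NO standards mentioned above). This is an assumed workflow, not the authors' code; real quantification also corrects for receiver gain, microwave power, temperature and g-factor, all omitted here.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

def double_integral(field_G, deriv_signal):
    """Double-integrate a first-derivative EPR spectrum over the field axis."""
    absorption = cumulative_trapezoid(deriv_signal, field_G, initial=0.0)
    return trapezoid(absorption, field_G)

def concentration_uM(field_G, sample, standard, c_standard_uM):
    """Spin concentration of the sample relative to a quantified standard."""
    return (c_standard_uM * double_integral(field_G, sample)
            / double_integral(field_G, standard))
```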
In Vivo Concentrations of Proteins and Small Molecules Involved in the PFL System—In vivo concentrations of the proteins and small molecules involved in the PFL system were calculated for this study to provide a context for the equilibrium constants and to estimate the fraction of bound proteins and small molecules in vivo. Calculations used the data of Knappe et al. (5), who quantified the amount of each protein per cell for the PFL system. Advances in cellular microscopy have allowed accurate determination of the cytosolic volume of E. coli cells grown under similar conditions (36). Combined, these data allowed us to calculate the in vivo concentrations of the proteins involved in the PFL system: 20 μM PFL, 1.1 μM PFL-AE, 2.9 μM flavodoxin, 2.3 μM NADP⁺:flavodoxin oxidoreductase, and 648 nM pyruvate:flavodoxin oxidoreductase. Unfortunately, error estimates were not available for the polypeptide or percent-soluble-protein measurements in the PFL system, so we are unable to calculate errors for the in vivo concentrations. The in vivo concentration of AdoMet has been estimated to be in the range of 50–400 μM (39–41), and the in vivo concentration of pyruvate has been determined to be 7.5 ± 0.5 mM (42). Under these conditions, and assuming an AdoMet concentration of 50 μM, PFL, PFL-AE, AdoMet, and pyruvate would be essentially fully bound. Only a small fraction of these complexes would have the electron donor flavodoxin bound, however (Table 1).
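A sketch of the in vivo concentration calculation: protein copies per cell divided by (Avogadro's number × cytosolic volume). The 2.9 fl volume is from the text; the copy numbers below are illustrative, back-calculated from the reported concentrations rather than taken from ref. 5 directly.

```python
N_A = 6.022e23        # Avogadro's number, mol^-1
V_cell_L = 2.9e-15    # 2.9 fl cytosolic volume, in liters

def conc_uM(copies_per_cell):
    """Convert protein copies per cell to micromolar concentration."""
    return copies_per_cell / (N_A * V_cell_L) * 1e6

print(f"PFL:    {conc_uM(35000):.0f} uM")  # ~20 uM  (illustrative copy number)
print(f"PFL-AE: {conc_uM(1900):.1f} uM")   # ~1.1 uM (illustrative copy number)
```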
DISCUSSION
The activation of PFL was studied in this work, providing significant new information on the interactions between PFL and its activase, PFL-AE. Surface plasmon resonance binding experiments were carried out under anaerobic conditions, and the data were well fit by a 1:1 interaction model. The K_D value of 1.1 ± 0.2 μM calculated for PFL and PFL-AE is nearly identical to the previously reported K_m value of 1.4 μM and agrees well with previous estimates of the K_D (11, 14). [Fig. 4 legend (partial): ...and magenta, no substrate. B, the graph shows the quantity of glycyl radical formed on PFL after mixing with PFL-AE that had been photoreduced for 0–60 min; after mixing, the samples were analyzed by EPR. Colors are the same as in A, with the addition of a PFL-AE standard in black, spin-quantified for [4Fe-4S]¹⁺ using a Cu(II)(EDTA) standard. PFL was spin-quantified using a K₂(SO₃)₂NO standard. Samples containing 50 μM PFL-AE and 50 μM 5-deazariboflavin in 20 mM HEPES, 250 mM NaCl, pH 7.4, were photoreduced for 0, 10, 20, 30, and 60 min. 50 μM PFL dimer with either pyruvate, oxamate, CoA, or no substrate was added, and the samples were incubated in the dark for an additional 20 min before being frozen in liquid N₂ and analyzed by EPR. Glycyl radical signals were measured at 60 K and the [4Fe-4S]¹⁺ cluster signals at 12 K to avoid overlapping signals. EPR parameters: microwave frequency, 9.37 GHz; power, 19 milliwatt; modulation amplitude, 5 G.]
TABLE 1 Equilibrium constants and in vivo concentrations for the PFL system
In vivo concentrations for the PFL family were calculated using previously determined polypeptide measurements combined with a cell volume of 2.9 ± 1.2 fl for E. coli cells grown in minimal medium supplemented with glucose under anaerobic conditions (5, 36). In vivo concentrations under anaerobic conditions were as follows: [PFL-AE] = 1.1 μM, [PFL] = 20 μM, [Fld] = 2.9 μM, [FNR] = 2.3 μM, [PFOR] = 648 nM. Equilibrium constants were taken from previously published data (47). Error calculations could not be performed because error information was not available for the polypeptide measurements (5, 48).

The association rate between PFL-AE and PFL is at the low end for biological interactions, indicating that the rate of binding is limited by large conformational changes (37). The [4Fe-4S] cluster of PFL-AE undergoes dramatic changes in its CD spectrum as a direct consequence of AdoMet binding (Fig. 2A). However, no changes in secondary structure occur upon AdoMet binding, based on far-UV CD measurements (Fig. 3). The changes in the visible-region CD spectrum are therefore attributed to direct coordination of AdoMet to the unique iron of the [4Fe-4S] cluster (22, 23). These changes in the visible-region CD of PFL-AE can be used to accurately determine equilibrium constants for AdoMet binding in the presence and absence of PFL. PFL-AE binds AdoMet with identical affinity within error regardless of whether PFL is bound to PFL-AE, indicating that PFL binding to PFL-AE does not affect AdoMet binding affinity. These data suggest that in vivo, the order in which AdoMet binds to PFL-AE or to the PFL-AE·PFL complex does not matter.
Table 1 column headings: K_D; [bound] in vivo; % bound in vivo.
The PFL substrate pyruvate and its analog oxamate have been suggested to act as allosteric effectors required for PFL activation (8, 12, 14, 33). We used EPR spectroscopy to monitor PFL activation in the presence and absence of pyruvate, oxamate, and CoA to determine whether they are required for activation and whether they have any direct effect on the amount of active enzyme produced. Our data show that although PFL substrates are not absolutely required for activation, their presence results in significantly higher glycyl radical concentrations. When pyruvate or oxamate is incubated with PFL and reduced PFL-AE, there is stoichiometric conversion of the [4Fe-4S]¹⁺ cluster of PFL-AE to the glycyl radical of PFL. PFL activated in the presence of CoA or with no substrate yields 3.7-fold less glycyl radical than in the presence of pyruvate or oxamate. The signal for the [4Fe-4S]¹⁺ cluster of PFL-AE is absent in all experiments after the addition of PFL, indicating that in all cases PFL-AE is oxidized in the presence of PFL. The lower quantities of glycyl radical observed in the absence of pyruvate or oxamate therefore suggest that solvent quenches a portion of the PFL glycyl radical. Given that pyruvate and oxamate are known to bind in the active site of PFL (9, 10), we propose that these molecules aid in reinsertion and stabilization of the glycyl radical loop in the closed, catalytically active state of PFL (11).
In vivo concentrations of PFL-AE, PFL, flavodoxin, pyruvate:flavodoxin oxidoreductase, and NADP⁺:flavodoxin oxidoreductase were calculated in this work and compared with K_D values to estimate the amount of bound protein in vivo. Under these conditions, PFL-AE is almost completely bound to PFL (Table 1). In vivo concentrations of AdoMet have been determined to be in the 50–400 μM range (39–41); however, less AdoMet may be available to PFL-AE given the widespread use of AdoMet in many enzymatic reactions in E. coli (24, 43–46). AdoMet binds both PFL-AE and the PFL-AE·PFL complex with the same affinity of ~6 μM, so assuming an in vivo AdoMet concentration of 50 μM, PFL-AE would be essentially fully bound with AdoMet in vivo regardless of whether PFL is bound. Only 11% of cellular PFL-AE is estimated to be bound to its electron transfer partner flavodoxin at any given time in vivo, consistent with the idea that flavodoxin needs to bind only transiently to deliver an electron to the [4Fe-4S] cluster of PFL-AE.
Taken together, our data provide important new insights into the process by which a glycyl radical activating enzyme (PFL-AE) activates its substrate glycyl radical enzyme (PFL). The process involves slow binding associated with large conformational changes, likely involving movement of the glycyl radical domain of PFL and a conserved loop of PFL-AE implicated in substrate binding (11, 32). AdoMet can bind to this complex either before or after association, and binding gives rise to changes in the visible-region CD spectrum of the [4Fe-4S] cluster of PFL-AE. These changes in visible CD features can be used to monitor AdoMet binding and indicate that the affinity of AdoMet for the PFL-AE·PFL complex is comparable with that for PFL-AE alone. Calculations indicate that in vivo, PFL-AE is nearly completely in the PFL-AE·AdoMet·PFL·pyruvate complex, awaiting reduction by flavodoxin to initiate catalysis. | 2018-04-03T02:49:23.484Z | 2013-12-12T00:00:00.000 | {
"year": 2013,
"sha1": "ba96c8524e5226cce44920c17e6925bf97419868",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/289/9/5723.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "66c0c7b78084392544a18180302ab93e89806e23",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
13891401 | pes2o/s2orc | v3-fos-license | A New Strategy for Analyzing Time-Series Data Using Dynamic Networks: Identifying Prospective Biomarkers of Hepatocellular Carcinoma
Time-series metabolomics studies can provide insight into the dynamics of disease development and facilitate the discovery of prospective biomarkers. To improve the performance of early risk identification, a new strategy for analyzing time-series data based on dynamic networks (ATSD-DN) in a systematic time dimension is proposed. In ATSD-DN, the non-overlapping ratio was applied to measure the changes in feature ratios during the process of disease development and to construct dynamic networks. Dynamic concentration analysis and network topological structure analysis were performed to extract early warning information. This strategy was applied to the study of time-series lipidomics data from a stepwise hepatocarcinogenesis rat model. A ratio of lyso-phosphatidylcholine (LPC) 18:1/free fatty acid (FFA) 20:5 was identified as the potential biomarker for hepatocellular carcinoma (HCC). It can be used to classify HCC and non-HCC rats, and the area under the curve values in the discovery and external validation sets were 0.980 and 0.972, respectively. This strategy was also compared with a weighted relative difference accumulation algorithm (wRDA), multivariate empirical Bayes statistics (MEBA) and support vector machine-recursive feature elimination (SVM-RFE). The better performance of ATSD-DN suggests its potential for a more complete presentation of time-series changes and effective extraction of early warning information.
A number of approaches have been proposed for extracting information from time-series data in metabolomics studies. Smilde et al. (13) combined analysis of variance (ANOVA) and simultaneous component analysis to study the variation caused by different factors such as time, dose, or their combinations, proposing the ANOVA-simultaneous component analysis (ASCA) method for time-course problems. Nueda et al. developed a time-series feature selection technique based on the ASCA model that calculates the leverage and the squared prediction error (14). Tai et al. proposed the multivariate empirical Bayes statistical time-series analysis (MEBA) method, which ranks features by Hotelling's T² (15). Berk et al. (8) used smoothing splines mixed effects (SME) models and an associated functional test statistic to detect features that differ between groups. Subsequently, data analysis platforms have been established (16, 17) to facilitate the study of time-series data. In our previous work (18), we proposed a weighted relative difference accumulation algorithm (wRDA) in which an adapted weight is assigned to every time point to extract early information on complicated diseases. These dynamic methods have worked successfully in metabolomics; however, all of them consider only individual metabolites, without taking feature associations into consideration.
Biological processes are intricate, and the relationships among features (such as genes, metabolites and proteins) (19–22) are complicated and evolve with dynamic physiological processes. Thus, analyzing data from the perspective of networks can provide more information for understanding the associations among features and discovering important markers. Fang et al. (23) calculated the information gain (IG) of the ratio between two genes to construct a network; the genes with the largest degrees were regarded as important factors related to lung cancer. Netzer et al. (24) also constructed a ratio network to select nodes as biomarkers: if a ratio showed a statistically significant difference between the classes (e.g., control and obesity groups), an edge was placed between the two corresponding features. Zuo et al. (25) used low-order partial correlation, which can reduce spurious edges, to infer the network. It is worth noting that most network methods have been applied to find key discriminating information in static omics data, rather than to track features with dynamic differential changes.
In this study, a novel strategy for analyzing time-series data based on dynamic networks (ATSD-DN) in a systematic time dimension was developed. The non-overlapping ratio (NOR) was introduced to quantify the changes in feature ratios over the course of disease development, providing a novel basis for network construction. Given that the ratio of two metabolites can be regarded as the result of pathway reactions in which one metabolite is converted into another via single or multiple reaction pathways (26), ATSD-DN constructs networks based on the NOR changes of feature ratios along time points, which facilitates the reflection of physiological or pathological changes. Dynamic concentration analysis and topological structure analysis were performed to analyze the networks and extract early warning information for the disease. Hepatocellular carcinoma (HCC) is one of the most lethal malignancies (27), and liver cirrhosis is the major precancerous lesion in the majority of HCC cases (28). However, early detection of HCC remains a great challenge, especially the discrimination of precancerous cirrhosis from small malignant HCCs (29, 30), and new effective methods for the discovery of biomarkers for early warning of HCC are urgently needed. Owing to its similarity to the histological and genetic features of patients, a diethylnitrosamine (DEN)-induced HCC model can be used to imitate the process of stepwise hepatocarcinogenesis (31–33). Considering the important role of the liver in maintaining lipid homeostasis (11, 34), delineating the changes in lipid metabolism would provide unique insight into early hepatocarcinogenesis and help identify novel diagnostic targets. Therefore, ATSD-DN was applied to time-series lipid data from a rat HCC model induced by DEN administration, to define potential lipid biomarkers for the early diagnosis of HCC and to validate the performance of ATSD-DN.
Results
The workflow of the ATSD-DN strategy is given in Fig. 1. After filtering out non-informative features by static analysis, ATSD-DN constructs the networks. ATSD-DN provides two techniques, dynamic concentration analysis and topological structure analysis, each of which was performed independently to define informative feature ratios. PCA score plots based on the feature ratios defined by each network analysis technique alone were used to show the performance of each technique. Finally, the common feature ratios defined by both techniques were selected, and the corresponding performance analysis is also given.
The construction of dynamic networks. Time-series lipidomics data were analyzed to depict changes in lipid metabolism regarding the process of stepwise hepatocarcinogenesis. A histological examination confirmed that the DEN-induced hepatocarcinogenesis model was successfully produced in this study. The serial progression of hepatocarcinogenesis was divided into three stages: week 8 (hepatitis (H) stage, T 1 ), weeks 10-14 (cirrhosis (CIR) stage, T 2 -T 4 ) and weeks 16-20 (HCC stage, T 5 -T 7 ). The last week of each stage (i.e., T 1 , T 4 and T 7 ) was the typical time point of the corresponding liver disease stage, while the first weeks of the latter two stages (i.e., T 2 and T 5 ) were the interfacial points.
In the three binary classification sub-problems (H vs. CIR, H vs. HCC and CIR vs. HCC), 38 individual features were selected in the first noise-filtering step (i.e., static analysis) at the typical time points by SVM-RFE (35) (Table S1). Multivariate unsupervised PCA was performed to show the discrimination between HCC (T5–T7) and non-HCC (T1–T4) samples (i.e., hepatitis and cirrhosis samples). The first two principal components captured 65.1% and 71.1% of the total variation in the PCA models based on all original features and on these 38 individual features, respectively (Figure S2A,B).
Subsequently, a total of 703 feature ratios were constructed from these 38 individual lipids. For each feature ratio, if the NOR value at two adjacent time points was greater than or equal to 0.85, the corresponding two individual lipids were linked with a red edge; if the NOR was less than or equal to −0.85, the edge was green. As only two time points were considered in each network construction and each time point had exactly the same samples, the sample probability p_t was 0.5. Figure 2 shows the six networks along the 7 time points. In particular, each network illustrates the changes in feature ratios at two continuous time points, rather than quantification at a single time point.

Dynamic concentration and topological structure analyses. These NOR-based dynamic networks were first analyzed from the perspective of dynamic concentration. In Fig. 2, the color of the edges in each network DN-i indicates the change trend in the effective range of each feature ratio, with increases (red) or decreases (green) at two adjacent time points. To trace the continuous changes of the most important interfacial stage between pre-cancer CIR and early HCC, networks DN-4 (T4–T5) and DN-5 (T5–T6), representing the cases in which liver disease developed from pre-cancer cirrhosis to HCC and continued to deteriorate, were emphasized first. Forty-four edges with the same color in networks DN-4 and DN-5 were picked, and the corresponding ratios were retained to construct feature subset 1. Edges with the same color in DN-4 and DN-5 represent continuous changes in the dynamics of the circulating metabolites from T4 to T6. PCA was then performed on the 44 feature ratios to show the discrimination between HCC (T5–T7) and non-HCC (T1–T4) samples. The score plot shows that the non-HCC and HCC samples could be separated well, with a better-performing PCA model in which 95.6% of the total variation was explained (Fig. 3A).
In Fig. 2, the dynamics of circulating metabolites can also be analyzed from the perspective of the topological structure of the networks. In ATSD-DN, the edges between two features represent the dynamics of circulating metabolites over time (26). Therefore, the network with the most edges among the six may represent the largest difference in the dynamics of circulating metabolites, implying physiological or pathological abnormality; such a network could mark a key stage along the time course and the key point of a particular biological process. The top nodes with the largest degrees in that network would be the key factors signaling the onset of the key stage. In this topological structure analysis, the edge number of network DN-4 (T4–T5) (Fig. 2G) was the largest among the six networks, in agreement with the development of HCC validated by the histological examination, indicating activated metabolic disturbance in the interfacial stage between CIR and HCC. The top node with the largest degree (i.e., the number of edges) was then chosen. Two nodes (free fatty acid (FFA) 20:5 and triacylglycerol (TAG) 56:9) shared the same largest degree in network DN-4. It is worth noting that FFA 20:5 also had the largest accumulated degree across the six networks (Table S2), indicating continuous metabolic disturbance over time. As a result, the 33 ratios associated with FFA 20:5 in network DN-4 were retained for subsequent analysis. The separation between the non-HCC and HCC stages is also clearly represented in the PCA score plot based on these 33 feature ratios, with 96.9% of the total variation explained (Fig. 3B).
Definition and external validation of prospective biomarkers.
In the discovery set, 15 common ratios were selected by both the dynamic concentration and topological structure analyses (Table S3). In the PCA score plot based on these 15 ratios, the HCC samples could be clearly discriminated from the non-HCC subjects, with the highest percentage of total variation explained (99.1%; Fig. 3C).
For univariate evaluation, 4 of the 15 ratios showed a significant difference between the model and age-matched control groups at the HCC stage (t-test, p < 0.05) and, simultaneously, between T4 and each time point of the HCC stage (paired t-test, p < 0.05). Detailed information on these 4 ratio candidates (lyso-phosphatidylcholine (LPC) 16:0/FFA 20:5, LPC 18:1/FFA 20:5, phosphatidylcholine (PC) 34:2/FFA 20:5 and LPC 20:3-isomer2/FFA 20:5) is given in Table 1, and their metabolic trajectories are presented in Fig. 3D–G. In the model group, their levels changed slightly at the pre-HCC stage and increased significantly in the early stage of HCC (T5). A significant difference between the model and age-matched control groups was also observed at the HCC stage (T5–T7). To further illustrate the ability of the 4 feature ratios to discriminate HCC and non-HCC samples, receiver operating characteristic (ROC) curve analysis was performed, yielding the area under the curve (AUC) and the sensitivity and specificity at the best cut-off points (Table 2). The AUC values of these 4 feature ratios were 0.940–0.980 in the discovery set.
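A sketch (an assumed scikit-learn workflow, not the authors' code) of the ROC analysis used to evaluate a candidate ratio such as LPC 18:1/FFA 20:5; the values below are synthetic placeholders for the measured ratios and stage labels.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
ratio = np.concatenate([rng.normal(1.0, 0.3, 40),    # non-HCC (T1-T4), illustrative
                        rng.normal(2.0, 0.4, 21)])   # HCC (T5-T7), illustrative
is_hcc = np.r_[np.zeros(40), np.ones(21)]

auc = roc_auc_score(is_hcc, ratio)
fpr, tpr, thr = roc_curve(is_hcc, ratio)
best = np.argmax(tpr - fpr)  # Youden's J picks the best cut-off
print(f"AUC={auc:.3f}, cut-off={thr[best]:.2f}, "
      f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
```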
To validate the performances of the 4 biomarker candidates, 36 sera from another 6 model rats with 6 monitoring time points (i.e., T 1 -T 6 ) were analyzed. These 6 rats were sacrificed for histological examination with the validation of HCC at week 18 (T 6 ). In this external validation set, the AUC values of these 4 candidates were 0.934-0.983 for the discrimination of T 1 -T 4 (pre-HCC stage) and T 5 -T 6 (HCC stage), confirming the potential of these 4 ratio biomarkers for HCC diagnosis. Considering the similar metabolic characteristics of these 4 candidates and clinical practicability, the feature ratio of LPC 18:1/FFA 20:5 was found to be the potential biomarker with the best AUC value for discrimination. The chromatograms and MS/MS data for LPC 18:1 and FFA 20:5 are provided in Figure S4.
Comparison with previous methods. To further evaluate the performance of ATSD-DN, this novel approach was compared with two time-series methods wRDA and MEBA, and a popular two-way technique SVM-RFE. The features with the top AUC values in the discrimination of HCC and non-HCC were retained from each method. Phosphatidylinositol (PI) 36:3 was selected by both wRDA and MEBA and TAG 56:8 was selected by SVM-RFE.
In the discovery set, 95.2% of HCC and 96.4% of non-HCC samples could be correctly diagnosed at the best cutoff value based on the results of ATSD-DN (i.e., LPC 18:1/FFA 20:5; Table 2). The AUC value of LPC 18:1/FFA 20:5 was 0.980, which was better than 0.898 of PI 36:3 defined by both wRDA and MEBA and 0.852 of TAG 56:8 defined by SVM-RFE ( Fig. 4A-C). Similar comparison results in the validation set are also presented in Fig. 4D-F (the corresponding AUC values were 0.972, 0.833 and 0.833, respectively). The better performance of ATSD-DN may suggest its potential for a more complete presentation of time-series changes.
Discussion
HCC is one of the most prevalent malignancies, with a high mortality rate (27). Early diagnosis can greatly improve the survival rate (36). However, inconspicuous early symptoms and individual differences make early discrimination and timely treatment of HCC difficult. Although ultrasonography and some typical tumor markers (e.g., α-fetoprotein) have been applied for clinical diagnosis with some success, they are far from ideal, with high false-negative rates (29, 30). Developing efficient new methods, such as discovering new biomarkers for the early screening of high-risk populations, is challenging and urgent. Dynamic metabolomics studies based on time-series data can trace the interfacial stage between pre-cancer cirrhosis and HCC and thereby facilitate the screening of biomarkers for early diagnosis. To identify early warning signals of disease deterioration, a new strategy for analyzing time-series data based on dynamic networks in a systematic time dimension was proposed and applied in a prospective cohort study using a diethylnitrosamine (DEN)-induced rat hepatocarcinogenesis model. In this study, noise and irrelevant features were first removed in a pre-screening step. Then, the ratio of each pair of individual metabolites was formed, and the change in the effective range of each feature ratio at two adjacent time points was depicted by the NOR value, which provided the novel basis for network construction. These dynamic networks were then used to trace and define the feature ratios with continuous differential changes using two different methods.
In this time-series dataset, to trace the continuous changes of the interfacial stage between CIR and HCC, the networks DN-4 and DN-5, inferred from T4, T5 and T6 and representing the cases in which liver disease developed from pre-cancer cirrhosis to HCC and continued to deteriorate, were emphasized first. In Fig. 2, these NOR-based dynamic networks were first analyzed from the perspective of dynamic concentration. The edges with the same colors in DN-4 and DN-5 represent continuous changes in the dynamics of the circulating metabolites from T4 to T6, and were picked to facilitate the discrimination between the pre-HCC and HCC stages. Moreover, there usually exists a key point in disease development that warns of the deterioration of the disease, and the discovery of this key point and the related key information are of great importance for studying the disease.

The monoglycerophospholipid LPC 18:1 can be formed via the hydrolysis of phosphatidylcholine (PC), which has an important role in cell signaling. FFA 20:5 (i.e., eicosapentaenoic acid) has previously been reported to improve steatohepatitis and inhibit the development of HCC (34, 37); a decrease in FFA 20:5 may therefore indicate a risk of HCC. In this study, the combination of these two lipids in the biomarker pattern of the LPC 18:1/FFA 20:5 ratio was employed to improve diagnostic performance. This ratio biomarker pattern facilitates the magnification of metabolic differences for discrimination. Moreover, compared with traditional individual features or combinations of metabolites from a single pathway, this combination pattern reflects the imbalance of the lipid network from different physiological perspectives, which should be more informative and robust for HCC risk assessment (38). Further validation is still needed in a larger cohort of specimens.
To evaluate the efficacy of this new strategy, ATSD-DN was further compared with previous methods (wRDA, MEBA and SVM-RFE). As shown in Fig. 4, the ratio biomarker from ATSD-DN achieves the best discrimination of HCC and non-HCC samples, with the best AUC values in both the discovery and validation sets. Based on these comparisons, the better performance of ATSD-DN suggests its great potential for the extraction of early warning information. The advantages of ATSD-DN are as follows: i) the strategy gives a more complete presentation of time-series changes: rather than screening differentially expressed variables at isolated time points, as two-way analysis methods do, ATSD-DN traces and defines feature ratios with continuous differential changes in a systematic time dimension; ii) the introduction of NOR, based on repeated time-series measures, facilitates the quantification of changes at two continuous time points and provides a novel basis for network construction, so that each network in ATSD-DN presents the changes in feature ratios at two continuous time points and better reflects physiological and pathological changes; iii) ATSD-DN analyzes data from the perspective of networks, which can provide insight into the complicated interplay of multiple molecules and better explore the development of diseases, and the two modes of dynamic concentration and topological structure analysis can be selected flexibly to define early warning information; iv) ATSD-DN is a data-driven learning method in which few parameters need to be set by the researcher.
It should be noted that ATSD-DN traces the effective range of a feature ratio along the time points to examine changes in feature relationships, and repeated time-series measures have been considered in the construction of the networks. Unlike other time-series methods such as ASCA, which explores the contributions of different factors or multi-factor combinations, ATSD-DN aims to analyze the networks and extract early warning information for the disease through dynamic concentration analysis and topological structure analysis. In the analysis of metabolomics data, ATSD-DN focuses on the relationships of features to extract early warning information, and it may overlook metabolites that are associated with the disease but have little relationship with others. Moreover, the present study based on lipidomics analysis may drop metabolites whose associated metabolites cannot be detected by MS. A strategy that combines feature associations and independent features should be developed in the future. In summary, ATSD-DN analyzes time-series data from the perspective of networks to define early warning biomarkers of complicated diseases. The application of ATSD-DN to the rat HCC metabolomics data demonstrated that it is an effective method for identifying potential metabolic biomarkers for early diagnosis. To improve the performance of early risk identification, additional construction methods for dynamic networks can be explored in further studies.
Methods
To study the development of a disease and identify the early warning signals, both control and model samples were collected. Let C denote the control group, M denote the model group and T i denote a time point, 1 ≤ i ≤ N, where N is the number of time points. Usually, as time goes on, the model samples may suggest different stages of the disease. Let N s denote the number of the different disease stages along N time points.
ATSD-DN defines the prospective information of the disease deterioration based on the dynamic analysis of the networks along the time course. However, not all the features in the metabolic spectrum are involved in the network analysis. Non-informative features are filtered out by static analysis before network construction. ATSD-DN provides two independent techniques to identify the features of interest from the networks. Figure 1 shows the procedure for ATSD-DN.
Static analysis.
It is known that noise and irrelevant features are two factors affecting the efficient analysis of metabolomics data. Given that the model samples experience N s different biological stages, the features containing little discriminative information from each two-stage segment are noise or unrelated to the problem and should be removed. Thus, ATSD-DN separates the problem into N s (N s − 1)/2 binary sub-problems and selects the features with discriminative information for each sub-problem to construct the networks for further analysis.
A change in r_ijt at adjacent time points could reflect a change in the underlying biological process. Thus, ATSD-DN traces the effective range of a feature ratio along the time points to examine changes in the feature relationships. The effective range of r_ijt is the interval [er⁻_ijt, er⁺_ijt] (39), where er⁻_ijt and er⁺_ijt are the floor and the ceiling of the effective range, chosen so that the interval contains at least two-thirds of the samples x_ijt(k) (k = 1, 2, …, n) at time point T_t; p_t is the sample probability at T_t in the corresponding network construction. For a change in the effective range of a feature ratio between two time points, three cases exist (Figure S3). In the third case, the effective range at one time point is contained within the effective range at the other (Figure S3C); this is uninformative for the assumed pathway reactions related to disease development, so only the first two cases (Figure S3A,B) are examined in ATSD-DN. The change in the effective range of a feature ratio at adjacent time points T_t and T_{t+1} (1 ≤ t < N) is quantified by the non-overlapping ratio (NOR), the signed proportion of the two effective ranges that does not overlap, with NOR(r_ijt) ∈ [−1, 1]. A large |NOR(r_ijt)| indicates that the feature ratio r_ijt changes greatly from T_t to T_{t+1}, suggesting continuous metabolic disturbance of the assumed reaction between individual features f_i and f_j. A network DN-t can thus be built from T_t and T_{t+1}. The network is visualized using hive plots (http://www.hiveplot.net/). Let the features be the vertices of DN-t; for every pair of features f_i and f_j, if |NOR(r_ijt)| ≥ τ, there is an edge between f_i and f_j in DN-t. NOR also indicates the direction of the change: NOR(r_ijt) > 0 means the feature ratio r_ijt increases across the two adjacent time points, and NOR(r_ijt) < 0 means it decreases. For simplicity, if NOR(r_ijt) ≥ τ, the edge between f_i and f_j in DN-t is colored red, and if NOR(r_ijt) ≤ −τ, the edge is colored green. If the edge between two individual features stays red (or green) in consecutive networks, the ratio of those two features increases (or decreases) continually along the time points.
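A minimal sketch of the network construction under two stated assumptions, since the original equations were garbled in extraction: the effective range is taken here as the shortest interval containing two-thirds of the samples, and NOR as the signed fraction of the two effective ranges that does not overlap. These are plausible reconstructions, not the authors' exact formulas; networkx is assumed for the graph.

```python
import numpy as np
import networkx as nx

def effective_range(x, frac=2/3):
    """Shortest interval containing at least `frac` of the samples (assumed form)."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(np.ceil(frac * len(x)))
    widths = x[k - 1:] - x[:len(x) - k + 1]
    i = int(np.argmin(widths))
    return x[i], x[i + k - 1]

def nor(range_t, range_t1):
    """Signed non-overlap fraction of two intervals (assumed definition)."""
    overlap = max(0.0, min(range_t[1], range_t1[1]) - max(range_t[0], range_t1[0]))
    total = max(range_t[1], range_t1[1]) - min(range_t[0], range_t1[0])
    value = 1.0 - overlap / total if total > 0 else 0.0
    sign = 1.0 if np.mean(range_t1) > np.mean(range_t) else -1.0
    return sign * value

def build_dn(ratios_t, ratios_t1, tau=0.85):
    """ratios_*: dict mapping (fi, fj) -> sample vector of the ratio fi/fj."""
    g = nx.Graph()
    for (fi, fj), x_t in ratios_t.items():
        v = nor(effective_range(x_t), effective_range(ratios_t1[(fi, fj)]))
        if abs(v) >= tau:
            g.add_edge(fi, fj, color="red" if v > 0 else "green", nor=v)
    return g
```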
Network analysis. To define the prospective information for a complex disease, ATSD-DN analyzes the networks from two perspectives: dynamic concentration analysis and topological structure analysis.
Dynamic concentration analysis. Dynamic concentration analysis investigates the changes in the feature ratios during the course of disease development. As a biological process is always in motion, some signals must exist before a specific time point in a complex disease, such as a malignant tumor. To identify the signals, ATSD-DN focuses on certain time points (without loss of generality, it is assumed to be N e (0 < N e < N) time points) before the typical time point T s (1 < s ≤ N) of the disease. If the effective range of the ratio between the features along N e time points continues to change in the same direction (such as continuous increasing or decreasing), it indicates a continuous metabolic disturbance. Therefore, to identify the early warning signal for the specific time point of disease, the networks DN-i (s − N e ≤ i < s − 1) are examined, and the edges that remain the same color in DN-i are selected. The corresponding ratios are selected as the signals of the specific time point of the disease and constitute feature subset 1.
Topological structure analysis. The topological structures of the N−1 networks along the N time points can also indicate biological changes over time. If the edge number of DN-t (1 ≤ t < N) is large, many pathway reactions are undergoing large changes in reaction rate and the organism is experiencing a relatively drastic biological change. Thus, the DN-t (1 ≤ t < N) with the most edges could represent a key stage along the time course and may be the key point of a particular biological process, and the nodes with the largest degrees in that network would be the key factors signaling the onset of the key stage. In topological structure analysis, ATSD-DN therefore analyzes the edge numbers of the N−1 networks along the N time points and focuses on the one (DN-t, 1 ≤ t < N) that has the most edges. It ranks the nodes of DN-t by degree in descending order, selects the top k ≥ 1 nodes, and the feature ratios corresponding to the edges associated with these k nodes constitute feature subset 2.
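A short sketch of this topological analysis, continuing from the `build_dn` sketch above (the `networks` argument is a hypothetical list of DN-t graphs built with networkx):

```python
def key_stage_and_hubs(networks, k=1):
    """Pick the densest network along the time course and its top-degree nodes."""
    dn = max(networks, key=lambda g: g.number_of_edges())      # key-stage network
    hubs = sorted(dn.degree, key=lambda nd: nd[1], reverse=True)[:k]
    # Feature subset 2 = ratios on the edges touching the hub nodes.
    edges = [(u, v) for hub, _deg in hubs for u, v in dn.edges(hub)]
    return dn, hubs, edges
```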
Each of the two network analysis techniques has its own merits for extracting early warning information. Therefore, they can be used flexibly to analyze the time-series data and to define the potential biomarkers independently. It is also possible to use them simultaneously to get the feature subset by union or intersection of feature subset 1 and feature subset 2.
The application of ATSD-DN to metabolomics data from a rat HCC model. ATSD-DN was applied to the time-series data to define the potential biomarkers for early diagnosis of HCC. The data include a discovery set and a validation set. ATSD-DN was performed on the discovery set to identify prospective information. The validation set was used to test the results of ATSD-DN on the discovery set.
Time-series data source. In this study, time-series data were obtained from an animal model of DEN-induced stepwise hepatocarcinogenesis. The animal experiment was conducted at the experimental animal center of Dalian Medical University (Dalian, China), in compliance with national guidelines for the care and use of laboratory animals. The study protocol was reviewed and approved by the institutional review board of Dalian Medical University, and the experiment was carried out in accordance with the approved guidelines.
This rat model has been described in detail in our previous reports (11, 40). Briefly, a total of 55 male Sprague-Dawley (S.D.) rats were enrolled at the age of 42 days (i.e., week 0). After two weeks of adaptation, all rats were randomly divided into control (n = 10) and model (n = 45) groups, administered saline or DEN at 70 mg/kg body weight, respectively, via intraperitoneal injection. Injections were given once a week between week 2 and week 11, and 14 rats in the model group died during the administration period.
Histological examination was performed to monitor the progress of stepwise hepatocarcinogenesis based on the sacrifice of model rats, until all of the surviving animals (n = 10 for control and n = 7 for model groups) were finally sacrificed in week 20. Collected liver tissues were fixed in 10% buffered formalin and embedded in paraffin for histological examination, which confirmed that the DEN-induced hepatocarcinogenesis model was successfully produced in the present study.
The collection of time-series sera set was conducted from week 8 to week 20 once every 2 weeks (i.e., 7 monitoring time points). The discovery data included 10 rats from the control group and 7 rats from the model group. A total of 119 time-series sera were then collected from all 7 monitoring time points once every two weeks from week 8 to week 20. Thus, the number of the time points for the discovery set was 7; i.e., N = 7. In the model group, the first time point T 1 was week 8 (M8) and the 7th time point T 7 was week 20 (M20). Similarly, C8 and C20 were week 8 and week 20 in the control group.
Furthermore, 36 sera from another 6 model rats were used for validation. These 6 rats were sacrificed for histological examination, with HCC confirmed at week 18; therefore, their sera were collected at 6 monitoring time points (i.e., T1–T6).
Profiling of lipids by LC-MS analysis.
Time-series serum samples were analyzed to perform a non-targeted lipidomics study using an ACQUITY ultra-performance liquid chromatography (UPLC) system (Waters, USA) coupled with a tripleTOF ™ 5600 plus mass spectrometer (AB Sciex, USA). Details regarding lipidomics analysis including serum preparation and instrument methods are provided in the Supplemental Information.
Data analysis. Based on accurate m/z values, retention behavior and MS/MS fragmentation patterns, lipid species were first identified with LipidView and PeakView software (AB Sciex, USA). Quantitative information for the detected lipids was then extracted using MultiQuant software (AB Sciex, USA) with a mass width of ±0.01 Da and a retention time width of ±0.15 min. Before statistical analysis, the relative abundance of each lipid was calculated by normalizing to the area of the corresponding internal standard. Finally, the time-series dataset was exported to the ATSD-DN strategy.
The seven time points covered three different stages of liver disease (N_s = 3): hepatitis, cirrhosis and hepatocellular carcinoma. Features containing little discriminative information for any two-stage segment were removed. SVM-RFE was first applied to the three binary sub-problems (H vs. CIR, H vs. HCC, CIR vs. HCC), with five-fold cross-validation run fifty times for each sub-problem. In SVM-RFE, the kernel function and penalty factor were set to a linear kernel and 1, respectively. SVM was implemented with LIBSVM (available at http://www.csie.ntu.edu.tw/~cjlin/libsvm). MEBA was performed at http://www.metaboanalyst.ca/faces/Secure/upload/TimeUploadView.xhtml. All other algorithms were written in C++.
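A sketch of the SVM-RFE noise-filtering step, using scikit-learn as a stand-in for the LIBSVM implementation used by the authors (linear kernel, C = 1, five-fold cross-validation); `X` and `y` are placeholders for the lipid matrix and stage labels of one binary sub-problem, e.g., CIR vs. HCC.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

def svm_rfe_select(X, y, n_features=38):
    """Iteratively drop the 10% lowest-weight features until n_features remain."""
    rfe = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=n_features, step=0.1)
    rfe.fit(X, y)
    # Five-fold cross-validated accuracy on the retained feature subset.
    acc = cross_val_score(SVC(kernel="linear", C=1.0), X[:, rfe.support_], y, cv=5)
    return np.flatnonzero(rfe.support_), acc.mean()
```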
The selected feature subsets of the three sub-problems were united and used to infer the networks with τ = 0.85. T7 is the typical HCC stage and T4 is the typical CIR stage, and HCC usually develops from CIR. Thus, N_e = 3 time points before typical HCC (T_s = 7) were studied by dynamic concentration analysis to define early warning information for HCC. DN-4 and DN-5 were inferred from these three time points, and the feature ratios corresponding to the edges whose colors stay the same in DN-4 and DN-5 were selected to constitute feature subset 1.
The edge numbers of the 6 networks along the 7 time points were analyzed. The network that had the greatest number of edges was selected. Its nodes were ranked according to their degrees in descending order, and the top ranked node was selected. The ratios corresponding to the edges linked with the top ranked node were selected to constitute feature subset 2.
The compared methods. wRDA. The mean value and standard deviation were used to measure the differences in a feature between the control and model groups (18). An adapted weight was assigned to each time point for extracting early information on complicated diseases. A false discovery rate (FDR) (41) was then used to evaluate the selected feature subset; the lower the FDR, the better the selected features. In this study, the weights of the non-HCC and HCC stages were 0.1 and 0.2, respectively. The top 30 features with the largest scores at FDR = 0% constituted the final feature subset.
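A hedged illustration of the weighted relative difference accumulation idea: per-time-point standardized control-vs-model differences are accumulated with time-point weights (0.1 for non-HCC and 0.2 for HCC points, as stated above). The exact wRDA formula is defined in ref. 18; this is only a plausible sketch of the scoring scheme, not the published algorithm.

```python
import numpy as np

def wrda_score(ctrl, model, weights):
    """ctrl/model: arrays shaped (time_points, samples); weights: one per time point."""
    score = 0.0
    for t, w in enumerate(weights):
        diff = abs(ctrl[t].mean() - model[t].mean())
        pooled_sd = np.sqrt((ctrl[t].std(ddof=1)**2 + model[t].std(ddof=1)**2) / 2)
        score += w * diff / pooled_sd  # weighted standardized difference, accumulated
    return score

weights = [0.1] * 4 + [0.2] * 3  # T1-T4 non-HCC, T5-T7 HCC
```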
MEBA.
A time-course analysis method based on multivariate empirical Bayes statistics, which evaluates the importance of features by Hotelling's T² (15). The top 30 features with the largest Hotelling's T² constituted the final feature subset.
SVM-RFE. This method has been widely applied to select discriminative features from high-dimensional metabolomics data (35, 42–46). It removes the least important features iteratively: in each iteration, the weight of each feature in the current feature subset is re-measured based on its contribution to the hyperplane, and the r% of features with the smallest weights are removed. This process is repeated until the current feature subset is empty. The feature subset with the highest accuracy over the iterations is kept as the selected feature subset. | 2018-04-03T03:47:04.873Z | 2016-08-31T00:00:00.000 | {
"year": 2016,
"sha1": "29c2c2d2534825296535005b7e9969457d3601eb",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep32448.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "29c2c2d2534825296535005b7e9969457d3601eb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
243008117 | pes2o/s2orc | v3-fos-license | Perception Of Office Managers on Technology Skills Possessed of Secretaries in Colleges of Education for Managing Information in Enugu State
The study was undertaken to determine the perception of office managers on the technology skills possessed by secretaries in colleges of education for managing information in Enugu State. The researcher employed a descriptive survey research design. The study population consisted of 90 office managers from the two government-owned colleges of education in Enugu State; there was no sampling since the population was manageable. The instrument for data collection was a structured questionnaire developed by the researcher entitled "Technology Skills Possessed by Secretaries Questionnaire" (TSPSQ). The instrument was duly validated by three experts. Reliability was established using Cronbach's alpha, which yielded a coefficient of 0.68, indicating that the instrument was reliable. The two research questions were answered using means with standard deviations, while the null hypotheses were tested at the .05 level of significance using the t-test. The results of the data analysis showed the following: networking skills are highly possessed by secretaries, as perceived by office managers in colleges of education for managing information in Enugu State, and the office managers equally perceived that word processing skills are highly possessed by their secretaries for managing information. The hypotheses tested showed no significant difference between office managers in federal colleges of education and their counterparts in state colleges in their perception of the technology skills possessed by their secretaries for managing information. Based on the findings, it was recommended that secretaries be allowed to update their technology skills by attending regular conferences.
Introduction
An office is regarded as a centre of administration regardless of its size, type and location: it is any place where the clerical or administrative work of a firm is carried out (Mbazue, 2014). Akpan (2015), cited in Aja (2019), saw an office as a place in which the clerical processes and activities of a business are started, developed and controlled. Every office is headed by an office manager. Office managers essentially ensure the smooth running of an office on a day-to-day basis and may manage a team of administrative or support staff. Chibuike (2019) observed that an office manager is a person whose job is to be responsible for the work of an office. It is the duty of the office manager to supervise office correspondence, procedures, policy implementation, record maintenance, filing and indexing. The office manager's job description therefore includes scheduling meetings and appointments within the office, organizing the office layout and managing information (databases).
Managing information is making sure that the right people have the right information at the right time; it describes how successful organizations make the best use of information and knowledge. According to Obayi (2020), information management is the collection and management of information from one or more sources and the distribution of that information to one or more audiences. In an office, managing information includes stock control systems and decision support systems, among others. Ugwunwoti (2020) recommended that information be managed through selecting and transferring paper records, preserving digital records, policy and process, public enquiry guidance, managing risk and digital records transfer. No office manager can manage information well without the services of a secretary.
A secretary is one whose work is to perform secretarial duties in an office. A secretary, according to Ihekwoaba (2015), is a person employed in an office to work for another person, dealing with letters, typing records, making appointments and arrangements. In other words, a secretary is a person employed by an individual or in an office to assist with correspondence, make appointments and carry out administrative tasks. Obayi (2009), cited in Ngwoke (2019), was of the view that a good secretary is one who has organizational ability and clear, friendly and professional communication skills. Presently, technological innovation in the office has increasingly made the work of the secretary more complex and sophisticated (Ojobor and Musa, 2010); hence, a secretary should possess technology skills. Technology comprises the skills, methods and processes used to achieve goals.
Skill is vital for managing information in Colleges of Education irrespective of their status and place. Skill, according to Obi (2014), is the ability to use one's knowledge effectively and readily in performing an act, or a habit of doing a particular thing competently. The author further stated that an individual can hardly be skilled in a task without exposure, training or practice. Skill is the ability to do something well and expertly. For secretaries in Colleges of Education to manage information well, they are expected to possess word processing and networking skills respectively, because the duties of secretaries have gone beyond answering phones and bell calls from bosses.
Word processing refers to the act of using a computer to create, edit, save and print documents. According to Technohella (2021), word processing skills are the abilities to add and enter text, and to format text font, style, size and colour, among others. Technohella (2021) further stated that word processing is probably the most extensive and essential skill for staff (secretaries) to survive in an office. Hence, word processing skills are the abilities to use a computer to create, edit, save and print documents. A good secretary working in Colleges of Education is expected to possess the needed word processing skills if she is to manage information. Apart from word processing skills, such a secretary needs to possess networking skills.
Colleges of Education is an institution aimed at training individuals to be a qualified teacher to teach in primary and post-primary schools'. It is a three -year programme. According to Ezoem (2019),the colleges of education in Nigeria are the train, trainers' institutions as they are responsible for production of teacher at primary and secondary school levels. Colleges of Education are the third tier of higher educational institution in Nigeria. At the completion of the three year fulltime course programme, the college of education award grandaunts with Nigeria Certificate in Education (NCE). The NCE is the basic qualification for teaching in Nigeria. According to Lassa (2000), it is sub-degree certification course and a professional teacher diploma which is obtained after three year full-time at College of Education. In Enugu State there are two Government owned Colleges of Education viz: Federal College of Education Eha-Amufu and Enugu State College of Education (Technical). A Federal Colleges of Education are owned, controlled and financed by the Federal Government, while the State Colleges of Education are owned, controlled and financed by State Government.
In the context of this paper, office managers are principal officers in the Colleges of Education in Enugu State who have secretaries attached to them. Apart from performing clerical duties of typing-setting office document, secretaries perform other functions like arranging for meetings. It was in the light of this that Ezenwafor and Gude (2020) opined that a secretary must document vital actions and events taking place in the organization where he serves in fulfilment of his functions. Supporting this view, Ezenwafor (2013) stated that secretarial staff are the category of office personnel concerned with proper maintenance of office information. Based on this assertion, it has become imperative that such staff should possess the needed technology skills. Hence Wozison technology skills possessed by secretaries in Colleges of Education for managing information in Enugu State was carried out.
Statement of the problem
Secretarial functions in various offices, irrespective of location and type, are performed by secretaries. These functions are referred to as the clerical duties of receiving, recording, arranging, giving and storing information. In today's offices, particularly in Colleges of Education, there are changes in the ways these functions are performed. One of the functions of an office manager in Colleges of Education is to manage information. This can only be achieved with a qualified secretary with the relevant skills. New machines have been introduced to secretaries to replace the old ones. This has given rise to the demand for secretaries in Colleges of Education in Enugu State to possess the needed skills.
Regrettably, findings have shown that office managers are faced with challenges in managing information. Could it be that their secretaries were not exposed to the needed technology skills while in training institutions? The problem of this study was that secretaries in Colleges of Education seem not to possess the needed technology skills. This state of affairs indicates that there may be specific technology skills for managing information. There is therefore a need to reverse the trend by finding out these skills, hence the question: what then are these technology skills? Finding an answer to this question is the major concern of this study.

Purpose of the study

The main purpose of this study was to determine the perception of office managers of the technology skills possessed by secretaries in Colleges of Education for managing information in Enugu State. Specifically, the study sought to: 1. determine the networking skills possessed by secretaries in Colleges of Education as perceived by office managers for managing information in Enugu State; and 2. determine the word processing skills possessed by secretaries in Colleges of Education as perceived by office managers for managing information in Enugu State.
Research Questions
The following research questions guided the study: 1. What are the networking skills possessed by secretaries in Colleges of Education as perceived by office managers for managing information in Enugu State? 2. What are the word processing skills possessed by secretaries in Colleges of Education as perceived by office managers for managing information in Enugu State?
Hypotheses
The underlisted hypotheses were tested at the .05 level of significance using the t-test: HO1: There is no significant difference between the mean ratings of office managers in Federal Colleges of Education and their counterparts in State Colleges regarding the networking skills possessed by secretaries for managing information in Enugu State. HO2: There is no significant difference between the mean ratings of office managers in Federal Colleges of Education and their counterparts in State Colleges of Education regarding the word processing skills possessed by secretaries for managing information in Enugu State.
Method
A descriptive survey research design was employed in this study. According to Nworgwu (2015), a descriptive survey is one in which a group of people is studied by collecting and analyzing data from a few people considered to be representative of the entire group. The study was carried out in the two Colleges of Education in Enugu State: Federal College of Education Eha-Amufu and Enugu State College of Education (Technical). The population for the study comprised 40 office managers from Federal College of Education Eha-Amufu and 50 office managers from Enugu State College of Education (Technical), totalling 90 office managers. There was no sampling because the population was manageable.
The instrument for data collection was a structured questionnaire developed by the researcher entitled "Technology Skills Possessed by Secretaries Questionnaire" (TSPSQ). The instrument has three sections: Section A was on bio-data, Section B on networking skills, and Section C contains items on word processing skills. The rating responses were Very Highly Possessed (VHP), Highly Possessed (HP), Fairly Possessed (FP) and Lowly Possessed (LP), with numerical values of 4, 3, 2 and 1 respectively. The instrument was face-validated by three experts: two from the Department of Business Education and one from the Department of Mathematics and Computer (Measurement and Evaluation), both of Enugu State University of Science and Technology (ESUT), Enugu. To test the reliability of the instrument, 20 office managers in Colleges of Education in Anambra State were used. Cronbach's alpha was used to determine the internal consistency of the instrument, which yielded a coefficient of 0.78. This index indicated that the instrument was reliable enough to be used in the study.
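To make the reliability step concrete, the short sketch below computes Cronbach's alpha from a respondents-by-items score matrix. The pilot ratings here are randomly generated placeholders rather than the study's actual responses, so the coefficient it prints will not reproduce the reported 0.78.

import numpy as np

def cronbach_alpha(scores):
    # scores: 2-D array, rows = respondents, columns = questionnaire items
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot: 20 respondents rating 12 items on the 4-point scale.
rng = np.random.default_rng(2021)
pilot_scores = rng.integers(1, 5, size=(20, 12))
print(f"Cronbach's alpha = {cronbach_alpha(pilot_scores):.2f}")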
The researchers, with the help of three research assistants, used the direct delivery technique to administer the instrument to the respondents. At the end of the administration, all 90 copies were returned and used for the study, representing a 100% rate of return. Mean and standard deviation were used in answering the research questions, while the t-test was used in testing the null hypotheses at the .05 level of significance at the appropriate degrees of freedom. The decision rule for answering the research questions was based on the real limits of the mean, thus:

Very Highly Possessed - 3.50-4.00
Highly Possessed - 2.50-3.49
Fairly Possessed - 1.50-2.49
Lowly Possessed - 1.00-1.49

For the hypotheses, when the calculated t-value was equal to or greater than the table value, the null hypothesis was rejected; otherwise it was not rejected.
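A brief sketch of these decision rules, using hypothetical item means for the two groups of office managers; scipy's independent-samples t-test stands in for the table-value comparison by reporting a p-value directly.

import numpy as np
from scipy import stats

def interpret_mean(m):
    # Real limits of the mean used in the study.
    if m >= 3.50: return "Very Highly Possessed"
    if m >= 2.50: return "Highly Possessed"
    if m >= 1.50: return "Fairly Possessed"
    return "Lowly Possessed"

# Hypothetical per-item mean ratings from federal and state office managers.
federal = np.array([3.4, 3.1, 3.6, 3.3, 3.5, 3.2])
state = np.array([3.3, 3.2, 3.5, 3.4, 3.6, 3.1])

print(interpret_mean(federal.mean()), "/", interpret_mean(state.mean()))
t, p = stats.ttest_ind(federal, state)  # two-sided independent-samples t-test
decision = "reject" if p < 0.05 else "do not reject"
print(f"t = {t:.2f}, p = {p:.3f} -> {decision} the null hypothesis at .05")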
Results
Research question 1: What are the networking skills possessed by secretaries in Colleges of Education as perceived by office managers for managing information in Enugu State?

[Table 1: item mean ratings for the networking skills — 25, 3.53, 3.32, 3.29, 3.29, 3.49, 3.42, 3.46, 3.32, 3.26, 2.23 and 3.46 respectively — regarded as highly possessed by the respondents.] The grand mean value of 3.67 also attested to this, while the cluster standard deviation of 0.83 shows homogeneity of the respondents' opinions.
Hypothesis 1
There is no significant difference between the mean ratings of office managers in federal colleges of education and their counterparts in state colleges regarding the networking skills possessed by secretaries for managing information in Enugu state.
Table 2 shows that the calculated t-value at the 0.05 level of significance and 88 degrees of freedom is 0.81, while the table or critical value under the same conditions is 1.96. Since the calculated t-value is less than the critical or table value, the null hypothesis is therefore not rejected. This invariably means that there is no significant difference between the mean ratings of office managers in federal colleges of education and their counterparts in state colleges regarding the networking skills possessed by secretaries for managing information in Enugu State.
Research Question 2
What are the word processing skills possessed by secretaries in Colleges of Education as perceived by office managers for managing information in Enugu State?

[Table: item mean ratings for the word processing skills; one recorded mean of 3.52.] The table also shows that item numbers 13, 15, 16, 17, 18, 19, 20, 21, 22, 23 and 24 were highly possessed with regard to the word processing skills of secretaries in colleges of education for managing information in Enugu State. The grand mean of 3.34 also attested to this. The cluster standard deviation of 0.71 shows that the disparities in respondents' opinions are slim.
Hypothesis 2
There is no significant difference between the mean ratings of office managers in federal colleges of education and their counterparts in state colleges of education on the word processing skills possessed by secretaries for managing information in Enugu State. Table 2 shows that the calculated t-value at the 0.05 level of significance and 88 degrees of freedom is 0.194, while the critical t-value under the same conditions is 1.96. Since the calculated t-value is less than the t-table value, the null hypothesis is therefore not rejected. This invariably means that there is no significant difference between the mean ratings of office managers in federal colleges of education and their counterparts in state colleges of education on the word processing skills possessed by secretaries for managing information in Enugu State.
Principal Findings of the study
Results of the data analysis have shown the following: 1. Networking skills are highly possessed by secretaries as perceived by office managers in colleges of education for managing information in Enugu State.
2. There is no significant difference between mean ratings of office managers in federal colleges of education and their counterparts in state colleges of education on networking skills possessed by their secretaries for managing information in Enugu state. 3. Word processing skills are highly possessed by secretaries as perceived by office managers in colleges of education for managing information in Enugu state. 4. There is no significant difference between the mean ratings of office managers in federal colleges of education and their counterparts in state colleges of education regarding the word processing skills possessed by secretaries for managing information in Enugu State.
Discussion of the Findings
The data presented in Table 1 showed that office managers in colleges of education in Enugu State agreed that networking skills were highly possessed by their secretaries in the federal and state colleges. This is in line with Nwangwu (2010), who stated that networking skills enhance the management of information in an office through internet services. The first null hypothesis, tested on networking skills, showed that there was no significant difference between the mean ratings of office managers in federal colleges of education and their counterparts in state colleges of education on the networking skills possessed by their secretaries for managing information in Enugu State. The implication of the finding of no significant difference is that the type of college of education had no significant influence on their opinions. Data obtained regarding research question two revealed that word processing skills are highly possessed by secretaries as perceived by office managers in colleges of education for managing information in Enugu State. This finding is in harmony with Technohella (2021), that word processing is probably the most extensive and essential skill for staff (secretaries) to survive in an office.
The second null hypothesis, tested on the word processing skills, showed that office managers in federal colleges of education and those in state colleges of education did not differ significantly in their mean ratings of the word processing skills possessed by their secretaries for managing information.
Conclusion
Based on the findings of the study, it was concluded that secretaries in colleges of education in Enugu State possess the technology skills for managing information in their offices, contrary to trending reports from some office managers that their secretaries perform poorly in employment due to a lack of such skills.
Recommendations
Based on the findings of the study, it was recommended that: 1. Office managers in colleges of education in Enugu State should ensure that ICT resources are provided. 2. Secretaries in the colleges of education should be allowed to update their technology skills by attending conferences regularly. 3. Office managers should from time to time invite technology experts to update their secretaries. 4. The authorities of colleges of education should ensure regular payment of salaries and other incentives to their secretaries. | 2021-10-15T16:12:29.050Z | 2021-09-01T00:00:00.000 | {
"year": 2021,
"sha1": "c1eb7a4358ea4189cd993598584915e9d4951caa",
"oa_license": "CCBY",
"oa_url": "https://www.iiste.org/Journals/index.php/EJBM/article/download/57364/59238",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "e01d42bdc89eb7730d4eda27fef7942fa8af0715",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
247478541 | pes2o/s2orc | v3-fos-license | Metabolic Alternations During Gestation in Dezhou Donkeys and the Link to the Gut Microbiota
The maternal intestinal microbial community changes dramatically during pregnancy and plays an important role in animal growth, metabolism, immunity and reproduction. However, our understanding of microbiota compositional dynamics during the whole pregnancy period in donkeys is incomplete. This study was carried out to evaluate gut microbiota alterations as well as their correlation with serum biochemical indices, comparing pregnant donkeys to non-pregnant donkeys. Blood samples and rectum contents were collected from a total of 18 pregnant donkeys (EP, early-stage pregnancy; MP, middle-stage pregnancy; and LP, late-stage pregnancy) and six non-pregnant donkeys (C, as a control). The results showed that pregnant donkeys had higher microbial richness than non-pregnant donkeys and that the lowest microbial diversity occurred in the EP period. Moreover, the relative abundances of the families Clostridiaceae and Streptococcaceae were significantly higher in the EP group (p < 0.05) than in the C and MP groups, while the relative abundances of the families Lachnospiraceae and Rikenellaceae were significantly lower in the EP group (p < 0.05) than in the C group. The predicted microbial gene functions related to the inflammatory response and apoptosis, such as Staphylococcus aureus infection, the RIG-I-like receptor signaling pathway and apoptosis, were mainly enriched in EP. Furthermore, pregnant donkeys had higher glucose levels than non-pregnant donkeys, especially in the EP period. EP donkeys had lower triglyceride, total protein and albumin levels but higher malondialdehyde, interleukin 1β, interleukin 6 and tumor necrosis factor-α levels than those in the C and MP groups. Additionally, there were strong correlations between inflammatory cytokine levels and the relative abundances of genera belonging to the Clostridiaceae and Streptococcaceae families. This is the first comparative study performed in donkeys indicating that pregnancy status (especially the early pregnancy period) alters the gut microbiota composition, which was correlated with serum biochemical parameters. These results could provide useful information for improving reproductive management in Dezhou donkeys.
INTRODUCTION
Pregnancy is a unique physiological condition wherein various dramatic physiological changes (including in metabolism and immunity) occur compared to the non-pregnant state (Koren et al., 2012; Nair et al., 2017). Biochemical changes in the blood accompany normal pregnancy and are important markers reflecting the health status of animals (Shao et al., 2020). In the metabolic state, insulin sensitivity is commonly reduced, especially in the late pregnancy stage (Barbour et al., 2007). Additionally, oxidative stress (Toboła-Wróbel et al., 2020) and immunological challenges (Mor and Cardenas, 2010; Nair et al., 2017) might also occur during the pregnancy period. It has been reported that physiological biochemical changes occur during pregnancy in humans (Stokkeland et al., 2019), rats (Corvino et al., 2015) and sows (Shao et al., 2020). In equids, several studies have also reported blood biochemical changes during pregnancy, including in horses and donkeys (Vincze et al., 2015; Bonelli et al., 2016; Gloria et al., 2018; Liao Q. et al., 2021), but every species has particular blood biochemical changes related to gestation (Gloria et al., 2018) due to differences in anatomy and metabolic conditions. Therefore, it is necessary to investigate the species-specific changes in blood biochemical profiles throughout the entire pregnancy period in donkeys. These data could provide a practical basis for the management and diagnosis of gestation in certain donkey breeds.
The gut microbiota has been a topic of great interest due to its important role in body metabolism and in immune and physiological functions; thus, it is closely related to host health (Turnbaugh et al., 2006; Kamada et al., 2012; Osbelt et al., 2020). The factors affecting the composition of the gut microbiota and its relationship with the host are of considerable complexity. However, pregnancy, a common physiological state, is considered a classical factor that alters the gut microbiota. A significant shift in the gut microbiota during the pregnancy period has been shown (Santacruz et al., 2010; Koren et al., 2012; Huang X. et al., 2019). For example, an increase in the relative abundances of Actinobacteria and Proteobacteria from the first to the third trimester of pregnancy was reported by Koren et al. (2012). Changes in community structure can result in functional changes, which in turn affect host health. For instance, adiposity and insulin insensitivity were increased in germ-free mice receiving the intestinal microbiota from women in late pregnancy compared with mice inoculated with the gut microbiota of women in early pregnancy (Koren et al., 2012), thereby affecting innate immune function and fetal development and health (Gomez de Agüero et al., 2016; Lee et al., 2021) by regulating the incipient microbial biomass and communities of offspring (Perez-Muñoz et al., 2017; Bi et al., 2021). Furthermore, gut microbiota changes may directly influence maternal metabolic alterations related to pregnancy (Koren et al., 2012). Based on association analysis in humans, it has been indicated that the gut microbiota is correlated with the body weight, weight gain and blood biochemical indices of pregnant women (Santacruz et al., 2010). An increasing number of studies have examined the relationships between the gut microbiota and blood biochemical parameters. In laboratory animals, Hua et al. (2018) showed that the relative abundances of Romboutsia and Phascolarctobacterium were positively associated with the serum triglyceride (TG) level in a high-fat diet-fed rat model. In addition, it has been reported that blood urea nitrogen levels were negatively correlated with Ruminococcaceae in a sow model (Shao et al., 2020). However, in breeding donkeys, information regarding how the gut microbiota varies throughout pregnancy is limited. Likewise, little is known about whether and how the gut microbiota contributes to serum biochemical changes during normal pregnancy compared with the non-pregnant state. To address this deficiency, we hypothesized that mother donkeys exhibit blood physiology changes and dramatic changes in the gut microbiota during the pregnancy period, and that blood metabolic disorders and immune injury are caused by changes in microbial composition.
The aim of the present study was to evaluate the dynamic changes in the gut microbiota in donkeys during the pregnancy period. Biomarker monitoring was conducted to assess metabolic functional and health status across different pregnancy stages. Moreover, the association between changes in the levels of biomarkers and gut bacteria was also identified. Data obtained from this study will provide useful information for improving the reproductive management in Dezhou donkeys.
Animal Selection, Husbandry and Sample Collection
A total of 24 healthy donkeys ranging between 3 and 5 years of age were selected for this study. According to their pregnancy stages, the donkeys were divided into four groups: non-pregnant (C, as a control), early stage of pregnancy (EP, between 1 and 3 months), middle stage of pregnancy (MP, between 6 and 9 months) and late stage of pregnancy (LP, 1 month before parturition). Each group consisted of six animals. All donkeys were raised under the same conditions at a Dezhou donkey original breeding farm authorized by Shandong Province (Dezhou city, Shandong, China). Female donkeys were fed a commercial concentrate diet (Hekangyuan Group Co., Ltd., Shandong, China). Non-pregnant and pregnant donkeys were fed the concentrate twice daily (08:00 and 16:00) at 0.25 and 1.5% of their body weight, respectively. Wheat straw (ratio of 60:40) and water were provided ad libitum throughout the rearing period. Additionally, none of the donkeys had received antibiotics for at least 3 months before sampling. The animal care protocol in this study followed commercial management practice and was approved by the Animal Welfare Committee of Liaocheng University.
All the samples were collected between 9 and 11 am on the same day. Blood samples were collected in tubes (5 mL) from a jugular vein before feeding. Serum samples were obtained after centrifugation at 3,000 × g for 10 min at 4 °C and then snap frozen in liquid nitrogen. Fecal samples were obtained from donkey rectum content, transferred to separate sterilized 5 mL tubes, and then stored immediately in liquid nitrogen for DNA extraction. Then, all frozen samples stored in dry ice were transported to the laboratory and stored at −80 °C for further analysis.
DNA Extraction and PCR Amplification
Samples were obtained from the rectum contents of donkeys in different gestation periods (n = 6) and then used for bacterial composition analysis. Genomic DNA was isolated using an E.Z.N.A.® Soil DNA Kit (Omega Bio-tek, Norcross, GA, United States) according to the manufacturer's instructions. DNA yield and quality were tested with a NanoDrop2000 (Thermo Scientific, Wilmington, United States). The V3-V4 region of the bacterial 16S rRNA gene was amplified by a thermocycler PCR system (GeneAmp 9700, ABI, United States) using the primers 338F (5′-ACTCCTACGGGAGGCAGCAG-3′) and 806R (5′-GGACTACHVGGGTWTCTAAT-3′). PCRs were performed in triplicate in a 20 µL mixture containing 4 µL of 5 × FastPfu Buffer, 2 µL of 2.5 mM dNTPs, 0.8 µL of each primer (5 µM), 0.4 µL of FastPfu Polymerase, 0.2 µL of BSA, and 10 ng of template DNA. The PCR conditions were as follows: 95 °C for 3 min to allow DNA denaturation, followed by 27 cycles of amplification (95 °C for 30 s, 55 °C for 30 s and 72 °C for 45 s) and a final extension for 10 min at 72 °C.
Illumina MiSeq Sequencing and Bioinformatic Analysis
The purified amplicons were pooled in equimolar amounts and paired-end sequenced on an Illumina MiSeq platform (Illumina, San Diego, United States) according to standard protocols by Majorbio Bio-Pharm Technology Co., Ltd. (Shanghai, China). Raw fastq files were quality filtered by Trimmomatic and merged by FLASH (Magoč and Salzberg, 2011). Operational taxonomic units (OTUs) were clustered with a 97% similarity cutoff using UPARSE (version 7.1) with a novel "greedy" algorithm that simultaneously performs chimera filtering and OTU clustering (Edgar, 2013). The taxonomy of each 16S rRNA gene sequence was analyzed by the RDP Classifier algorithm against the Silva (SSU123) 16S rRNA database using a confidence threshold of 70%. To minimize the effects of sequencing depth on alpha and beta diversity measures, the number of reads from each sample was rarefied to 24,581. OTU diversity and richness estimators were determined on the basis of the Shannon and Simpson indices (diversity) and the abundance-based coverage estimator (ACE) and bias-corrected Chao estimator (Chao1) (richness) using the MOTHUR (version 1.30.2) program. A Venn diagram shows the number of OTUs shared among the groups. β-diversity was calculated by measuring the Bray-Curtis distance using QIIME (version 1.9.1) software and visualized using principal coordinate analysis (PCoA). Taxonomic community composition was analyzed through visualization of the relative abundances in the different samples. By performing linear discriminant analysis coupled with effect size (LEfSe), we identified the most differentially abundant taxa between groups (LDA > 3.5). Finally, microbial functions were predicted using phylogenetic investigation of communities by reconstruction of unobserved states (PICRUSt) (Langille et al., 2013). Additionally, the raw high-throughput sequencing data have been deposited in the NCBI Sequence Read Archive under accession no. PRJNA784020.
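As an illustration of the estimators named above, the sketch below computes the Shannon, Simpson, and bias-corrected Chao1 indices for one sample, and the Bray-Curtis distance between two samples, from hypothetical rarefied OTU count vectors (the study itself used MOTHUR and QIIME for these calculations).

import numpy as np

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

def simpson(counts):
    # Simpson's dominance index D: higher values mean lower diversity.
    p = counts / counts.sum()
    return (p ** 2).sum()

def chao1(counts):
    s_obs = (counts > 0).sum()       # observed OTUs
    f1 = (counts == 1).sum()         # singletons
    f2 = (counts == 2).sum()         # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))  # bias-corrected form

def bray_curtis(x, y):
    return np.abs(x - y).sum() / (x + y).sum()

# Hypothetical rarefied OTU counts for two samples.
sample_a = np.array([120, 40, 7, 2, 1, 1, 0, 3])
sample_b = np.array([80, 60, 10, 2, 0, 1, 5, 0])
print(shannon(sample_a), simpson(sample_a), chao1(sample_a))
print("Bray-Curtis:", bray_curtis(sample_a, sample_b))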
Bacterial DNA Quantitative PCR
16S rRNA gene copies of the phyla Firmicutes and Bacteroidetes were determined in the rectum contents of donkeys by real-time quantitative PCR (qPCR) as previously described. In detail, the genes were quantified against a standard curve for gene copy number generated by cloning the specific primer target sequences into pMD18-T plasmids. Standard curves were constructed from 10^8 to 10^0 copies (10-fold serial dilutions) of amplified bacterial 16S rRNA genes from reference strains. The primer information for the specific bacteria is listed in Supplementary Table 1. The qPCR assay was performed with an ABI7300 PCR Detection System (Applied Biosystems) with a ChamQ SYBR Color qPCR Master Mix (2X) Kit (Vazyme Biotech Co., Ltd., Nanjing, China). The PCR results were expressed as 16S rRNA gene copies per gram (copies g^-1) of wet fecal sample. All measurements were performed in duplicate.
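The sketch below illustrates the absolute-quantification arithmetic behind this assay: fit a standard curve of log10 copy number against Ct from the plasmid dilution series, then convert a sample's Ct to copies per gram of wet feces. The curve values, elution volume, template volume, and sample mass are hypothetical stand-ins, not this study's actual assay parameters.

import numpy as np

# Hypothetical Ct values measured for the 10-fold plasmid dilution series
# (10^8 down to 10^1 copies per reaction).
log10_copies = np.arange(8, 0, -1, dtype=float)
ct_standards = np.array([11.2, 14.6, 18.0, 21.4, 24.9, 28.3, 31.6, 35.0])

slope, intercept = np.polyfit(ct_standards, log10_copies, 1)  # linear fit

def copies_per_gram(ct, elution_ul=100.0, template_ul=2.0, sample_g=0.25):
    # Convert a sample Ct to 16S rRNA gene copies per gram of wet feces.
    copies_per_reaction = 10 ** (slope * ct + intercept)
    return copies_per_reaction * (elution_ul / template_ul) / sample_g

print(f"{copies_per_gram(22.0):.3e} copies g^-1")  # hypothetical sample Ct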
Statistical Analyses
Data on the serum biochemical parameters and qPCR results (16S rRNA gene copies of the phyla Firmicutes and Bacteroidetes) were subjected to one-way ANOVA followed by Duncan's multiple comparison test using the SPSS statistical software package (version 22). The variability of the results is expressed as the mean ± standard error. Means were considered significantly different at p < 0.05. Figures were produced using GraphPad Prism (version 6.0).
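A minimal sketch of this comparison in Python (SPSS was the actual tool used): one-way ANOVA across the four groups with results expressed as mean ± standard error, on hypothetical serum values. SciPy has no Duncan test; a pairwise post hoc such as statsmodels' Tukey HSD would be a common stand-in.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical serum values for n = 6 donkeys per group.
groups = {name: rng.normal(mu, 0.5, 6)
          for name, mu in [("C", 4.0), ("EP", 5.2), ("MP", 4.5), ("LP", 4.6)]}

for name, values in groups.items():
    print(f"{name}: {values.mean():.2f} +/- {stats.sem(values):.2f}")

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")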
Gut bacterial data were analyzed on the Majorbio Cloud Platform. We used the Wilcoxon rank-sum test to assess differences in the α-diversity indices of the gut microbiota. PCoA and NMDS plots based on the Bray-Curtis distance were used to visualize differences in bacterial community composition among samples. R 4.0.3 was used to test differences in PICRUSt-predicted functions between groups. Finally, associations between biochemical indices and microbial abundances at the family and genus levels were evaluated by Spearman's correlation coefficients, presented as a heatmap visualized using R (version 3.3.1); p < 0.05 was considered significant.
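The correlation step can be sketched as below: Spearman's rank correlation between a genus's relative abundance and a serum index, flagged at p < 0.05. Both vectors here are simulated placeholders for the 24 animals, not the measured data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_donkeys = 24
il6 = rng.normal(50, 10, n_donkeys)                       # hypothetical serum IL-6
abundance = 0.002 * il6 + rng.normal(0, 0.02, n_donkeys)  # hypothetical genus abundance

rho, p = stats.spearmanr(abundance, il6)
flag = " (significant at p < 0.05)" if p < 0.05 else ""
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}{flag}")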
Serum Biochemical Parameters in Donkeys
The EP group had the highest GLU content, while the MP and LP groups showed a higher trend than the C group ( Table 1). The EP group showed significantly lower levels of TG, TP, ALB and T-BIL in serum than the other groups (p < 0.05; Table 1). The TC level during pregnancy (EP, MP and LP) was significantly lower than that in non-pregnancy (C group), which was opposite to the change in AST activity. The activity of serum ALP was significantly highest in the LP group, while there was a higher trend in the C and EP groups than in the MP group ( Table 1). The activity of serum CK was significantly higher in the EP and LP groups than in the C and MP groups (p < 0.05; Table 1). Moreover, the activities of ALT and γ-GT in serum presented no differences among the four groups (Table 1).
There was no change in SOD activity among the four groups ( Figure 1A). GSH-Px activity in serum was significantly higher in the EP and LP groups than in the MP group (p < 0.05; Figure 1B), and showed a higher trend than in the C group. However, the T-AOC levels during pregnancy (EP, MP and LP) were significantly lower than those during non-pregnancy (C group; p < 0.05; Figure 1C). In addition, the level of MDA in serum was significantly highest in the EP group, whereas it trended lower in the LP group than in the C group (p < 0.05; Figure 1D). The levels of the proinflammatory cytokines IL-6 and IL-1β were highest in EP (p < 0.05; Figures 1E,F). Moreover, the level of TNF-α was significantly higher in the EP and LP groups than in the C and MP groups (p < 0.05; Figure 1G). Therefore, our data suggest that metabolic disorders and low-grade inflammation might exist in donkeys during the gestation period, especially at the early stage (in the EP group).
Diversity Changes in the Gut Microbiota
To evaluate the effect of pregnancy status on the donkey fecal microbiota, we collected 24 rectum content samples from donkeys in different pregnancy periods and non-pregnancy and analyzed the bacterial community structure by 16S rRNA high-throughput sequencing. After quality control, a total of 1,250,751 high-quality sequences were obtained from the 24 samples, with an average of 52,114 ± 6,007 sequences per sample. In addition, 2,499 OTUs were obtained at the 97% sequence similarity level, 1,399 of which existed in all groups and were thus defined as core OTUs (Supplementary Figure 1). The core OTUs comprised 56.0% of the total OTUs, whereas 65, 44, 52, and 64 OTUs were uniquely identified in the C, EP, MP, and LP groups, respectively (Supplementary Figure 1). Diversity differences between groups were then assessed. The ACE and Chao1 indices were lowest in the C group (Figures 2A,B), with no remarkable differences among the EP, MP, and LP donkeys. The Shannon index in the EP group was lower than that in the MP and LP groups, with no difference between the C and EP groups (Figure 2C). Moreover, the Simpson index in the EP group was higher than that in the C, MP, or LP groups (Figure 2D). The changes in these four indices indicated that community richness and diversity varied among the four groups. In addition, through PCoA based on Bray-Curtis dissimilarity, we found that the fecal microbiota of donkeys was clearly segregated between the non-pregnant group (control) and the pregnant groups, especially between the control and EP groups, but was less dispersed in the MP and LP groups (Figure 2E). Moreover, the non-metric multidimensional scaling (NMDS) results revealed a similar pattern (Figure 2F). Taken together, the fecal bacterial composition and diversity of donkeys are profoundly altered during pregnancy.
Composition Changes of the Fecal Microbiota
We then further studied the changes in fecal community phylotypes among the four groups. The relative abundance (%) of the gut microbiota in the four groups at the phylum and family levels is shown in Figure 3. As expected, we found that the dominant bacterial phyla in the feces of all donkeys were Firmicutes and Bacteroidetes (Figures 3A,B), which accounted for more than 80% of the relative abundance. After comparing the differences among the four groups, it was found that the relative abundance of Firmicutes in the EP group was significantly higher than that in the C group (p < 0.05; Figure 3B), with a higher tendency at the LP stage. However, the relative abundance of the phylum Bacteroidetes was lowest at the EP stage (p < 0.05; Figure 3B). Furthermore, we quantified their abundances by using qPCR and observed a similar result (Supplementary Figure 2). The increased abundance of Firmicutes in the EP group was mainly attributed to the enrichment of Clostridiaceae and Streptococcaceae, whereas the decreased abundance of Bacteroidetes was primarily due to the depletion of Rikenellaceae (Figures 3B,D). The family Lachnospiraceae also belongs to the phylum Firmicutes, but its relative abundance at the EP stage was significantly lower than that in the C and MP groups (p < 0.05; Figure 3D). At the genus level, the abundances of Clostridium_sensu_stricto_1 and Streptococcus in the EP group were the highest (p < 0.05; Supplementary Figure 3). Moreover, LEfSe confirmed the above results (Figure 4). Overall, these results indicate that the gut microbiota composition of donkeys is profoundly altered during the pregnancy period, especially at the EP stage.
Metabolic Functional Changes of the Fecal Microbiota
Based on the significant changes in the bacterial composition of the fecal microbiota, we then analyzed metabolic functional differences. PICRUSt was used to predict the metabolic functions of the microbiome based on the 16S rRNA gene sequencing results at KEGG taxonomy level 3. We examined the important bacterial functions (top 25) identified by random forest analysis and a heatmap based on the KEGG data (Figures 5A,B). Sixteen pathways (e.g., the proteasome, PPAR signaling, cancer, zeatin biosynthesis, protein processing in endoplasmic reticulum, cellular antigens, other glycan degradation, secondary bile acid biosynthesis, electron transfer carriers, renal cell carcinoma, and adipocytokine signaling pathways) were significantly more abundant in the C group. Ten pathways (e.g., the Staphylococcus aureus infection, RIG-I-like receptor signaling, ether lipid metabolism and apoptosis pathways) were enriched in the EP group. Eight pathways (e.g., the proteasome, PPAR signaling, cancer, and zeatin biosynthesis pathways) were more abundant in the MP group, while two other pathways (type II polyketide product biosynthesis and stilbenoid, diarylheptanoid and gingerol biosynthesis) were more abundant in the LP group. Altogether, these data suggest that pregnancy alters the metabolic functions of the fecal microbiota, which deserves further exploration. Additionally, the fecal microbial functions were clearly separated between the C and EP groups and were similar between the MP and LP groups, which was consistent with the β-diversity analysis (Figures 2E,F).
Correlation of the Gut Microbiota With Biochemical Parameters
The gut microbiota plays a critical role in host metabolism and immune function. Thus, Spearman correlation analysis was performed to evaluate potential associations between the changes in the gut microbiota at the family and genus levels and the sixteen biochemical parameters (Figure 6). Interestingly, the Clostridiaceae_1 and Streptococcaceae families displayed a very similar pattern of correlations with most serum biochemistry parameters, and these correlative patterns were the opposite of those observed for the Lachnospiraceae and Rikenellaceae families (Supplementary Figure 4). At the genus level (Figure 6), two genera were positively associated with serum proinflammatory cytokines (IL-6, IL-1β and TNF-α), implying a positive correlation with the inflammatory response. Conversely, ten genera were negatively correlated with serum IL-6, IL-1β and TNF-α levels, suggesting that these genera were negatively correlated with inflammation status. However, nine genera were positively associated with serum lipids and protein metabolism, and two genera were negatively associated with TG, TC, TP and ALB. Notably, the genera abundantly enriched in the EP group included Streptococcus and Clostridium_sensu_stricto_1, which were significantly positively correlated with increased serum IL-6, IL-1β and TNF-α levels but negatively correlated with decreased serum lipid and protein content. The abundances of Rikenellaceae_RC9_gut_group, unclassified_f_Lachnospiraceae, Lachnospiraceae_UCG_009, Prevotellaceae_UCG_003 and Ruminococcaceae_UCG_005 were significantly (p < 0.05) negatively associated with serum IL-6, IL-1β and TNF-α levels but positively associated with serum lipid and protein content. The abundance of Akkermansia was significantly positively associated with serum T-AOC and TC contents but negatively (p < 0.05) associated with serum AST content. In addition, the abundances of Streptococcus, Clostridium_sensu_stricto_1 and [Eubacterium]_coprostanoligenes_group were positively associated with fecal LPS concentration. However, the abundances of Lachnospiraceae_AC2044_group, norank_f_Lachnospiraceae and Prevotellaceae_UCG_003 were negatively correlated with fecal LPS concentration and serum IL-6 and MDA levels.
DISCUSSION
Pregnancy is a period of dramatic shift and adaptation for the mother. It is believed that the gut microbiota plays a fundamental role in responding and adapting to the host environment and thereby supports host health and normal reproduction. In addition, serum physiological biochemical parameters are considered useful biomarkers for monitoring physiological responses and host health. Remarkable changes occur in the gut microbiota during the pregnancy period in humans, rats and several domestic animals. However, little is known about whether donkeys exhibit similar changes in the gut microbiota and serum biochemical parameters throughout pregnancy. Therefore, we first investigated the differences in gut microbiota composition, as well as the associations with serum biochemical indices, across different stages of pregnancy in donkeys. Our results indicate dramatic changes in fecal microbiota diversity and composition, as well as in serum biochemical parameters, in donkeys during the pregnancy period, especially at the early stage of pregnancy.
In this study, pregnant donkeys (EP, MP and LP) exhibited increased gut microbial richness. However, with regard to evenness, we observed that the Shannon index was lower and the Simpson index was higher in the EP group than in the other three groups, indicating that the additional richness was not evenly distributed during the pregnancy period, especially in the EP group. This finding is similar to the result reported by Koren et al. (2012), who noted that the diversity of the gut microbial community decreased at 1 month postpartum. This similarity may be because donkeys exhibit foal heat breeding, which results in the simultaneous existence of pregnancy and breastfeeding in the early stages of pregnancy. Moreover, the β diversity results demonstrated significant differences in microbial composition between the non-pregnant (C) and pregnant donkeys (EP, MP and LP), while the MP and LP groups mostly clustered together. This change indicated that the composition of gut microbes was prone to modulation in the early stage of pregnancy and then gradually stabilized. The greatest effects on α and β diversity were exhibited in the early stage of pregnancy compared to the other stages (MP and LP), which is consistent with the changes in serum biochemical indices in the study subjects.
In addition to gut microbial diversity, the microbial community composition in donkeys also shifts during different stages of pregnancy. In this study, the dominant phyla found in the donkey fecal microbiota in all groups were Firmicutes and Bacteroidetes (Figure 3A), consistent with previous studies showing that the most abundant phyla were Firmicutes and Bacteroidetes across breeding stages (Kong et al., 2016; Cheng et al., 2018; Shao et al., 2020). In the equine gut, Firmicutes generally displays the highest relative abundance, followed by Bacteroidetes (Venable et al., 2016; Hashimoto-Hill and Alenghat, 2021), which is similar to our study. Furthermore, we found that the relative abundance of the phylum Firmicutes was highest in EP, whereas Bacteroidetes showed the opposite result. The LP group had a higher trend in the relative abundance of the phylum Firmicutes than the C and MP groups, and qPCR quantification confirmed this pattern (Supplementary Figure 2). A previous study showed that an increase in Firmicutes abundance is considered to support fetal growth by enhancing energy metabolism (Cheng et al., 2018). In addition, we found lower serum concentrations of TG, TP and ALB in the EP group (lactating donkeys) than in the other groups (C, MP and LP), which is similar to previous results in Chinese Liaoxi donkeys (Liao Q. et al., 2021). Given the foal heat breeding character mentioned above, these results could be explained by the increase in energy metabolism and the transfer of nutrients (such as lipid and protein) for milk production in the mammary glands and for fetal development during EP. Similarly, we also found an increase in inflammation and a decrease in total antioxidant capacity in EP and LP. However, the inflammatory response in the MP group did not change compared with that in the C group. These findings regarding inflammation are consistent with an earlier report by Mor and Cardenas (2010). The first and third trimesters of pregnancy are associated with an inflammatory response, which is necessary for blastocyst implantation and labor, respectively, while the second trimester is commonly characterized by an anti-inflammatory status, which is required for fetal growth (Mor and Cardenas, 2010). Taking this into account, different management modes should be considered for the different aspects of metabolism, immunity and the microflora during pregnancy, especially in EP, in donkeys. On the other hand, the shift in the gut microbiota might be closely linked with these physiological changes.
Remarkably, bacteria in the phylum Firmicutes were EP biomarkers (Figure 4B). Likewise, the genera Streptococcus (order Lactobacillales) and Clostridium_sensu_stricto_1 (order Clostridia) were also enriched in EP. A previous study showed that compared with prepartum mares, postpartum mares had an increased relative abundance of the Firmicutes phylum (specifically the family Streptococcaceae) and a corresponding decrease in the relative abundance of the family Lachnospiraceae (also in the Firmicutes phylum) (Weese et al., 2015). Generally, Clostridium sensu stricto and Streptococcus are commonly considered two opportunistic pathogens of the animal intestine (Milinovich et al., 2008; Boyle et al., 2018; Liang et al., 2018). For instance, enrichment of Streptococcus spp. has been linked with a disturbance in the microbial community in oligofructose-induced laminitis (Milinovich et al., 2008). Notably, we also observed that the serum proinflammatory cytokine (IL-6, IL-1β and TNF-α) concentrations in donkeys were increased in EP, and Spearman correlation analysis showed that the relative abundances of Streptococcus and Clostridium_sensu_stricto_1 were positively correlated with the levels of biomarkers of systemic low-grade inflammation. In contrast, recent evidence suggests that the decreased abundance of Clostridium_sensu_stricto_1 in intrauterine growth restricted (IUGR) piglets is negatively correlated with plasma proinflammatory cytokine (IL-1β, TNF-α, and IFN-γ) levels. This difference might be explained by the fact that Clostridium has been identified as a highly diverse genus that contains both potential pathogens and beneficial species (Venable et al., 2016). In addition, Streptococcus, one of the major genera in the horse gut (Venable et al., 2016), was the predominant genus in the feces of donkeys at EP in this study. Streptococcus, a starch-utilizing bacterium observed in the horse gastrointestinal tract (Goodson et al., 1988), is also considered beneficial to animal health due to its complex interactions with the host. However, although these changes occur in donkeys in EP, the causal relationship between the enrichment of Streptococcus and Clostridium_sensu_stricto_1 and low-grade inflammation in donkeys remains to be confirmed. In addition to potential opportunistic pathogens, the abundances of some beneficial bacteria were also influenced in EP. For instance, Lachnospiraceae, which has been associated with maintaining gut health (Vojinovic et al., 2019) and is strongly negatively correlated with intestinal inflammation (Zhao et al., 2017), was less abundant on average in EP. Furthermore, Spearman correlation analysis showed that the abundances of Lachnospiraceae_XPB1014_group, Lachnospiraceae_AC_2044 and Lachnospiraceae_UCG-009 (family Lachnospiraceae) were negatively correlated with the increased levels of serum biomarkers of inflammation in donkeys in EP. Lachnospiraceae and Ruminococcaceae were mainly enriched in MP. Genera from the families Lachnospiraceae and Ruminococcaceae, including Ruminococcaceae_UCG-005, exhibit anti-inflammatory functions and are also reported to be involved in the production of short-chain fatty acids (SCFAs) (Vojinovic et al., 2019; Liao R. et al., 2021), which are essential for the regulation of intestinal microbiota balance and the maintenance of intestinal epithelium integrity (Tan et al., 2014; Kelly et al., 2015).
Moreover, Ruminococcaceae bacteria have the ability to degrade cellulose and starch, which is closely related to feed efficiency in herbivorous animals. The main biomarkers of LP belonged to the phylum Proteobacteria and the order Burkholderiales. It has been reported that enrichment of Proteobacteria is closely related to gut dysfunction (Litvak et al., 2017), although Proteobacteria play a minor role in maintaining gut balance (Eckburg et al., 2005). This finding is similar to the results of another study showing that the abundance of the order Burkholderiales (phylum Proteobacteria) in LP of Meishan sows was higher than that in EP, which also confirmed that elevated Burkholderiales abundance contributes to inflammation (Xue et al., 2017). We also observed that the level of the proinflammatory cytokine TNF-α was increased in LP. Similarly, a significant relative increase in Proteobacteria abundance was found in women during LP, which caused inflammatory responses in germ-free mice (Koren et al., 2012). Although we found that pregnant donkeys might undergo metabolic disturbances, including disturbances in fat and protein metabolism, and low-grade inflammation during pregnancy, especially in EP, any interpretation is limited by the small sample size.
PICRUSt was used to observe the metabolic functional changes among the four groups. Microbial gene functions such as Staphylococcus aureus infection, the RIG-I-like receptor signaling pathway and apoptosis were enriched in EP and are related to the inflammatory response and apoptosis. Enrichment of the Staphylococcus aureus infection pathway is related to aggravated intestinal inflammation and hence leads to impaired intestinal barrier function (Hauck and Ohlsen, 2006). The RIG-I-like receptor is an intracellular pattern recognition receptor that specifically recognizes viruses (Yoneyama and Fujita, 2009), which indicates a risk of viral or bacterial infection at EP. Our microbial function results imply that metabolic disorders and the inflammatory response are closely related to the shifts in the gut microbiota at EP. However, functions linked with anti-inflammatory pathways, such as the PPAR pathway and zeatin biosynthesis, were significantly enriched in the gut microbiome in MP. In addition, the predicted pathways involved in type II polyketide product biosynthesis and stilbenoid, diarylheptanoid and gingerol biosynthesis were enriched in LP. Polyketide-synthesizing bacteria are strongly associated with chronic intestinal inflammation (Arthur et al., 2012). Moreover, in post-weaning diarrhea pigs, the increased abundance of the type II polyketide product biosynthesis pathway might be responsible for the increased inflammatory response (Dou et al., 2017; Hashimoto-Hill and Alenghat, 2021). However, stilbenoids, diarylheptanoids and gingerols, secondary metabolites of plants, have been reported to have anti-inflammatory or anticancer activities (Park et al., 1998; Yadav et al., 2003). For example, the stilbenoid, diarylheptanoid and gingerol biosynthesis pathway has been found to be enriched in high-body-weight rabbits (Zeng et al., 2015). These data indicate that regulation of these microbial processes might be a compensatory mechanism to ameliorate microbial dysbiosis.
In summary, the present study suggests that pregnant donkeys might undergo metabolic disturbances, including disturbances in fat and protein metabolism, and low-grade inflammation during pregnancy, especially in EP. The gut microbiota of donkeys changes dramatically throughout pregnancy. The representative changes included an increase in bacterial richness throughout pregnancy, a decrease in bacterial diversity and in the relative abundances of Lachnospiraceae and Rikenellaceae in EP and LP, and an increase in the relative abundances of Clostridiaceae and Streptococcaceae in EP. Predicted function was also influenced by the different pregnancy stages. In addition, the metabolic disturbance in the serum of pregnant donkeys is, at least in part, attributable to the shift in the gut microbiota, especially in EP. Therefore, this study provides systematic data on the gut microbiota shift and host metabolism of donkeys throughout pregnancy. However, a major limitation of this study was that samples were obtained from different animals in different pregnancy stages, which might introduce variability derived from the different individuals. Another limitation is the small sample size. Further research is needed to monitor the shift in microbes in the same individuals with larger sample sizes, and then to elucidate the mechanisms involved in the cross-talk between the intestinal microbiota or their metabolites and host metabolism and its role in host health.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ncbi.nlm.nih.gov/, accession ID: PRJNA784020.
ETHICS STATEMENT
The animal care protocol in this study followed commercial management practice, and the animal study was reviewed and approved by the Animal Welfare Committee of Liaocheng University. Written informed consent was obtained from the owners for the participation of their animals in this study.
AUTHOR CONTRIBUTIONS
YL conceived the study. YL and QM drafted the manuscript. | 2022-03-17T13:24:39.755Z | 2022-03-17T00:00:00.000 | {
"year": 2022,
"sha1": "21f53a723deaafa1c4aba1de992577d978bc7da7",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2022.801976/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "21f53a723deaafa1c4aba1de992577d978bc7da7",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226264890 | pes2o/s2orc | v3-fos-license | Metastasis to lateral lymph nodes with no mesenteric lymph node involvement in low rectal cancer: a retrospective case series
Purpose The aim of this study is to examine the pattern of lymph node metastasis (lateral vs. mesenteric lymph nodes) in low rectal cancer. Methods This retrospective analysis included all patients undergoing laparoscopic total mesorectal excision plus lateral lymph node dissection for advanced low rectal cancer (up to 8 cm from the anal verge) during a period from July 1, 2017, to August 31, 2019, at the Department of Colorectal Surgery, Tianjin Union Medical Center. The decision to conduct lateral lymph node dissection was based on positive findings in preoperative imaging assessments. Results A total of 42 patients were included in data analysis. Surgery was successfully completed as planned, without conversion to open surgery in any case. A minimum of 10 mesenteric lymph nodes and 1 lateral lymph node on each side were dissected in all patients. Pathologic examination of resected specimens showed no metastasis to either mesenteric or lateral lymph nodes in 7 (16.7%) cases, metastasis to both mesenteric and lateral lymph nodes in 26 (61.9%) cases, metastasis to mesenteric but not lateral lymph nodes in 4 (9.5%) cases, and metastasis to lateral but not mesenteric lymph nodes in 5 (11.9%) cases (n = 2 in the obturator region; n = 3 in the iliac artery region). Conclusion A clinically significant proportion of low rectal cancer patients have metastasis to lateral lymph nodes without involvement of mesenteric lymph nodes. More carefully planned prospective studies are needed to verify this preliminary finding.
Introduction
In patients with low rectal cancer (up to 8 cm from the anal verge), the estimated rate of lateral lymph node metastasis is 16-23% [1]. The most recent Japanese Society for Cancer of the Colon and Rectum (JSCCR) Guidelines for the treatment of colorectal cancer classify metastasis to lateral lymph nodes as local metastasis and recommend lateral lymph node dissection (LLND) in both stage II and III low rectal cancers [2]. The NCCN Guidelines recommend chemoradiotherapy (CRT) plus total mesorectal excision (TME) for lateral lymph node metastasis [3]. A recent study reported a 19.5% 5-year local recurrence rate after CRT plus TME versus a 5.5% 5-year local recurrence rate after CRT plus TME and LLND in patients with lateral lymph nodes at least 7 mm in diameter, supporting the notion that lateral lymph node involvement represents local metastasis [4].
In this retrospective analysis, we examined the metastasis profile (lateral vs. mesenteric lymph nodes) in a group of low rectal cancer patients with suspected lateral lymph node involvement based on preoperative imaging assessments. The results showed metastasis to lateral but not mesenteric lymph nodes in 5 out of 42 patients, supporting the notion that lateral lymph node metastasis should be regarded as local metastasis.
Patients and methods
We identified all patients receiving laparoscopic TME plus lateral lymph node dissection for advanced low rectal cancer (up to 8 cm from the anal verge on magnetic resonance imaging (MRI)) during a period from July 1, 2017, to August 31, 2019, at the Department of Colorectal Surgery, Tianjin Union Medical Center.
If the short diameter of the largest lymph node was at least 7 mm on MRI, or the rectal lesion met the criteria for neoadjuvant CRT as evaluated by a multidisciplinary team, neoadjuvant CRT was recommended to the patient. In long-course radiotherapy, a total dose of 45-50 Gy was given over 5 weeks. The typical chemotherapy regimen was CapeOX: two cycles of intravenous oxaliplatin (130 mg/m² per day) for 1 day and oral capecitabine (1000 mg/m² twice per day) for 14 days, in the first and fourth weeks of radiotherapy. The short diameter of the largest lymph node was re-evaluated after neoadjuvant therapy. If the preoperative short diameter of the largest lymph node was at least 5 mm, TME + LLND was performed 6-8 weeks after neoadjuvant CRT. If the patient declined neoadjuvant CRT, TME + LLND was performed immediately. TME and lateral lymph node dissection were performed using a fascial space priority approach, as previously described [5].
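For illustration, the body-surface-area arithmetic behind the CapeOX doses can be sketched as below. The Mosteller formula and the patient's height and weight are assumptions for the example; the paper does not state which BSA formula was used.

import math

def bsa_mosteller(height_cm, weight_kg):
    # Mosteller formula: BSA (m^2) = sqrt(height * weight / 3600)
    return math.sqrt(height_cm * weight_kg / 3600.0)

bsa = bsa_mosteller(165, 60)            # hypothetical patient
oxaliplatin_mg = 130 * bsa              # 130 mg/m^2, day 1 of each cycle
capecitabine_mg = 1000 * bsa            # 1000 mg/m^2 per dose, twice daily x 14 days
print(f"BSA = {bsa:.2f} m^2: oxaliplatin {oxaliplatin_mg:.0f} mg, "
      f"capecitabine {capecitabine_mg:.0f} mg per dose")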
Statistical analysis
In addition to descriptive statistics, we also compared demographic and pathologic features among subjects with different metastasis patterns (i.e., metastasis to both mesenteric and lateral lymph nodes, metastasis to mesenteric but not lateral lymph nodes, and metastasis to lateral but not mesenteric lymph nodes). Continuous variables are expressed as mean ± standard deviation if normally distributed, and as median (range) otherwise. Categorical variables are presented as numbers (%). All analyses were conducted using SPSS Statistics (version 25.0).
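A small sketch of this reporting convention: summarize a continuous variable as mean ± SD when it appears normally distributed and as median (range) otherwise. The Shapiro-Wilk check and the sample values are illustrative assumptions; the paper does not name its normality test.

import numpy as np
from scipy import stats

def summarize(x):
    x = np.asarray(x, dtype=float)
    _, p = stats.shapiro(x)  # assumed normality check
    if p > 0.05:
        return f"{x.mean():.1f} +/- {x.std(ddof=1):.1f}"
    return f"{np.median(x):.1f} ({x.min():.1f}-{x.max():.1f})"

# Hypothetical distances from lesion to anal verge (cm).
distances = [4.8, 0.5, 8.0, 3.2, 6.1, 5.0, 2.4]
print(summarize(distances))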
Results
A total of 42 patients were included in data analysis. Surgery was completed as planned, with no conversion to open surgery. Median distance from the lesion to the anal verge was 4.8 cm (range 0-8) ( Table 1). Sixteen patients received neoadjuvant CRT. Twenty-eight patients received unilateral lateral lymph node dissection, and the remaining 14 patients received bilateral dissection. A minimum of 10 mesenteric lymph nodes and 1 lateral lymph node on each side were dissected in all patients.
Metastasis was verified in both mesenteric and lateral lymph nodes in 26 (61.9%) patients, in mesenteric but not in lateral lymph nodes in 4 (9.5%) patients, and in lateral but not mesenteric lymph nodes in 5 (11.9%) patients. In the 5 cases with metastasis to lateral but not mesenteric lymph nodes, involved lymph nodes were in the obturator region in 2 cases, and in the iliac artery region in the remaining 3 cases.
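To put the key 11.9% figure in context, the sketch below recomputes each metastasis-pattern proportion and attaches an exact (Clopper-Pearson) 95% confidence interval; the intervals are an illustrative addition, not something the paper reports.

from scipy import stats

n = 42
patterns = [("both lateral and mesenteric", 26),
            ("mesenteric only", 4),
            ("lateral only", 5),
            ("neither", 7)]

for label, k in patterns:
    # Clopper-Pearson exact interval via beta quantiles.
    lower = stats.beta.ppf(0.025, k, n - k + 1) if k > 0 else 0.0
    upper = stats.beta.ppf(0.975, k + 1, n - k) if k < n else 1.0
    print(f"{label}: {k}/{n} = {k / n:.1%} (95% CI {lower:.1%}-{upper:.1%})")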
Surgical approach, pathologic staging, and the extent of lymph node dissection in the entire study sample and in patients with different patterns of lymph node metastasis are shown in Table 2. The median follow-up in the 5 patients with lateral but no mesenteric lymph node metastasis was 13 (1-31) months; no recurrence was observed.
Discussion
In a previous study, we reported a fascial space priority approach for lateral lymph node dissection in patients with rectal cancer [5]. Using this approach, we found in the current study that 5 out of a total of 42 patients (11.9%) with low rectal cancer had metastasis to lateral lymph nodes but not to mesenteric lymph nodes. This finding supports managing lateral lymph node involvement as local metastasis [6][7][8] and suggests the possibility that lateral lymph nodes may be sentinel lymph nodes in some patients. Lymphatic drainage of the lower rectum passes to external pelvic (inguinal area) or pelvic (iliac vessels and anterior sacral) lymph nodes, or to the root of the inferior mesenteric artery along the superior rectal artery. In a study by Akiyoshi and colleagues, prognosis did not differ significantly between patients with N2a disease and those with lymph node metastasis either in the external iliac artery region (5-year overall survival: 45% vs 45%, P = 0.9585; 5-year cancer-specific survival: 51% vs 49%, P = 0.5742) or in the internal iliac artery region (5-year overall survival: 32% vs 29%, P = 0.3342; 5-year cancer-specific survival: 37% vs 34%, P = 0.4347) [9], suggesting that lateral lymph node involvement should be regarded as local metastasis. The findings from the current study support this notion. Lymphatic mapping technology can be adopted to study the drainage patterns of low rectal cancer [10].
Few studies have examined the prognosis of patients with lateral lymph node metastasis but no mesenteric lymph node metastasis. In a study by Takahashi et al. [6], the 5-year survival rate was 90.1% in patients with no metastasis to either mesenteric or lateral lymph nodes, 75% in patients with metastasis to lateral but not mesenteric lymph nodes, 67.7% in patients with metastasis to mesenteric but not lateral lymph nodes, and 32% in patients with metastasis to both lateral and mesenteric lymph nodes. Akiyoshi and colleagues argued that metastasis to lymph nodes located in the area medial to the internal iliac artery should be classified as N2a, and metastasis to those located in the area lateral to the internal iliac artery as N2b [9]. Despite these differences in classification, the prognosis of patients with metastasis to lateral but not mesenteric lymph nodes is clearly better than in patients with metastasis to both lateral and mesenteric lymph nodes. Studies with larger sample sizes and a focus on long-term survival in patients with distinct lymph node metastasis patterns (lateral vs mesenteric) are needed to examine the clinical significance.
Lateral lymph node dissection could influence pathologic staging and hence postoperative management of the patients. In the current study, the 5 patients with metastasis to lateral but not mesenteric lymph nodes would have been classified as pN0 and stage II if lateral lymph nodes had not been dissected. With such erroneous staging, adjuvant chemotherapy after surgery would not be recommended. In low rectal cancer patients with MRI evidence of lateral lymph node involvement but no metastasis to mesenteric lymph nodes, CRT should be initiated; in patients who do not respond to CRT, LLND should be conducted. For patients with no mesenteric lymph node metastasis upon pathologic examination (regardless of the lateral lymph node status), the 2020 NCCN Guideline recommends the "watch and wait" approach. The results from the current study suggest that more attention should be given to lateral lymph node metastasis after neoadjuvant chemoradiation [3]. The AJCC colorectal cancer staging guideline [11] classifies lymph nodes in the iliac artery region as regional, but considers metastasis to lymph nodes in the obturator region as distant metastasis. Two patients in the current study had metastasis to obturator lymph nodes but not to iliac artery or mesenteric lymph nodes. Based on this finding, we speculate that obturator lymph nodes should also be regarded as regional. Cirocchi et al. reported that the pooled prevalence estimate of left colic artery (LCA) absence is 1.2% (95% CI 0.0-3.6%). This rare absence of the left colic artery/superior rectal artery, or variation in lymphatic drainage, may also contribute to this phenomenon [12]. Due to the very small number of cases, this speculation must be examined in future studies.
There are several important limitations in the current study. First, this is a retrospective analysis of patients receiving TME plus LLND for low rectal cancer. Due to the retrospective nature, there were no strict criteria for LLND; nevertheless, we adopted a general set of indications for LLND. Another important limitation is the use of neoadjuvant CRT in some but not all patients, which may have influenced the pathologic staging. Third, we did not conduct systematic follow-up. As a result, the clinical significance of metastasis to lateral but not mesenteric lymph nodes remains ambiguous. The sample size is also relatively small, and we could not compare the baseline features across patients with different patterns of lymph node metastasis.
Conclusion
A clinically meaningful proportion of low rectal cancer patients had metastasis to lateral but not mesenteric lymph nodes. The presence of this group of patients indicates a need to re-evaluate whether metastasis to lateral lymph nodes represents local or distant metastasis.
"year": 2020,
"sha1": "220d558ba94b2c61b59c75bc99605604b8345067",
"oa_license": "CCBY",
"oa_url": "https://wjso.biomedcentral.com/track/pdf/10.1186/s12957-020-02068-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "220d558ba94b2c61b59c75bc99605604b8345067",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Anemia, hematinic deficiencies, and hyperhomocysteinemia in serum gastric parietal cell antibody-positive burning mouth syndrome patients without serum thyroid autoantibodies
Background/purpose: Our previous study found that 70 of 884 burning mouth syndrome (BMS) patients have serum gastric parietal cell antibody (GPCA) positivity but without thyroglobulin antibody (TGA) and thyroid microsomal antibody (TMA) (so-called GPCA+TGAˉTMAˉBMS patients). This study assessed whether these 70 GPCA+TGAˉTMAˉBMS patients had significantly higher frequencies of macrocytosis, anemia, hematinic deficiencies, and hyperhomocysteinemia than 553 GPCA-negative, TGA-negative, and TMA-negative BMS (GPCAˉTGAˉTMAˉBMS) patients or 442 healthy control subjects.
Materials and methods: Complete blood count, serum iron, vitamin B12, folic acid, homocysteine, GPCA, TGA, and TMA levels in 70 GPCA+TGAˉTMAˉBMS patients, 553 GPCAˉTGAˉTMAˉBMS patients, and 442 healthy control subjects were measured and compared.
Results: We found that 15.7%, 28.6%, 20.0%, 11.4%, 2.9%, and 25.7% of the 70 GPCA+TGAˉTMAˉBMS patients and 3.8%, 17.7%, 15.9%, 3.8%, 2.7%, and 20.1% of the 553 GPCAˉTGAˉTMAˉBMS patients had macrocytosis, blood hemoglobin, iron, vitamin B12, and folic acid deficiencies, and hyperhomocysteinemia, respectively. Moreover, both the 70 GPCA+TGAˉTMAˉBMS patients and the 553 GPCAˉTGAˉTMAˉBMS patients had significantly greater frequencies of macrocytosis, blood hemoglobin, serum iron, vitamin B12, and folic acid deficiencies, and hyperhomocysteinemia than the 442 healthy control subjects (all P-values < 0.05). In addition, the 70 GPCA+TGAˉTMAˉBMS patients also had greater frequencies of macrocytosis, anemia, serum vitamin B12 deficiency, and hyperhomocysteinemia than the 553 GPCAˉTGAˉTMAˉBMS patients (all P-values < 0.05).
Conclusion: The GPCA+TGAˉTMAˉBMS patients have significantly greater frequencies of macrocytosis, anemia, serum iron, vitamin B12, and folic acid deficiencies, and hyperhomocysteinemia than healthy control subjects, and significantly greater frequencies of macrocytosis, anemia, serum vitamin B12 deficiency, and hyperhomocysteinemia than GPCAˉTGAˉTMAˉBMS patients.
Introduction
Burning mouth syndrome (BMS) is a disease characterized by a burning sensation of the oral mucosa in the absence of clinically apparent oral mucosal alterations.1,2 Our previous study has shown that 109 (12.3%), 191 (21.6%), and 201 (22.7%) of 884 BMS patients have serum gastric parietal cell antibody (GPCA), thyroglobulin antibody (TGA), and thyroid microsomal autoantibody (TMA, also known as anti-thyroid peroxidase antibody, anti-TPO antibody) positivities, respectively.2 It is well known that GPCA can induce destruction of gastric parietal cells, resulting in failure of intrinsic factor and hydrochloric acid (HCl) production.3,4 The intrinsic factor deficiency can cause vitamin B12 deficiency and finally lead to pernicious anemia (PA) in some of the vitamin B12-deficient patients.5,6 The HCl deficiency can cause malabsorption of iron and finally result in iron deficiency.7,8 The vitamin B12 deficiency may also lead to hyperhomocysteinemia in BMS patients.9,10 Thus, GPCA positivity may have a significant influence on red blood cell size and on blood hemoglobin (Hb), iron, vitamin B12, and homocysteine levels in GPCA-positive BMS patients.1,9,10 Moreover, we also demonstrated that 19.3%, 30.3%, 16.5%, 16.5%, 1.8%, and 29.4% of 109 GPCA-positive BMS patients have macrocytosis, blood Hb, iron, vitamin B12, and folic acid deficiencies, and hyperhomocysteinemia, respectively.10 Of the 109 GPCA-positive BMS patients, 20 also have serum TGA and TMA positivities, 7 also have serum TGA positivity, 12 also have serum TMA positivity, and 70 have serum GPCA positivity only, without TGA and TMA positivities (so-called GPCA+TGAˉTMAˉBMS patients).2 Thus, these 70 GPCA+TGAˉTMAˉBMS patients could be used to assess the relatively pure role of serum GPCA positivity in causing macrocytosis, blood Hb, iron, vitamin B12, and folic acid deficiencies, and hyperhomocysteinemia in BMS patients. Furthermore, our previous study also identified 553 BMS patients who were GPCA-negative, TGA-negative, and TMA-negative (so-called GPCAˉTGAˉTMAˉBMS patients).2 These 553 GPCAˉTGAˉTMAˉBMS patients could be used to clarify the role of the disease of BMS itself in the development of macrocytosis, blood Hb, iron, vitamin B12, and folic acid deficiencies, and hyperhomocysteinemia in BMS patients. Therefore, the main purpose of this study was to evaluate whether the 70 GPCA+TGAˉTMAˉBMS patients had significantly higher frequencies of macrocytosis, anemia, hematinic deficiencies, and hyperhomocysteinemia than the 553 GPCAˉTGAˉTMAˉBMS patients or 442 healthy control subjects. In addition, we also explored whether the 553 GPCAˉTGAˉTMAˉBMS patients had significantly higher frequencies of macrocytosis, anemia, hematinic deficiencies, and hyperhomocysteinemia than the 442 healthy control subjects.
Subjects
In this study, 70 (19 men and 51 women, age range 21–85 years, mean age 56.7 ± 14.8 years) GPCA+TGAˉTMAˉBMS patients and 553 (166 men and 387 women, age range 18–90 years, mean age 56.0 ± 15.2 years) GPCAˉTGAˉTMAˉBMS patients were retrieved from our 884 BMS patients (212 men and 672 women, age range 18–90 years, mean 56.1 ± 14.5 years) whose anemia statuses, hematinic deficiencies, hyperhomocysteinemia, and frequencies of serum GPCA, TGA, and TMA positivities were published before.1,2 For comparisons of blood data, 442 age- (±2 years of each patient's age) and sex-matched healthy control subjects (106 men and 336 women, age range 18–90 years, mean 57.5 ± 13.5 years) were also retrieved from our previous study and included in this study.1,2 All the BMS patients and healthy control subjects were seen consecutively, diagnosed, and treated in the Department of Dentistry, National Taiwan University Hospital (NTUH) from July 2007 to July 2017. Patients were diagnosed as having BMS when they complained of burning sensation and other symptoms of the oral mucosa but no apparent clinical oral mucosal abnormality was found.1,2,10–14 The detailed inclusion and exclusion criteria for our BMS patients and healthy control subjects have been described previously.1,2,10–14 In addition, none of the BMS patients had taken any prescription medication for BMS for at least 3 months before entering the study.
The blood samples were drawn from our BMS patients and healthy control subjects for measurement of complete blood count, serum iron, vitamin B12, folic acid, and homocysteine concentrations as well as serum GPCA, TGA, and TMA levels. All the BMS patients and healthy control subjects signed the informed consent forms before entering the study. This study was reviewed and approved by the Institutional Review Board at the NTUH (201212066RIND).
Determination of complete blood count and serum iron, vitamin B12, folic acid, and homocysteine concentrations
The complete blood count and serum iron, vitamin B12, folic acid, and homocysteine concentrations were determined by the routine tests performed in the Department of Laboratory Medicine of NTUH as described previously.1,2,10–14 This study defined the Hb and hematinic deficiencies according to the World Health Organization (WHO) criteria. Thus, men with Hb < 13 g/dL and women with Hb < 12 g/dL were defined as having Hb deficiency or anemia.15 Patients with a serum iron level < 60 μg/dL,7,8,13 vitamin B12 level < 200 pg/mL,16 or folic acid level < 4 ng/mL17 were defined as having iron, vitamin B12, or folic acid deficiency, respectively. Moreover, patients with a serum homocysteine level > 12.3 μM (the mean serum homocysteine level of healthy control subjects plus two standard deviations) were defined as having hyperhomocysteinemia.1,10–14
Determination of serum gastric parietal cell antibody, thyroglobulin antibody, and thyroid microsomal antibody levels
GPCA, TGA, and TMA levels were measured by the routine tests performed in the Department of Laboratory Medicine, NTUH. Serum GPCA level was measured by indirect immunofluorescence assay. Sera were scored as positive for GPCA when they produced fluorescence at a serum dilution of 10-fold or more. Moreover, serum TGA and TMA levels were measured by chemiluminescent microparticle immunoassay. Sera were scored as positive for TGA or TMA when the serum TGA level was greater than 14.4 IU/mL or when the serum TMA level was greater than 5.6 IU/mL, respectively.2
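To make the cutoffs above concrete, here is a small sketch that flags each deficiency for a single record. It simply encodes the thresholds stated in the text; the function name and the example values are ours.

```python
def classify(sex: str, hb: float, iron: float, b12: float,
             folate: float, homocysteine: float) -> dict:
    """Apply the cutoffs from the text: Hb < 13 g/dL (men) or < 12 g/dL
    (women); serum iron < 60 ug/dL; vitamin B12 < 200 pg/mL; folic acid
    < 4 ng/mL; homocysteine > 12.3 uM."""
    return {
        "anemia": hb < (13.0 if sex == "M" else 12.0),
        "iron_deficiency": iron < 60.0,
        "b12_deficiency": b12 < 200.0,
        "folate_deficiency": folate < 4.0,
        "hyperhomocysteinemia": homocysteine > 12.3,
    }

# Hypothetical patient record, for illustration only
print(classify("F", hb=11.5, iron=52, b12=180, folate=6.2, homocysteine=14.1))
```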
Statistical analysis
Comparisons of the mean corpuscular volume (MCV) and the mean blood levels of Hb, iron, vitamin B12, folic acid, and homocysteine between the 70 GPCA+TGAˉTMAˉBMS patients or the 553 GPCAˉTGAˉTMAˉBMS patients and the 442 healthy control subjects, as well as between the 70 GPCA+TGAˉTMAˉBMS patients and the 553 GPCAˉTGAˉTMAˉBMS patients, were performed by Student's t-test. The differences in frequencies of microcytosis, macrocytosis, blood Hb, iron, vitamin B12, and folic acid deficiencies, and hyperhomocysteinemia between the 70 GPCA+TGAˉTMAˉBMS patients or the 553 GPCAˉTGAˉTMAˉBMS patients and the 442 healthy control subjects, as well as between the 70 GPCA+TGAˉTMAˉBMS patients and the 553 GPCAˉTGAˉTMAˉBMS patients, were compared by chi-square test. The result was considered significant if the P-value was less than 0.05.
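As an illustration of the chi-square comparison described above, the snippet below tests the anemia frequencies reported later in the paper (20/70 vs 98/553). SciPy's default continuity correction is our choice; the authors do not state whether one was applied.

```python
from scipy.stats import chi2_contingency

# 2x2 table built from frequencies reported in the Discussion:
# anemia in 20/70 GPCA+TGA-TMA- patients vs 98/553 GPCA-TGA-TMA- patients
table = [[20, 70 - 20],
         [98, 553 - 98]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p < 0.05, in line with the reported significance
```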
Results
The MCV and mean blood concentrations of Hb, iron, vitamin B12, folic acid, and homocysteine in the 70 GPCA+TGAˉTMAˉBMS patients, 553 GPCAˉTGAˉTMAˉBMS patients, and 442 healthy control subjects are shown in Table 1. Because men and women usually have different normal blood Hb and iron levels, these two mean levels were calculated separately for men and women. We found that both the 70 GPCA+TGAˉTMAˉBMS and the 553 GPCAˉTGAˉTMAˉBMS patients had significantly lower mean blood Hb (for both men and women), serum iron (for both men and women), and vitamin B12 levels, as well as a significantly higher mean serum homocysteine level, than the 442 healthy control subjects (all P-values < 0.05, Table 1). Moreover, we also found a significantly lower mean serum vitamin B12 level (P = 0.030) and a higher mean serum homocysteine level (marginal significance, P = 0.067) in the 70 GPCA+TGAˉTMAˉBMS patients than in the 553 GPCAˉTGAˉTMAˉBMS patients (Table 1).
Discussion
This study found significantly greater frequencies of microcytosis, macrocytosis, blood Hb, iron, and vitamin B12 deficiencies, and hyperhomocysteinemia in the 70 GPCA+TGAˉTMAˉBMS patients than in the 442 healthy control subjects. Moreover, the 70 GPCA+TGAˉTMAˉBMS patients did have significantly greater frequencies of macrocytosis, anemia, vitamin B12 deficiency, and hyperhomocysteinemia than the 553 GPCAˉTGAˉTMAˉBMS patients. These two findings suggest that the significantly greater frequencies of microcytosis, macrocytosis, blood Hb, iron, and vitamin B12 deficiencies, and hyperhomocysteinemia in the 70 GPCA+TGAˉTMAˉBMS patients may be attributed to both serum GPCA positivity and the disease of BMS itself. Specifically, the significantly greater frequencies of macrocytosis, anemia, vitamin B12 deficiency, and hyperhomocysteinemia are mainly caused by the serum GPCA positivity, whereas the significantly greater frequencies of microcytosis and serum iron and folic acid deficiencies in the 70 GPCA+TGAˉTMAˉBMS patients are predominantly caused by the disease of BMS itself.
In addition, this study also discovered that the 553 GPCAˉTGAˉTMAˉBMS patients did have significantly greater frequencies of microcytosis, macrocytosis, blood Hb, iron, vitamin B12, and folic acid deficiencies, and hyperhomocysteinemia than the 442 healthy control subjects. This finding indicates that the disease of BMS itself does play a significant role in causing microcytosis, macrocytosis, anemia, serum iron, vitamin B12, and folic acid deficiencies, and hyperhomocysteinemia in the 553 GPCAˉTGAˉTMAˉBMS patients.
We further explain why GPCA might result in microcytosis, macrocytosis, blood Hb, iron, and vitamin B12 deficiencies, and hyperhomocysteinemia in the 70 GPCA+TGAˉTMAˉBMS patients. GPCA can induce destruction of gastric parietal cells, resulting in failure of intrinsic factor and HCl production.3,4 The intrinsic factor deficiency can cause malabsorption of vitamin B12 from the terminal ileum and hence vitamin B12 deficiency.5,6 The vitamin B12 deficiency in turn leads to macrocytosis, anemia, and hyperhomocysteinemia.5,6,11,12,18–20 Moreover, the HCl deficiency can cause malabsorption of iron from the stomach and upper portion of the duodenum and hence iron deficiency.7,13 The iron deficiency subsequently results in microcytosis and anemia.7,13,14 The size of the red blood cell is influenced by the serum levels of iron, vitamin B12, and folic acid.4–8,13–20 If vitamin B12 deficiency plays a more important role than iron deficiency in GPCA+TGAˉTMAˉBMS patients, as seen in this study, then the MCV in our GPCA+TGAˉTMAˉBMS patients may be slightly larger than that in healthy control subjects.13,14,20 Our previous studies demonstrated that vitamin B12 and folic acid deficiencies can lead to high serum homocysteine levels in oral mucosal disease patients.1,22–26 Supplementation of multiple B vitamins, especially vitamin B12 and folic acid, can reduce the serum homocysteine levels in patients with atrophic glossitis or BMS.9,22 In this study, GPCA+TGAˉTMAˉBMS patients did not have a lower mean serum folic acid level or a higher frequency of folic acid deficiency than healthy control subjects or GPCAˉTGAˉTMAˉBMS patients, but did have a significantly lower serum vitamin B12 level and a significantly higher frequency of vitamin B12 deficiency than healthy control subjects or GPCAˉTGAˉTMAˉBMS patients. These findings suggest that the higher frequency of hyperhomocysteinemia in GPCA+TGAˉTMAˉBMS patients than in healthy control subjects or in GPCAˉTGAˉTMAˉBMS patients may be predominantly due to vitamin B12 deficiency.
Table 3. Anemia types of 20 anemic gastric parietal cell antibody (GPCA)-positive but thyroglobulin antibody (TGA)-negative and thyroid microsomal antibody (TMA)-negative burning mouth syndrome (GPCA+TGAˉTMAˉBMS) patients and 98 anemic GPCA-negative, TGA-negative, and TMA-negative BMS (GPCAˉTGAˉTMAˉBMS) patients.
In this study, 20 (28.6%) of the 70 GPCA+TGAˉTMAˉBMS patients and 98 (17.7%) of the 553 GPCAˉTGAˉTMAˉBMS patients had anemia according to the strict WHO criteria.15 Therefore, the frequency of anemia (28.6%) in the 70 GPCA+TGAˉTMAˉBMS patients was significantly higher than that (17.7%) in the 553 GPCAˉTGAˉTMAˉBMS patients. PA was the most common type of anemia in the 70 GPCA+TGAˉTMAˉBMS patients (8 cases, 11.4%). Of the 8 GPCA+TGAˉTMAˉBMS patients with PA, 5 had iron deficiency, 8 had vitamin B12 deficiency, and none had folic acid deficiency. Thus, serum GPCA positivity and iron and vitamin B12 deficiencies were the major causes of anemia in these 8 GPCA+TGAˉTMAˉBMS patients with PA. Normocytic anemia was the second most common type of anemia in the 70 GPCA+TGAˉTMAˉBMS patients (4 cases, 5.7%; two had iron deficiency and none had vitamin B12 or folic acid deficiency) and was the most common type of anemia in the 553 GPCAˉTGAˉTMAˉBMS patients (62 cases, 11.2%; 30 had iron deficiency, 3 had vitamin B12 deficiency, and 7 had folic acid deficiency). Although normocytic anemia is predominantly associated with chronic diseases, inflammatory diseases, infections, bone marrow hypoplasia, decreased production of or a poor response to erythropoietin, hemolytic disorders, mild but persistent blood loss from the gastrointestinal tract, and cytokine-induced suppression of erythropoiesis,27–29 the normocytic anemia in our GPCA+TGAˉTMAˉBMS and GPCAˉTGAˉTMAˉBMS patients was also partially attributed to iron deficiency with occasional concomitant vitamin B12 and/or folic acid deficiencies.
The present study revealed that only a small percentage (11.4%) of the 70 GPCA+TGAˉTMAˉBMS patients had PA. Our previous studies showed that 12.9% of 124 GPCA-positive oral mucosal disease patients (including 75 atrophic glossitis (AG) and 49 burning mouth syndrome patients) have PA,18 22 (7.7%) of 284 GPCA-positive atrophic glossitis patients have PA,30 7.3% of 41 GPCA-positive erosive oral lichen planus patients have PA,31 14.1% of 92 GPCA-positive erosive oral lichen planus patients with desquamative gingivitis have PA,32 13.3% of 15 GPCA-positive recurrent aphthous stomatitis patients with TGA/TMA positivity have PA,33 and 9.7% of 31 recurrent aphthous stomatitis patients with GPCA positivity only (without TGA or TMA positivity) have PA.34 These findings indicate that not all GPCA-positive oral mucosal disease patients have PA; only approximately 7.3%–14.1% of GPCA-positive oral mucosal disease patients have PA.
Our previous study found burning sensation of the oral mucosa, dry mouth, numbness of the oral mucosa, and dysfunction of taste in 100.0%, 48.1%, 30.7%, and 16.7% of 884 BMS patients, respectively. The oral mucosa-associated symptoms such as burning sensation, dry mouth, numbness, and dysfunction of taste all may interfere with the eating and swallowing function of BMS patients.1 The eating and swallowing difficulties may result in reduced food intake, which in turn leads to anemia, hematinic deficiencies, and hyperhomocysteinemia in a certain percentage of our BMS patients.1 We conclude that GPCA is a major factor causing vitamin B12 deficiency, macrocytosis, and hyperhomocysteinemia in GPCA+TGAˉTMAˉBMS patients. BMS itself does play a significant role in causing anemia, hematinic deficiencies, and hyperhomocysteinemia in both GPCA+TGAˉTMAˉBMS and GPCAˉTGAˉTMAˉBMS patients. In addition, the GPCA+TGAˉTMAˉBMS patients have significantly greater frequencies of macrocytosis, blood Hb, serum iron, vitamin B12, and folic acid deficiencies, and hyperhomocysteinemia than healthy control subjects, and significantly greater frequencies of macrocytosis, anemia, serum vitamin B12 deficiency, and hyperhomocysteinemia than GPCAˉTGAˉTMAˉBMS patients.
Declaration of competing interest
The authors have no conflicts of interest relevant to this article.
"year": 2021,
"sha1": "5491379d873bae68ba394565aef1fd3f37dcaad9",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jds.2021.05.017",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5491379d873bae68ba394565aef1fd3f37dcaad9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Quantized escape and formation of edge channels at high Landau levels
We present nonlocal resistance measurements in an ultra-high-mobility two-dimensional electron gas. Our experiments show that, even at weak magnetic fields, classical guiding along edges leads to a strong nonlocal resistance over macroscopic distances. In this high Landau level regime, transport along the edges is dissipative and can be controlled by the amplitude of the voltage drop along the edge. We report resonances in the nonlocal transport as a function of this voltage that are interpreted as escape and formation of edge channels.
The investigation of nonlocal effects in electrical transport has provided new insights into non-classical conduction mechanisms. These effects are responsible for the appearance of a potential difference across a region of the sample well outside of the classical current paths. They have been reported in conductors that exhibit quantum coherence [1-3], ballistic transport [4,5], or in the quantum Hall effect regime of a two-dimensional electron gas [6-8]. In the latter case the nonlocal resistance appears due to the formation of edge channels that are isolated from the bulk and can carry the current to classically inaccessible regions. The propagation of edge channels in this regime has attracted significant interest due to their potential for quantum computation and interferometry [9-12]. Here, using nonlocal measurements, we consider the opposite limit of high Landau levels, where the bulk density of states is gapless. We show that in this limit the exchange of charges between bulk and edge states can be controlled by the voltage drop along the edges, which leads to the formation of resonances in the nonlinear transport that allow us to observe directly a quantization of edge channels at high Landau levels.
We have investigated the magnetic field dependence of nonlocal transport in a GaAs/Ga1−xAlxAs 2DEG with density ne ≈ 3.3 × 10¹¹ cm⁻² and mobility μ ≈ 10⁷ cm²/Vs, corresponding to a transport time τtr ≈ 0.4 ns and a mean free path ℓe = 100 μm. The Hall bar, with a channel width W = 100 μm, was patterned using wet etching. The nonlocal resistance Rnl was measured in the geometry illustrated in Fig. 1, where current was injected along the y axis and the voltage was detected between two probes separated by Dx ≈ 50 μm at a distance L ≫ W from the current injection points. The experimental data in Fig. 2 show that Rnl exhibits an unusual dependence on magnetic field that is strikingly different from the ρxx behavior. Indeed, in contrast to ρxx(B), Rnl(B) is a strongly asymmetric function of the magnetic field that almost vanishes for negative magnetic fields and exhibits a sharp onset at low positive magnetic fields, reaching a value of the order of ρxx for B > 0.1 Tesla.
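As a quick consistency check (our own back-of-the-envelope calculation, not part of the paper), the quoted transport time and mean free path follow from the stated density and mobility via the standard Drude relations for a GaAs 2DEG:

```python
import math

e = 1.602e-19              # C
hbar = 1.055e-34           # J s
m_eff = 0.067 * 9.109e-31  # GaAs effective mass (standard value, our assumption)

n = 3.3e15                 # carrier density in m^-2 (3.3e11 cm^-2 from the text)
mu = 1e7 * 1e-4            # mobility in m^2/Vs (1e7 cm^2/Vs from the text)

tau = mu * m_eff / e               # Drude transport time
k_f = math.sqrt(2 * math.pi * n)   # Fermi wavevector of a spin-degenerate 2DEG
v_f = hbar * k_f / m_eff           # Fermi velocity
l_e = v_f * tau                    # mean free path

print(f"tau ~ {tau * 1e9:.2f} ns, l_e ~ {l_e * 1e6:.0f} um")
# -> tau ~ 0.4 ns and l_e ~ 100 um, consistent with the quoted values
```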
We first checked whether this dependence can be explained using the continuum theory of a Hall bar. For this purpose, it is convenient to describe our sample as a 2DEG stripe and to approximate the current injection leads by point-like sources. This stripe can be parametrized by complex numbers z = x + iy with y ∈ (0, W). The potential V(z) created by a current source I positioned at x = x0 along the top/bottom edge then reads V±(z) = R±(z, x0)I (plus/minus sign for top/bottom edge; Eq. (1)), where the function Rp (Eq. (2)) gives the potential created by a unit current source located at the origin in the semi-infinite 2DEG half plane y > 0, and where we introduced the notation α = ρxy/ρxx. Subtracting these two expressions, we find the potential V = (R+(z, 0) − R−(z, 0))I created by a current between a point-like source and drain located opposite to each other across the channel. Using these equations for the particular case of the potential generated along the top edge z = iW, far from the sources, |x| ≫ W, we find the expression for the nonlocal resistance given in Eq. (3), where Dx is the spacing between the voltage probes and L is their distance from the source along the channel (for simplicity we have assumed Dx ≪ W). The geometrical parameters in our experiment are L ≈ 500 μm, W ≈ 130 μm, and Dx ≈ 50 μm (see Fig. 1), which lead to the numerical estimate Rnl ≈ 4.4 × 10⁻⁶ ρxx. Thus, according to this point-source model, the nonlocal resistance is proportional to ρxx with an exponentially small damping factor that is independent of the magnetic field. This conclusion, however, is in strong disagreement with the experimentally observed dependence. In order to check the validity of this analytical estimate in our more complex experimental geometry, we performed a finite element simulation of the potential which (see Fig. 1) confirms the exponential decay of the field amplitudes away from the current polarization contacts.
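The explicit expression for this exponential estimate did not survive extraction. As a sketch, the snippet below reproduces the quoted number under an assumed leading-order form; the prefactor 2Dx/W is our guess, chosen because it matches the paper's 4.4 × 10⁻⁶, and should not be read as the paper's Eq. (3):

```python
import math

# Geometry from the text (micrometres)
L, W, Dx = 500.0, 130.0, 50.0

# Assumed leading-order form: R_nl / rho_xx ~ (2 Dx / W) * exp(-pi L / W)
ratio = (2 * Dx / W) * math.exp(-math.pi * L / W)
print(f"R_nl / rho_xx ~ {ratio:.1e}")  # -> ~4.4e-6, the estimate quoted above
```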
Thus, even at small magnetic fields (≤ 0.1 Tesla), our experiments indicate a large nonlocal resistance that cannot be described within the continuum theory. Due to the macroscopic dimensions of our sample (channel width W ≈ 130 μm), quantum coherence effects cannot explain the origin of the nonlocal resistance in our measurements. An explanation relying solely on the formation of Landau levels is also unlikely, since we observe Rnl ∼ ρxx even at weak magnetic fields B ≈ 0.1 Tesla where Shubnikov–de Haas oscillations are absent. We thus propose guiding along sample edges as a possible explanation for the observed behavior and attempt to include the physics of skipping orbits within the continuum model. The formation of skipping orbits occurs due to the bending of the Landau levels at the edge of the 2DEG [13], which is represented in Fig. 2. It can lead to noticeable effects even when individual Landau levels are not resolved [14].
In the presence of skipping orbits, electrons can propagate along the edges before being injected into the bulk of the 2DEG. This gives rise to edge currents I+, I− along the top and bottom edges of the sample. Due to the influence of disorder, electrons progressively detach from the edges, causing a progressive drop of the edge current in the direction of propagation of the electrons. The drop in the current carried by the edges, dI+/dx and dI−/dx, creates a distributed current source for the bulk of the 2DEG. The equations Eqs. (1) derived within the continuum model allow us to find the potential created by this distributed current source (Eq. (4)). We assume that the edge currents are nonzero only in the direction of propagation of the electrons and decay exponentially with a characteristic length scale λe, which we call the mean free path along edges; this leads to the exponential form of Eq. (5), where sB = ±1 for positive/negative magnetic fields and η is the Heaviside function. It is straightforward to check that the total current −∫(dI+(x)/dx) dx injected into the bulk 2DEG from the top electrode is I. Assuming |α| ≫ 1 and combining Eqs. (4), (5), we find the approximation for the nonlocal resistance given by Eq. (6). Since this equation was derived assuming that electrons are guided only in one direction, it predicts a vanishing nonlocal resistance for negative magnetic fields, in qualitative agreement with the experiment. We note, however, that for B < −0.5 Tesla a finite nonlocal resistance of oscillating sign appears that is not expected within this model. A possible origin of this effect could be electrons that are recaptured by the edges after moving through the bulk of the sample and are not accounted for in the present model. At positive magnetic fields, this equation can be used to estimate λe from the experimental data, which yields, for B ≥ 0.1 Tesla, λe ≈ 90 μm. Weak variations of λe as a function of the magnetic field (at most 10%) can explain the presence of Shubnikov–de Haas oscillations in Rnl(B). We note that the obtained value of λe is very close to the mean free path in the sample, ℓe ≈ 100 μm.
FIG. 2: Dependence of the nonlocal resistance Rnl (as defined in Fig. 1) and of the longitudinal resistance Rxx ∝ ρxx on the magnetic field B. The longitudinal resistance Rxx is almost a symmetric function of B, whereas Rnl(B) is strongly asymmetric and almost vanishes for B < 0. The insets illustrate typical classical electron orbits for a capture and an escape event due to the parallel electric field Ex for B > 0, where electrons propagate along the upper edge in the positive x direction. Capture occurs for Vnl < 0 and escape occurs for Vnl > 0.
Even if the proposed model describes the observed nonlocal resistance qualitatively, it is based on a phenomenological assumption on the distribution of the edge currents Ie(x), and a microscopic theory is needed to determine self-consistently the potential inside the device and the distribution of the edge currents. Several approaches have been proposed to treat the interaction between bulk and edge transport in the quantum limit at low filling factors [6,15-17], but they do not directly apply to the present case. Indeed, the propagation along edge channels has mainly been studied at an integer quantum Hall effect plateau, where the transport is non-dissipative, Rxx = 0, and a gap in the density of states opens in the bulk [10,12].
FIG. 4: Dependence of the differential nonlocal resistance dVnl/dI (in arbitrary units) on the dimensionless quantity x = |eVnl|/ħωc at magnetic fields between 0.3 and 0.9 Tesla; a voltage offset was applied to fix the position of the first resolved peak at x = 1. The period of the oscillations is plotted as a function of magnetic field in the inset for positive and negative Vnl. It corresponds to the distance between the first resolved peaks at magnetic fields where only a few oscillations could be resolved.
In our case, due to the low magnetic fields, the gap is not present and electrons can escape to the bulk or, on the contrary, approach the edge. To look for signatures of the escape and creation of edge channels, we have measured the differential nonlocal resistance dVnl/dI as a function of magnetic field and DC excitation current I. At positive magnetic fields, when the potential Vnl is positive, electrons lose an energy |e|Vnl as they cross the separation distance between the voltage probes; thus some electrons will escape from the edge, because their Larmor radius becomes smaller as they propagate. If the potential Vnl is negative, electrons in the bulk will tend to drift towards the edge under the action of the electric field Ex = Vnl/Dx, and new edge channels may be formed. The typical trajectories for a capture and an escape event are represented in Fig. 2. We thus expect that the transport properties along the edge will strongly depend on the sign of Vnl.
In agreement with our heuristic arguments, the experimental results displayed in Fig. 3 exhibit a striking asymmetry between positive and negative currents. For positive currents (at B ≥ 0.5 Tesla) we measure positive dVnl/dI for I > 0, whereas for I < 0, dVnl/dI drops and exhibits sharp oscillations around zero. To ensure that this difference is not related to some asymmetry of the sample, we have also measured dVnl/dI at negative magnetic fields. Except for the region around I = 0, where the differential resistance almost vanishes in agreement with our guiding model, we find that after the transformation I → −I, the results are very similar to those obtained at B > 0. This observation confirms that our findings cannot be attributed to a geometrical asymmetry, which would not depend on the sign of the magnetic field. To understand the origin of the approximate symmetry observed in Fig. 3, we note that a mirror symmetry around the Hall bar channel changes I → −I and B → −B and interchanges top and bottom edges. The nonlocal voltage across the bottom edge is therefore expected to be Vnl(−B, −I); the electrons emitted from the bottom edge can then be recaptured at the top edge where dVnl/dI is measured, giving a contribution proportional to dVnl/dI at B > 0 and current −I, damped by the propagation through the bulk. Hence, from now on we will focus on the analysis of the data obtained at B > 0.
The dependence on I displayed in Fig. 3 exhibits several intriguing features. To gain an understanding of their physical origin, we concentrate on the region of weak magnetic fields (B between 0.2 and 0.9 Tesla). In this region dVnl/dI exhibits smooth oscillations as a function of I; integrating over current, we find the dependence Vnl(I) and display the differential resistance as a function of eVnl/ħωc (see Fig. 4), where ħωc is the spacing between Landau levels. After this transformation, the origin of the oscillations becomes more conspicuous: at low magnetic fields, an oscillation of dVnl/dI occurs whenever eVnl is changed by approximately ħωc. The dependence of the period ΔVnl on the magnetic field is displayed in the inset of Fig. 4. For Vnl < 0, where we expect formation of new edge channels due to drift of bulk electrons towards the edge, ΔVnl is almost equal to ħωc/e (we attribute the 20% difference to the aspect ratio between the distance between voltage probes and their width). However, for Vnl > 0, when electrons lose energy as they propagate and edge channels progressively escape to the bulk, the ratio eΔVnl/ħωc progressively increases with magnetic field. Our interpretation is that at Vnl < 0 we are probing the outermost edge channels, which have an energy spacing close to ħωc, while for Vnl > 0 edge channels escape progressively and only the inner channels, with an energy spacing larger than ħωc, are still propagating (see level diagram in Fig. 1).
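For orientation, the cyclotron voltage scale ħωc/e implied by this analysis is easy to evaluate; the GaAs effective mass used below is the standard value, which the paper does not state explicitly:

```python
e = 1.602e-19              # C
hbar = 1.055e-34           # J s
m_eff = 0.067 * 9.109e-31  # GaAs effective mass (our assumption)

for B in (0.3, 0.5, 0.9):  # Tesla, the field range discussed above
    omega_c = e * B / m_eff           # cyclotron frequency, rad/s
    dV_mV = hbar * omega_c / e * 1e3  # hbar*omega_c/e in millivolts
    print(f"B = {B} T: hbar*omega_c/e = {dV_mV:.2f} mV")
# The scale is a fraction of a millivolt in this field range, setting the
# expected oscillation period Delta V_nl discussed in the text.
```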
As the magnetic field increases, the following trends can be noted: for I > 0 the smooth oscillations develop into sharp resonances at certain values of Vnl, while for negative currents dVnl/dI starts to change sign as a function of I, rendering our analysis as a function of Vnl impossible. Experiments with a larger separation between voltage probes, Dx ≈ 500 μm, did not display the described oscillations and resonances, which suggests that their observation is possible only when Dx is smaller than the mean free path. In a control sample with wide voltage probes of around 300 μm, a zero differential resistance plateau was observed at I < 0, indicating that in this regime the electrostatic potential oscillates as a function of the distance along the edge and averages to zero when the voltage is measured on a large length scale [18]. A vanishing differential resistance has previously been reported in local measurement geometries [19,20] where bulk and edge contributions are intermixed; our experiments show that a zero differential resistance state can be created by edge effects alone. Additional experimental and theoretical investigations are needed to fully understand edge transport at high Landau levels in the nonlinear regime. It would also be interesting to perform similar experiments under microwave irradiation, where stabilization of edge channels is expected [21] and where nonlocal effects can also be present [22].
To summarize, we have demonstrated through nonlocal resistance measurements that guiding effects can strongly modify the potential distribution in ultra-high-mobility samples even in the limit of weak magnetic fields B ≤ 0.1 Tesla. In the linear transport regime, our observations are consistent with a spreading of the distribution of the current source in the direction of propagation along edges. As opposed to the quantum Hall regime, where transport in the bulk is suppressed, an exchange between edge and bulk conduction paths takes place in our experiments. We show that this exchange can be controlled by the amplitude of the potential drop along the edge. Additional edge channels can be formed if the electrons gain energy as they propagate along the edge; in the opposite case, when electrons lose energy, the edge channels can escape to the bulk. We propose that oscillations in nonlinear transport, occurring when the amplitude of the voltage drop along the edge is changed by the spacing between Landau levels, are a signature of quantized escape and formation of edge channels. Thus, edge transport in the limit of high filling factors allows us to explore a rich physical regime that may have deep implications for our understanding of electron transport in ultra-clean systems.
Quantized escape and formation of edge channels at high Landau levels: supplementary materials
A.D. Chepelianskii (a,b), J. Laidet (c), I. Farrer (a), D.A. Ritchie (a), K. Kono (b), H. Bouchiat (c)
(a) Cavendish Laboratory, University of Cambridge, J J Thomson Avenue, Cambridge CB3 OHE, UK
(b) Low Temperature Physics Laboratory, RIKEN, Wako, Saitama 351-0198, Japan
(c) LPS, Univ. Paris-Sud, CNRS, UMR 8502, F-91405, Orsay, France
The supplementary materials describe the evolution of the nonlinear transport in the nonlocal geometry, from a series of resonances corresponding to escape and creation of edge channels (presented in the main article) towards a zero differential resistance state when the voltage drop is measured on length scales much larger than the mean free path. The magnetic field (B → −B) and DC current (I → −I) symmetry properties of the reported zero differential resistance state strongly support its edge transport origin. Finally, we provide a more detailed derivation of the equations of the continuum theory.
I. NON LOCAL DIFFERENTIAL RESISTANCE WITH DISTANT VOLTAGE PROBES
We have measured the nonlocal differential resistance (NLDR) dVnl;F/dI in a geometry where the voltage probes were separated by a distance Dx = 500 μm, larger than the mean free path ℓe = 100 μm in the sample. The experiment was performed on the same sample as in the main text but with a different arrangement of voltage and current probes; the current sources were located 500 μm away from the voltage probes. In the linear response regime, the dependence of Rnl;F = dVnl;F/dI(I = 0) on the magnetic field was very similar to the data shown in Fig. 2 (from the main article). The quantity Rnl;F was finite for positive magnetic fields and almost vanished for B < 0. The dependence of dVnl;F/dI on the magnetic field and DC current amplitude is shown in Fig. 1.
We have also studied NLDR in a macroscopic geometry with geometrical parameters larger than the mean free path. The sample was made in a lower-mobility 2DEG, with mobility μ = 3 × 10⁶ cm²/Vs and a carrier density of ne = 3.2 × 10¹¹ cm⁻². The geometry of the measurement is sketched in Fig. 2. This figure summarizes our results on the NLDR in this sample, for positive magnetic fields, for which the nonlocal resistance is non-vanishing.
The strong asymmetry between positive and negative currents is also observed in this lower-mobility 2DEG; however, the characteristic magnetic field where the asymmetry appears is around a factor of three stronger compared with the μ = 10⁷ cm²/Vs sample. This difference is consistent with the ratio between the mobilities of the two samples. As in Fig. 1, the separation between the voltage probes was larger than the mean free path ℓe = 30 μm, and the oscillations as a function of the DC current cannot be resolved. However, in the present experiment NLDR is almost zero in a large region of negative currents, which contrasts with the previous data where NLDR could be negative for I < 0 (see Fig. 2 from the main article and Fig. 1).
In order to highlight the presence of a zero differential resistance state (ZDRS), we have calculated the dependence of Vnl;L on current by integrating the experimental differential resistance data. The results obtained after this procedure are represented in Fig. 3, which shows that the voltage Vnl;L exhibits a plateau at negative I, where it is almost independent of current in a wide range of magnetic fields, while for positive currents the voltage dependence is almost ohmic. The inset in Fig. 3 shows the dependence of the voltage on the magnetic field for several values of current inside the ZDRS plateau. These results confirm that the voltage saturates to a constant value independent of current in this regime; the value of the saturation voltage grows almost linearly with magnetic field, with weak oscillations that are probably related to the Shubnikov–de Haas oscillations in the longitudinal resistance.
The observed zero differential resistance state possesses the symmetry of an edge effect. It appears only for the sign of magnetic field which ensures guiding towards the voltage probe electrodes from the distant current sources, and for a specific sign of the DC current that creates a voltage drop along the edge tending to stabilize propagation along edges. Therefore, it seems likely that an edge-transport-related mechanism leads to the formation of the ZDRS in this case. In the higher mobility sample, where the dimensions of the voltage probes were smaller than the mean free path, negative values of NLDR were observed (see Fig. 3 from the main article and Fig. 1); this suggests that the ZDRS is formed due to the clamping of the potential on large length scales by the voltage probe electrodes. On the contrary, if the electrodes are not invasive, the potential exhibits sharp variations whenever the energy of the electrons propagating along the edge is changed by an amount close to ħωc (see main text). These voltage oscillations are probably indicative of a spatially modulated charge density distribution and could explain the observation of oscillating/negative differential resistances in our experiments. It would be highly interesting to understand the role played by the edge-mediated ZDRS mechanism in ZDRS experiments realized in the conventional longitudinal resistance measurement geometry. However, due to the absence of a reliable theoretical framework to describe the edge effects reported in this article, it is not possible to estimate the amplitude of their contribution to the measurement of the longitudinal resistance.
III. CONTINUUM THEORY
In this section we provide a more detailed derivation of the formulas from the continuum theory that we used in the main article.
We start our calculations from the potential created by a point source of current I located at z = 0 in a semi-infinite two-dimensional electron gas. It is convenient to represent points in the 2DEG as complex numbers z = x + iy, where (x, y) are the Cartesian coordinates of the point, and the half plane fills the space y > 0. In this case we find the potential Vp(z) = Rp(z)I, where Rp combines a logarithmic ohmic term with a Hall term proportional to α = ρxy/ρxx.
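The explicit expression after this sentence did not survive extraction. As a hedged reconstruction, the standard solution for a point current source on the boundary of a conducting half-plane with Hall angle arctan α would read (the overall sign convention is our assumption, not a verbatim quote of the paper):

```latex
R_p(z) \;=\; -\frac{\rho_{xx}}{\pi}\,\mathrm{Re}\!\left[(1-i\alpha)\ln z\right]
       \;=\; -\frac{\rho_{xx}}{\pi}\left(\ln|z| + \alpha\,\arg z\right),
\qquad \alpha = \frac{\rho_{xy}}{\rho_{xx}}.
```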
FIG. 1: Sample geometry in our nonlocal transport experiments. Arrows indicate the geometrical parameters of our experiment, the position of the source and drain electrodes, and the electrodes across which the nonlocal potential drop Vnl is measured. The nonlocal resistance is then defined as Rnl = Vnl/I. The closed black contour highlights the geometry of the domain used in our finite element simulations, whose results are displayed in the top panels for α = ρxy/ρxx = −1 and α = −100; the color/gray-scale levels indicate the potential values (source/drain potentials are fixed to ±1), and the potential gradients are concentrated in the center of the sample. The curves on the right represent the dispersion relation εn(k) for edge states for a hard wall potential, where k is the wavenumber and lB is the magnetic length [13].
FIG. 3: Dependence of the differential nonlocal resistance dVnl/dI on magnetic field B and on the DC current amplitude I for positive and negative magnetic fields (top/bottom panels). The data at negative magnetic fields are displayed as a function of −B and −I. Temperature was 1.2 K.
FIG. 1: Dependence of the nonlocal differential resistance dVnl;F/dI on magnetic field and DC current amplitude; this quantity was measured in a geometry where the separation between the voltage probes was Dx ≈ 500 μm, on the μ = 10⁷ cm²/Vs sample from the main text. Temperature was T = 1.2 K.
FIG. 2: Measurement geometry and NLDR results for the μ = 3 × 10⁶ cm²/Vs sample (see text).
FIG. 3: Nonlocal voltage/current characteristics Vnl;L(I) for the μ = 3 × 10⁶ cm²/Vs mobility sample at several magnetic fields (for resemblance with data from ZDRS experiments in local geometries, we have shown −Vnl;L as a function of −I in this figure). The inset shows the voltage as a function of magnetic field for several currents inside the plateau regime. Temperature was T = 0.3 K.
"year": 2012,
"sha1": "5be2c56d3fec66c872e049f57b01c32a5f2acc07",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5be2c56d3fec66c872e049f57b01c32a5f2acc07",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Nitrogen-Doped TiO2/Nitrogen-Containing Biochar Composite Catalyst as a Photocatalytic Material for the Decontamination of Aqueous Organic Pollutants
In this study, a waste walnut shell-derived biochar enriched with nitrogen (N-biochar) is combined with nitrogen-doped TiO2 (N-TiO2) to obtain an affordable composite material for the degradation of methyl orange (MO). Results showed that the porous structure and oxygen-containing functional groups of the biochar facilitate contact with MO during the reaction process. Meanwhile, the doped nitrogen has a positive effect on the reaction activity due to the existence of a substituted state and a gap state in the catalyst. It was revealed that the N-TiO2/N-biochar composite (NCNT0.2/1) exhibited a better photocatalytic degradation efficiency (97.6%) and mineralization rate (85.4%) of MO than TiO2, N-TiO2, and TiO2/N-biochar, owing to a stronger synergistic effect of N, TiO2, and biochar, in accordance with its high charge separation revealed by photoluminescence (PL) analysis. Trapping experiments showed that ·OH is the predominant active species during the decolorization and mineralization of MO. After five repeated uses, the loss of activity of the catalyst was negligible. In addition, the catalytic degradation process was consistent with the pseudo-first-order kinetic model, with a rate constant of 4.02 × 10⁻² min⁻¹.
INTRODUCTION
Photocatalytic technology is an emerging advanced oxidation technology that converts solar energy into chemical energy to decompose pollutants, and it is often used in air purification and wastewater treatment.1−4 TiO2 has low cost, good chemical stability, low toxicity, and excellent photoelectric performance.5,6 Nevertheless, TiO2 has a large band gap (Eg = 3.4 eV) and can only be activated by ultraviolet light, which means its visible-light utilization is low. At the same time, the charge recombination efficiency of TiO2 is very high.7 Therefore, TiO2 is often modified to reduce its charge recombination efficiency.
Nonmetal doping of TiO2 forms new impurity energy levels above the valence band, which reduces the band gap and promotes a red-shift of the absorption edge.8−14 This means that doped TiO2 can exhibit significant photocatalytic activity under visible-light irradiation. For instance, Chen et al.15 found that nitrogen doping enhanced visible-light absorption and generated photoinduced surface oxygen vacancies. Giannakas et al.16 prepared an N-TiO2 catalyst with ammonium chloride as the nitrogen source and applied it to the reduction of Cr(VI); they showed that N-TiO2 had a higher reduction ability than TiO2 and that the absorption of N-TiO2 was enhanced at visible wavelengths. Zhou et al.17 prepared N-TiO2/sepiolite photocatalysts at four different ratios and applied them to degrade lightfast orange G; the results indicated that the activity of all four catalysts was superior to that of the undoped TiO2/sepiolite catalyst.
It has been reported that combining TiO2 with an adsorbent, such as activated carbon,18 carbon nanotubes,19 or biochar,20,21 can significantly promote the photocatalytic process. Biochar, as an electron acceptor, can accelerate charge separation and transfer, thereby promoting the generation of active oxygen species and improving the degradation performance of TiO2.22 Matos et al.23 first prepared nitrogen-doped biochar as an adsorbent and then combined it with TiO2 to obtain a TiO2/nitrogen-doped biochar catalyst; they found that the nitrogen functional groups in the biochar played an important role in enhancing the reaction activity. Vahidzadeh et al.24 prepared TiO2/GO, N-TiO2/GO, TiO2/N-GO, and N-TiO2/N-GO composite catalysts and used them to degrade acetaldehyde. The results showed that the N-TiO2/N-GO catalyst had the highest activity among these catalysts, due to the improvement of charge separation by its nitrogen sites.
In our previous work,25 we found that a proper amount of biochar was beneficial for improving the photocatalytic activity of TiO2. For the TiO2/biochar composite system, doping nitrogen simultaneously into the structures of TiO2 and biochar might provide an opportunity to enhance charge separation and facilitate the generation of active oxygen species; however, this strategy has rarely been studied in photocatalysis. For these reasons, a series of nitrogen-doped TiO2/nitrogen-containing biochar composite catalysts were prepared for the photodegradation of MO (as a model dye molecule). The morphology, microstructure, and light absorption of the samples were systematically studied by various characterization methods, including scanning electron microscopy (SEM), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), Raman spectroscopy, ultraviolet−visible diffuse reflectance spectroscopy (UV−vis DRS), and photoluminescence (PL) spectroscopy. In addition, the photodegradation kinetics, mechanism, and reusability of the catalysts were discussed in detail.
Catalyst Preparation.
Walnut shells were cleaned with deionized water and dried in a blast drying oven at 80 °C. After drying, the walnut shells were pulverized and passed through a 60-mesh sieve to select particles with a diameter of less than 0.25 mm. The walnut shell powder was then placed in a tube furnace, heated to 700 °C at a rate of 10 °C/min under a nitrogen atmosphere (purity 99.99%), and held at this temperature for 2 h to obtain walnut shell biochar, named WB700. The prepared walnut shell biochar was then calcined at 500 °C in an NH3 atmosphere for 1 h to obtain the nitrogen-doped biochar, named N-WB700.
TiO2/biochar composites were prepared by direct hydrolysis combined with high-temperature calcination. Ten milliliters of tetrabutyl titanate was added dropwise into 100 mL of ultrapure water and stirred for 12 h. Then, a certain amount of biochar (biochar-to-titanium mass ratio of 0.2/1) was added, and the mixture was stirred for 12 h. The prepared suspension was dried for 24 h in an air drying oven at 80 °C. After drying and grinding, the sample was heated in a tube furnace to 500 °C for 1 h under a nitrogen atmosphere. The obtained sample was named CT0.2/1.
The TiO2/nitrogen-doped biochar composite (named NCT0.2/1; the designation follows the naming used in the XRD discussion below) was prepared by a procedure similar to that of CT0.2/1, except that the nitrogen-doped biochar (N-WB700) was used. The nitrogen-doped TiO2/nitrogen-doped biochar composite catalysts were prepared by simple heat treatment under an NH3 atmosphere. Ten milliliters of tetrabutyl titanate was dispersed dropwise into 100 mL of ultrapure water. After being stirred for 12 h, a certain amount of biochar (mass ratio of biochar to titanium of 0.1/1, 0.2/1, 0.5/1, 0.8/1, or 1/1) was added to the mixed solution, and stirring was continued for 12 h. The suspension was dried in an air drying oven at 80 °C for 24 h. After drying and grinding, the sample was placed in a tube furnace and heated at a rate of 2 °C/min to 500 °C for 1 h under an NH3 atmosphere. The calcined samples were the nitrogen-doped composite catalysts, named NCNT0.1/1, NCNT0.2/1, NCNT0.5/1, NCNT0.8/1, and NCNT1/1, respectively. For comparison, a nitrogen-doped TiO2 catalyst without the biochar support was prepared and named N-TiO2.
Catalyst Characterization.
The crystal structure of the catalysts was measured by XRD (PANalytical) at room temperature over the angular range of 10−80°. The surface chemistry of the catalysts was analyzed by FT-IR (TENSOR27) and Raman spectroscopy (LabRAM HR800-LSS5), while the surface morphology was studied by SEM (Hitachi S-4800), and the surface element distribution was observed with an energy-dispersive spectrometer coupled to the scanning electron microscope. The chemical bonding states in the catalysts were analyzed by X-ray photoelectron spectroscopy (XPS, Thermo Scientific ESCALAB250Xi). The UV−vis DRS of the catalysts was measured at room temperature using a Shimadzu UV 3600 spectrophotometer within the wavelength range of 200−700 nm with BaSO4 as a reflectance standard. The PL spectra were measured with the LabRAM HR800-LSS5 at an excitation wavelength of 325 nm.
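Where DRS data of this kind are used to estimate an optical band gap, one common route is the Kubelka−Munk transform followed by a Tauc plot. The sketch below illustrates that route; the indirect-gap exponent and the crude selection of the absorption edge are our assumptions, as the paper does not describe its analysis procedure.

```python
import numpy as np

def tauc_band_gap(wavelength_nm, reflectance):
    """Estimate a band gap (eV) from diffuse reflectance via Kubelka-Munk
    and a Tauc plot, (F(R)*hv)^(1/2) vs hv, for an indirect-gap material."""
    R = np.clip(np.asarray(reflectance, float), 1e-3, 0.999)
    hv = 1239.84 / np.asarray(wavelength_nm, float)  # photon energy, eV
    f_km = (1 - R) ** 2 / (2 * R)                    # Kubelka-Munk function
    y = np.sqrt(f_km * hv)
    edge = y > 0.5 * y.max()                         # crude edge selection
    slope, intercept = np.polyfit(hv[edge], y[edge], 1)
    return -intercept / slope                        # x-intercept gives Eg

# Usage: Eg = tauc_band_gap(wavelengths_nm, measured_reflectance)
```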
Photocatalytic Degradation of MO.
The photocatalytic degradation experiments on methyl orange were carried out in a multi-position photocatalytic reactor with a circular reaction platform and a quartz cold trap. A long-arc mercury lamp (500 W) operated at constant voltage and current was used as the ultraviolet light source, with a dominant wavelength of 360 nm. In each experiment, 10 mg of the catalyst was added into 40 mL of the MO solution (20 mg/L), followed by stirring for 1 h in the dark to achieve adsorption equilibrium, and then the mercury lamp was turned on while stirring continued. To remove the catalyst, an appropriate amount of the solution was withdrawn from the test tube and passed through a 0.22 μm PES filter (JINTENG) for subsequent analysis. The photocatalytic activity of the catalyst was assessed by the decolorization and mineralization rates of MO; the absorbance of MO was measured by a spectrophotometer (Lambda 750, PerkinElmer) at the maximum absorption wavelength of 464 nm to determine the concentration. The mineralization rate was determined from the total organic carbon (TOC) obtained with a total organic carbon analyzer (Vario TOC, Elementar).
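The abstract reports that the degradation over NCNT0.2/1 follows the pseudo-first-order model with k = 4.02 × 10⁻² min⁻¹. Below is a minimal sketch of how such a constant is extracted from concentration−time data; the synthetic data are ours, generated only to illustrate the fit.

```python
import numpy as np

def pseudo_first_order_k(t_min, c_mo):
    """Fit ln(C0/Ct) = k*t; the slope k is the pseudo-first-order rate
    constant (min^-1)."""
    t = np.asarray(t_min, float)
    y = np.log(c_mo[0] / np.asarray(c_mo, float))
    k, _ = np.polyfit(t, y, 1)
    return k

# Synthetic data consistent with k ~ 4.02e-2 min^-1, for illustration only
t = np.array([0, 15, 30, 45, 60, 90])
c = 20.0 * np.exp(-4.02e-2 * t)
print(f"k = {pseudo_first_order_k(t, c):.3e} min^-1")
```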
The decolorization rate was calculated with eq 1:

decolorization rate (%) = ([MO]0 − [MO]t)/[MO]0 × 100    (1)

where [MO]0 is the initial MO concentration (mg/L) and [MO]t is the real-time MO concentration at irradiation time t (mg/L).
The mineralization rate was calculated with eq 2:

mineralization rate (%) = ([TOC]0 − [TOC]t)/[TOC]0 × 100    (2)

where [TOC]0 is the initial total organic carbon of the MO solution (mg/L) and [TOC]t is the real-time total organic carbon at time t (mg/L).
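A minimal sketch of how eqs 1 and 2 would be applied to measured data is given below; the numerical example is chosen to reproduce one of the efficiencies reported later for NCNT0.2/1 and is illustrative only.

```python
def decolorization_rate(mo_0: float, mo_t: float) -> float:
    """Eq 1: percent of MO decolorized at time t (concentrations in mg/L)."""
    return (mo_0 - mo_t) / mo_0 * 100.0

def mineralization_rate(toc_0: float, toc_t: float) -> float:
    """Eq 2: percent of TOC removed at time t (TOC in mg/L)."""
    return (toc_0 - toc_t) / toc_0 * 100.0

# With the paper's initial concentration (20 mg/L), a residual MO concentration
# of 0.48 mg/L corresponds to the 97.6% decolorization reported for NCNT0.2/1.
print(decolorization_rate(20.0, 0.48))
```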
Ultraperformance Liquid Chromatography-Quadrupole-Time-of-Flight Mass Spectrometry (UPLC-QTOF).
Intermediates formed during the degradation were analyzed with an ultrahigh-resolution quadrupole time-of-flight mass spectrometer (Agilent 1290-6540, USA). A C18 column (50 mm × 2.1 mm, particle size 1.9 μm) and a mobile phase consisting of 10 mM ammonium acetate (A) and acetonitrile (B) were used at a flow rate of 0.2 mL/min.
Catalyst Reuse Experiment.
NCNT0.2/1, the catalyst with the best performance, was selected for the reuse experiments. After each degradation experiment, the catalyst was washed with deionized water and dried at 80 °C for 24 h, and the dried catalyst was then used in the next run. All experimental data were acquired in triplicate under the same conditions and are expressed as mean ± SD.
Characterizations of the TiO2/Nitrogen-Doped Biochar Composite Catalyst.
As shown in Figure 1a,b, both WB700 and N-WB700, prepared by pyrolysis of walnut shells at 700 °C, exhibited a porous structure and high porosity, which is beneficial for the adsorption of organic matter. The morphologies of TiO2, N-TiO2, and NCNT0.2/1 are shown in Figure 1c,f−h. TiO2 particles several nanometers in size agglomerated into large particles of several tens of micrometers. N-TiO2, obtained by calcination under an NH3 atmosphere, is shown in Figure 1d; its morphology was similar to that of TiO2. As shown in Figure 1e, the TiO2 particles in NCNT0.2/1 were distributed more uniformly on the biochar surface than in pure TiO2, alleviating their agglomeration. At the same time, the porous structure of the biochar favored the adsorption of organic matter, enhancing the interaction between the organic matter and the catalyst. The heterostructure between TiO2 and the biochar facilitated the transfer of photogenerated electrons from TiO2 to the biochar, which enhances the photocatalytic performance of TiO2.
Figure 2 shows the SEM-EDS mappings of N-WB700, N-TiO2, and NCNT0.2/1, confirming the presence of nitrogen in all three catalysts. The EDS mapping of N-WB700 showed that N sites were doped into the biochar by pyrolysis of the walnut shell under an NH3 atmosphere, and a similar conclusion was drawn for N-TiO2 from its EDS mapping. For NCNT0.2/1, nitrogen was clearly present in both the biochar and the TiO2.
Figure 3 shows the XRD patterns of TiO2, N-TiO2, CT0.2/1, NCT0.2/1, NCNT0.2/1, and NCNT0.8/1. The two characteristic diffraction peaks at 2θ = 25.3 and 37.8° corresponded to the (101) and (004) crystal planes of anatase TiO2. 26 Compared with the brookite and rutile phases, anatase TiO2 has high catalytic activity, mainly because of its more abundant oxygen vacancies, lower dielectric constant, and higher electron mobility. 27 As shown in Figure 3, the diffraction pattern of TiO2 matched the characteristic peaks of anatase. However, no characteristic peak of nitrogen appeared for N-TiO2, which was attributed to the small amount of nitrogen and to the similar radii of N and Ti, which result in weak characteristic peaks. For CT0.2/1, NCT0.2/1, NCNT0.2/1, and NCNT0.8/1, the anatase peaks remained intense and well shaped, suggesting that the presence of biochar did not change the crystal form of TiO2. No characteristic peak of biochar appeared in the patterns, mainly because the biochar peaks are weak and lie close to the characteristic peaks of the anatase phase.
For the NCT0.2/1, NCNT0.2/1, and NCNT0.8/1 composite catalysts, no characteristic peaks of nitrogen were observed in the XRD patterns either, mainly because of the small amount of nitrogen in the samples. In short, composite catalysts with the anatase crystal form were successfully synthesized.
Figure 4 shows the Raman spectra of the prepared samples. For WB700, N-WB700, CT0.2/1, NCT0.2/1, NCNT0.2/1, and NCNT0.8/1, the D and G peaks were located at 1348.1 and 1590.1 cm−1 and are assigned to a carbon defect-induced Raman peak and an ordered graphitic structure, respectively. The presence of biochar in the composite catalysts was therefore confirmed. For anatase-phase TiO2, peaks at 142.0, 396.3, 513.1, and 637.2 cm−1 were clearly observed, corresponding to the Eg, B1g, B1g + A1g, and weak Eg Raman vibration modes, respectively. 28
In the XPS survey spectra, peaks of C 1s, Ti 2p, O 1s, and N 1s were observed for N-WB700, N-TiO2, and NCNT0.2/1, indicating that the catalysts were composed of C, Ti, O, and N elements. Figure 6 shows the N 1s XPS spectra of CT0.2/1, N-WB700, N-TiO2, and NCNT0.2/1. As shown in Figure 6a, the N 1s signal of CT0.2/1 was weak, indicating that there was essentially no nitrogen on the surface of this catalyst. As shown in Figure 6b, the N 1s spectrum of N-WB700 could be deconvoluted into four peaks, corresponding to pyridinic nitrogen at 398.3 eV, pyrrolic nitrogen at 399.6 eV, graphitic nitrogen at 400.7 eV, and NOx at 402.0 eV. 11 The N 1s spectrum of N-TiO2 was fitted with four peaks located at 397.3, 399.2, 400.2, and 402.0 eV. 29 The peak at 397.3 eV was attributed to the Ti−N bond of substitutional N dopants, while the peaks at 399.2 and 400.2 eV corresponded to interstitial N dopants, as stated by Fujishima et al. 30 The peak at 402.0 eV was attributed to chemisorbed NOx. Generally, when nitrogen enters the TiO2 lattice in substitutional or interstitial form, new impurity levels form between the valence and conduction bands, which narrows the band gap and improves the photocatalytic activity of TiO2. 31,32 The N 1s spectrum of NCNT0.2/1, shown in Figure 6d, was deconvoluted into five peaks located at 397.3, 398.4, 399.4, 400.7, and 402.6 eV. The peaks at 397.3 and 402.6 eV were assigned to substitutional nitrogen and NOx, respectively, while the peaks at 398.4, 399.4, and 400.7 eV were derived from interstitial nitrogen, indicating that nitrogen atoms occupied interstitial sites. For NCNT0.2/1, nitrogen thus entered the TiO2 lattice in both substitutional and interstitial states, which narrows its band gap and improves its visible-light response.
As shown in Figure 7, all of the catalysts exhibited an absorption peak below 380 nm, indicating strong ultraviolet light absorption by the samples. Owing to the presence of nitrogen and biochar, the absorption of the composite catalysts in the visible-light band was enhanced, and the visible-light absorbance increased gradually with increasing biochar content.
The band gap of each catalyst was estimated with the Kubelka−Munk equation, and the value was extracted by the baseline method described by Makuła et al. 33 As shown in Figure 8, the Eg values of the N-TiO2, NCNT0.1/1, NCNT0.2/1, NCNT0.5/1, NCNT0.8/1, and NCNT1/1 catalysts were 3.01, 3.00, 3.01, 3.03, 3.03, and 3.05 eV, respectively. The decrease in the band gap of the nitrogen-doped catalysts is attributed to the nitrogen in the TiO2 promoting the generation of substitutional or interstitial states. PL spectra are commonly used to study the efficiency of electron−hole pair separation; in general, a higher PL intensity is associated with a higher recombination rate of photogenerated charges and hence lower photocatalytic activity. 34 As shown in Figure 9, similar PL spectra were observed for these samples at an excitation wavelength of 325 nm (strongest peak at about 390 nm). NCNT0.2/1 showed the lowest PL intensity, indicating the highest charge-separation efficiency and photocatalytic activity.
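The band-gap extraction described above can be illustrated with a short Tauc-plot sketch. The reflectance curve below is synthetic, the exponent 1/2 assumes an indirect allowed transition (as for anatase), the linear-fit window is an assumed choice, and the baseline is simplified to zero rather than the fitted baseline of Makuła et al.

```python
import numpy as np

# Synthetic DRS data over the paper's 200-700 nm scan range (placeholder only).
wavelength_nm = np.linspace(200, 700, 501)
reflectance = np.clip(0.05 + 0.9 / (1 + np.exp((410 - wavelength_nm) / 8)), 1e-3, 1)

h_nu = 1239.84 / wavelength_nm                       # photon energy, eV
f_km = (1 - reflectance) ** 2 / (2 * reflectance)    # Kubelka-Munk function F(R)
tauc = (f_km * h_nu) ** 0.5                          # exponent 1/2: indirect gap

# Fit the steep linear region of the Tauc plot and extrapolate to the axis.
mask = (h_nu > 3.0) & (h_nu < 3.3)                   # assumed linear window
slope, intercept = np.polyfit(h_nu[mask], tauc[mask], 1)
print(f"Estimated Eg = {-intercept / slope:.2f} eV") # x-intercept of the fit
```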
As shown in Figure 10, the decolorization efficiencies of CT0.2/1 and the nitrogen-doped composite catalysts were compared. When the catalyst was doped with nitrogen, the degradation rate of MO improved because of the synergistic effect of nitrogen-doped TiO2 and nitrogen-doped biochar. NCNT0.2/1 showed the highest photocatalytic activity, mainly because of its optimal biochar content: excess biochar has a light-filtering effect, hindering light from reaching the catalyst surface and suppressing photon absorption on the TiO2 surface, thereby reducing catalyst performance.
To explore the mechanism behind the activity of the nitrogen-doped TiO2/nitrogen-doped biochar composite catalyst, a series of catalysts was prepared for control experiments. Figure 11 shows the results of these MO degradation experiments. In the photolysis experiment, the decolorization and mineralization efficiencies were only 2.1 and 1.8%, indicating that MO molecules hardly self-degrade under illumination alone. With TiO2, the decolorization and mineralization rates were 52.6 and 34.2%; with N-TiO2, they were 78.4 and 60.6%. The increase in activity may arise from a new nitrogen impurity level above the valence band, formed by interstitial or substitutional nitrogen in the TiO2 structure, which narrows the semiconductor band gap and increases absorption in the visible-light region. 35 Compared with pure TiO2, the decolorization and mineralization rates of MO on CT0.2/1 increased by 45.2 and 64.4%, respectively. This increase is mainly attributed to the biochar promoting the separation of photogenerated electrons in TiO2 and inhibiting electron−hole recombination. The degradation of MO by NCT0.2/1 was faster than by CT0.2/1, possibly because the pyridinic, pyrrolic, and graphitic nitrogen in the biochar improve electron transfer; a similar conclusion has been reported in the literature, where nitrogen-doped biochar separated photogenerated charge carriers more efficiently than undoped biochar. 36 NCNT0.2/1 had the highest degradation efficiency for methyl orange, with decolorization and mineralization rates of 97.6 and 83.1%, respectively. The electron-transfer efficiency of the biochar also improved after nitrogen doping, which promoted photoelectron transfer from N-TiO2. NCNT0.2/1 therefore performed best, indicating that the synergistic effect of nitrogen-doped TiO2 and nitrogen-doped biochar is beneficial to catalyst activity.
Degradation Kinetics of Methyl Orange.
Figure 12a shows the degradation-time curves of methyl orange for the NCNT0.1/1, NCNT0.2/1, NCNT0.5/1, NCNT0.8/1, and NCNT1/1 composite catalysts. As illustrated in Figure 12b, the pseudo-first-order kinetic (PFOK) model fits most of the experimental data well; the kinetic parameters determined with the PFOK model are listed in Table S1. All R² values were greater than 0.99, so the photocatalytic degradation follows first-order kinetics. NCNT0.2/1 had the largest rate constant, 4.0 × 10−2 min−1; these rate constants, obtained from the kinetic analysis, provide guidance for practical applications.
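The PFOK fit reported in Table S1 amounts to a linear regression of ln(C0/Ct) against irradiation time. A minimal sketch on synthetic data (not the measured series) is shown below.

```python
import numpy as np

# Synthetic concentration series generated with k = 0.04/min for illustration.
t = np.array([0, 20, 40, 60, 80], dtype=float)   # irradiation time, min
c = 20.0 * np.exp(-0.04 * t)                     # MO concentration, mg/L

y = np.log(c[0] / c)                             # ln(C0/Ct), linear in t for PFOK
k, intercept = np.polyfit(t, y, 1)               # slope = apparent rate constant
ss_res = np.sum((y - (k * t + intercept)) ** 2)
r2 = 1 - ss_res / np.sum((y - y.mean()) ** 2)
print(f"k = {k:.3f} min^-1, R^2 = {r2:.3f}")     # recovers k = 0.040 here
```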
Catalyst Reproducibility.
The stability and reproducibility of a catalyst are important for industrial applications. In this work, NCNT0.2/1 was selected for five repeated-use experiments because of its highest photocatalytic activity. As shown in Figure 13, after five cycles NCNT0.2/1 maintained relatively high activity for the removal of MO, with decolorization and mineralization rates of 92.7 and 75.7%, respectively. The slightly decreased degradation efficiency was mainly due to loss of catalyst between runs. The results show that NCNT0.2/1 exhibits desirable stability and reproducibility.
In the UPLC-QTOF chromatograms, the peaks at RT 12.2 and RT 15.3 min were attributed to intermediates produced by the self-degradation of MO. After irradiation with UV light for 40 min, the peak intensity at RT 4.5 min decreased sharply because of the rapid decolorization of MO. Meanwhile, new peaks appeared at RT 0.9, 1.5, 3.1, 3.2, and 3.8 min. The peak at 0.9 min (m/z 290) was related to the anion formed by cleavage of an N−C bond of the dimethylamino group with a proton-substituted methyl group. 15 The peak at 3.1 min (m/z 276) corresponded to cleavage of both N−C bonds of the dimethylamino group, 37 and the peak at 3.2 min was related to the monohydroxylation product of the MO molecule. The peak at 12.2 min (m/z 255) belonged to the dihydroxylated product of the MO molecule. The azo bond is then destroyed to form intermediates such as m/z 205, m/z 199, and m/z 157, which are attacked by reactive oxygen species and ultimately converted to small molecules such as CO2. Figure 14 presents the intermediate structures as well as the degradation pathway of MO.
In the photocatalytic reaction, hydroxyl radicals (·OH), superoxide anion radicals (O2·−), and holes (h+) are the common active species that probably participate in the degradation of organic matter. Active-species trapping experiments were carried out using TEOA, IPA, and BQ as the h+, ·OH, and O2·− trapping agents, respectively. As presented in Figure 15, after the addition of TEOA, IPA, and BQ, the decolorization efficiency of MO decreased to 74.3, 29.6, and 47.4%, respectively. It can be concluded that all three active species participate in the oxidation process, among which ·OH plays the dominant role.
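To make the trapping comparison concrete, the snippet below expresses each scavenger's effect as a suppression relative to the scavenger-free decolorization; taking 97.6% as that baseline is an assumption carried over from the fresh-catalyst result.

```python
# Efficiencies as reported above; the scavenger-to-species mapping follows the
# corrected assignment (TEOA -> h+, IPA -> .OH, BQ -> O2.-).
baseline = 97.6
scavengers = {"TEOA (h+)": 74.3, "IPA (.OH)": 29.6, "BQ (O2.-)": 47.4}

for name, eff in scavengers.items():
    drop = baseline - eff
    print(f"{name}: decolorization falls by {drop:.1f} points "
          f"({drop / baseline:.0%} suppression)")
# IPA produces the largest suppression, consistent with .OH dominating.
```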
The possible degradation mechanism of MO by NCNT0.2/1 is shown in Figure 16. The well-developed pore structure and abundant oxygen-containing functional groups of the biochar in NCNT0.2/1 provide additional sites for the adsorption of organic pollutants. Under UV irradiation, electrons in the N-TiO2 are excited and migrate to the conduction band, while holes (h+) remain in the valence band. Simultaneously, the substitutional and interstitial nitrogen in the structure of NCNT0.2/1 improves the utilization efficiency of visible light. This nitrogen also promotes the transfer of photogenerated electrons from TiO2 to the biochar, which enhances the charge-separation efficiency. The photogenerated electrons on the biochar surface may react with O2 molecules to produce O2·−, which can further react with H2O molecules to generate ·OH. In addition, h+ is trapped by H2O molecules, also producing ·OH. In the photocatalytic process, MO molecules readily react with these active species (h+, O2·−, and ·OH) and are finally mineralized into CO2.
CONCLUSIONS
A series of nitrogen-doped TiO2/nitrogen-doped biochar catalysts was prepared for the photodegradation of MO in aqueous solution. In the photocatalytic control experiments, the removal efficiency followed the trend NCNT0.2/1 > NCT0.2/1 > N-TiO2 > CT0.2/1 > TiO2, with NCNT0.2/1 showing the highest catalytic activity: decolorization and mineralization rates of 97.6 and 83.1%, respectively. The characterization analyses indicated that the synergistic effect between nitrogen-doped TiO2 and nitrogen-doped biochar improved the activity of the catalysts. Furthermore, NCNT0.2/1 retained relatively high degradation activity for MO after five reuse cycles. This NCNT0.2/1 catalyst, with desirable activity and reproducibility for the treatment of environmental pollutants, also offers a beneficial route for the disposal of discarded walnut shells.
Parameters of the degradation kinetics of MO, chromatograms monitored in full scan MS, MS spectrum, and UV absorption spectra (PDF) | 2022-12-25T16:06:23.684Z | 2022-12-23T00:00:00.000 | {
"year": 2022,
"sha1": "fa440cb7a235ca6cc535b39638b2ee55bd0a8896",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ff654eb8a7ea387823027dfa35fa2bb8fc53a5d7",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252341668 | pes2o/s2orc | v3-fos-license | Dehumanization as a Response to Uncivil and Immoral Behaviors
Theoretical approaches to dehumanization consider civility to be an attribute of human uniqueness (HU). However, studies that explore the links between civility and humanness are scarce. More precisely, the present research tests whether there is a consistent relationship between civility and HU. Method and results: The first study (N = 192; Mage = 19.91; SD = 2.70; 69% women) shows that individuals infer more HU traits in the agents of civil behaviors compared to agents of other positive behaviors that are not related to civility. The second study (N = 328; Mage = 19.69; SD = 3.65; 77% women) reveals that uncivil and immoral behaviors displayed a similar pattern of inference of HU traits; however, moral behaviors were more associated with human nature than civil behaviors. Conclusions: Overall, results confirmed that civil behaviors facilitate the inference of humanness, specifically of HU traits, and that civil and moral behaviors are not equivalent in terms of the human inferences to which they lead.
Introduction
Social norms are essential sources of information that people use to orient themselves in the complex social world. Among social norms, research points out the importance of civility in human interactions [1][2][3][4]. When we talk about civility, we refer to a type of ethical behavior that includes courtesy, manners, and good citizenship. However, experts today argue that civility encompasses more than good manners and etiquette. It also requires an awareness that extends beyond the self, involving respect and concern for the well-being of others and of the community [5,6]. Therefore, along with good manners, civil behavior involves tolerance, self-restraint, commitment to others, social involvement, responsibility, and an active engagement in creating, protecting, and sustaining the community [7].
In recent decades, the social sciences have increasingly focused on its opposite, incivility [8,9]. In fact, uncivil behavior increasingly appears as the subject of daily conversation and media coverage because it involves an attack on the community's social norms. Unlike criminal acts, uncivil behaviors are not serious or dangerous enough to merit police attention or to be a reason for systematic repression. In addition, on many occasions they are carried out with no clear intent to harm other citizens [10]. However, they do negatively affect people and often threaten those exposed to them [11]. In this sense, uncivil behaviors have been identified as one of the urban factors that produce the most stress among citizens and that most reduce their quality of life within the community [12,13]. Incivility thus becomes a negative element of human social behavior that clearly affects public health; it must be understood in order to design effective interventions to eradicate it.
The two studies presented below examine the links between uncivil behavior and dehumanization, that is, the denial to a person of the full potential to be considered human. More precisely, the aim of the first study is to verify whether civility is related to one of the specific components of humanness, human uniqueness (HU). In the second study, we extend the analysis of the link between civility and human uniqueness to morality, which constitutes a theoretical advancement in the understanding of the constructs of civility and humanness.
The Relationship between Humanness and Civility
Most of the measures of dehumanization have indirectly shown a possible link between civility and humanness [14][15][16]. Probably the most direct link between civility and humanness appears in the dual model of dehumanization [17], which defines humanness in two ways: via essential attributes that do not distinguish humans from other creatures (but that constitute humans' natural attributes) or as attributes exclusive to humans (compared to other species). The former comprises attributes of human nature (HN), including emotions, warmth, open-mindedness, agency, and the capacity for depth. Therefore, dehumanizing a group by depriving it of these attributes would equate to turning its members into automata (mechanistic dehumanization). The latter comprises attributes of human uniqueness (HU), which include civility, refinement, moral sensitivity, rationality, and maturity. Therefore, dehumanizing a group by depriving it of these attributes would equate to turning its members into animals (animalistic dehumanization).
The distinction between these two types of dehumanization has been the subject of much research [18]. However, within the framework of this theory, there is a general lack of empirical studies showing that the characteristics thought to be associated with HU are indeed empirically associated with this sense of humanness. Specifically, no exploration has been made of the extent to which people perceive a relationship between civil behaviors and humanness, and, consequently, the extent to which they make dehumanizing inferences about those who exhibit uncivil behaviors. In this sense, exploring the link between civility and HU traits constitutes a theoretical advancement in the understanding of the construct of dehumanization, especially in the theoretical framework of the dual model of dehumanization.
Moral Behaviors, Civil Behaviors, and Humanness
Previous research has found that people perceive morality as distinctively human, with immorality representing a lack of full humanness. For example, the relationship between immoral behavior and dehumanization appears in several studies by Bastian et al. [19,20], in which agents who performed harmful behaviors often faced dehumanization. Other investigations found that targets who were perceived as lacking moral qualities (e.g., low levels of honesty, sincerity, or trustworthiness) were attributed fewer human traits than were highly moral targets.
According to the dual model of dehumanization [17], morality and civility are both features of HU. Haslam [17] argued that the skills necessary to demonstrate competence (rationality and maturity) and to be moral (moral sensibility) are both higher-order cognitions exclusive to human beings, that is, HU traits. In line with this reasoning, no differences should be expected in the attribution of HU traits to targets performing civil versus moral behaviors. The present research sought to extend prior research on the dehumanized perception of perpetrators of immoral behaviors by testing whether uncivil behaviors produce a similar effect on the ascription of humanness.
The Present Research
In spite of the theoretical relevance of the concept of civility in the study of dehumanization, there has been little research into civil behavior or how it is associated with humanness. A recent study by Rodríguez-Gómez et al. [21] concluded that both uncivil and civil behaviors are implicitly associated with human concepts. However, to our knowledge, there is no evidence of the role of civil and uncivil behaviors in the inference of uniquely human traits. Although previous studies showed that the subtle ascription or denial of humanness has implications for judgments of blame or punishment [19], no study has tested the inference of humanness when viewing others performing (un)civil and (im)moral acts.
The purpose of the present research is to clarify the association between civility and humanness. Specifically, it seeks to test out the hypothesis that civility is related to one of the specific dimensions of humanness: human uniqueness.
In the first study, we compare the trait inferences displayed toward the targets of civility versus other positive behaviors, and incivility versus other negative behaviors. The second study further analyses how civility and HU link to morality. Specifically, we compare the trait inferences that are displayed toward the targets of civil, moral, and other positive behaviors, and the trait inferences that are displayed toward the targets of uncivil, immoral, and other negative behaviors. All data and study materials are available for download at https://osf.io/jfg6a/?view_only=6a1e6b210338480e9eb383d191c80a76 (accessed on 12 September 2022).
Study 1
The present study aims to explore whether civil behaviors could more often lead to the attribution of HU traits compared to other positive behaviors unrelated to civility. Simultaneously, we explore whether uncivil behaviors (through a lower attribution of HU traits compared to other negative behaviors unrelated to incivility) more often cause animalistic dehumanization compared to these other behaviors, as predicted by Haslam's theory [17]. Therefore, we expect: a greater inference of traits with high HU-low HN compared to other traits for civil behaviors (Hypothesis 1-H1), a lower inference of traits with high HU-low HN compared to other traits for uncivil behaviors (Hypothesis 2-H2), a greater inference of traits with high HU-low HN for civil behaviors compared to positive behaviors (Hypothesis 3-H3), and a lower inference of traits with high HU-low HN for uncivil behaviors compared to negative behaviors (Hypothesis 4-H4).
Participants
A total of 192 university students (132 female), all residents in Spain, participated in this study (n = 47 in the civil, n = 49 in the uncivil, n = 46 in the positive neutral, and n = 50 in the negative neutral conditions). The average age was 19.91 years (SD = 2.70), ranging from 18 to 43. A sensitivity analysis conducted with G*Power [22] revealed that the sample was sufficient to detect small effects of f = 0.10 (equivalent to partial η²p = 0.01), assuming an alpha coefficient of 0.01 and a power of 0.95 (mean correlation among repeated measures = 0.83).
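As a rough cross-check of this sensitivity analysis, one can solve for the minimal detectable effect in Python; the repeated-measures adjustment used below is one common convention and may differ in detail from G*Power's internal parameterization.

```python
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

# Hypothetical re-creation of the G*Power sensitivity analysis.
n_total, alpha, power = 192, 0.01, 0.95
m, rho = 4, 0.83          # number of repeated measures and their mean correlation

solver = FTestAnovaPower()
f_adjusted = solver.solve_power(effect_size=None, nobs=n_total,
                                alpha=alpha, power=power, k_groups=4)
# Back out the unadjusted f using f_adj = f * sqrt(m / (1 - rho)).
f_raw = f_adjusted / np.sqrt(m / (1 - rho))
print(f"minimal detectable effect: adjusted f = {f_adjusted:.3f}, raw f = {f_raw:.3f}")
```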
Instruments
The behaviors. Two civil behaviors and two uncivil behaviors were selected from a pretest study of civil and uncivil behaviors (N = 360; n = 261 female participants; Mage = 20.01; SD = 3.46). Specifically, the two civil behaviors were: "Think about the people who pick up after their dog after it has done its business when out on a walk" and "Think about the people who deposit their glass bottles in the glass recycling bins". The two uncivil behaviors were: "Think about the people who don't use the bike lane" and "Think about the people who leave their garbage out on the street instead of placing it in the bin". In addition, four behaviors unrelated to civility were included in the study. These two positive and two negative behaviors had been identified in another pretest study (N = 64; n = 52 female participants; Mage = 20.50; SD = 3.91). The positive behaviors were "Think about the people who do their grocery shopping" and "Think about the people who tend to go for walks". The negative behaviors were "Think about the people who give their uninformed opinion about anything" and "Think about the people who waste their time instead of making the most of it". The criteria for selecting the civil and uncivil behaviors required that they represent high versus low civility and simultaneously be comparable in valence to the behaviors unrelated to civility. We compared the means of the civil and uncivil behaviors to ensure differences in civility and valence for each type of behavior. A further comparison of means verified that the civil behaviors differed from the neutral positive behaviors in civility but not in valence; the same analysis for the uncivil and neutral negative behaviors confirmed differences in civility but not in valence (see the Supplementary Materials for details of the analysis).
The traits. The traits presented were all positive and represented the four groups defined by Haslam and Bain [23], resulting from crossing the HN and HU dimensions. The traits were selected from a pretest study that included 144 traits with scores for HN, HU, and valence (N = 100; n = 70 female; n = 30 male; Mage = 20.12; SD = 3.48). The 16 traits selected included the high HN-high HU traits of passionate, idealistic, imaginative, and rational; the high HN-low HU traits of active, curious, efficient, and emotional; the low HN-high HU traits of cultured, humble, tolerant, and refined; and the low HN-low HU traits of uninterested, relaxed, satisfied, and serene (see the Supplementary Materials for all analyses).
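For clarity, the 16 experimental traits can be laid out by the two crossed humanness dimensions, as in this small data structure (contents copied from the list above):

```python
# The 16 experimental traits arranged by the crossed HN x HU dimensions.
traits = {
    ("high HN", "high HU"): ["passionate", "idealistic", "imaginative", "rational"],
    ("high HN", "low HU"):  ["active", "curious", "efficient", "emotional"],
    ("low HN",  "high HU"): ["cultured", "humble", "tolerant", "refined"],
    ("low HN",  "low HU"):  ["uninterested", "relaxed", "satisfied", "serene"],
}
assert sum(len(v) for v in traits.values()) == 16
```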
Procedure
Participants completed one of four versions of a paper-and-pencil questionnaire that asked them to form an impression of the type of people who exhibit certain behaviors and then to respond to a list of traits. Each questionnaire described two behaviors. For the civil condition, the questionnaire included two episodes of civil behaviors. For the uncivil condition, the questionnaire described two episodes of uncivil behaviors. Two control conditions contained two episodes of positive (vs. negative) behaviors that were unrelated to civility.
After reading each description, the participants received a list of 32 traits. Specifically, they were asked to score each trait according to the image they had formed of the person exhibiting the behaviors, on a scale from 1 (The trait does not describe at all the type of person I imagined) to 5 (The trait fully describes the type of person I had imagined). Of the 32 traits listed, 16 corresponded to the four types of traits defined by Haslam and Bain [23], and the other 16 were fillers. The experimental traits were mixed with the fillers, and the list was presented in two different orders. Half of the participants responded to one list, and the other half responded to the same list presented in reverse order.
Data Analysis
IBM SPSS Statistics (Version 25) was used for the analyses, with the significance level set at 0.05. After a 2 × 2 × 2 × 2 analysis of variance (ANOVA), several post hoc Bonferroni-corrected tests were conducted on the four types of traits derived from crossing high versus low HU with high versus low HN. First, the scores on the four trait types within each behavior were compared to test H1 and H2. Second, the four behaviors were compared on each trait type to test H3 and H4.
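A sketch of how this mixed 2 × 2 × 2 × 2 design could be analyzed outside SPSS is given below; it uses simulated ratings and a linear mixed model with a random intercept per participant as a stand-in for the repeated-measures ANOVA, so it illustrates the structure of the analysis rather than reproducing the reported results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the 2 (type) x 2 (valence) x 2 (HN) x 2 (HU) design.
rng = np.random.default_rng(0)
rows = []
for pid in range(192):
    btype = "civility" if pid % 2 == 0 else "neutral"            # between-subjects
    valence = "positive" if (pid // 2) % 2 == 0 else "negative"  # between-subjects
    for hn in ("high", "low"):                                   # within-subjects
        for hu in ("high", "low"):                               # within-subjects
            rows.append({"pid": pid, "type": btype, "valence": valence,
                         "HN": hn, "HU": hu, "rating": rng.normal(3.0, 0.5)})
df = pd.DataFrame(rows)

model = smf.mixedlm("rating ~ type * valence * HN * HU", df, groups=df["pid"])
print(model.fit().summary())   # the four-way interaction term is of interest

# Bonferroni-corrected follow-ups would then compare the four trait types within
# each behavior, e.g. paired t tests with alpha divided by the number of tests.
```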
Results
To verify whether HU traits are more characteristic than other traits for civil behaviors and less characteristic than other traits for uncivil behaviors, a 2 (type of behavior: civility vs. neutral) × 2 (valence of behavior: positive vs. negative) × 2 (HN traits: high vs. low) × 2 (HU traits: high vs. low) ANOVA was carried out, with the behavior type and behavior valence as between-subjects variables and HN and HU as within-subjects variables. The differences in the means of the traits that scored high and low in HN and HU for each type of behavior are shown in Supplementary Table S4.
The ANOVA showed that the four-way interaction was statistically significant, F(1, 188) = 3.96, p = 0.048, η²p = 0.02 (the ANOVA results are summarized in Supplementary Table S5). To test our hypotheses, we ran several post hoc Bonferroni-corrected tests between traits for each behavior. First, the results for civil behaviors revealed that, in line with H1, the high HU-low HN traits were more characteristic than the other traits (p < 0.001, Cohen's d = 0.64 for high HU-high HN; p = 0.037, d = 0.25 for low HU-high HN; p < 0.001, d = 0.96 for low HU-low HN).
Second, the results also confirmed H2's expected lower inference of traits that were high HU-low HN compared to other traits for uncivil behaviors. Specifically, the post hoc Bonferroni-corrected tests showed that traits for high HU-low HN were less characteristic than were other traits (p < 0.001, d = 0.70 for high HU-high HN; p < 0.001, d = 0.83 for low HU-high HN; p < 0.001, d = 1.51 for low HU-low HN).
This pattern of responses in civility behaviors did not occur in non-civility positive behaviors. Here, instead of highlighting traits that were high HU-low HN, the traits for low HU-high HN were more characteristic than were other traits (p < 0.001, d = 0.95 for high HU-high HN; p < 0.001, d = 1.2 for high HU-low HN, and p < 0.001, d = 1.09 for low HU-low HN). Finally, for negative behaviors, traits that were high HU-low HN were less characteristic than were other traits (p = 0.003, d = 0.41 for high HU-high HN; p = 0.021, d = 0.30 for low HU-high HN; p < 0.001, d = 1.26 for low HU-low HN).
Third, to verify whether high HU-low HN traits are more characteristic of civil behaviors than of non-civility positive behaviors (H3) and less characteristic of uncivil behaviors than of non-civility negative behaviors (H4), we performed several post hoc Bonferroni-corrected tests between types of behavior on each trait. The results showed that high HU-low HN traits were more characteristic of civil behaviors than of positive behaviors (p < 0.001, d = 1.20). As expected, there was a greater inference of high HU traits for civil behaviors than for positive ones. No statistically significant differences appeared between the behaviors for the other three types of traits. Finally, confirming H4, high HU-low HN traits were less characteristic of uncivil than of negative behaviors (p < 0.001, d = 0.79). The same pattern appeared for high HU-high HN traits between uncivil and negative behaviors (p = 0.037, d = 0.46).
Discussion
The aim of Study 1 was to test whether the agents of civil behaviors display the attribution of HU traits to a greater extent than the agents of other positive behaviors. We expected the reverse to be the case for uncivil behaviors. The results confirmed our hypothesis, showing that people infer high HU traits more often when observing other people performing civil behaviors than when performing other positive behaviors, and infer high HU traits less often when observing other people performing uncivil behaviors than when performing other negative behaviors.
The results support Haslam's theory [17], which associates civility with the attributes of human uniqueness. Indeed, in comparison with the other positive behaviors, only civil behaviors facilitate the attribution of this type of trait; these traits are not inferred for uncivil behaviors, nor for behaviors unrelated to civility, whether positive or negative. Conversely, the perpetrators of uncivil behaviors score low on this type of trait. In accordance with Haslam's model [17], and in line with our hypotheses, observing uncivil behaviors gives rise to an animalistic dehumanization of those exhibiting them.
Furthermore, our results for civil and uncivil behaviors differed from those obtained for positive and negative behaviors. HU traits are related to a cognitively sophisticated sense of humanness, in which the socialization process in a particular culture plays an important role [24]. Civility clearly reflects the specific cultural learning required by HU traits, whereas our results indicate that positive behaviors are less closely related to HU.
Study 2
Study 1 showed that civil and uncivil behaviors are associated with the attribution of traits of HU and that civility varies from other types of positive and negative behaviors. In Study 2, we investigated the difference between civil and moral behaviors in the attribution of HU traits.
Is there any difference between civility and morality in the attribution of humanness? Insofar as morality serves to regulate cooperation [25] and to suppress or regulate self-interest [26], it can be confused with civility. In fact, different authors have linked civil behaviors to morality [27,28]. However, several studies have shown relevant differences between moral acts and civil acts. For some scholars, morality is based on moral norms, whereas civility is based on conventional norms [29,30]. Moral norms include acts perceived as "objectively obligated," whereas conventional norms follow situation-dependent rules [31]. In this sense, moral norms are considered universal, because they have also been used to proscribe behaviors in other countries and at other times in history, whereas conventional norms are often localized [32,33].
However, although civility and morality have been conceptually differentiated, the literature on dehumanization considers both dimensions to be equally related to the attribution of HU. In one of the early works about the dual model of dehumanization, Haslam [17] (p. 257) posits that "when UH characteristics are denied to others, they should be seen as lacking in refinement, civility, moral sensibility, and higher cognition". In a recent work conducted by Rodríguez-Pérez [34], the relationship between the dimensions of sociability, morality, and competence and the dual model of dehumanization was explored. The authors concluded that, although competence has great power in predicting HU, morality also plays a relevant role. Therefore, according to the scarce previous research, it can be expected that there will be no differences in the attribution of HU traits when observing people performing civil behaviors and moral behaviors.
We developed four hypotheses in this study to confirm a greater inference of high HU-low HN traits than with other traits for civil and moral behaviors (Hypothesis 1-H1), a lower inference of high HU-low HN traits than with other traits for uncivil and immoral behaviors (Hypothesis 2-H2), an equal inference of high HU-low HN traits in civil and moral behaviors but higher than with positive behaviors (Hypothesis 3-H3), and an equal inference of high HU-low HN traits in uncivil and immoral behaviors but lower than with negative behaviors (Hypothesis 4-H4).
Participants
The participants in this study were 328 university students from Spain (n = 51 in the civil, n = 51 in the uncivil, n = 55 in the moral, n = 51 in the immoral, n = 60 in the positive neutral, and n = 60 in the negative neutral conditions). The participants' ages ranged from 18 to 58 years (M = 19.69, SD = 3.65); 253 were female. A sensitivity analysis conducted with G*Power [22] revealed that the sample was sufficient to detect small effects of f = 0.10 (equivalent to partial η²p = 0.01), assuming an alpha coefficient of 0.01 and a power of 0.99 (mean correlation among repeated measures = 0.79).
Instruments
The behaviors. The same two civil behaviors and two uncivil behaviors presented in Study 1 were included. In the moral condition, two moral behaviors were used: "Think about the people who do not cheat on a test even if they have the answers in front of them" and "Think about the people who stand up for a friend when they are being teased or harassed", whereas in the immoral condition, the following behaviors were used: "Think about the people who cheat on their wife/husband/girlfriend/boyfriend" and "Think about the people who bad-mouth a good friend behind their back". A pretest study was conducted to test differences in the civility and morality of the behaviors. The analyses showed that there were differences in civility between civil and moral behaviors and between uncivil and immoral behaviors: civil and uncivil behaviors were more strongly related to civility than moral and immoral behaviors, respectively. The same analyses were conducted for morality, again showing differences between civil and moral behaviors and between uncivil and immoral behaviors: moral and immoral behaviors were more strongly related to morality than civil and uncivil behaviors, respectively (see the Supplementary Materials for details of the analysis).
The traits. The same traits that were presented in Study 1 were included.
Procedure
Following Study 1, in the classroom, the participants completed one of six versions of a paper-and-pencil questionnaire in which they were asked to form an impression of the type of people who exhibit certain behaviors and then respond to a list of traits. Each questionnaire contained a description of two behaviors. After reading each description, the participants were given a list of 32 traits. Specifically, they were asked to score each trait in accordance with the image that they had formed of the person exhibiting the behavior, on a scale from 1 (The trait does not describe at all the type of person I imagined) to 5 (The trait fully describes the type of person I had imagined).
Data Analysis
IBM SPSS Statistics (Version 25) was used for the analyses. A significance level of 0.05 was set. After carrying out a 3 × 2 × 2 × 2 ANOVA, we conducted several post hoc Bonferroni-corrected tests with the four types of traits derived from the crossing of high versus low values in HU and high versus low values in HN. First, we compared the scores on the four traits in each behavior to test H1 and H2. Then we compared the four behaviors in each trait to test H3 and H4.
Results
To verify the hypotheses, a 3 (type of behavior: civility vs. moral vs. neutral) × 2 (valence of behavior: positive vs. negative) × 2 (HN traits: high vs. low) × 2 (HU traits: high vs. low) ANOVA was carried out, with the type of behavior and valence of behavior as between-subjects variables and HN and HU as within-subjects variables. The differences in means of the traits that scored high and low in HN and HU for each type of behavior are shown in Supplementary Table S6.
The ANOVA showed that the four-way interaction was statistically significant, F(2, 322) = 7.10, p < 0.001, η²p = 0.04 (for details, see the ANOVA results summarized in Supplementary Table S7). Several post hoc Bonferroni-corrected tests were carried out between traits on each behavior to verify whether both civil behavior agents and moral behavior agents were attributed more high HU-low HN traits than other traits (H1). The results confirmed H1 for civil behaviors but not for moral behaviors. Specifically, for civil behaviors, the high HU-low HN traits were more characteristic than the other three types (p < 0.001, d = 0.51 for high HU-high HN; p = 0.045, d = 0.27 for low HU-high HN; and p < 0.001, d = 0.84 for low HU-low HN).
However, a different pattern was found for moral behaviors, in which there was no difference in the attribution of the traits for high HU-low HN and traits for high HU-high HN (p = 0.646, d = 0.05) and for the traits for low HU-high HN (p = 0.205, d = 0.16). This is in contrast to civil behaviors because the participants considered all categories of traits to be equally characteristic, except those for low HU-low HN (p < 0.001, d = 0.58).
For uncivil and immoral behaviors, the post hoc Bonferroni-corrected tests between traits on each behavior confirmed a lower inference of high HU-low HN traits than other traits for both uncivil and immoral behaviors (H2). Specifically, we found that for uncivil behaviors, the traits for high HU-low HN were considered less characteristic than the other three traits (p = 0.043, d = 0.21 for high HU-high HN; p < 0.001, d = 0.60 for low HU-high HN; and p < 0.001, d = 1.56 for low HU-low HN). Additionally, in terms of immoral behaviors, traits that were high HU-low HN were considered less characteristic than the other three traits (p < 0.001, d = 1.32 for high HU-high HN; p < 0.001, d = 1.73 for low HU-high HN; and p < 0.001, d = 0.68 for low HU-low HN).
In contrast to civil and moral behaviors, for positive behaviors the low HU-high HN traits were more characteristic than the other traits (p < 0.001, d = 0.92 for high HU-high HN; p < 0.001, d = 0.95 for high HU-low HN; and p < 0.001, d = 0.81 for low HU-low HN). Finally, for negative behaviors, the high HU-low HN traits were considered less characteristic than the other three types (p < 0.001, d = 0.79 for high HU-high HN; p = 0.030, d = 0.32 for low HU-high HN; and p < 0.001, d = 1.34 for low HU-low HN).
For H3, we expected high HU-low HN traits to be inferred equally for civil and moral behaviors but more than for positive behaviors. The post hoc Bonferroni-corrected tests between the types of behavior showed no statistical differences between civil and moral behaviors (p = 0.072, d = 0.31) in the high HU-low HN traits. However, these traits were less commonly attributed to positive behaviors (p < 0.001, d = 1.09 vs. civil and p < 0.001, d = 0.67 vs. moral). Furthermore, the high HU-high HN traits were considered less characteristic of positive than of civil (p = 0.012, d = 0.55) and moral behaviors (p < 0.001, d = 0.66).
Finally, in H4, we expected to verify an equal attribution of HU in uncivil and immoral behaviors but lower than in negative behaviors. In line with our hypothesis, the post hoc Bonferroni-corrected tests between behaviors showed no statistical differences between traits for uncivil and immoral behaviors (p = 0.056, d = 0.37) in the traits of high HU-low HN. However, these traits were considered more characteristic for negative behaviors than for uncivil (p < 0.001, d = 0.99) and immoral behaviors (p = 0.001, d = 0.80).
The traits for high HU-high HN were considered less characteristic for uncivil than for immoral behaviors (p < 0.001, d = 1.13) and negative behaviors (p < 0.001, d = 1.29). A different pattern was found for the traits for low HU-low HN, with lower scores for immoral behaviors than for the other behaviors (p = 0.002, d = 0.60 for uncivil behaviors and p < 0.001, d = 0.96 for negative behaviors). Moreover, the traits for low HU-high HN were considered more characteristic for immoral behaviors (M = 2.92) than for uncivil (p < 0.001, d = 1.15) and negative behaviors (p < 0.001, d = 0.71).
Discussion
The aim of Study 2 was to test whether the agents of civil and moral behaviors displayed the inference of HU traits to the same extent. The results confirmed those that we obtained in Study 1 for civil and uncivil behaviors: The inference of HU traits is higher when presenting civil behaviors compared with other positive behaviors.
However, our results showed that civil and uncivil behaviors do not have an exact correspondence with moral and immoral behaviors. Whereas, for civil behaviors, the HU traits were attributed to a greater extent than were the other traits, a different pattern was found for moral behaviors, in which there was no difference in the attribution of traits for high HU-low HN, traits for high HU-high HN, and traits for low HU-high HN. Morality seems to be a complex facet of humanness because it is related not only to HU traits but also to HN traits. Previous research [19] has related the HN traits to moral acts, pointing out that the desire to actively engage in moral behavior (proactive agency) is related to warmth and the emotional characteristics of HN traits.
In fact, the results of our study also indicate that immoral behaviors promote the inference of HN traits. Specifically, the traits for high HN-low HU were considered more characteristic of immoral behaviors than of uncivil and neutral behaviors. It would make sense to relate HN to morality when the theoretical difference between civility and morality is based on the universal character of the latter. Future studies could help elucidate the difference in HN traits between morality and civility and how this relates to the universality or specificity of culture.
General Discussion
Despite the theoretical relevance of the concept of civility in the study of dehumanization, little research has been conducted into civil behavior and how it is associated with humanness. Across two studies, our results showed a consistent association between civil behaviors and humanness; specifically, civil behaviors lead to the inference of HU traits. Additionally, our results revealed that civil and uncivil behaviors display a different pattern of associations with human traits than moral and immoral behaviors.
The data from the first study confirmed that civil behaviors display a differential attribution of HU traits compared with other positive behaviors, which is congruent with Haslam's theory [17]. Furthermore, we observed that uncivil behaviors constitute an obstacle to the inference of HU traits to a greater extent than any other negative behavior not related to civility. In this sense, our data confirm that uncivil behaviors promote the animalistic dehumanization of those exhibiting this type of behavior.
The second study extends the hypothesis of a link between civility and uniquely human traits to moral behaviors. Our results showed that agents of civil and moral behaviors display a different pattern in terms of humanness. Specifically, morality is related not only to HU traits but also to HN traits. An explanation could be related to the link between universality and moral norms. In this sense, moral norms are considered "objectively obligated", that is, they are common to other countries and other times, and therefore are unlike the norms of civility that represent conventional norms that are determined locally by the concrete learning of a culture; moral standards are universal [29][30][31][32]. For their part, Haslam et al. [35] state that HU is related to enculturated humanness, while HN corresponds to common humanness. That is, HU is related to what is culturally learned, and HN to what is universal and characteristic of the human being. In this sense, considering the literature that associates morality with universal norms and civility with conventional norms [29], one could argue that morality should be associated with HN to a greater degree than civility. However, the dual model of dehumanization does not suggest this relationship [17]. Despite this, Bastian et al. [19] verified that moral status is associated in distinctive ways with the two dimensions of humanness. While aspects of moral status, such as the inhibiting agency (i.e., responsibility for immoral behavior) are related to HU, others such as the pro-active agency (i.e., the capacity to engage in moral behavior) and moral patiency (i.e., the capacity to be recipients of morally relevant actions) relate to HN. This research has theoretical implications for dehumanization theory. First, our results constitute an advance in the understanding of relationships between civility and humanness. To date, no empirical studies have explored the social perception of incivility framed within dehumanization theory. Previous studies have provided evidence that moral judgments and dehumanization are closely connected [e.g., 19]. Indeed, immoral actions, even if they are distinctively human, such as torture, clearly have dehumanizing outcomes for the perpetrators [20]. A recent study [21] revealed an automatic association between uncivil behaviors and humanness, suggesting a possible link between this association and the social acceptance of these types of behaviors when they are framed as typically human actions. In our view, the results of this research represent an advance that empirically confirms the theoretical link between uncivil behavior and uniquely human perception.
Research that leads to a deeper understanding of the social perception of perpetrators of uncivil and immoral behaviors should compare and differentiate them. Studies that help toward a better understanding of the links between civility, morality, and humanness could clarify several outstanding questions in dehumanization theory. To date, the differences between morality and civility have been presented theoretically but have not been sufficiently explored empirically. Recently, despite the substantial body of research and theory on dehumanization that has been developed over the last two decades, the explanatory power of dehumanization theory has been questioned [36]. To refute this position, experts in the field have highlighted that humanness and moral evaluation are two related but distinct processes [37]. Future studies would help us better understand the links and differences between the inferences drawn for each type of behavior. In this sense, the analysis of civility and its consideration as a "partially restrictive" or "hierarchically restrictive" dimension [38,39] would be interesting. Hierarchically restrictive traits are concerned with morality or ability, but no studies have shown whether civility is associated with hierarchically restrictive or partially restrictive traits. If a moral person performs an immoral act, the impressions of that person change to considering them immoral, but if an immoral person performs a moral act, impressions of the person are not changed [38,40]. What can one expect of a civil person? Does a single behavior that contradicts this impression lead to changing the impression of that person to an uncivil person? If an uncivil person performs a civil behavior, do the impressions of others regarding that person change? Future studies will help shed light on these and other issues of interest to differentiate between moral and civil acts.
The results we obtained should be considered with caution, given that the studies presented here have several limitations. First, the studies worked with only two behaviors of each type (civil/uncivil/neutral positive/neutral negative). The list of traits used to evaluate the target was long and placed a mental burden on the participants; therefore, only two items were used to ensure that participants did not complete the task randomly due to fatigue. This could lead to results that are closely linked to the particular behaviors presented. To generalize the results, consistent findings would need to be obtained in additional studies with a broader range of behaviors, because there are different kinds of moral violations: some are due to an excess of "animal passion" (such as lust or anger), while others are due to alienation from feelings (such as cruelty). It is quite possible that the results depend on which kind of immorality is most salient. Moreover, different thematic behaviors were used; employing the same thematic contents for civil and uncivil behaviors would have allowed testing the effect of civil versus uncivil behaviors on the dependent variables while controlling for additional characteristics of the stimuli. Future studies could address this issue. Furthermore, the sample was not balanced by gender due to convenience sampling. Future studies could consider whether the results are moderated by the gender of the evaluator and also by the gender of the perpetrator of the uncivil behavior. In addition, an intercultural perspective must be considered to account for variability in behavioral norms and patterns. It would also be interesting to carry out these studies with different samples to determine whether the participants' sex or age might lead to differences in the evaluation of civil behaviors. Finally, another interesting future line of research would be to study how HU traits are inferred when the same person performs both civil and uncivil behaviors.
Conclusions
In conclusion, the results of our studies corroborate the theoretical proposal that civility is a central dimension of one of the types of humanness that Haslam [17] proposed, that of HU. According to our results, observing others exhibiting civil behaviors facilitates the inference of HU traits. Importantly, the agents of civil and moral behaviors are differentially perceived in terms of humanness. Civil and moral behaviors, therefore, constitute a way to explore the attribution of humanness in interpersonal and intergroup relations.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ejihpe12090098/s1, Pretest study: Pretest study of 120 civil and uncivil behaviors (Study 1), Table S1: Sample size before and after correlation analysis and Cronbach's alpha (Pretest Study 1), Table S2: Mean scores and standard deviation of 120 civil and uncivil behaviors in eight dimensions related to the perception of humanness (Pretest Study 1), Table S3: Means and standard deviation of each of the clusters, and correlations between dimensions (Pretest Study 1), Behaviors selection (Study 1), Traits selection (Study 1), Behaviors selection (Study 2), Table S4: Descriptive Statistics for Positive and Negative Behavior Ratings in Humanity Traits in Study 1, Table S5: F-ratios resulting from the repeated-measures ANOVA (Study 1), Table S6: Descriptive Statistics for Positive and Negative Behavior Ratings in Humanity Traits in Study 2, Table S7: F-ratios resulting from the repeated-measures ANOVA (Study 2). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. | 2022-09-18T15:19:18.933Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "47abc470ae7968f019c112f5d9f46e4502a6191d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2254-9625/12/9/98/pdf?version=1663237845",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "057b2932d409093dea08f8afba680b31b37eb605",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18329543 | pes2o/s2orc | v3-fos-license | miR-497 suppresses epithelial–mesenchymal transition and metastasis in colorectal cancer cells by targeting fos-related antigen-1
Objective MicroRNAs have key roles in tumor metastasis. The acquisition of metastatic capability by cancer cells is associated with epithelial–mesenchymal transition (EMT). Here, we describe the role and molecular mechanism of miR-497 in colorectal cancer (CRC) cell EMT, migration, and invasion. Methods Quantitative real-time polymerase chain reaction and Western blot assays were performed to detect the expression levels of miR-497 and Fos-related antigen-1 (Fra-1) in the CRC cells. HCT116 and SW480 cells with miR-497 overexpression or low Fra-1 expression were constructed by lipofection. Target prediction and luciferase reporter assays were performed to investigate whether Fra-1 is one of the targets of miR-497. Western blot and Transwell assays were performed to detect the effects of miR-497 and Fra-1 on CRC cell EMT, migration and invasion. Results We searched the miRanda, TargetScan, and PicTar databases and found that Fra-1, a key driver of CRC metastasis, is a potential target of miR-497. Quantitative real-time polymerase chain reaction and Western blot analysis verified downregulation of miR-497 and upregulation of Fra-1 in CRC cells. Western blot and Transwell assays showed that overexpression of miR-497 suppresses CRC cell EMT, migration, and invasion. Luciferase gene reporter assay revealed that Fra-1 is a downstream target of miR-497 as miR-497 bound directly to the 3′ untranslated region of Fra-1 messenger RNA. An inverse correlation was also found between miR-497 and Fra-1 in HCT116 and SW480 cells. Furthermore, knockdown of Fra-1 recapitulated the effects of miR-497 overexpression. Conclusion miR-497 suppresses CRC cell EMT, migration, and invasion partly by targeting Fra-1.
Introduction
Metastasis is responsible for almost 90% of cancer-associated mortality; it is a process whereby cancer cells spread from a primary site to a secondary site and form tumors. 1 Metastasis involves a complex, multistep invasion-metastasis cascade. 2,3 Although extensive efforts have been made, metastasis remains one of the most poorly understood processes in cancer biology. Recently, many studies reported that the acquisition of metastatic capability by cancer cells can be associated with epithelial-mesenchymal transition (EMT). [4][5][6] EMT is a process whereby epithelial cells with a cobblestone phenotype acquire the characteristics of mesenchymal cells with a spindle-shaped fibroblast-like morphology. With the changes in cell phenotype, the prototypical epithelial markers E-cadherin and β-catenin decrease, and the mesenchymal markers fibronectin and vimentin increase. Upon EMT, tumor cells acquire the ability to invade through the basement membrane of the primary tissue and stroma, and circulate in the blood. They often become resistant to anoikis, which enables them to survive in the absence of attachment. Finally, they associate with the endothelium, extravasate to a secondary tissue, and form new tumors. 7,8 Although major advances in cancer therapy have been achieved, metastases remain difficult to cure because they can be widespread, leading to tissue function damage, and they are often resistant to conventional therapy.
Colorectal cancer (CRC) is among the most common cancers and one of the most frequent causes of cancer-related deaths worldwide. In CRC patients, it is not the primary tumor but its metastases at distant sites that are the main cause of death. 9,10 Undoubtedly, a better understanding of the molecular mechanisms underlying CRC metastasis is very important for developing therapeutic strategies for metastatic CRC patients.
MicroRNAs (miRNAs) consist of about 19-22 nucleotides and post-transcriptionally modulate gene expression by base pairing to the 3′ untranslated region of targeted messenger RNAs (mRNAs). 11 Precise chronological and topological regulation of post-transcriptional gene silencing by miRNAs is necessary for tissue differentiation and animal development, 12 and dysregulated expression is connected with various human diseases, including cancer. 13 Recent emerging evidence indicates a critical role of miRNAs in cancer metastasis. miR-497, a cancer metastasis-related miRNA, has been widely reported to be dysregulated in many cancers, including CRC. 14-20 Guo et al 21 also reported that miR-497 inhibits CRC cell invasion by targeting the insulin-like growth factor 1 receptor. Fos-related antigen-1 (Fra-1), a member of the Fos family, is a key driver of CRC metastasis. [22][23][24] By target prediction analysis, Fra-1 was also found to be a potential target of miR-497. Based on these findings, we speculate that miR-497 might regulate CRC metastasis partly by targeting Fra-1. In this study, we investigated whether miR-497 targets Fra-1 to modulate EMT, invasion and migration in CRC cells.
Materials and methods
Cell culture and transfection
This study was performed with the approval of the Ethical Committee of Henan University of Chinese Medicine. Written informed consent was obtained from all patients. The normal fetal human colon epithelial cell line CRL-1831 and human CRC cell lines LoVo, RKO, HCT15, HCT28, HCT116, and SW480 were purchased from the American Type Culture Collection (ATCC, Manassas, VA, USA). These cells were maintained in Dulbecco's Modified Eagle's Medium (Invitrogen, Carlsbad, CA, USA) supplemented with 2 mM glutamine and 10% fetal bovine serum (HyClone, Logan, UT, USA) in a 37°C humidified atmosphere of 5% CO2.
The miR-497 mimics, small interfering RNA (siRNA) for Fra-1 (siFra-1), and their respective controls were obtained from GenePharma (Shanghai, People's Republic of China). Cells were incubated in antibiotic-free reduced serum medium for 24 hours, and then transfected with miRNA mimics or siRNA using Lipofectamine 2000 (Invitrogen). The medium was replaced with complete medium 6 hours after transfection. Subsequent experiments were conducted 48 hours after transfection.
Quantitative real-time polymerase chain reaction
Total RNA was isolated from cultured cells using TRIzol reagent (Invitrogen) in accordance with the manufacturer's instructions, and was reverse transcribed to complementary DNA with a first-strand complementary DNA synthesis kit (Promega, Madison, WI, USA). miR-497 expression was detected by a mirVana miRNA detection kit (Genmed Scientifics, Arlington, MA, USA) and normalized to the level of U6 snRNA. Fra-1 mRNA level was detected by SYBR Green polymerase chain reaction (PCR) master mix on a 7500 Fast Real-time PCR system (Applied Biosystems, Foster City, CA, USA) and normalized to the level of glyceraldehyde-3-phosphate dehydrogenase.
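The relative quantification formula is not stated above; as a minimal illustration, the sketch below assumes the common 2^(−ΔΔCt) method, with purely hypothetical Ct values standing in for the measured data.

```python
def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Relative expression by the 2^(-ddCt) method.

    ct_target / ct_reference: Ct of the gene of interest and of the
    normalizer (e.g., Fra-1 and GAPDH, or miR-497 and U6) in the sample;
    *_cal: the same pair in the calibrator sample (e.g., CRL-1831 cells).
    All Ct values used below are hypothetical.
    """
    dd_ct = (ct_target - ct_reference) - (ct_target_cal - ct_reference_cal)
    return 2.0 ** (-dd_ct)

# Hypothetical example: Fra-1 (normalized to GAPDH) in HCT116 relative to CRL-1831.
fold_change = relative_expression(22.1, 18.0, 25.3, 18.2)
print(f"fold change = {fold_change:.2f}")  # > 1 indicates upregulation
```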
Western blot analysis
Anti-E-cadherin, anti-β-catenin, and anti-β-actin antibodies were obtained from Santa Cruz Biotechnology (Santa Cruz, CA, USA). Anti-fibronectin, anti-vimentin, anti-matrix metalloproteinase (MMP)-2, and anti-MMP-9 antibodies were purchased from Abcam (Cambridge, MA, USA). All these antibodies were monoclonal rabbit antibodies. Briefly, cultured cells were transferred to tubes containing radioimmunoprecipitation assay buffer (Thermo Scientific, Waltham, MA, USA) and vortexed briefly. After the cells were lysed, the mixed contents were centrifuged at 14,000× g for 30 minutes at 4°C, and the lysate supernatant was collected. Protein concentrations were determined using a bicinchoninic acid protein assay kit (Keygen Biotech. Co. Ltd., Nanjing, China), and proteins were then denatured with loading buffer at 100°C for 3 minutes. Western blot analysis was carried out as previously described. 25
Statistical analyses
Statistical analyses were performed using GraphPad Prism software. Values were expressed as mean ± standard error. Comparisons between two groups were carried out by the Student's t-test. Statistical significance was defined as *P<0.05, **P<0.01, or ***P<0.001.
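As a minimal sketch of the group comparison described above, the following snippet runs an unpaired two-sample Student's t-test in Python; the group labels, sample sizes, and numbers are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate measurements for two transfection groups
# (e.g., relative Fra-1 level after miR-497 mimics vs. negative control).
mimic = np.array([0.31, 0.28, 0.35])
control = np.array([1.02, 0.95, 1.08])

t_stat, p_val = stats.ttest_ind(mimic, control)  # unpaired Student's t-test
print(f"mimic: {mimic.mean():.2f} ± {stats.sem(mimic):.2f} (mean ± SEM)")
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")      # flag with *, **, *** per the thresholds above
```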
Downregulation of miR-497 and upregulation of Fra-1 exist in CRC cells
The expression levels of miR-497 and Fra-1 mRNA were detected using quantitative real-time PCR, and the Fra-1 protein level was detected using Western blot assays. The miR-497 level was decreased in CRC cells compared with that in CRL-1831 cells (Figure 1A). Fra-1 mRNA and protein levels were significantly increased in CRC cells compared with those in CRL-1831 cells (Figure 1B and C). These results indicated downregulation of miR-497 and upregulation of Fra-1 in CRC cells.
MiR-497 inhibits CRC cell EMT, migration, and invasion
To explore the effects of miR-497 on CRC cell EMT, migration, and invasion, we transfected HCT116 and SW480 cells with miR-497 mimics or negative control mimics, respectively, and then performed cell EMT, migration, and invasion detection. EMT detection showed that HCT116 and SW480 cells transfected with miR-497 mimics had obvious increases in E-cadherin and β-catenin protein expression, and marked decreases in fibronectin and vimentin protein expression compared with control cells, which indicated that miR-497 suppresses HCT116 and SW480 cell EMT (Figure 2A and B). Transwell assays revealed that miR-497 inhibits HCT116 and SW480 cell migration and invasion (Figure 2C-F). The expression levels of markers of tumor invasion, MMP-2 and MMP-9 proteins, were significantly lower in HCT116 and SW480 cells transfected with miR-497 mimics than those in control cells, which further proved that miR-497 suppresses HCT116 and SW480 cell invasion (Figure 2G and H). These data indicated that miR-497 suppresses HCT116 and SW480 cell EMT, migration, and invasion.
Fra-1 is targeted by miR-497
To investigate the exact molecular mechanisms of miR-497 in CRC, we searched the miRanda, TargetScan, and PicTar databases and found that Fra-1 is a potential target of miR-497 (Figure 3A). Then, quantitative real-time PCR, Western blot, and luciferase gene reporter assays were conducted to validate our hypothesis. As shown in Figure 3B-E, overexpression of miR-497 in HCT116 and SW480 cells caused a marked reduction in Fra-1 mRNA and protein expression. Forced expression of miR-497 inhibited the luciferase activity of the reporter containing the 3′UTR-WT of Fra-1, but did not influence that of the reporter containing the 3′UTR-MUT (Figure 3F and G). These data indicated that miR-497 can directly target Fra-1.
Inhibition of Fra-1 recapitulated the effects of miR-497
To examine whether miR-497 regulated CRC cell EMT, migration, and invasion by targeting Fra-1, we transfected HCT116 and SW480 cells with siFra-1 or si-control and then conducted cell EMT, migration, and invasion detection. Figure 4A shows that the expression levels of Fra-1 were significantly reduced in HCT116 and SW480 cells transfected with siFra-1. EMT detection showed that HCT116 and SW480 cells transfected with siFra-1 had significant increases in E-cadherin and β-catenin protein expression and remarkable decreases in fibronectin and vimentin protein expression compared with control cells (Figure 4B). Transwell assays showed that the siFra-1 groups had fewer invaded and migrated cells than the si-control groups (Figure 4C-F). In addition, the siFra-1 groups had lower levels of MMP-2 and MMP-9 proteins than the si-control groups (Figure 4G and H). These results showed that Fra-1 inhibition recapitulated the effects of miR-497 overexpression on CRC cell EMT, migration, and invasion, which suggested that the functional effects of miR-497 on CRC cells depend, at least in part, on its direct target Fra-1.
Discussion
MiR-497 belongs to the miR-15/16/195/424/497 family, whose members share the same seed sequence. As a cancer-related miRNA, miR-497 has been widely reported to act as a tumor suppressor in various tumors. For instance, miR-497 is downregulated and targets multiple cell cycle regulators to suppress the development of hepatocellular carcinoma, 15 induces apoptosis of breast cancer cells, 16 suppresses tumor growth and angiogenesis by targeting hepatoma-derived growth factor in non-small cell lung cancer, 17 targets the insulin-like growth factor 1 receptor to suppress the development of human cervical cancer, 18 suppresses proliferation and induces apoptosis in prostate cancer cells, 19 and increases apoptosis in V-myc myelocytomatosis viral related oncogene-amplified neuroblastoma cells by targeting the key cell cycle regulator WEE1. 20 CRC is the fourth and third most common cancer in males and females, respectively, with over 1.2 million cases diagnosed each year worldwide and about 600,000 deaths. 27 The primary cause of CRC-induced death is metastasis to the liver. 10 Therefore, understanding the molecular mechanisms underlying CRC metastasis is very significant for developing novel and effective therapeutic strategies for advanced CRC patients. MiR-497 has been reported to be downregulated and to inhibit CRC cell survival and invasion by targeting the insulin-like growth factor 1 receptor. 21,28 We further investigated whether miR-497 exerts regulatory effects on CRC metastasis by targeting other genes. We searched the miRanda, TargetScan, and PicTar databases and found that Fra-1 is also a potential target of miR-497. Fra-1, an important member of the Fos family, is frequently elevated by oncogenic signaling in various human tumors and involved in metastasis and poor prognosis. In contrast to the tumorigenic activity of c-Fos, Fra-1 seems to function in the motile and invasive phenotypes of tumor cells. 29 For example, Fra-1 has been reported to promote metastasis through a variety of molecules: MMPs in breast cancer and lung epithelial cells, 30,31 adenosine receptor A2b in breast cancer, 32 receptor tyrosine kinase Axl in bladder cancer, 33 and CD44 in mesothelioma. 34 Recently, the role of Fra-1 in CRC metastasis has been widely studied. Iskit et al 22 reported that Fra-1 is a key driver of CRC metastasis and that a Fra-1 classifier predicts disease-free survival. Diesch et al 24 found that Fra-1 is strongly expressed in tumor cells at the invasive front of human CRC, and that its depletion suppresses mesenchymal-like features in CRC cells in vitro. Liu et al 23 found that Fra-1, aberrantly expressed through IL-6/STAT3 transactivation, promotes CRC aggressiveness through EMT. Based on these findings, we speculated that miR-497 might regulate CRC cell EMT, migration, and invasion in part by targeting Fra-1. In this study, we detected the expression levels of miR-497 and Fra-1 in CRC cell lines, and found downregulation of miR-497 and upregulation of Fra-1 in CRC cells, which were completely consistent with previous reports. [21][22][23][24]28 We further investigated the effects of miR-497 on CRC cell EMT, migration, and invasion, and found that overexpression of miR-497 inhibits CRC cell EMT, migration, and invasion. We performed a luciferase assay and confirmed that Fra-1 can be directly targeted by miR-497. The expression levels of Fra-1 mRNA and protein were both regulated by miR-497, and silencing Fra-1 by siRNA recapitulated the effects of miR-497 on CRC cells.
These results confirmed our speculation that miR-497 inhibits CRC cell EMT, migration, and invasion partly by targeting Fra-1.
Altogether, this study shows that miR-497 is downregulated in CRC cells and inversely related to cell EMT, migration, and invasion, while Fra-1, a potential oncogene in CRC, is upregulated and positively correlated with CRC cell EMT, migration, and invasion. The role of miR-497 is partially mediated by the target gene Fra-1. These findings facilitate a better understanding of the molecular mechanisms underlying CRC metastasis and provide novel potential therapeutic targets for metastatic CRC. | 2018-04-03T04:36:47.148Z | 2016-10-25T00:00:00.000 | {
"year": 2016,
"sha1": "5aa019d50cca85b00376a233e676a7afcc9c172e",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=33165",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "99c1fcad3a0d4cf626931d689a0edc97a0db29d9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
30439482 | pes2o/s2orc | v3-fos-license | Necrolytic migratory erythema (Glucagonoma)-like skin lesions induced by EGF-receptor inhibition
ZD1839 (Iressa®) is an orally active, selective epidermal growth factor receptor (EGF-R) tyrosine kinase inhibitor that blocks signal transduction pathways involved in cell proliferation [1]. In many human malignancies, application of ZD1839 alone or in combination with chemotherapy has already demonstrated both effectiveness and tolerability, as well as probably dose-related side effects, e.g. diarrhoea [1–3]. Despite the fact that EGF-R is also expressed in various structures of the human skin [4, 5], besides rash and acne-like skin lesions, severe skin toxicities under treatment with ZD1839 are rare [6, 7]. However, the histopathological consequences of EGF-R inhibition in the human skin in patients with a history of concurrent skin diseases have not been characterised. Here we describe a 55-year-old patient with non-small cell lung cancer who developed a grade 4 skin toxicity after commencing a monotherapy with ZD1839. Initially she had received total brain irradiation (20 Gray) for symptomatic cerebral metastases. Intercurrent chemotherapy with Gemcitabine (weekly 1000 mg/m2) was stopped after stable disease was assessed. Two months later oral monotherapy with ZD1839 (compassionate use, Astra Zeneca®) was initiated at 250 mg/day to treat progressive cerebral metastases. At the same time the patient's antiepileptic therapy with carbamazepine (400 mg/day) was changed to sodium valproate (300 mg/day), based on data reporting an anti-tumour activity for this substance, mediated by a potential effect upon histone deacetylase inhibition [8, 9]. Six weeks later the patient presented with metastatic infiltration of three vertebral bodies. Following immediate local irradiation (8 Gy) the patient's condition rapidly improved. Due to the rapid appearance of painful, necrolytic, migratory, erythema-like skin lesions in the lower trunk and most prominently on both legs, in addition to a pre-existing livedo reticularis (figure 1a, b), a skin biopsy was taken. Histology revealed necrosis of the epidermal layer and an unspecific vasculopathy. Immunological and laboratory parameters however revealed no evidence for a systemic collagenosis, activated coagulation, paraneoplastic glucagonoma [10, 11] or pseudoglucagonoma syndromes (e.g. hepatitis, liver cirrhosis, pancreatitis, malabsorption, danazol therapy and heroin abuse) [12, 13]. Immunohistochemistry demonstrated strong expression of EGF-R (Chemicon mAb) in the epidermal layer. Morphologically no changes in the eccrine or sebaceous glands, assumed to result from EGF-R mediated inhibition of migration and apoptosis [14] (figure 2a), were found, and no positive staining for EGF-R expression in occasional (1% to 3%) endothelial cells of the dermal capillary plexus was noted.
ZD1839 (Iressa®) is an orally active, selective epidermal growth factor receptor (EGF-R) tyrosine kinase inhibitor that blocks signal transduction pathways involved in cell proliferation [1]. In many human malignancies, application of ZD1839 alone or in combination with chemotherapy has already demonstrated both effectiveness and tolerability, as well as probably dose-related side effects, e.g. diarrhoea [1][2][3]. Despite the fact that EGF-R is also expressed in various structures of the human skin [4,5], besides rash and acne-like skin lesions, severe skin toxicities under treatment with ZD1839 are rare [6,7]. However, the histopathological consequences of EGF-R inhibition in the human skin in patients with a history of concurrent skin diseases have not been characterised.
Here we describe a 55-year-old patient with non-small cell lung cancer who developed a grade 4 skin toxicity after commencing a monotherapy with ZD1839. Initially she had received total brain irradiation (20 Gray) for symptomatic cerebral metastases. Intercurrent chemotherapy with Gemcitabine (weekly 1000 mg/m2) was stopped after stable disease was assessed. Two months later oral monotherapy with ZD1839 (compassionate use, Astra Zeneca®) was initiated at 250 mg/day to treat progressive cerebral metastases. At the same time the patient's antiepileptic therapy with carbamazepine (400 mg/day) was changed to sodium valproate (300 mg/day), based on data reporting an anti-tumour activity for this substance, mediated by a potential effect upon histone deacetylase inhibition [8,9]. Six weeks later the patient presented with metastatic infiltration of three vertebral bodies. Following immediate local irradiation (8 Gy) the patient's condition rapidly improved. Due to the rapid appearance of painful, necrolytic, migratory, erythema-like skin lesions in the lower trunk and most prominently on both legs, in addition to a pre-existing livedo reticularis (figure 1a, b), a skin biopsy was taken. Histology revealed necrosis of the epidermal layer and an unspecific vasculopathy. Immunological and laboratory parameters however revealed no evidence for a systemic collagenosis, activated coagulation, paraneoplastic glucagonoma [10,11] or pseudoglucagonoma syndromes (e.g. hepatitis, liver cirrhosis, pancreatitis, malabsorption, danazol therapy and heroin abuse) [12,13]. Immunohistochemistry demonstrated strong expression of EGF-R (Chemicon mAb) in the epidermal layer. Morphologically no changes in the eccrine or sebaceous glands, assumed to result from EGF-R mediated inhibition of migration and apoptosis [14] (figure 2a), were found, and no positive staining for EGF-R expression in occasional (1% to 3%) endothelial cells [1] of the dermal capillary plexus was noted (figure 2b). After ZD1839 withdrawal (sodium valproate at confirmed therapeutic serum level was maintained) and oral steroid therapy the patient's skin gradually improved.
In this patient, exceptionally severe alterations of skin homeostasis were observed, due to EGF-R inhibition alone or as an adverse and potentially synergistic event in combination with sodium valproate [15,16]. The possibility that these lesions were triggered by a pre-existing livedo reticularis cannot be excluded.
Figure 1
Characteristic net-like pattern of cyanotic mottled discoloration of the skin (pre-existing livedo reticularis) and erythema combined with erosions after superficial vesicles in the intertriginous areas. a: overview; b: detail.
Figure 2
a. Prominent EGFR expression in the basal layer of the interfollicular epidermis, and in the outer root sheath of hair follicles as well as the eccrine glands (during ZD1839 therapy); capillaries were filled with erythrocytes. Eosinophils were not present. b. High magnification (x400) revealed an increased number of apoptotic cells in the stratum corneum, but no EGFR expression in the endothelial cells of the dermal capillary plexus.
Department of Medicine, Universitätsspital, Rämistr. 100, CH-8091 Zürich. E-Mail: andreas.trojan@usz.ch | 2018-04-03T01:38:56.793Z | 2003-01-11T00:00:00.000 | {
"year": 2003,
"sha1": "9a8a2c52d6bec81d488c56252d85619983a854b2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4414/smw.2003.10117",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "9a8a2c52d6bec81d488c56252d85619983a854b2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
12995831 | pes2o/s2orc | v3-fos-license | The Schrodinger particle in an oscillating spherical cavity
We study a Schrodinger particle in an infinite spherical well with an oscillating wall. Parametric resonances emerge when the oscillation frequency is equal to the energy difference between two eigenstates of the static cavity. Whereas an analytic calculation based on a two-level system approximation reproduces the numerical results at low driving amplitudes, epsilon, we observe a drastic change of behaviour when epsilon>0.1, when new resonance states appear bearing no apparent relation to the eigenstates of the static system.
We study in this article the behaviour of a Schrödinger particle confined in a spherical cavity with an oscillating boundary that constitutes a particular kind of time-dependent perturbation. Our study provides a conceptually simple "laboratory" in which the subtle and nontrivial aspects of the resonant coupling between the oscillating wall and a particle trapped inside the cavity can be investigated. Our original motivation in this work comes from our attempt to construct a dynamical bag model of hadrons [1]; however, our results may bear implications on the physics of a wide range of systems such as cavity QED [2] and perhaps even sonoluminescence [3].
The system of a one-dimensional vibrating perfect cavity with quantized electromagnetic fields has been well studied [2]. It was found that the electromagnetic field energy density inside a cavity vibrating at one of its resonance frequencies concentrates into narrow peaks regardless of the detailed trajectories of the oscillating cavity wall [4,5,6]. Furthermore, the amplitudes of these energy wave packets grow rapidly in time, producing sharp and intense pulses of photons. The distortion of the vacuum fields arising from the cavity wall motions leads to dynamical modifications of the Casimir effects [7], which represents a fundamentally important and interesting feature of quantum physics. The problem of a quantum particle in a box with moving walls has also been studied with an analytical approach [10], but the possibility of resonances was not discussed, which is the main interest in this work.
If the oscillation amplitude εR_0 is small compared to the original cavity radius R_0, perturbation theory can be used to calculate the transition amplitudes between two states of the unperturbed system. This corresponds to what is usually observed in experiments. However, the non-perturbative solutions of the complete time-dependent Hamiltonian (H = H_0 + H_1(t)), where H_0 is the time-independent part of the Hamiltonian, can in principle be remarkably different from the perturbative ones and can give rise to non-trivial features.
We consider, as a first step, an infinite spherical well with oscillating walls, V(r, t) = 0 for r < R(t) and ∞ otherwise, where R(t) = R_0(1 + ε sin νt) ≡ R_0/α(t). Transforming to a fixed spatial domain via y ≡ α(t) r, y ≡ |y| < R_0, and renormalizing the wavefunction φ(y, t) ≡ α^(-3/2)(t) ψ(r, t) in order to preserve unitarity, we obtain a transformed Schrödinger equation (Eq. 2) whose additional term H_1(t) can be considered a small time-dependent perturbation if ε and ν are small enough.
Since H_1(t) commutes with L² and L, we can look for solutions that are eigenstates of the angular momentum. This allows us to separate the angular dependence from the radial one in Eq. 2. Using first-order perturbation theory, one can easily calculate the coefficients of the solution's expansion in terms of the unperturbed eigenstates. If the initial state is chosen to be |i⟩ = |n = k, l = 0⟩ (φ_(n,0) = √2 nπ j_0(nπy)), we obtain the first-order coefficients c^(1)_n(t). The term proportional to iħṘ(t) is exactly canceled out by the diagonal contribution of −(Ṙ(t)/R(t)) y·p. The last integral is analytically solvable for ν = ω_nk = (E_n − E_k)/ħ, yielding Eq. 6. The secular term ω_nk t/ε in Eq. 6 is a typical sign of a resonance. Notice that the secular term does not multiply a periodic function, and the amplitude ε that we suppose to be small appears in the denominator. We can easily check that this is not a problem if we make a Taylor expansion of arctan[(ε + tan(ω_nk t/2))/√(1 − ε²)] in powers of ε near ε = 0, since the zeroth-order term exactly cancels the secular term. However, the increase of c^(1)_n(t) in time remains.
We can now calculate the expectation value of any observable as a function of time. We define the following dimensionless quantities: Ẽ ≡ (mR_0²/ħ²)E and ν̃ ≡ (mR_0²/ħ)ν. The perturbative results are in excellent agreement with the numerical ones when the cavity is oscillating out of the resonances. For example, at ν̃ = 7, ε = 0.01 the fluctuations of the energy (Fig. 1) correspond almost exactly to those of 1/R²(t), as one can expect from a quasistatic approximation, even though our system is not quasistatic. Even at high frequencies such as ν̃ = 90, ε = 0.01, the first-order perturbative results are still acceptable (Fig. 2a). Notice that in this case the energy is shifted up slightly and its fluctuations in time are smaller. This is due to the fact that the system is no longer able to follow the fast oscillations of the walls, and consequently the fluctuations as well as the value of the r.m.s. radius R_s ≡ ⟨(y/R_0)²⟩^(1/2) are suppressed slightly (see Fig. 2b). At resonances, the perturbative approach breaks down and gives only an indication that a resonance exists. In order to study these resonances we solved the Schrödinger equation numerically, using a unitary numerical algorithm [8]. For ν̃ = Ẽ_2 − Ẽ_1, we calculated the expectation values of the energy U ≡ ⟨Ẽ⟩ and R_s, choosing |n = 1, l⟩ as the initial state. In Fig. 3 we plotted the results for l = 0 and l = 1 (l = 0: Ẽ_2 − Ẽ_1 = 14.8044; l = 1: Ẽ_2 − Ẽ_1 = 19.7444) and two different values of ε. The values for ν̃ = 7 are also plotted for comparison. The drastic change of behaviour of the system at the resonant frequency is evident even for very small amplitudes such as ε = 0.001.
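The unitary algorithm of Ref. [8] is not reproduced in the text; as an illustration only, the sketch below implements one standard unitary scheme, the Cayley (Crank–Nicolson) form, for the static l = 0 radial problem in units ħ = m = 1 with R_0 = 1. The grid, time step, and initial state are assumptions for demonstration, and the time-dependent perturbation H_1(t) is omitted for brevity.

```python
import numpy as np
from scipy.linalg import solve_banded

# Cayley (Crank-Nicolson) step for i du/dt = -(1/2) d2u/dy2 on 0 < y < 1
# with u(0) = u(1) = 0; for l = 0, u = y * phi. The Cayley form is exactly unitary.
N, dt = 400, 1e-4
y = np.linspace(0.0, 1.0, N + 2)[1:-1]          # interior grid points
dy = y[1] - y[0]

# Tridiagonal kinetic Hamiltonian H = -(1/2) * (second difference).
main = np.full(N, 1.0 / dy**2)
off = np.full(N - 1, -0.5 / dy**2)

# Banded storage of A = 1 + i*dt/2 * H for scipy's solve_banded.
ab = np.zeros((3, N), dtype=complex)
ab[0, 1:] = 1j * dt / 2 * off                   # superdiagonal
ab[1, :] = 1.0 + 1j * dt / 2 * main             # diagonal
ab[2, :-1] = 1j * dt / 2 * off                  # subdiagonal

u = np.sqrt(2.0) * np.sin(np.pi * y)            # ground state, E_1 = pi^2/2

def step(u):
    # b = (1 - i*dt/2 * H) u, then solve A u_new = b.
    Hu = main * u
    Hu[:-1] += off * u[1:]
    Hu[1:] += off * u[:-1]
    b = u - 1j * dt / 2 * Hu
    return solve_banded((1, 1), ab, b)

for _ in range(1000):
    u = step(u)
print("norm:", np.sum(np.abs(u) ** 2) * dy)     # stays ~1 (unitarity check)
```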
At resonances the maximum expectation value of the energy, U_max, varies as a function of ε because of the trivial adiabatic factor α²(t) and, more importantly, non-trivial excitation processes. In Fig. 4 we show max[α^(-2)(t)U] vs. ε. For very small ε (ε < 0.002), the perturbation is not strong enough and the probability of exciting the second eigenstate never reaches 1. The expectation value of the energy saturates (and equals Ẽ_2) for 0.006 < ε < 0.1. In this regime, the frequency dependence of the energy maxima is well fitted by a Breit-Wigner function: U_max = Ẽ_1 + C/[(ν̃ − ν̃_0)² + Γ²/4], and the width Γ increases linearly with ε up to ε ≈ 0.1. For ε > 0.1, even higher states are excited.
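To illustrate the Breit–Wigner fit quoted above, a minimal sketch using scipy.optimize.curve_fit is given below; the sampled frequencies and noisy values are synthetic stand-ins generated for the example, not the data of Fig. 4.

```python
import numpy as np
from scipy.optimize import curve_fit

E1 = np.pi**2 / 2                        # dimensionless ground-state energy (l = 0)

def breit_wigner(nu, C, nu0, gamma):
    # U_max(nu) = E1 + C / ((nu - nu0)^2 + gamma^2 / 4)
    return E1 + C / ((nu - nu0) ** 2 + gamma**2 / 4)

# Synthetic data around the l = 0 resonance nu0 = E2 - E1 = 14.8044 (illustrative).
nu = np.linspace(14.0, 15.6, 25)
rng = np.random.default_rng(0)
data = breit_wigner(nu, 0.9, 14.8044, 0.5) + 0.05 * rng.standard_normal(nu.size)

popt, pcov = curve_fit(breit_wigner, nu, data, p0=[1.0, 14.8, 0.4])
C_fit, nu0_fit, gamma_fit = popt
print(f"C = {C_fit:.3f}, nu0 = {nu0_fit:.4f}, Gamma = {gamma_fit:.3f}")
```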
Projecting the numerical solution on the eigenstates of the static system, we found the expected result that for ε < 0.1 the resonant dynamics is dominated by the lowest two eigenfunctions. This fact allows us to study the resonating system as a two-level system. In this case the differential equations for the coefficients reduce to Eqs. 8 and 9, where V_ij(t) ≡ ⟨i|H_1(t)|j⟩. Using the fact that c_i(t) changes little in a period T = 2π/ω_21, we can average Eqs. 8 and 9 over a period to cast them into two coupled first-order ODEs with constant coefficients [9]. Neglecting higher-order terms in ε, the system can then be diagonalized easily, giving c_1(t) = cos Ωt and c_2(t) = sin Ωt. When ε ≪ 1, Ω ≃ 2ω_21 ε/3 and the period of the resonance lim_(ε→0) T_r = 2π/Ω = ∞. In the other limit, when ε → 1, Ω → (4/3)ω_21, but in this case our assumption that c_i(t) changes little in a period is no longer true and the averaging method is no longer valid. In Fig. 5 we plot the expectation value of the energy U = α²(t)(Ẽ_1 cos²Ωt + Ẽ_2 sin²Ωt) and compare it with the numerical results. For amplitudes 0.005 < ε < 0.1 the agreement is excellent.
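A minimal numerical check of the averaged two-level solution: with W = −iΩσ_2 and c(0) = (1, 0), integrating ċ = Wc reproduces c_1 = cos Ωt and c_2 = sin Ωt. The sketch below assumes the l = 0 dimensionless levels Ẽ_n = n²π²/2 and the small-ε value Ω = 2ω_21 ε/3 quoted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.01                               # driving amplitude (assumed small)
E1, E2 = np.pi**2 / 2, 2 * np.pi**2      # dimensionless l = 0 levels: E_n = n^2 pi^2 / 2
w21 = E2 - E1                            # = 14.8044, the resonant frequency
Omega = 2 * w21 * eps / 3                # averaged two-level (Rabi-like) frequency

def rhs(t, c):
    # dc/dt = W c with W = -i*Omega*sigma_2 = [[0, -Omega], [Omega, 0]] (real matrix)
    return [-Omega * c[1], Omega * c[0]]

T = 2 * np.pi / Omega                    # one full resonance period T_r
sol = solve_ivp(rhs, [0, T], [1.0, 0.0], dense_output=True, rtol=1e-9, atol=1e-12)
t = np.linspace(0.0, T, 400)
c1, c2 = sol.sol(t)
U = E1 * c1**2 + E2 * c2**2              # energy without the adiabatic alpha^2(t) factor
print("max deviation from cos:", np.max(np.abs(c1 - np.cos(Omega * t))))
print("U oscillates between", U.min(), "and", U.max())  # ~E1 and ~E2
```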
The matrix W_ij can be written as −iΩσ_2, where σ_2 is the second Pauli matrix. It follows that the vector formed by the coefficients c_1 and c_2 behaves like the spinor of a spin-1/2 particle in a magnetic field along the y axis. Therefore, if the initial state of the particle inside the oscillating cavity is one of the two eigenstates involved in the resonance, which corresponds to an eigenstate of S_z, the evolution of the system will be a precession of ⟨S⟩ around that axis. On the other hand, if the initial state corresponds to an eigenstate of S_y, we obtain a stationary solution (Eq. 14). The wavefunction in Eq. 14 is periodic with period T = 2π/ω_21, up to a phase θ ≡ −2π[E_2/(ħω_21) ± 4(1 − √(1 − ε²))/(3ε)]. We calculated numerically the solution choosing as initial function one of the two of Eq. 14 at t = 0, and in Fig. 5 we show the resulting U. Although α²(t)U(t) is not strictly constant, its variation is considerably smaller compared to other solutions. It is remarkable that such a highly dynamical system can show a quasi-stationary behaviour.
For ε > 0.1 the two-level approximation starts to break down. For ε = 0.15 the third and fourth eigenstates become as important as the first two, and even more states are involved as one increases ε further. The behaviour of the system changes drastically for ε > 0.1, and we even observe the emergence of several new resonances that seem to have no straightforward explanation in terms of the unperturbed eigenstates. In Fig. 6 we show the maxima of α²(t)U(t) computed numerically for several driving frequencies choosing as initial state |n = 1, l = 0⟩. The resonance at ν = ω_21 is indicated, and it is much broader and smaller in amplitude compared to the new non-trivial resonances. It is interesting to note that even at these new resonances, the coefficients of the expansion in the static eigenstates are still approximately periodic. It may be possible to understand these new resonances for ε > 0.1 by including a few more levels in the two-level approximation. However, the complexity of the system in this case warrants further study.
For ε < 0.005 the two-level approximation fails again; it continues to give the maximum of the expected energy as Ẽ_2, typical of two-level systems, while in the complete system the energy maximum decreases as ε is reduced. Also, the two-level approximation gives a period of the resonance T_r greater than that of the complete system.
We emphasize that the resonances we studied here are caused exclusively by the motion of the cavity wall, since the system has no interaction with electromagnetic fields. Another interesting feature of our system is the independence of its dynamics from R_0, except for the rescaling of the oscillation frequency.
It is also possible to consider a real system, hence with the electromagnetic interaction, in which an "oscillating-cavity" resonance occurs but the Rabi resonances do not. In fact, to observe Rabi resonances we need a cavity with radius R_0 such that the fundamental frequency of the electromagnetic field ν_0 = 2πc/R_0 is equal to the difference between two energy levels, E_n − E_k ∝ ħ²π²/(2mR_0²). It is hence not difficult to choose an R_0 such that the Rabi resonances are not excited. In practice though, maintaining a stable mechanical oscillation with frequencies higher than some MHz is difficult.
For simplicity we have only considered a spherically symmetric cavity with a perfect wall. However, we conjecture that the resonances should not be too sensitive to the symmetry of the perturbation and to the detailed shape of the potential, as long as the matrix element V_12 (see Eq. 9) is different from zero. One possibility is to use a microcrystal of conducting material with separations between the levels inside the conduction band of the order of 10^(-11) eV (∼ 100 kHz). Forcing the crystal to vibrate at one of the resonant frequencies should excite many of the Fermi-level electrons, which decay by emitting radiowaves. A second way could be to use a system with several, almost equispaced, energy levels. At a resonant frequency the particle, an electron or a trapped atom for example, absorbs energy from the driving oscillation to jump from one level to the next one and so on, as long as the resonance condition ν̃ ≃ Ẽ_(n+1) − Ẽ_n is satisfied. In this way the frequency of the emitted quanta can be higher than the oscillation frequency, making them distinguishable from the electromagnetic noise due to dipole radiation at the driving frequency.
In a further study we will consider a system with many equispaced energy levels and analyze the increase in energy with time. Ideally from such a system one can get quanta of frequency much higher than the driving frequency, and this is a major difference compared to the cavity QED situation, where at resonances typically a great increase in the number of photons with the same frequency as the driving force is expected.
We thank Dr. C. K. Law for his suggestion of the two-level approximation. This work is partially supported by the Hong Kong Research Grants Council grant CUHK 312/96P and a Chinese University Direct Grant (Project ID: 2060093). | 2014-10-01T00:00:00.000Z | 1999-03-15T00:00:00.000 | {
"year": 1999,
"sha1": "e867626a57d16a958b438473cbbc63093fb031f3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/quant-ph/9903052",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0ac8c7367b17f1b3ab5cb997ba81a8145b154e3f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
238680445 | pes2o/s2orc | v3-fos-license | A Primary Cardiac Osteosarcoma: Case Report and Literature Review
Background: Primary cardiac osteosarcoma is an uncommon condition, which is challenging to diagnose and rarely reported. Case presentation: Here, we present a previously healthy 27-year-old patient referred to our hospital with a long-term fever. Echocardiography and thoracic computed tomography (CT) revealed two masses in the left atrium (LA) and left ventricle (LV), and surgical excision of the masses revealed cardiac high-grade osteosarcoma. Unfortunately, the left ventricular tumor recurred three months later, and the patient was administered periodic chemotherapy. Then, chest CT showed that the left ventricle was almost occupied by the tumor, which also involved the left ventricular outflow tract; the left atrial appendage mass increased significantly, and multiple small metastatic nodules appeared in both lungs. The patient is still in follow-up. Conclusions: The prevalence of primary cardiac osteosarcoma is very low, and simultaneous involvement of the LA and LV is exceedingly rare. This patient was hospitalized in our hospital complaining of a long-term fever of unknown origin, which has never been reported in the previous literature. Our case report findings suggest that primary cardiac osteosarcoma should not be ignored in the differential diagnosis of fever of unknown origin.
Introduction
Primary cardiac osteosarcoma is an exceptionally rare intracardiac tumor with very low prevalence [1,2]. At present, there are only a few reports about cardiac osteosarcoma, which progresses rapidly and causes death by blocking blood circulation or directly damaging cardiomyocytes [3]. Given the rarity of this subtype, its clinicopathological features and treatment options are still unclear. Therefore, we describe a case of primary cardiac osteosarcoma of the LA and LV to assist clinical diagnosis and treatment of this disease.
Patient Information
A 27-year-old man presented to our hospital complaining of recurrent fever; he had no previous history of illness. On physical examination, the heart rate was 137/min and his blood pressure was 125/89 mmHg. Laboratory tests, including blood routine, liver enzymes, and biochemistry, were in the normal range.
Clinical Findings
Cardiac auscultation revealed a systolic murmur of grade 2/6 at the border of the left sternum.
Diagnostic Assessment
Chest CT confirmed two solid masses in the LA and LV, respectively, with calcification and stenosis of the cardiac cavity (Fig. 1). Transthoracic echocardiography presented a moderate echo mass in the LA, extending to the left atrial appendage, accompanied by mitral valve thickening and moderate regurgitation (Fig. 2a). An irregular, lobulated, moderate echo mass measuring 29x22mm was detected in the LV outflow tract (Fig. 2b), leading to severe aortic stenosis during cardiac systole. The patient was suspected of suffering from myxomas in the cardiac cavity and subsequently underwent resection of the two tumors.
Therapeutic Intervention
During surgery, a tumor of 4.5x2.5x1.5cm in size was detected in the posterior wall of the LA and the left atrial appendage; it involved the mitral annulus and posterior leaflet, resulting in mild stenosis of the mitral orifice.
The left ventricular mass, loose in texture and 6.0x4.5x2.0cm in size, was attached by a broad base to the interventricular septum.
The presence of cartilage, neoplastic osteoid, and spindle cells with obvious cellular atypia was confirmed through histopathological examination, consistent with a high-grade sarcoma (Fig. 3).
Immunohistochemical study of the tumor cells was positive for vimentin and CDK4, with a 60% positive rate of Ki-67 (Fig. 4). To further exclude the possibility of metastatic osteosarcoma, a bone scan was performed after the operation, which showed no evidence of distant metastasis or any other primary tumor. All these findings confirm the final diagnosis of an osteosarcoma originating in the heart.
Follow-up and Outcomes
The patient was discharged on the 14th day after the operation. Three months later, the patient complained of dyspnea during exercise and was still suffering from fever; echocardiography showed local recurrence of the left ventricular tumor. Therefore, the patient received 5 cycles of chemotherapy with doxorubicin, dacarbazine, and teriprizumab. After chemotherapy, chest CT showed that the left ventricle was almost occupied by the tumor, which also involved the left ventricular outflow tract; the left atrial appendage mass had increased significantly, and multiple small metastatic nodules had appeared in both lungs (Fig. 5). The patient remains in our follow-up.
Discussion and Conclusions
Most heart tumors are metastatic, with an incidence more than 20-40 times higher than that of primary cardiac tumors [1]. Primary cardiac tumors are rare, with a prevalence at autopsy reported to be 0.001-0.030% [2]. Among all primary tumors, malignant ones account for about 25% [4]. About 95% of primary cardiac malignancies are sarcomas, and osteosarcomas account for less than 10% of these [5]. In a literature study, 45 papers concerning primary cardiac osteosarcoma were included, and a total of 53 patients were reported [6]. Given its rare occurrence, there are no standard guidelines on its etiology, pathogenesis, and treatments.
The clinical symptoms of primary cardiac osteosarcoma mainly depend on the size and anatomical location of the tumor [7]. The common symptoms include dyspnea, chest pain and syncope, mainly related to heart failure and obstruction [3]. Fever was the main complaint of the current patient when he was admitted to our hospital, characterized by persistent high fever. Only one previously reported case mentioned fever during the disease course [8]; it was attributed to an upper respiratory tract infection, and the temperature returned to normal on admission. We hypothesized that the current patient might have concurrent infective endocarditis. However, as the patient had been treated with antibiotics many times before admission, no bacteria were detected by blood culture. Importantly, this reminds us that long-term fever of unknown origin or infective endocarditis can also be one of the symptoms of primary cardiac osteosarcoma, which has never been reported before.
In contrast to metastatic tumors, primary cardiac osteosarcomas most commonly involve the LA [9]. However, only one patient with simultaneous left atrial and ventricular involvement has been reported [10]. The difference is that in our case, the two tumors in the cardiac cavity are anatomically independent of each other, indicating double primary malignancies. The similar clinical symptoms and anatomic location can lead to confusion between primary cardiac osteosarcoma and atrial myxoma; however, some characteristics are helpful in distinguishing them [11,12]: myxomas tend to have a short, broad base attached to adjacent sites, are pedicled and soft, and often have some hemorrhagic and necrotic areas, whereas osteosarcoma generally originates from the non-septal atrial wall and often tends to invade the pulmonary vein.
Because primary cardiac osteosarcoma is exceptionally rare, there are few studies on its pathogenesis. Terje Forslund [13] proposed that some genes may regulate the disease, such as an aberrant PI3K-Akt-NF-kappaB pathway and overexpression of the tbhs3 and erbB2 proteins. However, the death of that patient did not allow any further tests. Based on this study, we plan to conduct whole-genome sequencing for the current patient to further clarify the potential pathogenesis of the disease.
Osteosarcoma has a high degree of malignancy, and recurrence and metastasis are its basic characteristics [14], leading to the low survival rate of patients. The average survival time of osteosarcoma patients ranged from 3 months to 1 year [15]. Our patient had a recurrence of left ventricular tumor 3 months after the operation and is currently receiving chemotherapy.
Given the low incidence of osteosarcoma in the LA and LV, there are no standard treatment guidelines.
Complete surgical resection is considered to be the optimal therapy for the tumor [16]. Due to the low tolerance of the myocardium to chemoradiotherapy, the role of chemoradiotherapy is controversial [9]. Building on the previously reported case [13], future work could screen for abnormal gene expression or molecular pathways in these patients in order to develop prospective targeted therapies.
We report a case of primary osteosarcoma in the LA and LV; the patient underwent complete tumor resection and then received periodic chemotherapy owing to postoperative recurrence. This is the first reported case with long-term fever as the main symptom on admission, and the second reported case with simultaneous atrial and ventricular involvement, enriching our understanding of the disease. Next, we intend to use tissue samples from the patient for genetic testing to seek potential targeted therapy and provide new options for treating the disease.
In short, when confronted with difficult diseases, especially rare conditions, we should pay attention to unusual clinical manifestations. We should also focus on the correlations among clinical, pathological, and imaging evidence.
Patient Perspective
After the operation, the fever symptoms were relieved and the patient was satisfied with the surgical treatment. Unfortunately, the left ventricular tumor relapsed and fever reappeared. After evaluation, the patient received periodic chemotherapy. At present, the patient's cardiac tumor has increased significantly and his body temperature has still not returned to normal, but his vital signs are stable.
Declarations
Ethics approval and consent to participate: Not applicable.
Consent for publication
Written informed consent was obtained from the patient for publication of this case report.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Figure 1
Chest CT shows two solid masses in LA (a, white arrow) and LV (b, red arrow), respectively.
Figure 2
Echocardiography of the patient's heart: a A moderate echo mass in the LA, extending to the left atrial appendage accompanied by mitral valve thickening and moderate regurgitation (red arrow); b An irregular moderate echo mass measuring 29x22mm in size, lobulated, was detected in the LV outflow tract (white arrow).
Figure 4
Immunohistochemical staining: the neoplasm was stained with antibodies to vimentin (a) and CDK4 (b), with a 60% positive rate of Ki-67 (c). | 2021-09-25T16:09:33.025Z | 2021-08-24T00:00:00.000 | {
"year": 2021,
"sha1": "1a28cbf235f3a8fb71b09223500c53dd5f51c625",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-815337/v1.pdf?c=1631902869000",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1341cfe460c8b4524f103ff7215d39d23e7ada67",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244091797 | pes2o/s2orc | v3-fos-license | Practitioner and Service User Perspectives on the Rapid Shift to Teletherapy for Individuals on the Autism Spectrum as a Result of COVID-19
Prior to COVID-19, research into teletherapy models for individuals on the autism spectrum was slowly progressing. Following the onset of COVID-19, teletherapy became a necessity for continuity of services; however, research was still emerging on how to translate best practice autism support to the online environment. The aim of this research was to gain insight into the rapid shift to teletherapy for practitioners and service users and the implications for the broader disability sector. Survey responses were collected from 141 allied health practitioners (speech pathologists, occupational therapists, psychologists, educators, and social workers) from four Australian states and territories. A total of 806 responses were collected from service users following individual teletherapy sessions. Five themes were identified during the qualitative analysis: (1) technology—love it or hate it; (2) teletherapy as a "new normal"; (3) short term pain, for long term gain; (4) the shape of service delivery has changed; (5) is teletherapy always an option? Data from the quantitative analysis provided further insights into the first two themes. While COVID-19 has brought forward significant advances in telehealth models of practice, what is needed now is to delve further into what works, for whom, and in which context, and explore the potentiality, efficiencies, and scalability of a post-pandemic hybrid approach. This will inform practice guidelines and training, as well as information for service users on what to expect.
Introduction
Following the onset of COVID-19 in Australia in March 2020, when public health orders were implemented to reduce social contacts, a lack of access to in person face to face therapeutic support services impacted people on the autism spectrum. In geographically isolated regions and urban areas, teletherapy became not just a choice but a necessity for the continuity of services [1]. Teletherapy incorporates the use of telecommunications such as telephone, email and video conferencing to deliver therapeutic supports to individuals at a distance from the therapy provider [2]. The National Disability Insurance Scheme (NDIS) in Australia [3] operates on a reimbursement model that has supported telepractice, however this option has not been taken up extensively within the scheme. Prior to COVID-19, research into teletherapy models of practice for individuals on the autism spectrum in geographically isolated regions was emerging, with promising findings of benefits for children with autism and their families. In a systematic review in 2010, eight studies reported favourable outcomes [4] and, similarly, 14 studies in 2018 were positive about the impact of teletherapy [5]. However, both noted the lack of research on direct interventions or assessment provided by clinicians with children and young people with autism. Teletherapy, as part of the broader terms of telehealth and telepractice, has been used successfully for a range of autism-specific interventions, including speech and language development [6], behaviour support [7][8][9][10], parent-mediated social-communication interventions for young children [11][12][13], and classroom coaching for educators of autistic students [14]. While reviews of the literature on teletherapy have suggested that the platform itself has not been a barrier for successful outcomes [15], the research is still emerging on how practitioners can translate best practice autism support to the online environment [5]. In a pilot study of multidisciplinary teletherapy services, Johnsson, et al. [16] found that some areas of practice were perceived by practitioners as being more difficult to adapt to online delivery, for example, fine and gross motor goals.
Many of the more recent studies on teletherapy have built on past research on training parents and practitioners in the implementation of Applied Behaviour Analysis for children on the autism spectrum [17][18][19][20][21], with positive findings supporting the inclusion of teletherapy as part of service delivery. These studies, however, have little qualitative data exploring the participant and practitioner perspectives of this adapted model of practice to flesh out the experience of teletherapy.
In a recent study of an occupational therapy teletherapy intervention for children on the autism spectrum, Wallisch et al. [22] found that as well as the bonus of less travel and increased access to supports, parents also learned to problem solve new situations, had the time to reflect on situations with the occupational therapist, and gained confidence in trying new strategies. One of the key benefits reported by parents was being able to place therapy in the contexts, routines, and situations of the natural family environment, rather than the clinic [22]. Benefits in travel and family involvement when comparing remote and face to face early intervention programs have been found; however, parents, remote therapists and local support team members have highlighted the value of initial face to face support [23]. Conversely, others have reported that remote delivery was limited in connecting the therapist to the child's local context and in capturing non-verbal communication [24].
The observation of client and parent behaviour is an essential component of effective teletherapy services, and without proper set up and management by the practitioner, it may become a barrier to the delivery of good quality services [25]. Practitioners require significant training and support to adapt their practice to a teletherapy model, and this has implications for a practitioner's willingness to adopt this model of practice to achieve quality outcomes and role satisfaction [16].
Recent preliminary results from research conducted during the COVID-19 lockdowns [26] indicated that the loss of in person face to face support for adults on the spectrum has significantly impacted their mental health, and families have struggled to support their children to engage with their therapist via teletherapy. Another COVID-19 study from the United States [27] found that the use of coaching methods as part of telepractice for families of children with Autism Spectrum Disorder had positive outcomes, improving daily living skills. Given the inconsistencies in the literature on the efficacy of a teletherapy model for supporting individuals on the autism spectrum, more research is needed to understand practice in an online environment from those who have had this experience. Following the rapid shift to online service delivery, primarily offered as video conferencing therapy sessions, in late March 2020 due to COVID-19 public health recommendations, Autism Spectrum Australia (Aspect), an autism-specific, not-for-profit organisation, invited their allied health practitioners and service users, including individuals on the autism spectrum and/or their families engaged with teletherapy services, to complete a voluntary survey. The aim of this research was to add to the emerging discourse around teletherapy for individuals with autism, and to gain point-in-time, practice-based insights into teletherapy, both successes and barriers, based on the COVID-19 pivot to teletherapy. It is anticipated these insights may contribute to the integration of teletherapy as part of service design for the broader disability sector.
Participants
Practitioner participants included speech pathologists, occupational therapists, psychologists, educators, and social workers from New South Wales (n = 100), Australian Capital Territory (n = 24), South Australia (n = 6), and Victoria (n = 11) at Aspect who were delivering NDIS funded teletherapy services to participants on the autism spectrum and/or their caregivers.
Service user participants included individuals on the autism spectrum and/or their caregivers who had received a teletherapy service from Aspect and were in locations across Australia. Throughout the data collection period, a total of 924 individuals on the autism spectrum and/or their caregivers received teletherapy services. Due to the anonymous nature of the survey, we were unable to extract any identifying information about service users.
Design
The study used a concurrent mixed methods design [28] to triangulate information from both service users and practitioners via quantitative and qualitative data.
Practitioners and service users were invited to complete an anonymous survey via Survey Monkey on their experiences of teletherapy (See Table 1).
Practitioner survey questions:
2. Comment on why you gave the above rating (Comment box)
3. What are the benefits of delivering a teletherapy service? (Comment box)
4. What are the challenges of delivering a teletherapy service? (Comment box)
5. Do you intend to continue to use teletherapy as part of your services? (Yes, no, unsure, comment box)
6. Do you have any families who have requested to continue a teletherapy or hybrid model of support post COVID-19? (Yes, no, unsure, comment box)

Service user survey questions:
2. Can you please rate your satisfaction with the support provided during your teletherapy session? (Likert scale 1-5)
3. Any comments? (Comment box)

A total of 164 practitioners were delivering teletherapy services for Aspect at the time and were invited by email to voluntarily complete the survey. Responses were collected between 11 and 18 May 2020, approximately two months after all in person services changed to teletherapy in response to COVID-19.
Service users were invited to complete a short survey following each teletherapy session (See Table 1). The questions were designed to be a brief snapshot of service user experience of the teletherapy session during COVID-19 lockdowns. These responses were anonymous and service users could respond on multiple occasions. Data were collected from service users between 2 April and 7 September 2020. Data collection received retrospective Human Research Ethics Committee approval at the University of Sydney on 16 September 2020 (2020/457).
A total of 141 individuals completed the practitioner survey (86% response rate), and a total of 806 responses were collected from service users following teletherapy sessions. The service user response rate was unknown due to the nature of the anonymous survey and the ability for participants to respond on multiple occasions.
Data Analysis
Quantitative data were downloaded from Survey Monkey and entered into SPSS (Version 24, IBM Corp., Armonk, NY, USA) [29] for analysis. Descriptive statistics were calculated for practitioners' ratings of their experience of providing teletherapy services, service users' ratings of satisfaction with the support provided, and service users' ratings of technical quality. A Pearson product-moment correlation [30] was computed between service users' scores on the level of support provided and technical quality.
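As an illustration of this quantitative step, the sketch below computes the same kind of descriptive statistics and Pearson product-moment correlation in Python rather than SPSS; the rating arrays are invented placeholders, not the study data.

```python
# Sketch of the quantitative analysis described above: descriptive
# statistics for Likert ratings and a Pearson product-moment correlation
# between two sets of service-user ratings. The values below are
# illustrative placeholders, not the study data.
import numpy as np
from scipy import stats

support = np.array([5, 4, 5, 3, 4, 5, 2, 4, 5, 4])  # satisfaction with support (1-5)
quality = np.array([5, 4, 4, 2, 4, 5, 2, 3, 5, 4])  # technical quality (1-5)

print(f"support: mean={support.mean():.2f}, sd={support.std(ddof=1):.2f}")
print(f"quality: mean={quality.mean():.2f}, sd={quality.std(ddof=1):.2f}")

r, p = stats.pearsonr(support, quality)  # returns coefficient and p-value
print(f"Pearson r = {r:.2f}, p = {p:.4f}, n = {len(support)}")
```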
Qualitative data were exported and analysed using NVivo (Version 11, QSR International Pty Ltd., Melbourne, VIC, Australia) [31]. The first author conducted a thematic analysis of the data following the six steps outlined by Braun and Clarke [32]. The author began by becoming familiar with the data before generating initial codes, then searched for themes in the codes and iteratively reviewed these themes, and finally defined and named them. The second author then reviewed the full data set to reach consensus that the codes and themes accurately represented the data.
Results
The qualitative and quantitative data from both participant groups have been combined; five themes were identified during analysis: (1) technology-love it or hate it; (2) teletherapy as a "new normal"; (3) short term pain, for long term gain; (4) the shape of service delivery has changed; and (5) is teletherapy always an option?
Technology-Love It or Hate It
A total of 782 ratings of the technical quality of the teletherapy session were collected from service users, and 777 ratings were collected on the service user perception of the level of support provided during the teletherapy session (see Table 2).

Table 2. Service user ratings following teletherapy sessions.

Rating | Average rating (scale 1-5)
Satisfaction with the support provided | 4.5
Technical quality | 4.0

There was a strong positive correlation between technical quality and satisfaction with the level of support provided (r = 0.64, n = 772, p < 0.001).
Despite the above-average ratings for technical quality of the teletherapy sessions, the qualitative analysis identified a significant number of comments from both service users and practitioners related to technical difficulties and barriers they experienced during teletherapy sessions. The most common barrier reported by practitioners was families' access to, and confidence in using, appropriate technology. Poor internet connection and speed were frequently reported by both service users and practitioners, as were difficulties with connecting to software platforms and with their audio-visual quality. Both practitioners and service users reported that this impacted the success of the session and levels of engagement.
Internet connection issues can significantly impact on rapport building and session engagement and cause overall frustration at times. Practitioner 63

Technical difficulties made it difficult to stay on task and to make the most of this session. Service User response 59
Teletherapy as a "New Normal"
Despite the qualitative reports of technical difficulties, practitioners also reported a significant amount of positive feedback from participants and their families. Some families indicated that teletherapy was more engaging and just as effective for their child as in-person services. Practitioners reported that some clients indicated that they were more comfortable with the online platform and found it less confronting than in-person therapy.
Some of my clients are anxious about meeting new people/having people come into their home, so teletherapy is less invasive for them. Practitioner 46

Many families valued the continuity of service they were able to receive throughout the pandemic and expressed interest in continuing to receive service online in the future.
Would be super useful to have the teletherapy option even post the restrictions-I find it so much easier to integrate into my work schedule. Service User response 465

A majority of practitioners indicated that they intended to continue to use teletherapy as part of their services, and that nearly half of the families on their caseloads had requested to continue a teletherapy or hybrid model of support post COVID-19 (see Figure 1).
Short Term Pain, for Long Term Gain
The majority of practitioners were positive about their experience of providing teletherapy services with both the mode and median rating being 4 out of 5. For practitioners, the reduction in travel was identified as one of the most significant benefits of moving to a teletherapy model of service. This was reported to have had a positive impact on practitioner productivity and the ability to see more clients on the waitlist including those in rural and remote areas. Teletherapy also resulted in service users spending less on travel, therefore releasing more funding in their budget to spend on therapy sessions.
Potential for more efficiency by reducing travel time (which could sometimes account for 3+ hours of my day). Practitioner 30
More hours of clients plans being dedicated to therapy rather than travel. Practitioner 50

At the same time, there was a significant increase in planning and preparation time as practitioners learned to navigate the online space for their therapy practice. This planning and preparation time had not been anticipated, was not included in the participants' service agreements, and was therefore mostly unbillable during this period.
It takes a significant amount of additional unbillable time per session for me to source and/or come up with appropriate and engaging activities to make Teletherapy sessions interactive and fun for my clients. Practitioner 29
The Shape of Service Delivery Has Changed
While practitioners reported that some clients identified as being more comfortable with the online platform and found it less confronting than face to face therapy, many practitioners reported difficulties engaging their clients, particularly young children, through the screen.
It has been more challenging for early intervention clients as parents sometimes have different expectations (e.g., for them to sit in front of the computer and for the therapist to do 1:1 therapy with the child). Practitioner 95

It's new and we will need to work together to find new things to make this work. It's a challenge doing therapy this way. Service User response 183

This, however, had the indirect benefit of increasing parent involvement and a greater uptake of a capacity-building coaching approach. Practitioners reported the benefit of higher levels of engagement with parents and a greater ability for parents to become heavily involved in therapy sessions and intervention goals in the child's natural environment.
Parents are becoming more confident and even providing their own strategies on therapy interventions based on their increased involvement. Practitioner 127
Is Teletherapy Always an Option?
Barriers to accessing teletherapy reported by practitioners included families having English as a second language, families with very little access to technology and/or high-quality internet, and families navigating the pandemic while juggling the added pressures of home schooling and working from home.
Minus one star was due to working out logistics with families who were stressed or didn't have adequate technology for videocall. Practitioner 75

Most of my families with high needs children have opted not to engage as they did not feel they had the capacity to coordinate this as well as "life". Practitioner 29

Not all clients are suited to tele therapy-those with English as a second language, those who do not have access to technology, those with significant mental health issues or those with intellectual disabilities who find it hard how to access tele services if they don't have a support person living with them. Practitioner 134

The shift to delivering services via telepractice was also reported to be positive for practitioners where other dimensions of complexity were present, such as safety risks for the staff member due to exposure to COVID-19 or where behaviours of concern were present.
Can continue to provide sessions if someone in the household is unwell without putting ourselves at risk. Practitioner 125

No risk for therapist supporting complex PBS (Positive Behaviour Support) caseload. Practitioner 79
Practitioners were less convinced about the suitability of teletherapy for goals that relied heavily on observation, prompting and modelling, e.g., Augmentative and Alternative Communication (AAC), social skills groups and peer-mediated play, and physical skills such as tying shoelaces. In these instances, practitioners perceived a need for physical presence to develop and implement strategies. Additionally, practitioners supporting individuals with behaviours of concern reported barriers in being able to adequately observe the participants' behaviours in their natural environment.
Many OT (Occupational Therapy) goals are not as effective over teletherapy such as dressing, ADL's, motor skills as it is modelling and observation of these skills which makes therapy most effective, which is very hard to do over a camera. Practitioner 93

Trialling AAC is more difficult (esp on an iPad). Practitioner 113
Discussion
While teletherapy is far from a "one size fits all" approach, as reported by Pellicano et al. [26] in their study of the experiences of individuals on the autism spectrum during COVID-19, this study adds insights from a large group of service users who valued the continuity of service they were able to receive. In addition, a significant proportion of practitioners reported families who had requested to continue a teletherapy or hybrid model of service delivery post-COVID, and three-quarters of the practitioners in our sample expressed an interest in continuing to use teletherapy as part of their service delivery model.
Therefore, the rapid upskilling and shift online may be seen as a positive for these practitioners, who have added teletherapy to their service delivery skill set. While training and support will be a constant need as technology continues to evolve and we learn more about a teletherapy model of practice, the experiences of this group of service providers and service users indicate an ongoing place for teletherapy for individuals on the autism spectrum. This is in line with longstanding recommendations on the potential of teletherapy as a means of increasing access to therapy services in rural and remote areas [33,34]. Overall, these findings identify teletherapy as part of the "new normal". Telepractice literature has primarily focussed on the feasibility and acceptability of a teletherapy approach, often in comparison to in-person supports. Second-generation teletherapy research is required to explore the efficacy of a hybrid approach, which may represent increased efficiencies in current service design [35]. The hybrid model has already shown promise in improving mental health outcomes in rural areas [36] and support for caregivers of children with a diagnosis of attention deficit hyperactivity disorder [37]. In a commentary on a post-pandemic hybrid approach to service delivery in India, Westwood [35] suggests that the remaining challenges include digital education, the integration of technology into current care pathways, and the creation of seamless systems.
Advances in hardware, software, and Internet speed have greatly improved the technical reliability, scalability and quality of teletherapy services [38]. While ratings for both technology and support provided were above average, the strong positive correlation between the two suggests that efficient access to and use of technology for teletherapy may affect the perceived level of support from the service user's perspective. This finding aligns with broader research indicating that technology has not been a barrier to successful outcomes [15]. While the current study cannot correlate technical issues with outcomes, and represents the views of a narrow participant group, the results do suggest that an initial investment in reliable technology needs to be made by both practitioners and service users to work towards a successful teletherapy service. For practitioners, this may mean ensuring they have access to a stable platform, reliable internet connections and training to troubleshoot any technical issues. For service users, there may be a need for training and ongoing support from the service provider's administrative support team to navigate the shift to teletherapy prior to beginning a teletherapy service with the practitioner. Consideration of the internet access available to service users will influence choices about teletherapy options and potential locations for access in community settings such as schools and libraries. Service users and service providers can engage in these conversations as part of establishing service agreements to ensure shared expectations are developed and any infrastructure barriers are addressed [24].
Video conferencing was the primary mode of service delivery offered to Aspect staff and clients, the participants in this research, and may be the preferred mode of teletherapy delivery, but this was not explored as part of this study. However, other studies have recommended offering multiple modalities, such as telephone calls, emails, and/or text messages, to improve outcomes and engagement [24]. Web-based apps and resources such as those found on Boom Learning™ and Everyday Speech™ may further support engagement in teletherapy sessions. McCrae et al. [39], in their study of cognitive behavioural treatment for childhood insomnia in children on the autism spectrum and their parents, found that practitioners used email to connect with families between sessions and that almost half of families suggested a "booster call" as an addition to treatment. Similarly, Lerman et al. [25] trained staff to adopt and implement telepractice observation via multiple modalities, for example via audio and video recordings, and reported that this may go some way towards increasing the practitioner's ability and confidence in observing the participant.
Reduction in travel has been consistently reported as one of the major benefits of moving to a teletherapy model of service, allowing more time to be spent in direct support [7,16,40]. While the current study supported these findings and their impact on increasing therapy hours, we also found that the rapid shift to teletherapy resulted in a significant amount of time spent on planning and preparation for teletherapy sessions. Because of the rapid shift to online service, these hours were unexpected and therefore unbillable under the NDIS service agreements [24]. Future planning when developing service agreements may need to account for preparation time when establishing a teletherapy service, allowing practitioners to adapt and individualise best practice autism support for the online environment.
As reported by Johnsson et al. [16], this preparation time may reduce as practitioners' confidence and competency in adapting their practice for the online environment increase. Our findings reflect the pandemic context, which required a rapid shift to teletherapy and did not allow for the upskilling of practitioners, or the preparation of service users, prior to this substantial shift to online service delivery. This finding suggests that practitioners should be given time to undergo ongoing teletherapy training and support in order to continue to adapt their model of practice. This may include discipline-specific modules on adapted practice, practical support for resource adaptation, video and role play sessions, online observation sessions, and joint sessions. Due to the rapidly evolving nature of teletherapy, such training may need to be updated regularly to stay abreast of teletherapy developments and advances in technology.
Consistent with Wallisch et al. [22], practitioners reported that one of the direct benefits of delivering services via teletherapy to children on the autism spectrum was the increase in parent involvement and capacity to implement support within the individuals' everyday environments. However, we did find that some practitioners had difficulties adjusting to this new way of delivering supports. Lawford et al. [41] indicated a need for practitioner development and supports to assist with decision making and adapting practice. While the practitioners in the Wallisch et al. [22] study were given training and support to carry out their intervention, participants in this study were not afforded this opportunity in the 2020 COVID-19 context of a rapid change to teletherapy service delivery. Similar to previous recommendations [24], we found that telepractice relies heavily on a coaching model of practice; therefore, innovation and effective training are indicated for staff to shift to a coaching approach that supports families in implementing family-centred goals and strategies.
At the onset of the pandemic, when all in person services were suddenly replaced by teletherapy, it was to be expected that some clients and families would experience difficulty with this shift. While the shift was reported by many to be positive in lowering health risks for all and allowing services for families to continue, we found that some practitioners reported barriers in supporting fine motor skills due to limitations on the ability to observe, prompt and model. This is consistent with previous research [16]; however, telepractice is still an emerging model of delivering therapy supports, and techniques for increasing observation, prompting and modelling may be developed using portable cameras and a more agile approach to therapy sessions as practitioners navigate the possibilities of a teletherapy context. There are, however, still many questions left unanswered about whether teletherapy is always an option. For example, there is no research to date on the role of interpreters in a teletherapy service and their impact on access and outcomes for individuals on the autism spectrum, and their support teams, who are receiving services in their second language. During this time of unprecedented change, there has been a rapid upskilling of practitioners in adapting in person services for the online environment. There is a need for further targeted research to identify and mitigate the barriers reported in delivering a teletherapy service.
The brief set of questions asked of service users and practitioners was intentionally broad to allow them to speak of their experience of the rapid shift to teletherapy as a model of practice. Limitations, however, should be noted: while the results of this study may be of interest to the broader disability sector, the sample is not representative of the autism service delivery sector, and other organisations and service users may differ in their experiences. Due to the anonymity and brevity of the surveys, the results cannot be discussed in relation to specific practitioner disciplines (e.g., speech pathologist, occupational therapist, psychologist), or to the differing perspectives of an individual on the autism spectrum versus their caregiver. The perspective of service users who chose to decline teletherapy services was also outside the scope of the current study. Online fatigue related to COVID-19 lockdown orders, and the subsequent transition of all services online, may also have played a role in the findings. Therefore, the findings from this brief point-in-time study should be taken as a starting point, at a service-wide level, for further exploration in a specific context. Further research is needed to confirm these results in the broader sector, to understand the unique experiences across allied health disciplines, and to investigate the differences between individuals on the autism spectrum receiving teletherapy and their caregivers.
Conclusions
This study contributes to the discourse in understanding adaptations and gaps in practice in moving to an online model, the barriers to effective service provision (including technology, family complexities and specific goals), and the shift in practice models. While COVID-19 has brought forward incredible advances in telehealth models of practice in a short period of time, we have also learnt a considerable amount about issues in digital accessibility [42]. What is needed now is to take stock and understand what has worked, for whom, and in which context. To look towards the future of a post-pandemic hybrid approach, we need to delve further into the experiences of a broad range of individuals on the autism spectrum, their caregivers, and their local support teams. This will help to address, and potentially break down, the barriers seen in this preliminary study, and to create a sustainable model of service delivery that augments existing in person services for a variety of individuals with diverse needs. We also need to harness the high level of innovation that has taken place within this timeframe to develop practice guidelines and training for new graduates and practitioners interested in incorporating this model of service delivery as part of the suite of options for NDIS participants.
"year": 2021,
"sha1": "c468a158b0b5c7cf8380cd3d890564b029cd6eaa",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/18/22/11812/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "69675ff53781f8a2ad10de2165440d9fc78c66f2",
"s2fieldsofstudy": [
"Medicine",
"Psychology",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Child mortality is (estimated to be) falling
Undoubtedly child mortality is falling, and the world should be proud of this progress. Within the past 100 years, expectations around child mortality (and subsequently family size) have changed substantially, starting in countries that industrialised earlier and more recently pervading most of the world. Li Liu and colleagues [1], in The Lancet, describe detailed findings on the latest state of global child mortality. Naturally the levels of detail (by location, time, age, and cause of death) at which these findings can be presented in a single scientific article are limited, although a finer level of detail is available as online material. Liu and colleagues [1] report that in 2015, among the 5·9 million under-5 deaths, 2·7 million are now estimated to occur in the narrow time window of the neonatal period (first 28 days of life), mainly around delivery or due to subsequent infections. They report that the leading under-5 causes of death were preterm birth complications (1·055 million), pneumonia (0·921 million), and intrapartum-related events (0·691 million). Sub-Saharan Africa and Asia account for more than 80% of all under-5 deaths, with post-neonatal deaths mainly attributable to childhood infections and injuries. Reductions in mortality from pneumonia, diarrhoea, neonatal intrapartum-related events, malaria, and measles were responsible for 61% of the total reduction of 35 per 1000 livebirths in under-5 mortality rates in 2000-15.
These headline outcomes were also reflected closely in the recently updated Global Burden of Disease Study estimates [2]. Seeing different approaches leading to very similar findings in the two sets of estimates suggests high covalidity. All of these headline findings invite further exploration of the underlying detailed resources. Estimated numbers of child deaths are important, but are not very useful unless they are continually probed, interpreted, and applied into health policy solutions.
The UN Millennium Development Goals (MDGs), specifically MDG 4, have rightly focused considerable attention on child mortality in recent years [3]. Although Liu and colleagues [1] acknowledge that the goal of a two-thirds reduction in under-5 child mortality from 1990 to 2015 did not happen globally, more nuanced consideration needs to be applied to understand changing patterns of child mortality. Global goals and targets tend to be set on a one-size-fits-all basis, as was the case with the MDGs. However, there are notable exceptions. In 1990, South Africa had the lowest under-5 mortality rate in the sub-Saharan region, then encountered a massive HIV pandemic, but subsequently achieved a substantial improvement in child mortality towards the end of the MDG period. Using in-country data to reveal the details, this was dubbed "a successful failure" in terms of MDG 4 [4]. Additionally, country-level estimates could well obscure major geographical or socioeconomic inequalities in mortality that might well exceed intercountry differences.
In view of the substantial efforts that go into assessing global patterns of childhood mortality, it is important to consider additional creative ways of using and interpreting such findings. As well as the obvious need to monitor levels and trends of mortality over time and hold governments to account, mortality rates might also provide crucial pointers to other health and disease issues at the population level. Early life exposures are critically important [5] and can exert epigenetic changes that affect the whole life-course, as expressed in the Developmental Origins of Health and Disease (DOHaD) hypothesis [6]. Ideally, individual life-course information linking community and health facility events is needed to understand such processes, but rarely exists in low-income and middle-income countries [7]. Clearly, early childhood death data cannot substitute on an individual basis for life-course details. However, each early child death probably reflects a similar set of exposures among a wider surviving peer-group, and making that connection could enable the application of indirect analytical methods, such as longitudinal estimates of population-attributable risks, to elucidate the health impacts of early stresses on later life.
In considering Liu and colleagues' work [1], the world should not be proud of the persisting technical requirement to say that child mortality is estimated to be falling. Of the estimated 6 million under-5 child deaths in 2015, only a small proportion were adequately documented at the individual level, with particularly low proportions evident in low-income and middle-income countries, where most childhood deaths occur. Liu and colleagues [1], as well as other international groups [2], have made impressive methodological progress in applying increasingly sophisticated mathematical and computing techniques to the scant available data on child mortality, to arrive at reasonable estimates. Nevertheless, the proportion of child lives and deaths individually documented has not increased nearly as rapidly as (estimated) rates of child mortality have decreased. Despite the global information revolution (a single modern 256 GB laptop has enough capacity to hold a 250-character record on each of the 670 million under-5 children in the world, with space left over for full details of each of the 6 million annual under-5 deaths), such data are simply neither collected nor available [7]. That 6 million under-5 children continue to die every year in our 21st century world is unacceptable, but even worse is that we seem collectively unable to count, and hence be accountable for, most of those individual deaths. A suggestion 5 years ago was that the MDGs lacked the hypothetical MDG 0, to increase coverage of individual vital registration beyond 95% [8]. Instruments and expertise to expand civil registration and vital statistics (CRVS) still need much wider application [9]. Automated verbal autopsy needs deploying as a routine part of CRVS, to track individual cause of death and decrease dependence on estimates [10]. Disappointingly, the new Sustainable Development Goals (SDGs) do not explicitly mandate registering and counting major life events as the foundation for monitoring human health and development [11]. Target 16.9, which calls for universal birth registration by 2030, almost implies by omission that registering other life events is unimportant, although Target 17.19 wishes for improved statistical capacity in general. But when will the world learn that slogans like "Everyone counts, so count everyone" need to translate urgently into large-scale, globally funded actions that are determined to value every individual as the basic unit of observation for understanding and improving global health [12]?

The war in Syria has reached extraordinary levels of human suffering. Millions are displaced within Syria, throughout the region, and into Europe; an inestimable number of people have been killed; and hundreds of thousands are trapped in besieged areas. A popular uprising has been overtaken by a regional stand-off among great powers. The path that led us here calls into harsh light the utter failure of global governance and action to intervene to protect vast populations from the atrocities of war. The promises made and structures established in the wake of World War 2 have been broken, crumbling under political stalemates and lack of leadership at the UN. What is most disturbing to people in the region is the indifference and silence from the major nation states that defined the post-war consensus on law and norms relating to treatment of civilian populations in war.
In this shadow, The Lancet and the American University of Beirut have together established the Commission on Syria: Health in Conflict. The aim of the Commission is to describe, analyse, interrogate, and decry the calamity before us. The lens is health and wellbeing, always a productive way to assess grave issues of high mortality and morbidity, disruptions of home, family, settlement, and environment, and such extensive loss that the future itself is hard to discern. With this Commission, we have embarked on the difficult effort to identify these costs and enumerate them where possible. Hence, the first task ahead is to account for the burden of war. We will also examine the challenges of the international response to the crisis and learn the lessons for future crises. The Commission will develop concrete recommendations to address the unmet current and future health needs, including those related to rebuilding and to strengthening the global health response to political conflict.
At the Commission's first meeting in Beirut, Lebanon, on Dec 1-2, 2016, the participants recognised the terrible global meanings and dismal outlook of the conflict in Syria. But as members of the global health community, we must acknowledge our collective responsibility to respond through what we do best: science and advocacy. In so doing, we hope to advance global research, collaboration, and advocacy on matters of life and death in conflict, certainly at the core of our mission as health professionals in a globalised and increasingly violent world.
Many of the events and facts are widely known, a point that further underscores the enormity of the crime of inaction. The carnage in the cities and villages of Syria has left at least 250 000 people dead, but recent
"year": 2017,
"sha1": "449c9722895a41f97f8706d5b2d0a3d49461163e",
"oa_license": "CCBY",
"oa_url": "http://www.thelancet.com/article/S0140673616321699/pdf",
"oa_status": "HYBRID",
"pdf_src": "Elsevier",
"pdf_hash": "99a5b03320646f43bb6a4a31b7fafcaad8186455",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Productivity of pre-modern agriculture in the Cucuteni-Trypillia area
(Abridged) We present palaeoeconomy reconstructions for pre-modern agriculture, taking the Cucuteni-Trypillia cultural unity (5,400-2,700 BC; modern Ukraine, Moldova and Romania) as an example. The starting point of our analysis is the palaeodiet structure suggested by archaeological data, stable isotope analyses of human remains, and palynology. We allow for the archaeologically attested contributions of domesticated and wild animal products to the diet, develop plausible estimates of the yield of ancient cereal varieties cultivated with ancient techniques, and quantify the yield dependence on the time after initial planting and on rainfall (as a climate proxy). Our conclusions draw on an analysis of the labour costs of the agricultural cycle for both an individual and a farming family. Finally, we put our results into the context of exploitation territory and site catchment analysis. The simplest economic complex based on cereals, domestic and wild animal products, with fallow cropping, appears to be capable of supporting an isolated, relatively small farming community of 50-300 people (2-10 ha). Our results strongly suggest that dairy products played a significant role in the dietary and labour balance. The smaller settlements are typical of the earliest Trypillia A but remain predominant at the later stages. A larger settlement of several hundred people could function in isolation only with technological innovations, such as manure fertiliser and ard tillage. Very large settlements of a few hundred hectares could function only if supported by satellite farming villages and stable exchange networks. We also discuss, quantify and assess strategies to mitigate the risks of arable agriculture associated with strong temporal fluctuations in the cereal yield, such as manure fertilisation and an increased fraction of cereals in the diet combined with producing grain surplus for emergency storage.
The mean settlement area exceeds the median at each stage because there is a relatively small number of exceptionally large settlements that affect the average but not the median area. The difference between the mean and the median areas is not very strong at the earlier stages A-BI but becomes extreme at the later stages. In such cases, the median area best represents a typical site. There is a systematic increase in the size of the settlements, with a maximum during the middle stages. Plant remains identified at the CTU sites in the Ukraine and Moldova show that agriculture was already substantially advanced, even at the early CTU stages. The dominant species of cereals were hulled wheats (Triticum dicoccum Schrank, T. monococcum L. and T. spelta L.), supplemented by naked six-row barley (Hordeum vulgare var. nudum Hook. f. coeleste L.) and hulled barley (Hordeum vulgare). Broomcorn millet (Panicum miliaceum L.) was less common. During later periods, changes are observable only in the dominant varieties of barley: large amounts of naked barley were particularly typical of Trypillia A/Precucuteni sites, but were increasingly replaced by hulled varieties. The list of Trypillia cultigens also included pea (Pisum sativum L.) and bitter vetch (Vicia ervilia L.); pulse seeds are frequently recovered in excavations. The fields were cultivated with antler and stone hoes, which made the soil more friable and thus better prepared for sowing the spikelets of hulled wheats. The use of the ard is suggested both by a find of an antler ard at Grebenukiv Yar (Pashkevich and Videiko 2006) and by cattle bone structures that suggest their use for traction (Zhuravlev 2008). The harvesting technique was probably specially adapted for cutting ears. Low yields, long periods of natural soil regeneration, primitive tools for soil cultivation and harvesting, and the use of undemanding cultigens were the basic features of Early and Middle Trypillia agriculture.
The animal remains identified at the Trypillia sites belong to both wild species (red deer, wild boar, roe deer, elk, etc.) and domesticated species (cattle, pig, sheep/goat and horse); the relative occurrence of species varies significantly from site to site, implying considerable variations in subsistence. Cattle (and possibly horses) were used for transportation and traction as evidenced by bone structures and pottery models of sledges with ox heads found at several sites.
From the early phases, CTU settlements consisted of several one-or two-storey houses, each supposedly inhabited by a single family (sometimes, several families). The population of a typical settlement (estimated to be 50 to 500 people) formed a basic community unit, apparently sharing the ownership of land and other resources. No communal cemeteries are known at the CTU sites from the early and middle periods. From the earliest periods onwards, female effigies were predominant among the portable figurines, possibly symbols of fecundity, as grains of wheat and barley were found included in the ceramic fabric of several figurines at the Luka-Vrublevetskaya site (Bibikov 1953).
There are at least two concepts concerning the origins and expansion of the CTU; in the main, it is viewed as a result of migration from west to east and south. A different viewpoint, particularly popular in the former Soviet Union, stressed the local origin of the CTU, pointing to the Bug-Dniesterian region as the most likely source. Based on the bulk of available evidence, one may consider the initial emergence of CTU sites in the forest-steppe of Eastern Europe as an agricultural colonization, essentially similar to that of the LBK in central Europe, with a complete culture-economic package spreading into a poorly occupied niche at a rapid pace. Similarly to the LBK, a limited impact of indigenous (in the CTU case, the Bug-Dniester) groups is recognised.
Palaeodiet reconstructions
The relative importance of plant food versus domestic animal products and wild meat in the diet of early farming communities remains a subject of active discussion. Stable isotope analysis of human bones by Lösch et al. (2006) suggests that, in the early farming communities of Anatolia (Pre-Pottery Neolithic B, mid-ninth millennium BC), "the contribution of stock on the hoof in the human diet was modest". Low 15 N values in their samples imply the increased consumption of protein-rich cereals and pulses. According to these authors, animal husbandry gained in importance at later Neolithic stages. Bogaard (2004a) concluded, from archaeobotanical evidence, that cereals and pulses provided the bulk of the diet in Neolithic Greece, while livestock provided a vital alternative in the case of crop failure.
In contrast, investigations of Copper Age (early-to mid-fifth millennium BC) cemeteries in Varna I and Durankulak, Bulgaria (Honch et al. 2006), using stable carbon ( 13 C/ 12 C) and nitrogen ( 15 N/ 14 N) isotope ratios, suggest a diet based on terrestrial resources, with a predominance of animal products (meat and/or milk, cheese and other secondary products from sheep/goat). These sites are roughly coeval with Trypillia A. However, the Bulgarian Copper Age sites are more advanced agriculturally. Hence, one might argue that the initial stage of farming at the early Trypillia sites may be structurally closer to the early Anatolian farming with the human diet being essentially based on cereals and pulses, with greater impact of animal husbandry at the later stages. Ogrinc and Budja (2005) perform a similar stable isotope analysis of the animal (both wild and domestic) and human bone collagen as well as of floral remains (mostly wheat, barley and peas) from Ajdovska Jama cave in Slovenia, dated to 6400-5300 years cal BP, i.e., coeval with Trypillia B-C. These authors find convincing evidence for a stable palaeoeconomy during this whole period, based on terrestrial food resources. According to these results, the major diet components were domestic animal products (44%), cereals (39%) and terrestrial wild meat (17%). Bogaard et al. (2007) stress that field manuring can bias the results of such analyses, leading to an overestimation of the contribution of animal products to the diet. However, there is firm archaeological evidence in favour of the importance of animal husbandry in the CTU agriculture. Pashkevich (1989, p. 136) concludes, from palynology data, that land farming and animal husbandry were equally important at the Maydanetske settlement.
As a plausible estimate and the starting point of our discussion, we assume that domestic animal products and cereals each provided 40% of the food consumption of the CTU population, with the remaining 20% coming from hunting. The meat weight and calorific value of the hunted animals (mostly red deer, roe deer and wild boar in Trypillia) can be found in Jarman et al. (1982, p. 83). We do not include vegetables and other plants in our calculations as they could contribute only little to the calorific content of the diet: as much as 2-3 kg of leafy vegetables would supply as little as 1000 kcal of energy (Jarman et al. 1982, p. 16), yet this volume of food exceeds the natural biological constraints of the human body. Likewise, we do not include any wild plants, even if their calorific value might be comparable to that of cereals (Stokes and Rowley-Conwy 2002).
Our calculations presented below refer to the energy content of the food alone, but not to any nutritional balance of its individual components such as proteins, vitamins, amino acids, etc. Moreover, we only consider cereals, meat and dairy products but neglect legumes. Jarman et al. (1982, p. 16) note that, "when adequate calories are available from a varied diet, then considerably more than minimal protein requirements are automatically provided". Given the unavoidably tentative and approximate character of palaeoeconomy calculations, we do not feel that introducing a more detailed nutritional classification of foods would be justifiable.
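To make the energy bookkeeping behind such diet assumptions concrete, the sketch below converts an assumed daily requirement into annual per-person food quantities under the 40/40/20 split. The daily requirement and the energy densities are round-number assumptions introduced for illustration; they are not values taken from this paper.

```python
# Illustrative energy budget for the assumed diet split: 40% cereals,
# 40% domestic animal products, 20% hunted wild meat.
# All numerical inputs below are assumptions for the example only.
DAILY_KCAL = 2500.0  # assumed per-person daily energy requirement

split = {"cereals": 0.40, "domestic animal products": 0.40, "wild meat": 0.20}
kcal_per_kg = {
    "cereals": 3300.0,                   # whole wheat grain, approximate
    "domestic animal products": 1500.0,  # mixed meat/dairy, approximate
    "wild meat": 1200.0,                 # lean game meat, approximate
}

for item, frac in split.items():
    kcal_day = frac * DAILY_KCAL
    kg_year = kcal_day / kcal_per_kg[item] * 365
    print(f"{item}: {kcal_day:.0f} kcal/day, about {kg_year:.0f} kg/person/year")

# With these inputs, cereals contribute 1000 kcal/day, i.e. roughly
# 110 kg of grain per person per year -- of order 0.1-0.2 ha per person
# at yields near 1 tonne/ha, before allowing for seed corn and losses.
```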
Cereal yield
In this section, we discuss methods of estimating the plausible wheat yield in the CTU region using the available data from agricultural experiments in other comparable areas. Apart from corrections for ancient wheat varieties, we present evidence for the variation of the yield with rainfall, duration of continuous cropping and the efficiency of manure fertilisation. Since no evidence of irrigation has been discovered in the CTU area, we focus on dry farming.
Agricultural experiments
Any attempt to estimate the productivity of prehistoric agriculture faces a number of problems. Specifically in the CTU area, the land in the Ukraine, Moldova and Romania has by now been cultivated for 9000-8000 years, and the soils are unlikely to have properties at all like those encountered by the first CTU farmers. The varieties of wheat grown today have been modified by plant breeders and the yields have increased greatly, even without fertilisers (Austin et al. 1993). Furthermore, agricultural tools have changed over time, undoubtedly affecting agricultural productivity. Added to this is the problem that, in modern agricultural practice and in most agricultural experiments, the soil is amended with nutrients (and is often compacted by prolonged use of heavy agricultural machinery), while pests, weeds and diseases are controlled using synthetic chemicals.
One way to address some of these problems is to use the results of long-term agricultural experiments in areas that had not previously been used for agriculture. This excludes virtually the whole of Europe, Africa and Asia. In the central United States, however, there are areas that are climatically similar to the Ukraine and where the prairies remained uncultivated until the late nineteenth century. In Australia, there are also similar areas that had not been exploited; however, in southern Australia, unlike the CTU area, the climate is Mediterranean with a severe summer drought. These experiments mostly involve modern wheat varieties rather than those used in early agriculture. This remains a problem that is hard to resolve completely (see below).
Our main data come from the Sanborn Field of the Agricultural Experiment Station of the University of Missouri-Columbia, USA (38°57' N, 92°19' W), which began in 1888 and still continues; this is one of the oldest continuous, long-term research plots in the world (Miller and Hudelson 1921). In this experiment, we are interested in wheat grown annually and in various biennial and rotational systems, both with and without the use of manure fertiliser. The Sanborn field is divided into 39 experimental plots, each 30 m by 10 m in size, separated by 1.5 m wide grass hedges. Changes were made to the experiment over its lifetime. Commercial fertiliser was introduced in 1914, and the number of plots receiving manure was reduced, which prevents us from using data obtained after 1918. A suitable coherent run of data for a number of replicate plots comes from the 1890-1918 period. Climatic data are available for Columbia from the U.S. National Oceanic and Atmospheric Administration, currently from 1890. The average climate conditions at the Sanborn field have been very stable over the period 1895-1998, without any detectable trends in temperature and precipitation. The average annual surface temperature in 1895-1998 was 13°C, with maximum and minimum monthly mean temperatures of 26°C in July and about 2°C in January. Mean annual precipitation was 973 mm, and potential evapotranspiration, 790 mm (Hu and Buyanovsky 2003).
Chernozems and podzolic chernozems are widespread in the CTU area. Chernozems in the USA are classified within the Mollisol group (Fanning and Fanning 1989), and Sanborn lies at the south-eastern edge of the zone. Currently, the detailed classification of the soil is an udollic ochraqualf, the mollic properties of the thin loess deposit being modified by the underlying glacial till; the top layer of the soil profile contains 2.5-2.9% organic matter (Hu and Buyanovsky 2003).

The yield (here denoted Y, in tonne/ha/year, with Yu obtained without any fertiliser and Ym obtained from manured plots) is known for each replicate plot between 1890 and 1918 (Miller and Hudelson 1921). Measurements of total rainfall between January and May are available at the experiment location (denoted R, in mm/year), and the time since the start of cultivation is known for each plot (denoted D, in years). These data are analysed below separately for plots with and without manure fertiliser applied, and where the wheat was grown every year, biennially or in rotation with other species. Data on the air and soil temperature at the Sanborn experiment site are also available. However, we do not use the temperature data in our analysis since rainfall and temperature are not independent variables; on average, lower rainfall implies higher temperature. We use the rainfall data for the January-May period, when the growth of the wheat is most critically affected either by drought in the early summer (Arnon 1972) or by excess water leaching nitrogen from the soil (Hall 1905).
Variability and systematic trends of wheat yield
The data from the Sanborn experiment come from seven replicate plots of land, five treated with manure and two unmanured, with wheat grown annually.
Yield without fertilisers
For unmanured wheat grown every year at Sanborn, the average yield is 0.9 tonne/ha/year with a standard deviation of 0.7 tonne/ha/year (a coefficient of variation of 80%). The yield variability is very large, with a peak frequency at about 0.6 tonne/ha/year and a long positive tail (that is, a few years gave exceptionally high yields). There are significant negative correlations between the wheat yield Yu and both the January-May rainfall R and the duration of cultivation D. The experimental data are shown with open circles in Figures 1a and 1b.
Assuming that the soil fertility is depleted by the same fraction each year, it might be expected that the dependence of the yield on time, and perhaps rainfall, is exponential. However, because of the large data scatter and relatively narrow ranges of the independent variables, it is more reasonable to adopt the simplest linear dependence of the yield at the unmanured plots, Yu, on the January-May rainfall, R, and the cultivation duration, D,

Yu = A + B R + C D,    (2)

where the fitted coefficients A, B and C are given in Table 3 and the uncertainties represent one standard deviation obtained from the scatter of the data points around the fit. The values of Yu obtained from this fit for the corresponding values of R and D are shown in the figures with filled circles to appreciate the quality of the fit.

Figure 1. The dependence of wheat yield, Yu for unmanured (a, b) and Ym for manured (c, d) Sanborn plots, as a function of the annual rainfall R (a, c) and the duration of continuous cultivation D (b, d). Open circles show the experimental Sanborn data, whereas filled circles represent fitted values calculated using Eqs. (2) and (3) for the corresponding values of R or D as appropriate. One outlying data point with R = 142 mm is not shown in Panels (a) and (c) and is not included in the fit.

Figure 1a shows the yield (both observed and fitted) versus the January-May rainfall, and Figure 1b presents the variation of the yield with time after initial planting. The yield decreases, on average, with both R and D. Rainfall over the period January to May averages 400 mm with a standard deviation of 114 mm; it is clear that higher rainfalls are not beneficial, and the same effect was found at the Broadbalk experiment in England (Hall 1905), where yield was reduced in wetter seasons. A rainfall of about 300 mm is nearly optimal for the crops, as more rainfall merely removes nutrients from the soil in winter. The rainfall at Sanborn was less than 292 mm in only two years (256 mm in 1901 and 142 mm in 1914), both of which showed significantly reduced yields. However, the data available are not sufficient to identify such a non-monotonic dependence of Yu on R. Conservatively, the fits presented here should only be applied for R ≥ 300 mm.
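As a sketch of how a fit of this form can be reproduced, the snippet below performs an ordinary least-squares fit of Eq. (1), Y = A + B R + C D, on synthetic stand-in data (the Sanborn records themselves are not reproduced here, and the generating coefficients are invented for illustration), and reports the explained-variance statistic quoted in Table 3.

```python
# Ordinary least-squares fit of Y = A + B*R + C*D (Eq. 1), as a sketch.
# Synthetic data stand in for the Sanborn plot records.
import numpy as np

rng = np.random.default_rng(1)
n = 60
R = rng.uniform(300, 600, n)   # Jan-May rainfall, mm (assumed range)
D = rng.uniform(1, 29, n)      # years since start of cultivation
Y = 2.0 - 0.002 * R - 0.02 * D + rng.normal(0, 0.3, n)  # tonne/ha/year

X = np.column_stack([np.ones(n), R, D])   # design matrix [1, R, D]
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
A, B, C = coef
print(f"A = {A:.3f}, B = {B:.5f} per mm, C = {C:.4f} per year")

# Fraction of variance explained (the R^2 statistic quoted in Table 3):
resid = Y - X @ coef
r2 = 1 - resid.var() / Y.var()
print(f"R^2 = {100 * r2:.1f}%")
```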
The reduction in yield with the cultivation time on these unmanured plots is not unexpected, and a similar reduction is clearly noted for the Urrbrae wheat experiment in Australia (Grace and Oades 1994).
This analysis relates to all unmanured replicate plots combined. To ensure that the trends are consistent across individual plots, we repeated the analysis for the two individual unmanured replicate plots. The fit to the data from Plot 2 has much larger errors, but the trends with rainfall and time remain. We show the fit coefficients and their errors for the individual plots and the summary results in Table 3.

Table 3. Fits to the yield data for individual plots and for the overall yield from all the plots: the fitted parameters A, B and C of Eq. (1), together with their respective standard deviations σA, σB and σC. Note that the unit chosen is based on kilogram rather than tonne as used elsewhere in the text. The value of ℛ², given as a percentage, indicates the fraction of the variation in the data accounted for by the fit (higher values of ℛ² indicate a better fit, with the maximum of 100% corresponding to a perfect fit).

Yield with manure fertiliser

We used data from five manure-fertilised plots with wheat grown every year. These received 15 tonne/ha/year of farmyard manure but were otherwise identical to the unmanured plots. The average yield of all manured plots is 1.34 tonne/ha/year with a standard deviation of 0.7 tonne/ha/year (a coefficient of variation of about 50%). The variability between the plots and years is much smaller than that in the unmanured plots. The yield Ym is significantly correlated with the January-May rainfall R and the time span D since the beginning of cultivation; the corresponding linear fit,

Ym = A + B R + C D,    (3)

has the parameter values given in Table 3. Figures 1c and 1d show the yield data for manured plots (open circles) together with this fit (filled circles). Panel (c) shows the yields (both observed and fitted) versus rainfall, and Panel (d) presents the yield dependence on time. The rate of decrease in yield with time is smaller than for the unmanured plots, while that with increased rain is larger. It is not surprising that the decrease with time is slower than that for the unmanured plots, as the manure supplied a large part of the nutrients removed in the harvested crop. The stronger decrease with rainfall can occur because more nutrients are leached from the soil in the wetter years, or because the thicker crop lodged (was knocked down) more severely by intense rain. The yield is, most frequently, higher than for the unmanured plots, and there is a long positive tail of infrequent very high yields. Again, each plot was analysed individually as well as collectively with all other manured plots. Similar trends are present in all replicate plots, as shown in Table 3. As mentioned above, we also tried fits with exponential dependencies on rainfall and time span, but this did not improve the statistical quality of the results. The time span available (only around 25 years) is too short to make it practical to distinguish exponential and linear dependences. We note, however, that it is probable that the decline in productivity is exponential in the long term (i.e., there is a constant annual fractional decrease in yield).
For completeness, we also fitted a constant to the data, to test the hypothesis that the yield is independent of the rainfall and time; the resulting fits were significantly worse than the linear fits given above, confirming that the systematic trends revealed are meaningful.

Table 4. The cross-correlation matrix between the yield (Yu and Ym for the unmanured and manured plots, respectively), rainfall (R) and time since the beginning of cultivation (D), denoted in the text Cij with i, j = Y, R, D. Larger correlation coefficients (by magnitude) indicate a stronger statistical dependence between the corresponding variables; negative values indicate an anticorrelation (i.e., one variable decreases as the other increases).
        Unmanured plots   Manured plots
CYR         -0.26             -0.42
CYD         -0.31             -0.18
CRD          0.30              0.33
To provide an additional measure of the yield sensitivity to the rainfall and the time span, we calculated the Pearson cross-correlation coefficients Cij between these variables. The cross-correlation coefficients given in Table 4 suggest that, in the case of unmanured plots, the yield is slightly more sensitive to the time elapsed since the start of cultivation than to the rainfall: |CYD| > |CYR|, with CYD = -0.31 and CYR = -0.26. We note, however, that the difference is rather small and perhaps statistically insignificant. The opposite inequality applies to the manured plots, where |CYR| = 0.42 is more than a factor of two larger than |CYD| = 0.18. Thus, the dependence on the rainfall dominates over the dependence on the time span in the variability and long-term trend of the yield from manured plots. The correlation between the rainfall and the time span is similar for the manured and unmanured plots, CRD = 0.33 and 0.30, respectively; this is a natural consequence of the identical climate trends, and the difference has no practical significance.
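The coefficients of Table 4 can be computed directly; a minimal sketch using the same placeholder arrays as above (numpy.corrcoef returns the full Pearson matrix):

```python
import numpy as np

# Yield, rainfall and cultivation time for one set of plots (placeholder values).
Y = np.array([1.1, 0.9, 0.8, 0.5, 1.2, 0.4])
R = np.array([310.0, 420.0, 380.0, 520.0, 290.0, 450.0])
D = np.array([5.0, 8.0, 12.0, 15.0, 20.0, 25.0])

# Pearson cross-correlation matrix C_ij for (Y, R, D), as in Table 4.
C = np.corrcoef(np.vstack([Y, R, D]))
labels = ["Y", "R", "D"]
for i, row in enumerate(C):
    print(labels[i], np.round(row, 2))
```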
The relatively small values of ℛ² in Table 3 indicate that the yield can significantly depend on other variables apart from the rainfall and the time span. For example, our assumption that the temperature and rainfall are strongly negatively correlated, and thus are not independent variables, may be questionable: Hu and Buyanovsky (2003) note that, in the study area, higher temperatures often occur concurrently with increased rainfall. The relatively low values of the cross-correlations CYR and CYD in Table 4 are consistent with this suggestion. This question clearly deserves further analysis.
We also considered plots of biennial wheat crops, manured or unmanured, with clover as the intervening crop. There are fewer measurements available than for the monoculture wheat described above, and although the manured plots had a larger yield (1650 kg/ha/year as opposed to 1340 kg/ha/year at the unmanured plots), there is no qualitative change in the yield trends with either the passage of time or the amount of rain that fell.
The data summarised above are similar to those from other experiments in different climatic regions: Broadbalk in England (Hall 1905) and Urrbrae in the coastal belt of Australia (Grace and Oades 1994) show a comparable response of the crops to the environment in these disparate areas, even if the trends may differ quantitatively. The very large variability of the yield at Sanborn on the monocultural plots was explained by pest and disease attacks and weeds (Miller and Hudelson 1921). The yield at the Rothamsted farm in England (Hall 1905) was less variable from year to year, probably because the impact of outbreaks of pests and diseases was weaker in the cooler climate. Similarly to our results, Hu and Buyanovsky (2003) find that the corn yield at Sanborn was higher in years with lower rainfall in April and higher rainfall in May-August. They conclude that the corn yield is favoured by warmer and drier spring months (April and May) and wetter and cooler July and August. These authors also find that "the average growing season climate gives little indication of climate effect on corn yield", the yield variations being mainly controlled by monthly and shorter climate variations.
Adjustments to pre-modern agriculture
The Sanborn data have been obtained for relatively modern wheat varieties. [Unfortunately, Miller and Hudelson (1921), our main data source, do not identify the specific wheat varieties used in the experiments.] Even if the soil and climate conditions can be taken to be broadly similar to those of the CTU area, significant corrections are required to allow for the difference in the crop species and agricultural techniques. Nikolova and Pashkevich (2003) and Pashkevich and Videiko (2006) present and discuss evidence that the main cereal crops of the CTU farmers were hulled wheats, such as emmer (Triticum dicoccum Schrank), einkorn (T. monococcum L.) and spelt (T. spelta L.), as well as barley varieties (Hordeum vulgare and Hordeum vulgare var. coeleste).
Considering adjacent temporal and geographical domains, the cereal crop assemblages in early Neolithic cultures in Bulgaria (the second half of the sixth millennium BC) include naked and hulled barley (Hordeum sp.) and naked wheat (T. aestivum s.l./durum/turgidum), together with pulses, in addition to those cultivated by the LBK farmers: emmer (T. dicoccum), einkorn (T. monococcum), as well as peas, lentils and flax (Kreuz et al. 2005). These authors note that barley and naked wheat were used in the broader area, including that of the Starčevo-Körös-Criş culture (eastern Hungary, Greece, former Yugoslavia, Romania and Turkish Thrace). A review of other estimates of wheat yields, including experimental, historical and ethnographic data, can be found in Table 2.1 of Bogaard (2004b). Her data are generally consistent with our estimates, especially given the fact that they refer to naked wheat varieties and barley, whereas we focus on hulled wheats. Pashkevich and Videiko (2006) suggest that the CTU farmers relied on spring crops and did not use winter crops. From the potential weed species recorded at the Neolithic sites (in particular, winter annuals versus summer annuals), Kreuz et al. (2005) conclude that both summer and winter crop growing was typical of the early Bulgarian Neolithic, whereas summer crop cultivation apparently dominated at the LBK sites. The Sanborn crops considered in Section 3 are winter crops. We note that winter crops have higher yields than spring varieties on the same land (by 25% or more: Percival 1974, p. 422) but, correspondingly, they deplete the soil fertility more than the summer crops. As a result, growing winter crops often requires crop rotation, which reduces the yield averaged over a sufficiently long period. There are certain disadvantages of winter crops as compared to summer ones: the fields need to be prepared for sowing in a rather short time, and winter crops are more sensitive to climate fluctuations. A certain balance of winter and summer wheats appears to be optimal.

Table 5. Yields of spring emmer and einkorn, and winter spelt, together with naked wheat yields grown under comparable conditions, under dryland cropping in south central Montana, U.S.A., in 1992-1994 (emmer and einkorn) and 1991-1994 (spelt) (after Stallknecht et al. 1996). The emmer, einkorn and spelt grain yields were estimated as 60% of the hulled grain when dehulled.

Stallknecht et al. (1996) provide data on the yield of selected crossings of emmer, einkorn and spelt grown at the Southern Agricultural Research Center, Huntley, Montana, U.S.A., in 1991-1994. These modern varieties were selected for their high yield, so the data, summarised in Table 5, should be used with great caution in the present context. The yields of einkorn, emmer and spelt are significantly lower than those of modern naked wheats grown under similar conditions; the data of Table 5 suggest that the yields of even the best selections of emmer and spelt are 60-75% of naked wheat yields. We also note the strong variability of the yields, shown in Column 3 of Table 5 in terms of the yield range. The range for einkorn is based on the data series for individual plots, and shows variations of about 100%, whereas the other entries show the range of the annual averages over a set of plots, thus showing less variability, at about 25% (if the individual plots have 100% variability, such a reduction could be achieved with 10-15 plots in each set). Percival (1974, pp.
171 and 188) estimates the einkorn yield as 16-80 hectolitres per ha (about 1200-6000 kg/ha/year) depending on the soil quality (ranging from poor mountainous regions to good soils), whereas emmer yields vary from 25 to 50 bushels per acre (about 1700-3400 kg/ha/year). The largest einkorn yield given by Percival is significantly higher than that in Table 5, but the emmer yield is in better agreement with Table 5. We stress again that the emmer, einkorn and spelt data in Table 5 and those of Percival (1974) are at the higher end of the range even for the modern plant varieties. Jarman et al. (1982, p. 158) quote historical data on the average cereal yield of 800-1400 kg/ha/year in traditional agricultural systems in Romania and note its strong fluctuations, from about 1400 kg/ha/year in 1913 to 540 kg/ha/year in 1914. Nikolova and Pashkevich (2003) quote the emmer yields for 1902 in south Ukraine at the level of 390-1140 kg/ha/year, with the median value (750 kg/ha/year) significantly smaller than that given in Table 5. Russell (1988, p. 111) suggests, for the early agriculture in the Near East and Africa, 500 kg/ha/year for the emmer and spelt yields, with a range of 400-3700 kg/ha/year. Gregg (1988, pp. 73-74) quotes the range of 757-1045 kg/ha/year for the late nineteenth century yields of winter and spring einkorn and emmer-spelt maslin in Germany, and adopts the larger value in her estimates for the LBK agriculture. The yields of autumn-sown emmer in the Butser Ancient Farm experiment averaged, over 15 consecutive seasons, about 2080 kg/ha/year, grown without manure on a field cropped every second year with a bean crop in between (Reynolds 1992). The author notes a rather high yield, "significantly higher than any expectations", attributable to "the soil, the climate and good management". Karagöz (1996) provides data on the yield of einkorn and emmer in Turkey in 1948-1993. Although the data are only given for the two species combined, the author notes that emmer was planted on much larger areas than einkorn. According to this author, the yield varied from 814 to 1391 kg/ha/year, with a mean and standard deviation of 1110 ± 200 kg/ha/year. This variation was not uniform in time: the yield did not change much in 1948-1968, when it was 930 ± 100 kg/ha/year, but exceeded 1231 kg/ha/year thereafter. Karagöz (1996) also reports an agricultural experiment in northern Turkey, with very limited use of fertilisers and herbicide. Naked wheat was grown on 1280 ha, and emmer and barley on 542 and 456 ha, respectively, in sloping, marginal forest areas. The average yields of naked wheat, barley and emmer in this experiment were 847, 711 and 618 kg/ha/year, respectively. The modern annual average rainfall in the area is 567 mm, and the average annual temperature is 10.4°C; the soil cover is predominantly the Brown Forest Soil.
In another experiment (Castagna et al. 1996), the einkorn gross yield (i.e., that of hulled grain) varied broadly between 840 and 4570 kg/ha/year (with a typical value of 2840 kg/ha/year), with the net yield estimated as 77% of the gross value on average. The maximum gross grain yield was obtained with a seeding rate of 72 kg/ha (300 kernels/m²). The yield of two bread wheat cultivars (T. aestivum) grown as controls averaged 7030 kg/ha/year. Considering also the other extreme, we note that the yield of wild einkorn and emmer can reach 500-1000 kg/ha/year (see Araus et al. 2007, and references therein). Araus et al. (2007) use the stable carbon isotope ratio ¹³C/¹²C in the fossil grains of naked wheat (T. aestivum/durum) recovered from early Neolithic sites to estimate the prehistoric grain yield. A total of 54 grains from Tell Halula and Akarçay Tepe (8000-6100 BC, Middle Euphrates region) were used for this purpose. This method relies on the strong connection, observed in modern wheat crops, between the total water input during grain filling and the grain yield, on the one hand, and the (normalised) difference in ¹³C/¹²C between the grain kernels and atmospheric CO2, on the other (Araus et al. 2003). The atmospheric carbon isotope content of the time was obtained by the authors from the Antarctic ice-core records. Furthermore, ancient soil fertility and/or the occurrence of fallow can be estimated from the grain ¹⁵N/¹⁴N ratio. The estimated wheat yield is 1300-1700 kg/ha/year, comparable to or even higher than that of modern wheat varieties in this region grown without irrigation. This can be attributed to a favourably wetter Neolithic climate in the area or to planting in alluvial areas. Furthermore, high values of ¹⁵N/¹⁴N in the ancient grain suggest that it was grown on fertile soils, perhaps with manure application and/or the use of naturally wet soils. Altogether, Araus et al. (2007) suggest that the yield of naked wheat in the early agriculture of the area studied could plausibly be as high as 1000 kg/ha/year (see also Araus et al. 2001).
Given the differences in agricultural technologies and especially in the wheat varieties from those of the modern experimental farms, it is fair to assume that the yields of the CTU crops were significantly lower than those in the Sanborn data presented above. The relation between the yields of naked and hulled wheats grown under similar conditions that follows from Table 5 suggests that the yield of einkorn, emmer and spelt can be adopted as 70% of the ancient naked wheat yield estimated by Araus et al. (2007), i.e., of order 700 kg/ha/year. Incidentally, this figure is close to the emmer yield in early twentieth century Ukraine quoted above, and somewhat smaller than the lower-end yields of emmer and einkorn in modern agricultural experiments. Whenever required, we shall allow for this correction by multiplying the yield of Eqs. (2) and (3) by a factor chosen so as to adjust the average yield at the unmanured Sanborn plots, 900 kg/ha/year, to about 700 kg/ha/year. This yields the correction factor ε = 700/900 ≈ 0.78. This appears to be a very conservative estimate of the correction for the yield of cereals in the Neolithic: the yield could be noticeably larger.

Table 6. Fit parameters and their standard deviations for Eq. (4), based on the wheat yields at Sanborn given in Table 3 and Eqs. (2) and (3), with and without manure fertilisation.

We shall be using the trends given in Eqs. (2) and (3), being aware of the tentative nature of these results. Rewriting these equations in a more convenient form, we shall be using fits of the following form for Yu and Ym:
Y = ε Y0 (1 - R/R0 - D/D0),    (4)
where ε ≈ 0.78 is the correction factor suggested above, and the fitted values of Y0, R0 and D0 are given in Table 6, as obtained from the fits for all unmanured and manured plots in Table 3. Here R0 and D0 have an intuitively clear meaning: they are the nominal values of the rainfall and the time span, respectively, required to reduce the yield to zero if only one of the two parameters varies while the other is fixed at zero. For comparison, Percival (1974, p. 420) provides an approximation to the dependence of the average wheat yield in Britain in 1884-1904 on the total rainfall in October-December: the yield per acre equals 39.5 bushels minus 5/4 of the rainfall expressed in inches, which translates into Y0 = 2660 kg/ha/year and R0 = 800 mm, figures rather similar to those in Table 6. Jarman et al. (1982, p. 141) refer to the Rothamsted Broadbalk continuous wheat experiment (where the soil is a chalk-rich loam), suggesting "that, even without manure or fertiliser, average yields of grain showed only a very gradual decline over 60 years". The data shown in their Fig. 52 exhibit a decrease in the yield from 9 to 5-6 cwt/acre/year (1130 to 630-750 kg/ha/year) in about 20 years, followed by a variation between the latter value and 7 cwt/acre/year. Our fits for the unmanured Sanborn plots give a decrease in yield by 50% in about 30 years, in reasonable agreement with the initial decrease in the Rothamsted Broadbalk experiment. However, Loomis (1978) argues that, for a lower wheat yield of about 1000 kg/ha/year, the nitrogen removed by the wheat crop (20 kg N/ha annually) is replaced during a crop-fallow cycle by dust, rain and birds (8-12 kg N/ha/year), by the seed (1 kg N/ha/year for a yield/seed ratio of 10 to 1), by leguminous weeds (2-10 kg N/ha/year) and by manuring. As a result, the nitrogen budget can be balanced and remain in equilibrium even without manuring (see also Gregg 1988, p. 65). Loomis refers to existing cropping systems in Asia that have maintained such equilibria through thousands of years and notes that plots in the Rothamsted experiment generally stabilised at low yields of 1000-2000 kg/ha/year without manuring. The Sanborn data series is too short to assess this suggestion: the yields of the unmanured plots in Figure 1b do not show any signs of reaching an equilibrium value in 30 years of cropping, whereas the manured plots of Figure 1d may have reached it in 15-20 years.
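For orientation, Eq. (4) is straightforward to evaluate; the sketch below uses hypothetical values of Y0, R0 and D0 (the fitted values belong in Table 6 and are not reproduced here), with ε = 700/900 as derived above:

```python
def yield_premodern(R, D, Y0, R0, D0, eps=700.0 / 900.0):
    """Cereal yield from Eq. (4): Y = eps * Y0 * (1 - R/R0 - D/D0).

    R  : January-May rainfall [mm/year]
    D  : time since first cultivation [years]
    Y0, R0, D0 : fitted parameters (Table 6); eps rescales the Sanborn
    yields to the assumed hulled-wheat level (~700 kg/ha/year).
    """
    return eps * Y0 * (1.0 - R / R0 - D / D0)

# Hypothetical parameter values for illustration only.
print(yield_premodern(R=550.0, D=10.0, Y0=2000.0, R0=1500.0, D0=60.0))
```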
We stress that the values of R0 and D0 have been obtained from our fits to the Sanborn data, and we are unable to apply any corrections to make them better applicable to the pre-modern CTU agriculture, even if such a correction can be reasonably introduced for Y0. Admittedly, this is not satisfactory, but we are not aware of any data or arguments which would help to resolve the problem. On the other hand, the trends with time and rainfall may be less sensitive to the wheat variety than the absolute yield.
On the use of the manure fertiliser
Having noted the strong variability of the yield evident from Figure 1 (see also Nikolova and Pashkevich 2003), we suggest that the Neolithic farmer would experience a boom-and-bust production system which could be mitigated to some extent by the use of manure. There is ample evidence for the use of manure as a fertiliser from the early stages of farming (Wilkinson 1982; Bogaard et al. 2007, 2013; Vaiglova et al. 2014). However, as there would be (at least initially) a large area of virgin land available that was relatively easy to clear for the fields, the extra work of collecting and using manure could have been avoided by the use of fresh fertile soil in new fields. In addition, the possibility of collecting manure in useful quantities depends on how the livestock is kept, and often requires that the cattle be brought to barns every night; this may or may not have been the practice in the CTU settlements. However, as manure helps to reduce the yield variability from year to year, its use could be much more advantageous than the mean yields alone suggest. In the Sanborn data, yields smaller than 400 kg/ha/year occurred on fewer than 8% of occasions under manure, but on 27% of occasions on the unmanured plots. It can be argued that large, relatively short-term, negative fluctuations in the productivity, rather than its general low level, can lead to catastrophic consequences and affect the survival and subsistence strategy and patterns of the population (Feynman and Ruzmaikin 2007; Abbo et al. 2010). The fact that manuring stabilises the yield under variable environmental conditions could make the use of the fertiliser an especially attractive option for the Neolithic and CTU farmers. We estimate below the maximum fraction of the crop area that could be manured given the herd composition of the CTU farmers.
Cereals
Following the results of Section 4, we adopt Y = 700 kg/ha/year as the nominal yield of hulled wheats, but consider plausible the range 700-1200 kg/ha/year; even higher yields may be appropriate, especially for the later CTU stages. Emmer seeding rates are 76 kg/ha in low-rainfall regions and 100 kg/ha in high-rainfall areas; 67-100 kg/ha is the seeding rate of spelt on dryland (Stallknecht et al. 1996). The einkorn seeding rate is similar, about 72 kg/ha (Castagna et al. 1996). These estimates agree with the general figure of about 10% or more of the harvested grain being used as seed (e.g., Hillman and Davies 1990, p. 178). We adopt a seeding rate of 12% in our calculations. For comparison, White (1963) suggests, based on documentary evidence (Varro), that the wheat yield in Roman Etruria was between ten- and fifteen-fold the seed sown. Assuming that a further 25% of the grain is lost to pests (Hall 1905), about 440 kg/ha/year remains available for consumption.
The World Health Organisation recommendation of 2200-3000 kcal/person/day for proper nutrition translates into about 900-1200 kcal/person/day from each of domestic animal products and cereals, assuming that each contributes 40% of the calorific content of the diet. Using the calorific value of spelt grain of about 3150 kcal/kg (Ranhotra et al. 1996), the required amount of cereals is 100-140 kg/person/year. With grain available as food at 440 kg/ha/year, this implies a required crop area of about 0.2-0.3 ha/person. Although emmer and einkorn dominate over spelt at the CTU sites, the calorific content of their grain, 3567 kcal/kg for einkorn (Harlan 1967, p. 198), does not differ much from that of spelt; we conservatively adopt the lower figure.
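The per capita cereal budget just described reduces to a few lines of arithmetic; a sketch using the figures quoted above:

```python
# Worked example of the per capita cereal budget described above.
kcal_per_day = (2200.0, 3000.0)     # WHO recommendation
g = 0.4                              # cereal fraction of the calorific diet
e_grain = 3150.0                     # kcal/kg, spelt (Ranhotra et al. 1996)
Y_usable = 440.0                     # kg/ha/year after seed and pest losses

for c in kcal_per_day:
    grain_needed = 365.0 * c * g / e_grain      # kg/person/year
    area_needed = grain_needed / Y_usable       # ha/person
    print(f"{c:.0f} kcal/day -> {grain_needed:.0f} kg/year, {area_needed:.2f} ha/person")
```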
Palaeoeconomy estimates often neglect the contribution of domestic and wild animal products to the diet and assume (explicitly or implicitly) that cereals are the only component of the Neolithic diet. Using the above figures, 250-350 kg/person/year of cereals would be required as the sole source of calories, which would need an area of 0.4-0.5 ha/person to produce if any losses are neglected (as is done equally often). This figure is similar to many earlier results, which we believe to be overestimates.

Table 7. Animal bone assemblages from Trypillia sites: the minimum numbers of individuals (MNI) at the sites specified in Column 1 (after Appendices 2-5 of Tsalkin 1970) and the mean and relative numbers for each Trypillia stage (bold). Data are given here only for the animals suitable as a food resource and occurring in significant numbers. The relative mean MNI values and their standard deviations are given separately for the domestic and wild animals.
Domestic animal products
To estimate the size of the cattle and caprine herds required to satisfy the nutritional needs of the Neolithic and Bronze Age farmer, we assume that the animals were kept for meat as well as for milk and dairy products (and perhaps blood). However, wildlife is another source of meat, and there is sufficient archaeological evidence, similar to that given in Table 7, to assume that wild animal meat was also an important source of nutrition. As discussed above, Ogrinc and Budja (2005) suggest that about 20% of the diet at the Ajdovska Jama site was provided by the meat of wild animals. Zhuravlev (1990, p. 137) analysed the animal bone assemblage of Maydanetske, one of the largest CTU sites known (Trypillia CI, Cherkassy Region, central Ukraine), to estimate the fraction of domestic animals at 85% by head, comprising 35% cattle (Bos taurus L.), 27% sheep (Ovis aries L.) and goats (Capra hircus L.), 28% pigs (Sus domestica Gray) and 5% horses (Equus caballus L.); this appears to be a typical picture for both early and late Trypillia settlements in the Ukraine. These figures are encouragingly similar to those of Tsalkin (1970) presented in Table 7. A very detailed and extensive overview of the CTU bone assemblages, their biometric characteristics and local variations can be found in Zhuravlev (2008) and Videiko et al. (2004, Vol. 1, pp. 152-198). These authors note a relatively large fraction of cattle in the apparent herd structure and suggest, from the osteometric data, that bulls, oxen and horses were used as draught animals.
There are several clear trends in the bone assemblages presented in Table 7. The ratio of domestic to wild animals (by MNI, the minimum number of individuals) increases from 1.4 in the Early Trypillia to 2.4 in the middle period and to 2.8 in the late stage. The composition of the domestic livestock apparently remains stable within errors, apart from the increase in the relative frequency of the horse MNI from small quantities in the Early and Middle stages to 0.14 ± 0.06 in the Late Trypillia. The faunal remains at Usatovo (Late Trypillia) are clearly exceptional (e.g., Zhuravlev 2008) and are excluded from the averages presented in the table.
For the herd/flock composition, we adopt the relative mean MNI numbers from the bottom of Table 7: ac = 0.35 for cattle, as = 0.24 for caprines, ap = 0.33 for pigs and ah = 0.08 for horses, in terms of the relative numbers by head. The energy content of the meat of the domestic animal species can be found in Gregg (1988, p. 152) and Jarman et al. (1982). The average culling rate in modern UK cattle herds is 25% (AHDB 2012); our nominal figure for the fraction of the cattle herd culled annually is kc = 0.2, and the culling rate of caprines, ks, is assumed to be 0.2 too. Since pigs are not kept for milk, their culling rate kp can be higher, but we adopt kp = 0.2 as well.
Following White (1953), we assume that half of the live weight of both cattle and caprines represents usable meat; the corresponding figure for pigs is 0.7. The live weights of cattle and caprines adopted are 200 kg/head and 50 kg/head, respectively. Neolithic pigs were significantly smaller than either wild or modern ones. This difference, noted in the CTU bone assemblages by Tsalkin (1970, p. 179) and Zhuravlev (2008, p. 17), is interpreted as evidence that the pigs were isolated from their wild relatives using fences or pens. Following Gregg (1988, p. 118), we adopt 30 kg/head for a pig's live weight. Bökönyi (1971) suggests that, in the Middle Neolithic, cows could provide only little surplus milk after the calf had been fed. This would of course depend on the feeding of the cow, and on the size, vigour and weaning age of the calf. However, dairy foods appear to have been used in the Neolithic (Craig 2002; Copley et al. 2003; Craig et al. 2005; Spangenberg et al. 2006; Evershed et al. 2008), and the importance of dairy farming apparently increased qualitatively in the Bronze Age (Sherratt 1983, 2010; Greenfield 2005; Brochier 2013). Milk was valued to the point that calves seem to have been weaned early during the Neolithic (Balasse and Tresset 2002). The composition of the milk is affected by the diet of the animal (Boland 2003), with those fed on grass without a concentrate feed having a lower yield, more butterfat and a similar protein content. The breed and species also have a strong effect on milk composition (Crawford 1990), with modern breeds such as the Holstein having a lower butterfat content.
It is difficult to estimate the milk yield in the CTU or any other prehistoric farming system. To start at the lower end of the modern productivity, we note that, in modern subhumid Nigeria, the milk yield from 'traditional' cattle is 280.7 litres per annum, of which 111.5 litres is surplus to the calf's requirement (Otchere 1986). A figure of 0.59 litre/day (about 215 litre/year) of surplus for the Zebu cattle in Tanzania was reported by Kavana et al. (2006). 'Indigenous' cattle in Ethiopia on smallholdings produce a total of 1.5-3.6 litre/day with an average lactation length of 232 days (Abraha et al. 2009), as compared to 1.6-2.4 litre/day for 'indigenous' stock in Zimbabwe (Masama et al. 2003). It is notable that, in some of the above cases where the milk yield is very low, the cattle are kept mostly for prestige and other similar non-economic reasons. It is hard to find suitable European data since, even in the less developed areas such as Moldova, the 'traditional' breeds produce nearly ten times the above yield (Moldova 2004), and even the worst producer (in a survey of, predominantly, smallholders with fewer than three cows) was producing 1400 litre/year or more in 2001 and 2003 (Dumitrasko et al. 2006). Todorova (1978) suggests that a Neolithic cow produced some 600-700 litres of milk annually. Gregg (1988, p. 106) adopts a cow's milk yield of 1.78 litre/day, which leads to about 360 litre/year/head for a lactation length of 200 days. As a nominal figure, we adopt a surplus cow milk yield of yc = 400 litre/head/year but consider a range of 0-2000 litre/head/year. For comparison, modern European cow breeds typically produce 10,000 litre/head/year of milk.
For the milk yield of sheep and goats, we adopt values at the lower end of the modern range. For a 12-week annual lactation period and hand-milking, non-dairy goats and sheep produce in Malawi 61 and 34 kg/head/year of milk, respectively (Banda et al. 1992). Gregg (1988, p. 118) quotes 170-680 kg/head/year for sheep and 340-1417 kg/head/year for goats (as they have a longer lactation period). We prefer to use the conservative lower estimates, and the nominal figure used in our calculations is a rounded mean of the figures of Banda et al., ys = 50 litre/head/year. Since caprines represent a relatively small fraction of the livestock, this choice does not greatly affect our results.
Estimates of the cattle grazing area range from 1 ha/head/month in deciduous forests to 1.5 ha/head on pasture (Gregg 1988, pp. 106-107). Jarman et al. (1982, p. 108) adopt a grazing area required for cattle of about Ac = 10 ha/head but note that it can be as low as 0.3-0.5 ha/head on seasonally and permanently flooded pasture. Gregg (1988, p. 123) suggests that the grazing area required for the herd should be doubled to allow for at least a one-year recovery of the grazing land. Glass (1991, p. 28) quotes a number of estimates of the forest pasture area ranging from 0.8 to 8 ha/head. We adopt Ac = Ah = 10 ha/head as the nominal figure for both cattle and horses; detailed knowledge of the landscape around specific sites would be required to refine this estimate. Caprines' grazing needs are about ten times smaller than those of cattle. When kept in large herds under extensive grazing systems, sheep and goats need about As = 0.5 ha/head of grazing area (Coop 1986); this is the figure we adopt. However, the grazing characteristics of cattle, sheep and goats are complementary, as cattle and sheep relish grasses and herbs, respectively, whereas goats prefer weeds and woody vegetation not used by the other animals (Coop 1986; Gregg 1988, p. 123). We neglect any pasture area for the pigs as they can graze in woodlands and/or near the rural settlements; to some extent, this also applies to goats.
Fodder for four winter months is another requirement of the livestock, imposing constraints on both the exploitation area and the labour costs. Apart from meadow hay, cereal straw and the leaves of certain trees such as elm (Rasmussen 1990), elder, ash and acacia provide good fodder. Modern grass-legume pastures can yield up to 5-20 tonne/ha/year of dry hay (Coop 1986); mature cows consume about 400 kg/head/month of hay and sheep/goats require about ten times less food (Gregg 1988, pp. 108 and 118). Gregg adopts the yield of a natural meadow on low-lying damp soils to be 1470 kg/ha/year. We follow this author in assuming that about Mc = Mh = 0.5 ha/head of hay meadow is required to produce winter fodder for cattle and horses, and Ms = 0.02 ha/head for sheep/goats (Gregg 1988, pp. 110, 120 and 121). Since not only natural or cultivated meadows but also forests are a source of leafy fodder, we assume that only half of the fodder is hay and cereal straw. We include the area required to produce hay in the calculation of the exploitation area of a settlement in Section 7, and the time to cut grass in the labour costs and labour return in Section 8.
Wild animal products
The faunal remains found at CTU sites indicate that hunting was a significant source of food, especially at the early CTU stages. The ratio of wild to domestic MNI in Table 7 decreases from about 0.7 in the Early Trypillia to 0.4 at later stages. A more recent analysis (Zhuravlev 2008) shows a lower fraction of wild animals, of order 0.2; we adopt this figure in our calculations. The composition of the hunted game given in Table 8 is taken according to the relative mean MNI in the bone assemblages: 0.48 red deer (Cervus elaphus L.), 0.24 roe deer (Capreolus capreolus L.) and 0.29 wild boar (Sus scrofa ferus L.) by head. The calorific value of the meat is taken from Jarman et al. (1982, p. 83).
Land use and the local carrying capacity
In this section we estimate the land area required for a farming population to subsist in a given environment, with a given subsistence strategy and agricultural technology, and hence the maximum number of people per unit area. We call this the subsistence carrying capacity, Ks, as opposed to the carrying capacity of an economy aimed at creating a surplus product for exchange or trade. The starting point for such a calculation is the human dietary requirement.
Any estimate of the carrying capacity of a landscape strongly depends on the subsistence strategy and on the land use. Ethnographic evidence presented by Jarman et al. (1982, p. 30) suggests that land could be exploited within 1-11 km of a settlement. This radius is limited by the time required to travel to the field, with one hour as a reasonable maximum and 1.5-2 hours as an undesirable upper limit (similar to the commuting times of modern urban workers, as Jarman et al. note). The average outside limit of the cultivated land area is suggested to be 5 km, with most land under cultivation within 1-2 km of the settlement. Higgs and Vita-Finzi (1972) suggest a radius of 5 km for the exploitation territory of a sedentary population (and 10 km for mobile or semi-sedentary people), and note that the time spent on travel is more important than the distance (see also Jarman et al. 1982, pp. 30-32). Tipping et al. (2009) carefully analysed and modelled pollen data from an early Neolithic site in north-east Scotland (a timber 'hall' at Warren Field) to conclude that land within a radius of at most 2.5 km was in use. Cereals were cultivated immediately around the 'hall', but no evidence of pasture for livestock has been recorded. Following Chisholm (1979, p. 72), Higgs and Vita-Finzi (1972), Jarman et al. (1982) and many other authors, we assume that the cultivated fields tend to be located in close proximity to the settlement, within not more than about 5 km and preferably within 1-2 km. The livestock can be kept at larger distances: up to 5 km if the animals walk to the pasture and return to the farm daily, or 10 km if they are kept around a temporary camp.
The family size is another important parameter. Five to seven people is a reasonable estimate for the size of an extended farming family, of which 2-4 may be fit to work in the fields, the remainder being too young or too weak. We adopt six people per family as a representative value. Although only a few family members could be involved in physically demanding work such as land tillage, many other production activities can be assigned to other family members. For example, a large proportion of the herding and care of the domestic animals can be assigned to children. Tillage with the ard or plough requires two people to work simultaneously, but guiding the draught animal(s) does not require much physical force. Likewise, reaping, threshing (especially using animals), winnowing and the later preparation of grain could involve virtually the whole family. Therefore, our discussion of the labour costs and the seasonal time stress largely focuses on the land preparation for sowing, an activity that requires significant physical force and must be completed in a short and strictly limited time.

Gaydarska (2003) presents a land use analysis of Maydanetske, a proto-urban site (Trypillia CI) that had an area of A0 = 210 ha (Müller et al. 2014) and an estimated N = 10,000-15,000 inhabitants; sites that large are rare but not exceptional: the area of the nearby Tallianky is 350-400 ha. The giant settlements emerged at the late Trypillia stages. Typical settlement areas at various Trypillia stages are given in Table 2. Houses in CTU settlements are often arranged along nearly elliptical contours closer to the settlement boundary (perhaps to provide easier access to the fields), with large open spaces in the central part of the settlement that could be used for horticulture. According to Gaydarska (2003), about 78% of the area within 7 km of Maydanetske is suitable for agriculture; thus, u = 0.2 appears to be an acceptable estimate of the fraction of unusable area in the central part of the CTU area in the Dnieper-South Buh interfluve. We further assume that a fraction a = 0.35 of the total land area is potentially arable; the rest can be used as grazing land. We also assume that part of the arable land lies fallow; the ratio of the fallow to cropped land areas is denoted f. The nominal value adopted is f = 2; that is, any plot is cropped once in three years. As an example from another region, the LBK study area of Ebersbach and Schade (2004), Mörlener Bucht in Hesse north of Frankfurt am Main, has 82% of its area suitable for fields (loess soil), 11% water meadows suitable for grazing and 7% steep slopes suitable neither for fields nor for grazing.
To make our results robust and flexible, we first derive general algebraic expressions for the key variables involved in palaeoeconomy reconstructions before using specific values of the input parameters and exploring the effects of their variation within ranges consistent with what we know about the CTU agriculture. The nominal values of the input parameters, their dimensions and the mathematical notation used in the equations are given in Table 8, whereas Table 9 contains the most important results of the calculations presented in a similar format. The text contains sufficient detail to reproduce all the results of Table 9, and to calculate any other quantity not given in that table.
Per capita cereal production and arable land area
With the daily dietary requirement of c [kcal/person/day], the annual diet must have the calorific value C = 365c [kcal/person/year]. The relative contributions of cereals, domestic animal products and wild animal products to the diet are denoted g, d and w, respectively (see Section 5).
Thus, the annual calorific values of grain (cereals), domestic and wild animal products required for one person to subsist are gC, dC and wC, respectively.
The cereal yield available for consumption, Yg [kg/ha/year], is obtained from the total yield Y by subtracting various losses and the amount required for seeding. We assume that a fraction s of the cereal yield is used for seeding and a fraction l of the total grain amount is lost to pests and other causes; the nominal figures are s = 0.12 and l = 0.25. The usable cereal yield is then Yg = (1 - s - l)Y = 0.63Y. With the calorific value of grain equal to eg [kcal/kg] and the crop area per person equal to Ag [ha/person], the calorific value of the cereals grown annually is given by Eg = eg Yg Ag. The per capita crop area required to satisfy the dietary needs in cereals follows from the requirement that the energy produced annually, Eg, equals the annual cereal dietary energy requirement, gC:

Ag = gC / (eg Yg).    (5)

However, only a fraction of the arable area is used for the crops, and the rest is fallow; the area of the fallow fields exceeds that under the crops by a factor f. Furthermore, only a fraction a of the total land area is arable. Thus, the total land area containing the cereal fields and fallow land required to satisfy the dietary requirements of a single person is given by

Af = (1 + f) Ag / a.    (6)

For the sake of simplicity, we assume that there is only one type of cereal (and of domestic plants in general) grown for food, but the diversity of crops (including legumes) can easily be allowed for by introducing the dependence of the usable cereal yield Yg on the yields and nutrition values of any other cereal varieties and including other cultivated plants in the calorific dietary budget, in the same manner as is done below for the animal products. We refrain from including all these factors in our calculations only to avoid any misinterpretation of their accuracy.
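Eqs. (5) and (6) can be wrapped into a short function; a sketch with the nominal parameter values used in this paper (the 2500 kcal/day figure is an illustrative mid-range choice):

```python
def crop_area_per_person(c, g, e_g, Y, s=0.12, l=0.25, f=2.0, a=0.35):
    """Per capita crop and field-zone areas from Eqs. (5) and (6).

    c : dietary requirement [kcal/person/day]; g : cereal diet fraction;
    e_g : calorific value of grain [kcal/kg]; Y : total yield [kg/ha/year];
    s, l : seed and loss fractions; f : fallow/cropped ratio;
    a : arable fraction of the total land area.
    """
    C = 365.0 * c                    # annual requirement [kcal/person/year]
    Y_g = (1.0 - s - l) * Y          # usable yield
    A_g = g * C / (e_g * Y_g)        # Eq. (5): cropped area [ha/person]
    A_f = (1.0 + f) * A_g / a        # Eq. (6): total field-zone area
    return A_g, A_f

print(crop_area_per_person(c=2500.0, g=0.4, e_g=3150.0, Y=700.0))
```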
Per capita consumption of domestic animal products and the livestock grazing area
A similar calculation for the animal food products is slightly more complicated, as there is more than one kind of domestic animal kept and of wild animal hunted. Since the amount of food provided and the grazing area required are rather different for different animals, it is more important to allow explicitly for the herd diversity than for the crop diversity. The bone assemblages discussed above provide the relative average numbers of cattle, sheep/goats, pigs and horses among the domestic animals kept, denoted here ac, as, ap and ah. Their usable meat weights are denoted mc, ms, mp and mh, respectively. Consider a herd of this composition that has na animals per capita. With ki denoting the annual culling rates and ei the calorific values [kcal/kg] of the meat of species i, the annual calorific value of the meat obtained from such a herd per person is

Ea = na (ac kc mc ec + as ks ms es + ap kp mp ep + ah kh mh eh),    (7)
where individual terms in the brackets represent the contributions of beef, lamb/mutton and pork, respectively. We include horses here for generality, although we will later assume that horses are not kept for food (perhaps as draught animals) and neglect their contribution to the diet. Since the cattle and caprines are kept for both meat and milk, it is reasonable to assume equal cull rates for these animals, kc = ks, but the cull rate of pigs can be larger.
The per capita area Aa required for the animals to graze is given by

Aa = na (ac Ac + as As + ap Api + ah Ah),    (8)

where the terms ai Ai (with i = c, s, pi, h for the cattle, sheep/goats, pigs and horses, respectively) represent the contributions of the various animals to the total grazing area. The grazing area includes meadows, fallow land and woodland; pigs and goats can find food even near to or within a rural settlement. In the calculations presented below, we assume that pigs do not need any grazing area additional to that used by other animals; formally, we put Api = 0.
The per capita area required to collect winter fodder for the livestock is similarly calculated as

Am = na (ac Mc + as Ms + ah Mh),

where Mi (with i = c, s, h for the cattle, sheep/goats and horses, respectively) are the land areas required to produce fodder for one head of the corresponding animal. A perhaps unexpected result of our calculations (confirmed by Jorgenson 2009) is that dairy products can play quite a significant and important role in the diet. With the calorific values of the cow and caprine milk denoted emc and ems, and the respective per capita animal numbers given by na ac and na as, the calorific value of the milk that can be obtained annually from the herd is given by

Em = na (βc ac yc emc + βs as ys ems),    (9)

where βc and βs are the fractions of milk-producing cows and caprines in the herd (the corresponding amount of milk, in litre/year/person, follows by omitting the calorific values emc and ems). Having in mind the limited accuracy of any estimates of this kind, we neglect the relatively small number of male cattle in the herd and thus assume that the value of ac is the same here as in Eq. (7) for the meat production and Eq. (8) for the grazing area. We allow for the fact that only a fraction of the cows, ewes and does can be milked at any time. The lactation period of a cow is close enough to half a year, ranging from 180 to 230 days (Gregg 1988, p. 106); thus, we adopt βc = 0.5. The lactation period of unimproved breeds of caprines varies from 12 weeks (Banda et al. 1992) to 19 weeks for sheep and 30 weeks for goats (Redding 1981, cited in Gregg 1988). We adopt the lower value, 12 weeks annually, to have βs = 0.25, but the range 0.25-0.5 appears to be a realistic possibility.
Analyses of archaeological bone assemblages do not always distinguish between sheep and goats. This may affect the estimate of the energy content of the dairy products, since the energy content of cow milk, about emc = 600 kcal/litre on average, differs significantly from that of sheep milk, 1030 kcal/litre, but not from that of goat milk, 680 kcal/litre.
We assume that all this energy is consumed in the form of various dairy products if not milk itself.
Equating the total calorific value of the meat and dairy products obtained from the herd, Ea + Em from Eqs. (7) and (9), to the annual dietary requirement in domestic animal products, dC, we obtain the per capita herd size:

na = dC / (qa + qm),    (10)

where qa = Ea/na and qm = Em/na are the per-animal calorific values of the meat and milk from Eqs. (7) and (9). Assuming that horses are not used for food (in part because of their relatively small numbers relative to the cattle), we neglect their contribution to the meat supply, formally setting mh = 0 in the above equations. This is consistent with the fact that the relative number of horses increases in the Late Trypillia (Table 7), as the need for draught animals is likely to increase as agriculture becomes more intensive. The grazing area required for the pigs is neglected, Api = 0 (see Section 5.2). The number of domestic animals required to satisfy the dietary requirements of N people can then be calculated as Na = N na.
Then the numbers of the cattle, caprines, pigs and horses in the herd are equal to Na ac, Na as, Na ap and Na ah, respectively.
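A sketch of the herd-size calculation of Eq. (10); the meat calorific values below are order-of-magnitude placeholders rather than the Gregg (1988) or Jarman et al. (1982) figures, and the species order is cattle, caprines, pigs (horses excluded, as in the text):

```python
def animals_per_person(d, c, a, k, m, e_meat, y_milk, e_milk, beta):
    """Per capita herd size n_a from Eq. (10).

    d : animal-product diet fraction; c : kcal/person/day;
    a, k, m : relative numbers, cull rates and usable meat weights [kg/head];
    e_meat : meat calorific values [kcal/kg];
    y_milk, e_milk, beta : milk yields [litre/head/year], milk calorific
    values [kcal/litre] and lactating fractions (zero for non-dairy species).
    All per-species inputs are equal-length sequences.
    """
    C = 365.0 * c
    q = sum(ai * ki * mi * ei for ai, ki, mi, ei in zip(a, k, m, e_meat))
    q += sum(ai * bi * yi * ei for ai, bi, yi, ei in zip(a, beta, y_milk, e_milk))
    return d * C / q

n_a = animals_per_person(
    d=0.4, c=2500.0,
    a=(0.35, 0.24, 0.33), k=(0.2, 0.2, 0.2),
    m=(100.0, 25.0, 21.0),              # usable meat: half of live weight (0.7 for pigs)
    e_meat=(1500.0, 1500.0, 2500.0),    # kcal/kg, placeholder values
    y_milk=(400.0, 50.0, 0.0), e_milk=(600.0, 800.0, 0.0), beta=(0.5, 0.25, 0.0),
)
print(f"n_a = {n_a:.1f} head/person")
```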
Wild animal products
The final contribution to the calorific value of the palaeodiet considered here comes from the meat of wild animals, red deer, roe deer and wild boar. As discussed in Section 5.2, bone assemblages at the CTU as well as other Neolithic and Bronze Age sites suggest that about w = 0.2 of the total energy intake was from the wild animal meat. Using their relative numbers, meat weight and calorific values given in Table 7, one can convert the required energy content into the numbers of the wild animals per person implied by the bone assemblages in the same way as is done for domesticated animals. We do not write out these relations here since they differ insignificantly from those already given.
Per capita subsistence land area and the subsistence carrying capacity
The total land area required to provide the amounts of cereals, meat and dairy products needed to satisfy the calorific dietary requirements of a single person is the sum of the specific land areas under cereals, pasture and fodder meadows obtained in Sections 6.1 and 6.2,

A = Af + Aa + Am,

and the subsistence carrying capacity follows as Ks = 1/A. This estimate needs careful qualification to be useful. Although Ks is called here a carrying capacity, it should not be confused with the maximum population density averaged over a large area that appears in demographic and population dynamics models. It is based on the land area required to support a single person and is used below to calculate the area required to support a rural settlement (the exploitation territory). However, the exploitation areas of settlements do not need to and, indeed, are unlikely to cover the landscape completely, while the land between the exploitation areas does not enter our calculations. Therefore, Ks represents the upper limit of the carrying capacity, attainable only under the unrealistic condition of densely packed exploitation areas. To extend such a calculation to the global carrying capacity, a careful analysis of the spatial patterns and lifetimes of the settlements is required, as well as detailed environmental data. An example of such an analysis can be found in Zimmermann et al. (2009), who suggest 8.5 persons/km² for the local carrying capacity of LBK settlements in 5250-5050 BC and note its strong spatial variability, whereas their global estimate is 0.6 persons/km². Ellen (1982, p. 43) notes that actual population densities are most often well below the local carrying capacity, at a level of 25-70% of it.
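As a numerical illustration (the per capita areas below are placeholders of a plausible magnitude; the nominal values belong in Table 9):

```python
def carrying_capacity(A_f, A_a, A_m):
    """Subsistence carrying capacity K_s [persons/km^2] from the per capita
    field, grazing and fodder areas [ha/person]."""
    A_total = A_f + A_a + A_m          # ha/person
    return 100.0 / A_total             # 1 km^2 = 100 ha

print(f"K_s = {carrying_capacity(A_f=2.3, A_a=18.0, A_m=1.0):.1f} persons/km^2")
```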
The maximum fraction of manured fields
The above relationships between rainfall, duration of cultivation and yield can be used to estimate the average yield at the CTU sites with allowance for the use of manure fertiliser. The overall yield Y [kg/ha/year], given the fraction fm of manured land, is

Y = (1 - fm) Yu + fm Ym,    (11)

where Yu and Ym are the yields from unmanured and manured fields, respectively. The amount of manure available depends on the amount of livestock kept and its management; the finds of faunal remains at the CTU sites (Table 7) constrain the numbers of animals kept. We will not be counting manure in the same detail as the meat, milk and grazing area, although it is easy to do, and will only include cattle manure in the calculation. Then the maximum fraction of the manured arable land, attainable if all the manure produced is used in the fields, is estimated as

fm = nc m / [μ (1 + f) Ag],

where Ag is the crop area per person from Eq. (5), (1 + f)Ag is the total area of both cropped and fallow fields, nc is the per capita number of cattle, m is the amount of manure collected per head and μ is the manure application rate. Using Eq. (11) for Y in Eq. (5), with Yg = (1 - s - l)Y, makes Ag itself a function of fm; we thus obtain a simple equation for fm, which solves to yield

fm = K Yu / [1 - K (Ym - Yu)],    K = nc m eg (1 - s - l) / [μ (1 + f) gC].

We take μ = 15 tonne/ha/year, as in the Sanborn experiments, and m = 2.5 tonne/head/year for the manure from cattle (LWFH 1993), assuming that 50% of the total amount of the manure is lost. The Sanborn data on wheat yields from manured and unmanured plots, summarised in Eq. (4) and Table 6, suggest Ym/Yu = 1.2 for D = 10 years and the typical rainfall in the CTU area, R = 550 mm/year. Assuming Yu = 650 kg/ha/year, with the nominal per capita cattle number nc = 1.8 head/person and the other variables from Table 9, we obtain fm ≈ 0.4; that is, nearly half of the total field area (both cropped and fallow) could be fertilised with the manure available for the nominal values of the parameters. Using Eq. (11), we then obtain the nominal average yield of Y0 = 700 kg/ha/year (in fact, we have adjusted the above value of Yu to preserve consistency with the nominal parameter values of Tables 8 and 9).
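The self-consistent solution for fm can also be obtained by simple fixed-point iteration; a sketch with the nominal figures quoted above (g = 0.4, c = 2500 kcal/day and eg = 3150 kcal/kg are assumed):

```python
def manured_fraction(n_c, m, mu, A_g, f=2.0):
    """Maximum manured fraction of the total field area (cropped plus fallow):
    f_m = n_c m / (mu (1 + f) A_g), capped at unity."""
    return min(1.0, n_c * m / (mu * (1.0 + f) * A_g))

# A_g depends on the mixed yield of Eq. (11), so iterate to self-consistency.
Yu, Ym = 650.0, 780.0            # kg/ha/year; Ym/Yu = 1.2 as quoted above
gC = 0.4 * 365.0 * 2500.0        # annual cereal energy requirement [kcal/person/year]
fm = 0.0
for _ in range(50):
    Y = (1.0 - fm) * Yu + fm * Ym                  # Eq. (11)
    A_g = gC / (3150.0 * 0.63 * Y)                 # Eq. (5) with Yg = 0.63 Y
    fm = manured_fraction(n_c=1.8, m=2.5, mu=15.0, A_g=A_g)
print(f"f_m = {fm:.2f}, Y = {Y:.0f} kg/ha/year")   # converges to f_m of about 0.4
```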
Figure 2. Land use of a settlement for the nominal diet structure with the relative fractions of cereals, domestic and wild animal products of 0.4, 0.4 and 0.2, respectively, and the cereal yield of 700 kg/ha/year. The settlement is represented by the innermost circle, surrounded by the field zone containing the area under crops (12%), fallow fields (23%) used for pasture, and specialised grazing area (45%), leaving 20% of the area as unproductive land (unshaded: ravines, dense forests, etc.). The next outer zone is used exclusively for livestock grazing; it also contains 20% of area that cannot be used for any agricultural purposes. The outermost zone is used to collect the animals' winter fodder from both grass meadows and suitable trees, which are assumed to occupy a half of the total area in that zone. The settlement radius R0 and the maximum distances to the zones, D1, D2 and D3 (shown not to scale), are discussed in the text and given in Table 9.
The exploitation territory of a settlement
With the above estimates, we can calculate the land area exploited by the population of a rural settlement. Consider a settlement of an area A0 with a population of N people. Here and below, we assume for simplicity that the settlement area is circular, so that A0 = πR0², where R0 is its radius. In fact, many CTU settlements have a roughly elliptical shape; then R0 is understood as the geometric mean of the minor and major semi-axes of the settlement, r1 and r2: R0² = r1 r2. The land around a settlement is divided into the three zones shown in Figure 2. The field zone is the closest to the settlement; both the currently cultivated and the fallow fields are located there, and the fallow fields in this zone can be used for grazing. The next outer zone is used as summer pasture for the livestock. The outermost zone is where the winter fodder for the animals is collected. The total area of the field zone serving N people is given by

A1 = N Af,

with the per capita area Af given in Eq. (6); it contains the crop area N Ag and fallow land of an area N f Ag, the remaining land in this zone being used for grazing or agriculturally unproductive (Figure 2). Most of the pasture and grazing areas are located in the grazing zone at a larger distance from the settlement. The fallow area N f Ag in the field zone can be used for grazing, so that the useful area of the grazing zone has to be equal to N Aa - N f Ag, where N Aa is the total grazing area required, with Aa given in Eq. (8). The total area of the grazing zone (including unproductive land of the same fractional area u) is then given by

A2 = N (Aa - f Ag) / (1 - u).
Finally, the fodder zone has to provide the meadow area N Am = Na (ac Mc + as Ms + ah Mh), and its total area follows as

A3 = N Am / ηm,

where ηm is the fraction of the total area bearing meadows and trees providing leafy fodder; we adopt, more or less arbitrarily, ηm = 0.5. Since the radius of the fodder zone is relatively large (Section 7), the magnitude of ηm affects the radius of the zone only slightly: a change of ηm from 0.1 to 0.9 changes the outer radius of the fodder zone around a settlement with 2000 inhabitants by just 10%.
It is straightforward to calculate the radial distances to the boundaries of the three exploitation zones from either the centre of a settlement or its border, assuming that the zones are circular. For example, the maximum distance from the settlement border to the outer edge of the fodder zone is given by

D3 = √[(A0 + A1 + A2 + A3)/π] - R0,

and similarly for the maximum distances to the edges of the field and pasture zones, D1 and D2, respectively.
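A sketch of the zone geometry (the areas are illustrative, in km², for a settlement of 210 ha; A1-A3 would come from the expressions above):

```python
import math

def zone_radii(A0, A1, A2, A3):
    """Maximum distances from the settlement border to the outer edges of
    the field (D1), grazing (D2) and fodder (D3) zones, for concentric
    circular zones of areas A1, A2, A3 around a settlement of area A0."""
    R0 = math.sqrt(A0 / math.pi)
    D1 = math.sqrt((A0 + A1) / math.pi) - R0
    D2 = math.sqrt((A0 + A1 + A2) / math.pi) - R0
    D3 = math.sqrt((A0 + A1 + A2 + A3) / math.pi) - R0
    return D1, D2, D3

# Illustrative areas in km^2 for a large settlement (A0 = 2.1 km^2, i.e. 210 ha).
print(zone_radii(2.1, 50.0, 300.0, 150.0))
```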
Labour costs of the agricultural cycle
For the estimates described above to be viable, one has to demonstrate that the food required can indeed be produced with the labour resources available. The availability of human labour rather than land could be the limiting factor in the early agriculture (Halstead 1996); our calculations confirm this. In this section, we discuss the labour required for a farming population to subsist, starting with estimates of labour productivity in pre-modern agriculture and proceeding to evaluating the labour costs of the agricultural cycle and then, the labour efficiency. Knowing the area required for the population, we then estimate its local subsistence carrying capacity within the exploitation area.
Experiments on agricultural labour productivity
Archaeological finds at CTU sites include a range of agricultural tools, including stone and antler hoes and flint sickle blades; remarkably, an antler ard was found at Grebenukiv Yar (near Maydanetske), dated to the late fifth-early fourth millennium BC (Pashkevich and Videiko 2006, pp. 88-95). Numerous ceramic models of sledges with ox heads clearly suggest the use of cattle for traction (Pashkevich and Videiko 2006, p. 89), confirming the conclusions drawn from analyses of faunal remains (Zhuravlev 2008). Semyonov (1974, pp. 194-226) describes in detail extensive experiments conducted in 1969-1970 at the Laboratory of Primitive Techniques in the Leningrad Branch of the Institute of Archaeology of the Academy of Sciences of the USSR. The experiments involved tilling and harvesting with tools modelled upon prehistoric and ethnographic examples. The tools tested include various digging sticks, stone, wood and antler hoes, wooden ards, and sickles with flint blades. In those experiments, friable soil could be prepared for sowing (tilled to a depth of 20-25 cm) with an oak dibble at a rate ranging from st = 50 m²/person-hour on a well-manured field to 30 ± 10 m²/person-hour on a denser soil and to 5 m²/person-hour on a dense, half-virgin soil. Adding an iron point to an oak dibble increased the productivity to st = 6-8 m²/person-hour on virgin soil, and to st = 8-15 m²/person-hour when the stick was further equipped with a pedal. Work with a dibble with an additional weight was slightly more productive but required a significantly larger physical effort. With a stone hoe, st = 13-17 m²/person-hour of light soil could be tilled, somewhat better than with an antler hoe at st = 6-17 m²/person-hour. Tilling of virgin soil covered with high grass and dense turf could be done at a rate st = 2.5 m²/person-hour with an oak dibble and about st = 6 m²/person-hour with hoes (2 hours 10 minutes of work with an antler hoe followed by 1 hour 15 minutes using an iron hoe on a plot 25 m² in size). Altogether, the productivity of hand tilling with a digging stick or stone hoe can be adopted as st = 10-20 m²/person-hour depending on the soil quality.
Tilling with horse-drawn oak ards, modelled on the earliest prehistoric evidence, involved two people: one to guide the horse and the other to manipulate the ard. A plot 250 m² in size, with soil tilled earlier but hardened after a 12-day drought, could be tilled with a Døstrup (spade) ard in 40 minutes (375 m²/hour) to a depth of 30-35 cm, whereas tilling a similar plot on the same field with digging sticks and hoes took about 50 hours. Thus, the ard increased the tillage efficiency by more than a factor of 50. Cross-ploughing of the plot with a Walle (crook) ard was equally successful. However, both ards failed to perform on virgin soil covered with grass. The Walle ard was tested on a previously harvested pea-oat field with stubble, plant roots and weeds on dry soil compressed by the heavy machinery used for harvesting. An area of 1430 m² was tilled to a depth of 10-20 cm in 2 hours 50 minutes (about 500 m²/hour). Although the depth of tilling with hand tools was 1.5-2 times larger and the furrows made with the ard were unevenly spaced, the soil tilled with the ard was better pulverised. Cross-ploughing of the plot removed the imperfections in an additional 2 hours 35 minutes. Trials of the Døstrup ard on a clayey soil after a strong rain demonstrated the difficulties of working on sticky soil with higher resistance from wet plant roots and weeds. A single ploughing of 1430 m² took 3 hours 20 minutes (about 430 m²/hour) in this case. Altogether, ploughing 1430 m² twice by two people took 5 hours 25 minutes, an overall rate of about st = 260 m²/person-hour. We note in passing that, of the two workers involved in ploughing, a physically weaker person, e.g., an older child, can guide the animal. Semyonov (1974, p. 252) cites Steensberg (1943, pp. 10-22), who experimented with harvesting ripe barley and partially ripe oats with modern and primitive sickles in 1938-1939 in Western Ukraine, Slovakia and Denmark. With a flint sickle, cutting low on the stem, at a height of 12-30 cm above ground, was done at a rate of sr = 30-40 m²/person-hour (10 m²/person-hour is equivalent to 100 person-day/ha for a 10-hour working day). Mowing 50 m² with a Viking- or Roman-type scythe took 17-30 minutes. Semyonov's (1974, pp. 253-254) own experiments on cutting wet grass (stem diameter 0.5-0.7 mm) with flint sickles, modelled on those found at the CTU site Luka-Vrublevets'ka, yielded a productivity of sr = 20-25 m²/person-hour; ripe rye could be reaped at a slightly higher rate, sr = 20-35 m²/person-hour. A cultivated fodder field (oats, barley, peas, goose-foot and 10% various weeds, up to 1.5 m in height and 0.8 cm in stem diameter) could be reaped (cutting the stem at a height of 25 cm above ground or more) with flint sickles at a rate of 20-30 m²/person-hour. Altogether, Semyonov (1974, pp. 255-256) concludes that the productivity of reaping with a flint sickle is only half that of a modern steel tool. White (1965) assesses as credible Columella's estimate of the average labour cost for Roman Italy: about 44 person-day/ha (18 person-day/acre) for the whole wheat cultivation cycle, excluding harvesting, with four ploughings (including ploughing-in the seed), and a further 5.7 person-day/ha (1.5 person-day/iugerum) for reaping. Halstead and Jones (1989) describe traditional farming on modern Greek islands.
Their conclusions emphasize the highly seasonal nature of agricultural activity, with maximum time stress in the harvesting period and, to a lesser extent, the ploughing season. These authors also note that overproduction and storage of more than one year's supply of food is a relevant response to the risk of a failing crop inherent in a highly seasonal climate. A typical labour cost of reaping cereals with a modern sickle was 10-30 person-days/ha, and the crop processing (threshing, winnowing, etc.) required about the same amount of labour as the reaping. Assuming 0.75-1.2 ha of cultivated land per person, harvesting at this rate would take 7.5-36 person-days per person. A typical modern productivity of tilling is 25 m²/hour (1 ha in 400 hours) when using hand tools, and about 150 m²/hour (1 ha in 65 hours) when tilling with a pair of oxen (Ellen 1982, p. 137).
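The various productivity figures above are easier to compare once converted to a common unit; a small Python helper (assuming, as in the text, a 10-hour working day) does the conversion from m² per person-hour to person-days per hectare:

```python
def person_days_per_ha(rate_m2_per_person_hour, hours_per_day=10):
    """Labour cost in person-days per hectare for work done at the
    given rate (m^2 per person-hour)."""
    return 10_000 / (rate_m2_per_person_hour * hours_per_day)

# Sanity check against the equivalence quoted above:
assert person_days_per_ha(10) == 100  # 10 m^2/person-hour = 100 person-day/ha

for rate in (15, 25, 260):  # slow hoe, fast hoe, ard (m^2/person-hour)
    print(f"{rate:>4} m^2/person-hour -> {person_days_per_ha(rate):6.1f} person-day/ha")
```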
The agricultural cycle and labour return
Using the estimates of labour productivity presented above, the dietary requirements of Section 3 and the land use estimates of Section 6, it is straightforward to estimate the labour cost of the arable farming and livestock maintenance required for the population to subsist.
Equation (5) expresses the area under crops in terms of the per capita dietary requirements in cereals and the cereal yield. Using the nominal values of the labour productivity presented in Table 9, we obtain the estimates of the labour cost of various agricultural activities collected under the Labour Productivity heading in that table. Whenever required, we assumed that a working year consists of 250 days, allowing for bad weather, holidays, etc. (White 1965).
It is convenient to summarise some (but not all) important aspects of the organization of farming in terms of the labour return, which can be defined as the ratio of the energy produced to the energy spent or, equivalently, as the ratio of the length of time over which a person can subsist (here, in terms of the calorific food content alone) on the food produced to the working time required to produce it. Based on ethnographic evidence, Ellen (1982, p. 45) suggests that an overall labour return of 10 is about the minimum acceptable in subsistence societies, with 1750 kcal produced per person-hour of labour for major economic activities. However, the labour return of plant cultivation alone can be as low as 2.4 among swidden horticulturalists in modern Indonesia (Ellen 1982, p. 152). To illustrate the significance of this quantity, we note that, theoretically, one person can support themselves with a labour return of at least unity; to support a family of six, two working family members must achieve a labour return of at least three; and if any surplus food is to be produced, as emergency storage or for exchange, a higher labour return is required. In our calculations, we focus on the costs of labour that requires a certain physical fitness, such as land tillage, and on those seasonal activities that must be completed in a limited time, such as land preparation for sowing and reaping the harvest and winter fodder. These are the most demanding parts of the agricultural cycle in terms of either the workforce or time. We assume that only a fraction w of the family members are capable of physically demanding work, with w = 1/3-1/2. Many other activities, such as sowing, cleaning the grain and collecting leafy fodder, can be assigned to less capable family members and/or spread over a longer time.
Even at the lower-end tillage productivity of 15 m²/person-hour, it takes only 66 person-days to satisfy the annual dietary requirements of a single person in cereals. Considering a family of six people of whom only two are physically fit to work (w = 1/3), the cost of producing the grain required for its annual subsistence is just 396 person-days per family per year, as compared to the 500 person-days available annually in such a family. The resulting labour return is reasonably high: 365 person-days/66 person-days ≈ 5.5.
However, a problem with this option is that the tilling of a family cereal field requires 104 person-days, or 52 days if done by two workers, while the soil preparation and sowing must be done in not more than 30 days to avoid significant crop losses (Percival 1974, p. 423). Tilling the family field with hand tools by two people can only be finished in 31 days if the productivity is st = 25 m²/person-hour. This is marginally acceptable but still leaves little room for any eventualities such as bad weather or difficult soil. There are several ways to resolve the problem. An obvious one is to have more family members working in the fields, especially during the tillage and sowing: with half of the family members tilling the field at a rate of st = 15 m²/person-hour, the work can be finished in about 31 days. Another obvious option could be to use wheat varieties that provide a higher yield. However, this does not lead to any significant saving in labour. For example, two people working at st = 15 m²/person-hour could till the family field in 30 days only if the wheat yield were as implausibly high as Y = 2200 kg/ha/year (with a high labour return of about nine, though). Neither winter crops nor manuring alone is likely to boost the yield to that level. Yet another option is to reduce the reliance on cereals by reducing their contribution to the diet. This could be achieved, for instance, if 20% of the calorific content of the diet came from cereals and 60% from domestic animal products, provided the cereal yield is Y = 1100 kg/ha/year. A more radical, and long-term, solution is to replace the hoe with the ard: two workers can then plough the family field in just 3 days. As mentioned above, primitive ards are not efficient on heavy and virgin soils, where the hoe appears to be the only alternative. This fact highlights the difficulty of moving the fields to virgin soil if the settlement has to be relocated.
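A minimal sketch of this time-budget argument (using the per capita field area of about 0.26 ha implied by the nominal diet at Y = 700 kg/ha/year, and a 10-hour working day) reproduces the numbers quoted above:

```python
def tilling_days(field_ha, rate_m2_per_person_hour, workers, hours_per_day=10):
    """Calendar days needed to till a field of the given size (ha)."""
    return field_ha * 10_000 / (rate_m2_per_person_hour * hours_per_day * workers)

family_field_ha = 6 * 0.26  # family of six at ~0.26 ha of crops per person

print(tilling_days(family_field_ha, 15, 2))   # hand tools, 2 workers: ~52 days
print(tilling_days(family_field_ha, 15, 3))   # hand tools, 3 workers: ~35 days
print(tilling_days(family_field_ha, 260, 2))  # ard, 2 workers: ~3 days
```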
Another bottleneck in the agricultural cycle is cutting grass for the winter fodder. If only meadow grass were used for fodder, working with a flint sickle would require 114 person-days to provision the family livestock for winter. This is obviously untenable, even for three workers. However, leaves of certain tree species also provide excellent fodder (see above), and younger or weaker members of a farmer's family could collect them. We assume, admittedly arbitrarily, that only half of the fodder required is meadow hay. Then the labour cost of fodder (excluding collecting the leaves) is quite acceptable at 67 person-days. A further labour-saving option is to improve the technology and cut grass with a scythe.
There are innumerable such combinations reflecting various techniques and strategies of farming, and there is no point in trying to discuss them all. The diversity in the implementation of farming strategies between individual CTU sites and between CTU evolutionary stages, apparent from the archaeological evidence, is likely to reflect this wide range of possibilities. Instead of discussing a large number of hypothetical scenarios, we present our results in graphical form to show the dependence of the labour return on the wheat yield, the diet structure, etc., with the aim of identifying the limiting elements of a farming strategy. To make the results mutually comparable, we only vary one or a few parameters at a time, keeping the others fixed at their nominal values given in Table 8.
Trends in the labour return and land use
Calculations of the labour costs of various agricultural activities readily identify the well-known seasonal labour bottlenecks in the farmer's year (e.g., Fuller et al. 2010), where large parts of the annual work have to be done in a limited time: preparation of the land for sowing, collection of winter fodder, and harvesting. The land tilling time, limited to about 30 days, can be an especially demanding constraint. Depending on the weather, harvesting may need to be completed in a few weeks or even a few days, before the spikelets have dried and shattered. However, this mostly limits the reaping time, since the grain can be threshed and cleaned later. Since naked wheat grains are easily detached from the ear, they are best threshed immediately after reaping; hulled wheats, on the other hand, can be reaped and then stored to be threshed on a daily basis. Thus, we focus on the reaping time in our assessment of the labour costs. Collecting hay, straw and leaves for winter fodder is another activity that may impose stringent time limits. However, younger and weaker members of the family can be involved, relieving the pressure on those fit for hard physical work (this is also true of crop reaping). Land tilling thus appears to be the most demanding seasonal activity in terms of time and labour stress.
To illustrate the results of the calculations, we present per capita figures, e.g., the labour cost of producing enough food to support a single person. Furthermore, we discuss the requirements of a family of six people, of whom only two or three (w = 1/3 or 1/2) are capable of work that requires a certain physical fitness, and how those requirements could be met. To support such a family, two workers must achieve a labour return of at least three if w = 1/3, or three workers a return of at least two if w = 1/2. Finally, we discuss the limiting factors in the agricultural cycle of typical settlements of 2 and 10 ha in area, which host about 50 and 270 inhabitants, respectively (assuming a constant population density within the settlements).
Conclusions drawn from the calculations presented further in this section are testable with relevant archaeological material and its analysis. In general, our results imply that certain types of the temporal evolution of the diet are more advantageous and efficient than others, and that different stages in the development of agriculture can have different preferable subsistence strategies.
Cereal yield and agricultural technology
One of the constraints on the size of the exploitation area of a rural village is that its fields should be within 5 km at most. This constraint can safely be satisfied even for a large settlement of 75 ha in area as long as the cereal yield exceeds about 350 kg/ha/year. Figure 3 illustrates the strong effect of the cereal yield on the labour return for land tillage with either hand tools or the ard. Unsurprisingly, the use of the ard reduces the labour costs and increases the labour return rather dramatically, by a factor of 1.5-2 over the whole agricultural cycle. For a given diet structure, lower yields require larger field areas and, consequently, a larger distance to them. For a settlement of N = 53 people and A0 = 2 ha in size, the maximum distance to the crops from the settlement border varies from D1 = 0.7 km to 0.4 km as Y increases from 500 to 1500 kg/ha/year. The maximum distance to the grazing zone varies very little, remaining about D2 = 2.2 km; the maximum distance to the fodder zone, D3, differs from D2 by just 50 m. The labour cost of the cereal production varies with the size of the cultivated fields. With 40% of the diet's calorific content coming from cereals, yields below about 400 kg/ha/year are untenable, as the amount of labour required to till the land needed to feed one person exceeds 31 person-days using hand tools. For a family of six, yields in excess of 1230 kg/ha/year are required to till the family plot in less than 60 person-days; this is just acceptable if two members of the family are fit for hard physical work. Thus, land tillage causes time stress if done with hand tools.

Figure 3. The effect of the cereal yield on the labour return, i.e., the amount of output as a fraction of the nutritional energy requirement per working hour (or the ratio of the energy produced to the energy spent on the production, or the ratio of the time the output can sustain the worker to the working time required to produce the food). Solid (blue): tillage with hand tools; dashed (red): tillage with ard. The diet structure is assumed to be fixed at g/d/w = 0.4/0.4/0.2 for the relative contributions of the cereals, domestic and wild animal products.

The use of the ard removes this constraint and leaves abundant time to continue using hand tools, say, in vegetable gardens. Even for implausibly low yields of Y < 150 kg/ha/year, the labour required to till a family plot is just 28 person-days.
However, the earliest ard found in the CTU area dates to Trypillia BI; the earliest CTU farmers most probably used only hand tools. One option to avoid the excessive time stress in the land tilling and sowing season is to reduce the contribution of cereals to the diet. If the relative contributions of cereals, domestic and wild animal products were g/d/w = 0.23/0.57/0.2 (instead of the nominal 0.4/0.4/0.2), the per capita crop area reduces to 0.15 ha/person for Y = 700 kg/ha/year, and its tilling would take 10 person-day/person. The labour to prepare a family plot for sowing is, correspondingly, 60 person-day/family. Keeping livestock is more efficient in terms of the energy return: with the diet containing only 23% cereals, the labour return is as high as 10. Cutting grass for the herd requires 96 person-day/family; this is a large load but not untenable, given that fodder can be collected between the sowing and harvesting seasons by virtually all family members. A possible problem with this option lies not in the labour cost but in the distance to the grazing area, as the large herd needs a large area to feed on. The distance to the outer boundary of the grazing area around a settlement of about 50 inhabitants (2 ha in area) is D2 = 2.6 km. Larger settlements become still more problematic. For instance, a 10-ha village of about 270 people has its fields within D1 = 0.9 km of the settlement, but the outer border of the grazing area is D2 = 5.8 km away. The distance to the fodder zone differs insignificantly (by about 200 m) from that to the grazing area.
The magnitude of D2 obviously depends on the grazing area per animal head, and our nominal figure of Ac = 10 ha/head is rather generous. Given that less than 1 ha/head of flooded pasture is sufficient for cattle, D2 can be reduced to 3.8 km for a village of 10 ha in area if Ac = 5 ha/head, corresponding to an approximately equal split between meadow and forest grazing (with all other parameters unchanged). With Ac = 5 ha/head, a settlement of 20 ha in area still has D2 ≈ 5.8 km, but the problem arises again for larger settlements.
Since arable fields represent a relatively small fraction of the exploitation area, changes in the cereal yield affect the local carrying capacity only weakly. As Y varies from 500 to 1500 kg/ha/year for the nominal diet structure, Ks varies by a few percent, remaining close to 3.4 persons/km². Changing the diet to g/d/w = 0.23/0.57/0.20 leads to Ks ≈ 2.4 persons/km².
Altogether, we suggest that large, exclusively farming settlements of a few thousand people and a few hundred hectares in area are sustainable only if the ard is available. Otherwise, such a settlement has to be supported by satellite farming villages, which would imply complex social organization, labour and occupation division, and well-established, stable exchange networks. The development of complex structures based on technological advances is implausible at the early stages of the CTU. This can be a reason for the dominance of smaller and medium-size settlements in the early CTU. Large, proto-urban settlements have to be supported by adequate technology and/or the developed social relations that presumably emerged at the later stages.
The diet structure and labour return
Having identified and quantified specific mechanisms of the influence of the population diet on agricultural activities, we explore this connection in more detail. It appears that reducing the fraction of cereals in the diet is the only obvious way to cope with the labour bottlenecks in a crop-based agriculture, especially if the cereal yield is low. The variation of the labour return and the local subsistence carrying capacity with the relative contribution of cereals to the diet is shown in Figure 4, assuming a constant cereal yield of Y = 700 kg/ha/year and a constant contribution of wild animal food to the diet, w = 0.2. Solid and dashed lines show the dependencies obtained for land cultivation with hand tools and with the ard, respectively. A significant constraint that arises if hand tools are used is that a family plot can be tilled in less than 60 person-day/family only for small contributions of cereals to the diet, g/d < 0.4. An advantage of a diet with a small fraction of cereals, which could be attractive at early stages of the development of farming, is that the labour return is higher when the cereal fraction is lower. For g/d < 0.4, the labour return exceeds 10 even if hand tools are used.
With the ard, the tillage takes less than 10 person-day/family for g/d < 4, and the labour return exceeds 10 for any reasonable fraction of cereals in the diet. The size of the exploitation territory remains reasonable across a large part of the range shown in Figure 4, with 0 < D1 < 0.7 km, 6.9 > D2 > 1.5 km and 7.0 > D3 > 1.6 km for 0 < g/d < 3 and a settlement of 2 ha in area with 50 inhabitants. A larger settlement has a larger exploitation territory, with 0 < D1 < 1.5 km, 3.1 < D2 < 3.4 km and 3.2 < D3 < 3.5 km for 0 < g/d < 3. The sense of the inequalities for D2 and D3 changes as compared to the smaller settlement because the radii of the grazing and fodder areas are larger, while the zone area increases quadratically with its radius.
An increase in the fraction of cereal products (larger g/d) beyond the equal split, g ≈ d, affects the labour return rather weakly. The labour return varies from 8 to 6 for hand-tillage and from 12 to 10 for ard-tillage as g/d increases from 1 to 10. (However, values of g/d above about 0.5 do not appear to be practical with hand-tilling because of the time constraints noted above.) Thus, the diet structure, at least after the introduction of the ard, is flexible in this sense as long as the contribution of cereals is large enough, allowing much room for change without any strong effect on the amount of labour required to support it. The change in the labour efficiency is relatively weak mainly because changes in g/d lead to a seasonal redistribution of the labour cost between collecting winter fodder, tilling the land and harvesting. Thus, a diet dominated by cereals permits a redeployment of labour resources with little effect on the labour efficiency in case of a poor or even failed harvest or any other hazard in food production. The proportion of cereal food noticeably affects the local subsistence carrying capacity, since a larger fraction of domestic animal products makes the economy more land-extensive through the demand for grazing and fodder lands. The magnitude of Ks increases slightly faster than linearly with g/d, as a stronger reliance on cereals means a smaller exploitation area.
These calculations confirm that changing the diet can hardly remove the labour-cost and time bottlenecks in the soil preparation for sowing: only when the contribution of cereals is less than half of that from domestic animal products can two workers till the fields of a family of six in less than 30 days. A diet with similar contributions of cereal and domestic animal products is only possible if the ard replaces hand tools in the land tillage. On the other hand, a diet dominated by cereals (where possible) is rather flexible and can be adjusted widely without much effect on the labour return. This observation may be relevant to discussions of the risks involved in growing cereals: if the harvest is poor but the (reduced) dominance of cereals can still be maintained (e.g., because of stored grain), a stronger reliance on animal products does not affect the labour efficiency much, but rather requires a seasonal redistribution of the workload.
To make this point clearer, consider another trajectory in the parameter space that may help to clarify possible risk-management strategies associated with arable agriculture. Figure 5 shows the variation of the labour return and the relative fraction of cereals in the diet with the cereal yield, where we assume that the relative contribution of cereals to the diet is proportional to the cereal yield, g = 0.4Y/(700 kg/ha/year), keeping the total contribution of domestic products constant, g + d = 0.8. The crop area is then independent of the cereal yield, remaining equal to 0.26 ha/person. This scenario is supposed to model the reaction to a failed harvest, or a possible evolution of the diet as the cereal yield increases systematically with time (e.g., because of the selection of cereal varieties).
With the cereal fraction increasing together with the cereal yield, the labour return is significantly higher, and its variation with the yield weaker, than in the case of a fixed diet illustrated in Figure 4. This version of the subsistence strategy is apparently advantageous as it both maximizes the labour return and provides flexibility in terms of the redistribution of resources between the growing of crops and animal husbandry. As mentioned above, this strategy may also help to offset the damage of a failed harvest.

Figure 5. The effect of the cereal yield under a different diet structure on the annual labour return with hand tools (solid) and ard tillage (dashed), and the ratio of the contributions of cereals and domestic animal products to the diet, g/d (dotted). The contribution of cereals is assumed to be proportional to the cereal yield, g = 0.4Y/(700 kg/ha/year), and the total contribution of domestic foods to the diet is kept constant, g + d = 0.8.

Figure 6. The effect of the milk yield on the labour return (solid: hand-tool tillage; dashed: ard tillage), the local subsistence carrying capacity (dotted) and the per capita number of domestic animals required to satisfy dietary requirements (dash-dotted).
The role of dairy products
Dairy products appear to have been a part of the European human diet since the Early Neolithic. However, previous palaeoeconomy analyses rarely, if ever, included dairy products. The general attitude felt in the literature is that they are an attractive but optional addition rather than an essential component of the diet. Based on our calculations, we argue that milk and dairy products could be an essential component of the diet, providing an opportunity to reduce labour costs. Figure 6 illustrates the role of the dairy products. The results shown are obtained by increasing the cow and caprine milk yields together, ys = 50 (yc/400)^(1/3), where both ys and yc are measured in litres/head/year. This dependence is chosen exclusively for illustrative purposes, to ensure that the range of variation of the caprine milk yield remains reasonable as the cow milk yield varies. In particular, the nominal figure ys = 50 litres/head/year for yc = 400 litres/head/year is reproduced and, at the top end of the range, ys = 146 litres/head/year for yc = 10,000 litres/head/year is similar to modern livestock figures.
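The scaling is simple enough to verify directly; a short Python check of the quoted endpoint values:

```python
def caprine_milk_yield(yc):
    """Caprine milk yield ys (litres/head/year) tied to the cow milk
    yield yc through the illustrative scaling ys = 50 (yc/400)^(1/3)."""
    return 50.0 * (yc / 400.0) ** (1.0 / 3.0)

print(caprine_milk_yield(400))     # 50.0   (nominal pair)
print(caprine_milk_yield(2000))    # ~85.5  (close to the ~90 quoted below)
print(caprine_milk_yield(10_000))  # ~146.2 (modern-like top of the range)
```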
Unsurprisingly, increasing the milk yield boosts the labour return. What is surprising is that the effect is so significant. As the milk yield increases from zero to 2000 litres/head/year, the efficiency of the hand-tool agriculture grows from 5 to 10, while the return of labour assisted by the ard is boosted from 6 to 16. For larger milk yields, the size of the herd required to satisfy the dietary requirements reduces, and hence the grazing and fodder zones become smaller. As a result, the local carrying capacity increases with the milk yield linearly, from 1 to 12 persons/km², as yc increases from 0 to 2000 litres/head/year and ys increases simultaneously from 0 to about 90 litres/head/year. We neglect the labour costs of milking, tending the animals, collecting leaf fodder, etc., and this, of course, contributes to the increase in the labour return. Again, these activities can be assigned to the weaker family members: the labour returns quoted here refer to the physically most demanding activities performed by the few physically stronger people.
The effect of the milk yield on the carrying capacity is so strong because the number of domestic animals that need to be kept reduces significantly if their milk is used for food. The dash-dotted curve in Figure 6 shows how rapidly the per capita number of livestock decreases as the milk yield increases. For yc = 0, an implausibly large herd of 16 head is required to satisfy the dietary requirements of a single person for the diet structure assumed (g/d/w = 0.4/0.4/0.2). We discussed above how such an implausible situation could be avoided, but this stresses once more the importance of dairy products. For yc = 400 litres/head/year, the herd size decreases to about 5.2 head per capita (1.8 head of cattle, 1.2 caprines, 1.7 pigs and 0.4 horses per person), still a rather large herd to keep. The rapid decrease continues to 1.4 head/person, comprising 0.5 cattle, 0.3 sheep or goats, 0.5 pigs and 0.1 horses, for yc = 2000 litres/head/year. It is clear that, even for this productivity of the milk herd, still low by modern standards, there are many opportunities to produce a surplus beyond the subsistence requirements.
To provide another illustration of the importance of dairy products, we note that, if no milk is used at all and the fraction of domestic animal products (then, meat alone) remains equal to d = 0.4, the size of the per capita herd increases to na = 16 head/person for the nominal parameter values, clearly an untenable number. Moreover, the daily consumption of meat from domestic animals alone becomes as high as 500 g/person/day (as compared to 160 g/person/day if milk is used). To put this figure into a simple but relevant context, we note that a fillet steak served in a typical British restaurant weighs 230 grams. It is thus clear that palaeodiet reconstructions with a significant fraction of animal products are inconsistent with the constraints of human physiology and nutrition unless a significant part of the animal food is dairy products.
The exploitation territory
The above discussion contains references to the size of the exploitation territory of a settlement in connection with the expectation that the distance to the arable fields should not exceed 5 km, and preferably be within 1-2 km of a settlement, whereas the distance to the pasture areas should be within 5-10 km. In this section, we summarize this aspect of our results. Figure 7 shows the maximum distances from a settlement border to the field zone (D1), the grazing zone (D2) and the fodder zone (D3). For this illustration, we have chosen a typical settlement size of the Early Trypillia, with an area A0 = 2 ha and 50 people (Table 2). Each panel of Figure 7 corresponds to one of the models discussed above and illustrated in Figures 3-6. The maximum distance to the fields, D1, is close to 0.5 km in all cases except for extremely low cereal yields (Panel a) or an extremely high fraction of cereals in the diet (Panel b), but even then it does not exceed 0.8-1.5 km. The distance to the grazing area, D2, never exceeds 3 km and is smaller than 2 km for rather realistic choices of parameters. The distance to the fodder zone, D3, differs from D2 insignificantly because the radius of this zone is large, and hence even a narrow annulus can have a substantial area.
The situation is not that simple for larger settlements. Assuming, for the sake of argument, that the population density is independent of the population size (375 m² of the total settlement area per person), a settlement with an area of 10 ha has about 270 people. With the nominal values of the parameters of Table 8, the outer radii of the three zones, D1 = 1.2 km, D2 = 4.8 km and D3 = 5.0 km, approach the maximum acceptable values. A 40-ha settlement (1100 people) is only marginally sustainable, with D1 = 2.4 km, D2 = 9.7 km and D3 = 9.9 km. Of course, optimisation of the subsistence strategy by changing the diet (perhaps only slightly) or a higher cereal or milk yield, to mention just a few options, can make a 40-ha village viable. We note that the amount of fallow land adopted (twice the area of the fields under direct cultivation, f = 2) might be unrealistically small, as it implies a triennial fallow. Early agricultural systems could use longer fallow intervals; ethnographic data suggest that a fallow length of 8-15 years is not unusual (Styger and Fernandes 2006). Longer fallow would obviously result in a larger exploited land area. Notably, the median size of the Trypillia settlements given in Table 2 does not exceed 8.4 ha. It is clear, however, that significantly larger settlements would need a fundamental change in the organization of their food supplies, and the division of labour and occupation, with the ensuing increased social complexity, is an obvious option. Figure 8 shows the variation of the maximum distance to the field zone from a settlement boundary with the size of the fallow area relative to the cropped area for several typical settlement sizes. It is noteworthy that the distances to the grazing and fodder zones do not change as f varies, since the larger fallow land is used for pasture, so that the size of the grazing area reduces as the fallow area increases.

Figure 8. The dependence of D1, the maximum distance from a settlement boundary to the field area (see Figure 2), for settlements of various areas and populations: 2 ha, 50 people (solid); 5 ha, 130 people (long-dashed); 10 ha, 270 people (short-dashed); and 15 ha, 400 people (dash-dotted).

The distance to the field zone increases with the fallow ratio at a modest rate (roughly, as the square root of f). The field zone is within 1-2 km of a village only if the length of the fallow is not too large: D1 < 1.5 km for f < 20 for a settlement of 2 ha in area, but only for f < 7.5 around a 5-ha settlement. The fields of a bigger settlement of 10 ha are within this distance only for f < 3; for a still larger settlement of 20 ha, the maximum acceptable value of f is a marginal 1.5. Of course, these figures would be smaller for a higher cereal yield, with D1 decreasing roughly in inverse proportion to the square root of the yield. However, this illustrates once more that settlements of more than a few tens of hectares, with more than a few hundred people, are likely to function differently from smaller villages, as the need to import food from satellite farming villages rapidly increases with the size of the settlement.
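The square-root growth of D1 with the fallow ratio follows directly from the circular-zone geometry. The Python sketch below is a rough reconstruction assuming about 0.26 ha of cropped land per person; the absolute distances depend on the yield and diet adopted for Figure 8, so the values should be read as illustrating the scaling rather than reproducing the figure:

```python
from math import pi, sqrt

def field_distance(A0_ha, people, f, crop_ha_per_person=0.26):
    """Maximum distance (km) from the settlement border to the edge of
    the field zone (cropped plus fallow land), for circular zones.
    f is the ratio of fallow to cropped area."""
    A0 = A0_ha / 100.0                                  # ha -> km^2
    A1 = people * crop_ha_per_person * (1 + f) / 100.0  # ha -> km^2
    return sqrt((A0 + A1) / pi) - sqrt(A0 / pi)

for A0_ha, people in ((2, 50), (5, 130), (10, 270), (15, 400)):
    row = [round(field_distance(A0_ha, people, f), 2) for f in (2, 5, 10, 20)]
    print(f"{A0_ha:>2} ha, {people:>3} people: D1 at f = 2, 5, 10, 20 -> {row} km")
```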
Surplus food production
The above estimates present an overall economic picture of farming based on the immediate dietary requirements of the population. There is another aspect of this picture that we have touched upon only in passing: the risks of agricultural production mainly associated with failed crops (Halstead 2004). The diversification of the domesticated plants and livestock, storage of emergency reserves and wider use of wild resources are among the strategies used to mitigate this risk. However, the storage for emergencies obviously requires some surplus of food to be produced implying higher labour costs. The opportunity to produce a surplus product can also profoundly affect the economic behaviour of the farmer. If a surplus product can be, and indeed is, produced beyond the needs of the farmers and their families, the importance of transportation and communication greatly increases, as the surplus produce needs to be transported to the consumer on a regular basis. This makes it more important for the farm to be located conveniently with respect to (most often, close to) transportation routes, of which waterways are most obvious. In turn, this makes isolated hamlets a less attractive option for a farmer to occupy, thereby facilitating the agglomeration and clustering of the population. In the discussion above, we have identified a direct route to a surplus food production via the use of dairy products: by providing a significant food resource that requires relatively little labour investment from the physically fit family members, it provides an opportunity to redirect the resources to producing surplus product in any branch of agriculture. We shall explore these opportunities in another publication.
Archaeological evidence
There are many indications that most Trypillia settlements had a relatively short lifetime of less than 100 years. Most of the settlements have a single-layer stratigraphy. Tells are found only in the Carpathian piedmont areas, and even there only isolated phases and stages are represented in the excavation finds, often separated by significant gaps. This is also true of the multi-layered sites discovered in the eastern part of the CTU area, where material finds are restricted to 2-3 phases. For example, the largest settlements, such as Talianky and Maydanetske, belong to a limited part of the same stage, CI (Smaglii and Videiko 1990; Ryzhov 1990).
There have been several attempts to estimate the Trypillia settlement lifetime, converging on 50-100 years (e.g., Krutz 1989; Markevich 1981). These estimates were based on archaeological dating and pottery typology, together with 14C and archaeomagnetic dating. For example, Ryzhov (1990) identified distinct phases in the development of the Trypillia sites in the Dnieper-Southern Bug interfluve in the fourth millennium BC. The types of painted pottery found there suggest up to five development phases belonging to Stage BII and four belonging to CI, nine phases altogether. According to archaeomagnetic dating, the overall duration of these phases is 500-600 years (Telegin 1985, pp. 11-17). The author of these archaeomagnetic measurements, G. F. Zagnii (private communication), suggests that their accuracy is 25-50 years, sufficiently high for our purposes; for comparison, recent archaeomagnetic studies of Neolithic sites in Bulgaria (Jordonova et al. 2004) and Greece (Aidona and Kondopoulou 2012) report accuracies of up to ±70 and ±85 years, respectively (from the 95% range of dates). (The accuracy of the archaeological dates obtained from 14C measurements is as yet insufficient to make them useful in this discussion.) The average duration of a single phase, which can be identified with the settlement lifetime, follows as 50-70 years. However, the stratigraphic structure within a single phase (e.g., Maydanetske; Shmaglii and Videiko 2001-2002) suggests that at some sites the lifetime could be somewhat longer, but never exceeding 80-150 years. In the vast majority of cases, repeated occupation of a given site, if it happened, occurred after prolonged periods of abandonment, often 200-500 years long.
A depleted resources model
From the available archaeological and agricultural evidence, it is possible to estimate the maximum lifetime of a farming settlement if it is limited by decreasing soil fertility alone. Consider a fallow system and denote f the ratio of the fallow to cropped areas; at any time, a plot is either being farmed, so that its fertility decreases, or lies fallow, so that its fertility recovers. Let TR be the recovery time scale of the soil fertility and TD the fertility depletion time. In a depletion phase (i.e., when a field is being farmed), the content of soil nutrients decreases, which can be described as an instantaneous reduction in the potential yield, YD(t) = Y0 exp(−t/TD), where YD denotes the crop yield at a time t in the cultivation phase that starts at t = 0, and Y0 is the starting yield (e.g., that of virgin land). When discussing the Sanborn data above, we used linear fits to the yield variation with the time elapsed since the start of the cultivation, which prove to be sufficient over relatively short periods of order 30 years. On longer timescales, the yield is likely to decrease exponentially with time, as adopted here, assuming that a constant fraction (rather than amount) of nutrients is extracted annually from the soil by the crop plants.

Figure 9. An illustration of the cereal yield changes in a fallow system with the initial yield Y0 = 700 kg/ha/year, the ratio of fallow to cultivated field areas f = 2 (so that any given plot is used for the crops for one year and then stays fallow for two years), the fertility depletion time scale TD = 23 years and the fertility recuperation time scale TR = 100 years. Note that the time plotted includes only the periods of cultivation. (The fallow periods correspond to the step increases of yield after each year of cultivation.) For total elapsed time, these numbers should therefore be multiplied by (1 + f) = 3.

Suppose that the plot is farmed for a period t1 and then left fallow for a period t2 = f t1 while one of the other f plots is cultivated. The recovery of the potential yield of the fallow field is then described by an exponential relaxation towards Y0 with the time scale TR. A complete recovery of a fallow field is not waited for, simply because it takes too long. Rather, a plot is cropped again after the full rotation, at t = (1 + f)t1, and the cycle repeats again and again. The resulting variation of the yield from the whole land area containing all the plots involved is shown in Figure 9. The cycle is repeated until the yield reduces to a level Ym too low to be useful, and then the whole site is abandoned and a new settlement location is sought. The land is abandoned at the time T such that Y(T) = Ym, where Y(t) is the average yield (with the saw-tooth changes smoothed out), and the settlement lifetime T follows accordingly. From the fits to the Sanborn data discussed above, the average half-life of a plot of land (i.e., the average time for the yield to halve) is approximately Tu = 17 years for unmanured plots and Tm = 28 years where manure fertilizer was applied. This gives the combined half-life TD = [(1 − fm)/Tu + fm/Tm]^(−1) ≈ 20 years for a fraction of manured fields fm = 0.4. We assume a recovery time of TR = 100 years (e.g., Boserup 1965) and adopt f = 2 and Ym = 250 kg/ha/year, which corresponds to the minimum acceptable labour return of three with hand tillage in Figure 3. For Y0 = 700 kg/ha/year and t1 = 1 year, we obtain T ≈ 130 years as the settlement lifetime in this model.
This estimate is rather sensitive to the amount of land kept fallow, but only weakly sensitive to the minimum yield that leads to the settlement being abandoned. For example, for f = 3 and other parameters unchanged, we obtain T ≈ 300 years.
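A crude discrete version of this model is easy to iterate numerically. The Python sketch below steps the potential yield through cultivation years (exponential depletion with time scale TD) and fallow intervals (exponential relaxation back towards Y0 with time scale TR). This end-of-cycle recursion is not identical to the smoothed average-yield formula used in the text, so the lifetimes it returns should be read as order-of-magnitude checks:

```python
from math import exp

def settlement_lifetime(Y0=700.0, Ym=250.0, TD=20.0, TR=100.0, f=2, t1=1.0,
                        max_years=10_000):
    """Elapsed years until the yield first drops below Ym in a fallow
    rotation: each plot is cropped for t1 years, then lies fallow for
    f*t1 years while its yield relaxes back towards Y0."""
    Y, elapsed = Y0, 0.0
    while elapsed < max_years:
        if Y <= Ym:
            return elapsed
        Y *= exp(-t1 / TD)                     # depletion while cropped
        Y = Y0 + (Y - Y0) * exp(-f * t1 / TR)  # partial recovery in fallow
        elapsed += (1 + f) * t1
    return float("inf")  # recovery balances depletion; site never abandoned

print(settlement_lifetime())         # ~105 years with TD = 20
print(settlement_lifetime(TD=23.0))  # ~141 years with the Figure 9 value of TD
print(settlement_lifetime(f=3))      # inf: this recursion settles just above Ym
```

The strong sensitivity to f seen in the last call mirrors the jump from T ≈ 130 to T ≈ 300 years noted above.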
Conclusions and discussion
From the very beginning of its evolution, the CTU possessed a developed agricultural technology with a wide spectrum of domesticated plants and animals. We present palaeoeconomy reconstructions of pre-modern agriculture, selecting, wherever required, features specific to the CTU, and paying special attention to the self-consistency of all the elements of the model within the constraints provided by the available archaeological, environmental and technological evidence. With full appreciation of the tentative and approximate nature of any estimates of this kind, our calculations firmly demonstrate the sustainability of the CTU agriculture. Our models include several equally important elements. We start with the calorific content of the palaeodiet suggested by archaeological data, stable isotope analyses of human remains, and palynology studies in the area. We allow for all known domestic and wildlife elements of the diet and provide plausible estimates of the pre-modern yield of ancient cereal varieties and its dependence on the rainfall and the duration of continuous land cultivation. Importantly, we pay proper attention to the labour costs of the various seasonal parts of the agricultural cycle, not only for an individual but also for the farmer's family (with its majority of weak and young members not capable of hard physical labour); this was rarely, if ever, done systematically in earlier studies of pre-modern agriculture. Finally, we put our results into the context of the exploitation territory and catchment analysis to translate the subsistence needs and strategy of an individual to those of settlements of various sizes. Many (but not all) aspects of the economy are conveniently summarised in terms of the labour return, the ratio of the amount of food energy produced to the energy spent or, equivalently, the ratio of the total amount of labourer-time available to the working time required. Another important aspect of the agricultural activities is the relation of the labour productivity to the time available for seasonal agricultural activities. Of these, the land preparation for sowing causes the strongest time stress. We address this aspect of the problem using the published results of experiments on tillage, reaping, threshing and winnowing using primitive tools and/or traditional techniques.
The simplest subsistence strategy, based on a complex of cereals, domestic and wild animal products, with fallow cropping, appears to be capable of supporting an isolated, relatively small farming community of 100-300 people even without recourse to technological improvements such as the use of manure fertiliser. The most important factor limiting the size of such a community is the labour productivity and the labour cost of land cultivation with hand tools. The time stress at the crop sowing time can be relieved by reducing the fraction of cereals in the diet to about 25% in terms of calorific content. Reduction in the soil fertility with time, estimated here from the continuous agricultural experiment on virgin land at Sanborn (Missouri, USA), suggests that soil fertility around such a settlement would be depleted within 60-100 years even with a fallow system. This factor can determine the lifetime of a farming village. Such settlements are typical of earliest Trypillia Stage A.
A larger settlement of several hundred people could function in isolation, and with a larger fraction of cereals in the diet, only with technological innovations, for example, the use of manure fertiliser and, most importantly, of the ard for land tilling. The ard radically relieves the extreme time pressure at the time of soil preparation for sowing. There is archaeological evidence for the use of the ard from Trypillia Stage BI. Another constraint on the settlement size arises from the fact that animal husbandry is land-extensive, and the distance to the grazing area increases very rapidly with the settlement size. It appears that very large settlements of a few hundred hectares in area could function only if supported by satellite farming villages. In turn, this implies a division of labour, sufficiently complex social relations, stable exchange channels, etc.: altogether, a proto-urban character of such settlements.
Arable agriculture is more labour-expensive and involves stronger seasonal time stress than animal husbandry. However, variations in the labour return with the fraction of cereals in the diet indicate that a diet dominated by cereals is more flexible, in the sense that labour redistribution between obtaining food from cereals and from domestic animals does not affect the labour return significantly but leads to a seasonal redistribution of the labour costs. This feature can be relevant to the mitigation of the risk of failed crops: when cereals dominate in the diet, applying more effort to the livestock is easy in this respect. Another way to counter the risk is the use of manure fertiliser, as it significantly reduces the yield variability. We quantify this using the Sanborn experimental data.
Yet another strategy for handling agricultural risks is the storage of an annual supply of grain to be used when the harvest is low. Typical labour returns of order 6-8 when using hand tools for the tillage, and of about 10 for ard tillage, imply that keeping such a store is indeed possible. In a family of six with two members fit for hard agricultural labour (so that each of the workers feeds three people), the minimum labour return required for immediate subsistence is three. Any effort beyond this figure can be used to produce a surplus, part of which can be stored as insurance. Even when the insurance grain store has been laid down, there is sufficient reserve in the labour return to produce surplus food that can be exchanged or traded externally. However, the tillage bottleneck prevents a significant grain surplus from being produced unless the ard is used to till the land. Thus, exchange networks, labour division, etc., can indeed be expected to develop starting from the middle CTU stages.
"year": 2015,
"sha1": "7666c5730a9194a989aa539a2db18900d4a1baa9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1505.05121",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a0255a123147be1253c8a7ebf35aa9611729c917",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science",
"History"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Equivalent-Input-Disturbance Based Robust Control Design for Fuzzy Semi-Markovian Jump Systems via the Proportional-Integral Observer Approach
This work focuses on the design of a unified control law that enhances the accuracy of both the disturbance estimation and the stabilization of nonlinear T-S fuzzy semi-Markovian jump systems. In detail, a proportional-integral-observer based equivalent-input-disturbance (PIO-EID) approach is considered to model the system and develop the controller. The PIO approach includes a relaxation variable in the system design along with an additional integral term to improve the flexibility of the design and the endurance of the system. The proposed stability criteria are formulated in the form of matrix inequalities using Lyapunov theory and depend on the sojourn time for robust control design. Final analyses are performed using MATLAB software with simulations to endorse the theoretical findings of this paper.
Introduction
In the real world, nonlinearity poses many challenges for the analysis of the stability and stabilization of control systems. Studies based on the Takagi-Sugeno (T-S) fuzzy model technique have attracted a lot of attention due to their remarkable ability to approximate complex nonlinear systems as the weighted sum of subsystems linearized at operating points using fuzzy membership functions and if-then rules. In recent decades, remarkable research attention has been given to the stability and stabilization of T-S fuzzy systems [1-6]. In [7,8], the authors looked into the H∞ filtering problem for nonlinear switched systems with time-varying delay by incorporating T-S fuzzy model characteristics. Research findings on the design of fault-tolerant controllers for fuzzy systems with actuator faults were presented in [3,9].
Meanwhile, random factors may also have an impact on physical systems in actual use, rapidly altering the structure of the system. Because traditional models are unable to accurately reflect such systems, Markovian jump systems (MJSs) are used to describe them. Composed of a collection of subsystems with a Markov chain that selects values from a finite set, MJSs are seen as an effective tool for modeling systems that are subject to rapid changes. To be specific, the sojourn time (ST) is the interval between two subsequent jumps. In reality, the ST of an MJS is a random variable distributed according to a probability distribution, and the variable ε ι℘ (l) denotes the rate at which the system switches from mode ι to mode ℘. In conventional MJSs, the transition rate (TR) is usually taken to be constant, whereas in semi-Markovian jump systems it depends on the sojourn time. The main contributions of this work are as follows:
• The EID based FSMJSs with a PIO are taken into consideration for the first time.
• A new set of LMI-based conditions is derived using Lyapunov functional approaches to ensure the disturbance rejection and stochastic stability of FSMJSs.
• The associated controller gain and observer gain are realized by solving the LMI conditions. The simulation results provided are able to unequivocally show the advantages and applicability of the derived theoretical conclusions.
The remainder of this article proceeds as follows: Section 2 illustrates the construction of an EID based FSMJS with a PIO. The main result is established through mathematical derivations in Section 3. The efficiency of the suggested control design is demonstrated by simulation results in Section 4. Section 5 draws a brief conclusion regarding our study.
Variable Explanation
ℵ(t), ℵ̂(t): actual and reconstructed state vectors, respectively
u(t): control input
£(t), £̂(t): actual and reconstructed measured outputs, respectively
υ̃e(t): predicted (filtered) disturbance signal
F(s): low-pass filter
uf(t): enhanced control input
Lςι, Nςι: proportional and integral gains, respectively
ℵI(t): integral of the weighted output estimation
Kηι: fuzzy controller gain
S^(−1): differentiator

In Figure 1, the setup of an EID based control system with a PIO is depicted. It includes the plant, a state-feedback controller, a PIO, and an EID estimator. An EID, as defined in [26], is a signal on the control input channel that affects the output in a manner similar to genuine disturbances. To start, we characterize the plant (2) using an EID υe(t) acting on the control input channel, where the equivalent-input-disturbance υe(t) of υd(t) is such that the output produced by the EID υe(t) is equal to the output produced by the real disturbance υd(t). The PIO from [37] is utilized in this work to aid the system design. In its state-space representation (5), the reconstructed states of ℵ(t) and £(t) are indicated by ℵ̂(t) ∈ R^n and £̂(t) ∈ R^r, uf(t) is the enhanced control input, and the proportional and integral gains are Lςι ∈ R^(n×r) and Nςι ∈ R^(r×r), respectively. The vector ℵI(t) represents the integral of the weighted output estimation.
Remark 1.
In this study, it is assumed that the state variables of system (4) are unavailable, while the measured output £(t) is available. In order to estimate the measured output, the system in (4) is replicated with the same behavior to form the estimated output £̂(t) in (5). In particular, while replicating the actual system, a negligible error component ∆£(t) = £(t) − £̂(t) arises. To be specific, when t → ∞, ∆£(t) → 0; thus, £(t) = £̂(t) asymptotically. Hence, when £̂(t) converges to 0, £(t) will also converge to 0.
Further, the error dynamics between the state (2) and the observer (5) are defined in (6); substituting them into (4) yields (7). If we add a control input ∆υ(t) as in (8), then substituting (8) into (7) and letting the estimated EID υ̂e(t) be as in (9) allows us to write the plant in the form (10). It should be noticed that the FSMJSs (4) may contain disturbances, which may lead to poor performance or instability of the system. Moreover, the EID approach does not need prior information about the disturbances, and it can efficiently estimate and reject both matched and unmatched disturbances. In these FSMJSs, the EID technique is employed in the control channel, which yields satisfactory disturbance rejection performance. The optimal EID estimate is expressed in terms of E+ι, the pseudo-inverse of Eι. It is commonly known that external disturbances typically have frequencies in the low-frequency range. Therefore, it makes sense to estimate the disturbance in a particular low-frequency band. The frequency range for the disturbance estimation is selected using a low-pass filter F(s).
Here, ωr is the highest angular frequency required for the EID estimation. It should be noted that the estimated total disturbance may contain some measurement noise. To eliminate this noise from υ̂e(t), the low-pass filter F(s) is integrated together with the state observer. The state-space form of F(s) is then taken as ℵ̇ν(t) = Dν ℵν(t) + Eν υ̂e(t), υ̃e(t) = Jν ℵν(t), where the state of F(s) is represented by ℵν(t), the predicted (filtered) disturbance signal is υ̃e(t), and Dν, Eν, and Jν are constant parameters.
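As an illustration (not the paper's own realization, whose matrices Dν, Eν, Jν are not reproduced here), a first-order low-pass filter F(s) = ωr/(s + ωr) has the state-space parameters Dν = −ωr, Eν = ωr, Jν = 1. The Python sketch below applies a forward-Euler discretization of this filter to a noisy disturbance estimate; the cut-off frequency, step size, and test signal are all assumptions for the demonstration:

```python
import numpy as np

wr = 100.0   # assumed cut-off angular frequency omega_r (rad/s)
dt = 1e-4    # integration step (s)
t = np.arange(0.0, 1.0, dt)

true_dist = np.sin(2 * np.pi * t)                  # slow "real" disturbance
d_hat = true_dist + 0.3 * np.random.randn(t.size)  # noisy EID estimate

x = 0.0                         # filter state (scalar in this sketch)
d_tilde = np.empty_like(d_hat)
for k, u in enumerate(d_hat):
    x += dt * (-wr * x + wr * u)  # x_dot = D_nu x + E_nu u, D_nu = -wr, E_nu = wr
    d_tilde[k] = x                # y = J_nu x with J_nu = 1

print("rms error before filtering:", np.std(d_hat - true_dist))
print("rms error after filtering: ", np.std(d_tilde - true_dist))
```

The filter strips most of the high-frequency measurement noise at the cost of a small lag on the slow disturbance component, which is the trade-off the choice of ωr controls.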
Remark 2.
It should be noted that the abovementioned conditions of the EID are used to construct an EID estimator, which estimates and rejects unknown external disturbances. In particular, the estimator captures the discrepancy between the fuzzy state-observer error dynamics and the nominal system parameters. This discrepancy is also treated as a disturbance, since it is caused by modeling errors and unknown inputs. Further, it is added to the estimated disturbance and filtered by a first-order low-pass filter. Finally, this disturbance is completely suppressed by the EID estimator. Therefore, the construction of this assumption is reasonable.
Remark 3.
It should be noticed that the PIO has an additional integral term, ℵI(t), which raises the order of the state observer and enables quick and precise estimation of the system state. Additionally, the system flexibility is improved by the additional gain matrix Nςι introduced in the PIO. The PIO combined with the EID offers stable and accurate disturbance rejection performance as well as a trustworthy disturbance estimate.
Next, we consider the stabilization problem of the FSMJSs (2). In order to stabilize system (2), a fuzzy state feedback controller uf(t) for the η-th rule is designed as follows.
Control rule η: uf(t) = Kηι ℵ̂(t), where Kηι ∈ R^(m×n) indicates the fuzzy controller gain. Eventually, the filtered estimated disturbance υ̃e(t) together with the state feedback control law uf(t) yields the new controller u(t) = uf(t) − υ̃e(t). Furthermore, the internal stability of the system does not depend on exogenous signals; hence, the exogenous signals are assumed to be zero.
Remark 4.
The primary objective of this work is to stabilize the system (1); therefore, a state feedback control u_f(t) is introduced in (14), which is a function of the estimated observer state ℵ̂(t). The developed control law u(t) in (15) is a combination of u_f(t) and the EID estimate υ̃_e(t), and it forces ℵ̂(t) to zero to stabilize the system. Therefore, the convergence of ℵ to zero implies the convergence of the measured output £ to zero from (1), and the developed controller drives the estimate ℵ̂ to zero.
Moreover, the closed-loop dynamics are obtained by incorporating (15) into (13). Furthermore, we define the error between the state (2) and the observer (5) as ∆ℵ(t) = ℵ(t) − ℵ̂(t); the corresponding error system can then be written accordingly. For the sake of simplicity, we assume that the closed-loop system must satisfy the stability requirement under υ_d(t) = 0.
The augmented closed-loop system resulting from the aforementioned equations, with the stacked vector ϕ(t) composed of ℵ̂(t) and ∆ℵ(t), can be defined as in (19), together with the output of the augmented closed-loop system (19).
Remark 5.
It is noted that the augmented state ϕ in (19) combines the vectors ℵ̂(t) and ∆ℵ(t) but does not explicitly contain the term ℵ(t). In addition, from (14) and (15), ϕ converges to zero. It follows that the terms ℵ̂(t) and ∆ℵ(t) of the error dynamics in (6) converge to zero, achieving the convergence of ℵ(t) to zero.
Main Results
The issue of stabilization and disturbance rejection for FSMJSs based on EID control via the PIO method is covered in this section. By constructing a proper Lyapunov-Krasovskii functional (LKF), a new set of sufficient conditions based on LMIs can be created that ensure the closed-loop system (19) is stochastically stable. Theorem 1. For fixed positive scalars ϑ_1, ϑ_2, and ϑ_3, the closed-loop system (19) is stochastically stable if there exist symmetric positive definite matrices P_ι, Q_ι, R_ι, and S such that the inequality (21) is satisfied, where the elements of Π are defined blockwise, with the (1,1) block of the form Φ^{ςη}_{ι,1,1} = Sym{·} and the remaining blocks following from the proof. Proof. Obtaining the stability criterion for the augmented system (19) suffices to verify the stochastic stability of the considered FSMJSs (1). For this purpose, an LKF is constructed for the augmented system (19) with the positive definite matrices {P_ι, Q_ι, R_ι, S}. Applying the infinitesimal operator L{·} along the solutions of the augmented system (19), taking the mathematical expectation, and combining (24)-(27), we obtain the LMI (21). It is then straightforward to conclude that E{LV(ϕ(t), t, ι)} < 0 if the condition in (21) holds. Hence, the closed-loop system (19) is stochastically stable according to Lyapunov stability theory and the above assessments.
Theorem 2.
For fixed positive scalars κ_1, ϑ_1, ϑ_2, and ϑ_3, the closed-loop system (19) is stochastically stable if there exist symmetric positive definite matrices X_ι, Y_ι, Z_ι, and M, and matrices V_ηι, U_ςι, and H_ςι of appropriate dimensions, such that the inequalities (28) and (29) are satisfied, where the elements of Π̄ follow from the congruence transformation in the proof. Moreover, if the derived inequality (28) is feasible, then the stabilizing controller gain is given by K_ηι = V_ηι X_ι^{-1}, and the observer gains are given by L_ςι = U_ςι Ȳ_ι^{-1} and N_ςι = H_ςι Ȳ_ι^{-1}.
Proof. Inequality (21) does not take a linear form, so it must be transformed into an LMI-based constraint. To that end, define the scalars ν_1, ν_2, ν_3; pre- and post-multiplying (21) by diag{ν_1 X_ι, ν_2 Y_ι, ν_3 Z_ι, M, I, I, I} and setting V_ηι = K_ηι X_ι, U_ςι = L_ςι Ȳ_ι, and H_ςι = N_ςι Ȳ_ι, we can readily retrieve (28). Since the equality constraint C_ςι Y_ι = Ȳ_ι C_ςι is not linear, it is difficult to handle directly with the MATLAB LMI toolbox. To circumvent this, an optimization strategy is adopted for the assumption C_ςι Y_ι = Ȳ_ι C_ςι: it can be relaxed equivalently by a sufficiently small given positive scalar, and by using the Schur complement the resulting inequality can be turned into (29). The closed-loop system (19) is stochastically stable if the relations (28) and (29) are met; hence the prerequisites for stochastic stability claimed in Theorem 2 are proved. Since the TR matrix ε_ι℘(l) is time varying, it might be necessary to test an infinite number of LMIs to fulfill criterion (28) in Theorem 2, which would be computationally intractable. The next step is to derive numerically tractable, finite LMIs from the inequalities in (28). This problem is resolved by the subsequent theorem.
Theorem 3. Under the conditions of Theorem 2, the closed-loop system (19) with the time-varying TR matrix ε_ι℘(l) bounded as ε̲_ι℘ ≤ ε_ι℘(l) ≤ ε̄_ι℘ is robustly stochastically stable if the finite set of inequalities in (30), obtained by evaluating (28) at these bounds, is satisfied.
Proof. Theorem 2 makes it clear that if condition (28) holds, the closed-loop system (19) accompanied by the time-varying TR matrix ε_ι℘(l) is robustly stochastically stable. The time-varying term ε_ι℘(l) is restricted to a lower bound ε̲_ι℘ and an upper bound ε̄_ι℘ in order to ensure numerical tractability, which is a well-established approach.
As can be seen from (31), ε_ι℘(l) takes any value in [ε̲_ι℘, ε̄_ι℘]. In parallel, for a specific l, ε_ι℘(l) can be written as the convex combination ε_ι℘(l) = β ε̲_ι℘ + (1 − β) ε̄_ι℘ with 0 ≤ β ≤ 1, as in (32). Because ε_ι℘(l) in (32) depends linearly on β, (28) only has to be satisfied for β = 0 and β = 1; that is, (28) holds if the inequalities in (30) hold. The proof is completed by splitting the sojourn time s into w pieces.
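Vertex conditions such as (30) are checked numerically; the paper uses the MATLAB LMI toolbox, but purely as an illustration of this kind of feasibility test, the sketch below poses a toy problem of the same flavor in Python with cvxpy: a common Lyapunov inequality AᵀP + PA ≺ 0 verified at the two vertices of an uncertain parameter. The 2 × 2 system matrices are hypothetical stand-ins; the actual conditions (28)-(30) involve the full closed-loop blocks.

```python
import numpy as np
import cvxpy as cp

# Toy vertex LMI test: find P > 0 with A^T P + P A < 0 at both vertices of
# A(beta) = beta*A_lo + (1 - beta)*A_hi (hypothetical stable matrices).
A_lo = np.array([[0.0, 1.0], [-2.0, -1.2]])
A_hi = np.array([[0.0, 1.0], [-2.5, -0.8]])

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2)]
for A in (A_lo, A_hi):  # linearity in beta => checking the two vertices suffices
    constraints.append(A.T @ P + P @ A << -eps * np.eye(2))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("feasibility status:", prob.status)
print("common Lyapunov matrix P =\n", P.value)
```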
Numerical Simulation
To demonstrate the theoretical viability of our strategy, we consider in this section a system with two operating modes and two fuzzy rules. The parameters are chosen as follows, and the model is identical to system (19): ϑ_1 = 1, ϑ_2 = 0.00001, and ϑ_3 = 1. A disturbance υ_d(t) = e^{−0.0001t} sin(2t) is injected into the system, and r is selected as π. The parameters of the low-pass filter F(s) are chosen as D_ν = −101, E_ν = 100, and J_ν = 1, which satisfies (12). By utilizing the MATLAB LMI toolbox, a set of feasible solutions for Theorem 2 is found via the LMIs (28) and (29), and the associated admissible controller and observer gain matrices (K_ηι, L_ςι, N_ςι) are obtained. As shown in Figure 2a, the actual disturbance is closely tracked by its estimate, and the corresponding estimation error is displayed in Figure 2b. Figure 3a,b shows, respectively, the system's true state trajectory and the associated error trajectory. Figure 4 shows the open-loop state trajectories. The state response curves of the system along with its observer are presented in Figure 5a,b. Figure 6a displays the control responses, while Figure 6b displays the control responses in the absence of the disturbance estimate υ̃_e(t). The associated measured and observed output responses are displayed in Figure 7. The membership functions are displayed in Figure 8, and the trajectory of the jumping mode is shown in Figure 9. Additionally, Figure 10 shows the output response curves in the absence of control.
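For intuition about the jumping-mode trajectory of Figure 9, the following sketch (not from the paper) samples a two-mode semi-Markov switching signal; unlike a Markov chain, a semi-Markov chain permits non-exponential sojourn times, and the Weibull sojourn-time parameters below are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Weibull sojourn-time parameters (shape, scale) for each mode.
SOJOURN = {1: (2.0, 0.8), 2: (1.5, 1.2)}

def sample_mode_trajectory(t_end: float = 10.0):
    """Return the switching instants and the mode active from each instant on."""
    t, mode = 0.0, 1
    times, modes = [0.0], [mode]
    while t < t_end:
        shape, scale = SOJOURN[mode]
        t += scale * rng.weibull(shape)  # random sojourn time in the current mode
        mode = 2 if mode == 1 else 1     # two modes: alternate deterministically
        times.append(t)
        modes.append(mode)
    return times, modes

for t, m in zip(*sample_mode_trajectory()):
    print(f"t = {t:5.2f} s -> mode {m}")
```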
Conclusions
This work has examined FSMJSs in terms of stability and disturbance rejection, utilizing the EID technique. A PIO was utilized in conjunction with an EID estimator to improve the accuracy of the state estimation. LMIs were formulated to establish the stability condition and to design the gains of the feedback controller and the PIO. The simulation results showed that the proposed PIO-based EID technique achieved appropriate stability in addition to enhanced disturbance estimation and rejection capabilities. An intriguing subject of our future study will be how to incorporate the PIO to further extend the suggested method to event-triggered fuzzy sliding-mode control of networked control systems with semi-Markovian switchings using EID.
Data Availability Statement: Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.
Conflicts of Interest:
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Abbreviations
The following abbreviations are used in this manuscript: FSMJSs Fuzzy semi-Markovian jump systems; EID Equivalent-input disturbance; PIO Proportional-integral observer; ST Sojourn time; TR Transition rate | 2023-06-03T15:11:44.916Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "985de0a147e9e8063fa4f46268fe2a95af6d10f4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/math11112543",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "59afe6f6151e7b84f7c81d8252cf31c246f38a14",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": []
} |
238852474 | pes2o/s2orc | v3-fos-license | Review of Recent Progress in Robotic Knee Prosthesis Related Techniques: Structure, Actuation and Control
As the essential technology of human-robot interactive wearable devices, the robotic knee prosthesis can provide above-knee amputees with functional knee compensation to realize their physical and psychological social reintegration. With the development of mechanical and mechatronic science and technology, the fully active knee prosthesis, which can provide subjects with actuating torques, has demonstrated better wearing performance in slope walking and stair ascent when compared with the passive and semi-active ones. Additionally, with intelligent human-robot control strategies and algorithms, the wearing effect of the knee prosthesis has been greatly enhanced in terms of stance stability and swing mobility. Therefore, to help readers obtain an overview of recent progress in the robotic knee prosthesis, this paper systematically categorized knee prostheses according to their integrated functions and introduced related research of the past ten years (2010-2020) regarding (1) mechanical design, including uniaxial, four-bar, and multi-bar knee structures, (2) actuating technology, including rigid and elastic actuation, and (3) control methods, including mode identification, motion prediction, and automatic control. Quantitative and qualitative analyses and comparisons of robotic knee prosthesis-related techniques are conducted. The development trends are concluded as follows: (1) bionic and lightweight structures with better mechanical performance, (2) bionic elastic actuation with an energy-saving effect, (3) artificial intelligence-based bionic prosthetic control. Besides, the challenges and innovative insights of customized lightweight bionic knee joint structures, highly efficient compact bionic actuation, and personalized daily multi-mode gait adaptation are also discussed in depth to facilitate the future development of the robotic knee prosthesis.
Introduction
With the rapid development of human society, traffic accidents, industrial injuries, human-made disasters, war, and diseases have led to an increasing number of patients with limb function losses. These patients are unable to manage daily life due to physical disability, and the accompanying psychological burdens cause further harm to themselves and their families. According to the World Disability Report of the World Health Organization (WHO), approximately 15% of people worldwide currently have some form of disability, 2% to 4% of whom face severe functional disorders.
For patients with lower limb disabilities, the primary goal of self-care rehabilitation is to restore their limb functions, where the robotic knee prosthesis acts as the primary functional component. The robotic knee prosthesis enables them to walk by controlling the shank posture, providing the necessary knee joint torque and swing damping, and stabilizing the lower limb, thus realizing both the physical and psychological social return of these patients. Fig. 1 shows some typical commercial knee prostheses that have been widely applied for above-knee amputees. From the most basic to the most advanced requirements, the robotic knee prosthesis should provide the following functions: (1) Support function. The support function is the most basic function of the robotic knee prosthesis, which provides a specific damping or self-locking behavior when the patient stands, thereby supporting the patient to walk stably. Realizing this function frees the patient from being unable to walk at all.
(2) Swing function. The swing function can provide the patients with the swing actuation or damping, thereby enabling the partial or complete swing motion of the prosthetic shank. Realizing this function can improve/normalize the walking gait of the disabled lower limb.
(3) Active actuation. In addition to necessary support and swing functions, the robotic knee prosthesis should also provide actuating torques for specific lower limb movements such as level-ground walking, stair and ramp ascent/descent, squatting, running, etc. Because the dynamic characteristics and motion laws of the robotic knee prosthesis vary in different locomotion modes, the implementations of active actuation for different locomotion modes of the robotic knee prosthesis require special investigations and designs. Realizing as many active locomotion modes as possible will significantly increase the rehabilitation effect of the robotic knee prosthesis.
(4) Automatic control. An automatic robotic knee prosthesis is capable of perceiving the intention of the patient and moving according to expected/desired rules. The more intelligent the automatic control is, the better the rehabilitation effect the patient will obtain.
According to actuation and automation characteristics, the robotic knee prostheses can be classified into the passive (automatic) robotic knee prosthesis, the semi-active (automatic) robotic knee prosthesis, and the active (automatic) robotic knee prosthesis, as shown in Fig. 2. The passive robotic knee prosthesis is the most widely applied robotic knee prosthesis due to its low price, and it can only provide the patients with function (1). Unlike the purely passive robotic knee prosthesis, the semi-active prosthetic knee is a higher-end option on the market, which can provide the patients with functions (1) and (2). Compared with the passive and the semi-active robotic knee prostheses, the active robotic knee prosthesis can realize all of the functions (1), (2) and (3), and is currently the most advanced robotic knee prosthesis for above-knee amputees. If these prosthetic knees are designed with automatic functions, then function (4) described above can also be provided. The mutual relationships between these robotic knee prostheses are listed in Fig. 2.
To realize the functional compensation of the human knee joint, the following design criteria should be met: (1) Structure. The robotic knee prosthesis must first meet the size and functional requirements of its structure. An excellent prosthetic knee structure can effectively improve its mechanical properties and bionic performance, thus laying the foundation for the movement and gait diversity of the robotic knee prostheses.
(2) Actuation. The robotic knee prosthesis needs active actuation/damping to normalize the gait of the affected side. Selecting a suitable actuating technique can effectively improve the walking gait of the robotic knee prosthesis and is beneficial to enhancing the actuating characteristics of the robotic knee prosthesis.
(3) Control. The control technology of the robotic knee prosthesis directly determines its adaptability and practicability. Improving the automatic control level of the robotic knee prosthesis helps to achieve a more bionic and advanced performance of the robotic knee prosthesis. Therefore, to help relevant readers better understand the current research status and progress of the robotic knee prosthesis, this review compared and summarized related contributions in terms of structural design, actuating technology, and control methods in the past ten years (2010−2020). The literature survey and acquisition method are as follows: the authors searched Google Scholar for "robotic knee prosthesis", "prosthetic knee joint", "bionic prosthetic knee joint", "prosthetic exoskeleton robot", and "prosthetic knee exoskeleton", and obtained 16700, 17700, 5170, 14400, 8550 results. Then, the authors selected 458 articles of general significance through the following process: (a) The results are ranked by relevance using the algorithm provided by Google; (b) divide the past ten years into ten periods and select the top 1% articles of each keyword in each period; (c) ignore those articles that do not include "rehabilitation equipment" or belong to the category of biological research. After checking abstracts of the selected articles, 66 highly relevant documents were selected for in-depth investigation. The main contributions of this paper are as follows: (1) Systematic classifications of robotic knee prosthesis functions are presented, and related research in terms of structures, actuation, and control in the past ten years are introduced.
(2) Quantitative and qualitative analysis and comparison of robotic knee prosthesis-related techniques are conducted, and their challenges and innovative insights are discussed in detail.
The rest of this review is organized as follows: In section 2, the robotic knee prostheses of different structures are studied and compared; in section 3, different actuating strategies of the robotic knee prosthesis are compared and summarized; in section 4, available control schemes of the robotic knee prosthesis are introduced and summarized; section 5 discussed the research status, and drew some technical issues and development trends of the robotic knee prosthesis; in section 6, the review is technically concluded.
Structural design of the robotic knee prosthesis
Proper structural design of the robotic knee prosthesis can provide the patients with better bionic performance, dynamic characteristics, and motion stability, and can facilitate the prosthetic actuation. At present, the study of the prosthetic structure mainly includes two types, namely the uniaxial one and the multi-axial one.
Uniaxial robotic knee prosthesis
The uniaxial prosthetic knee uses a single hinge to achieve knee motion. Because the working principle of the uniaxial robotic knee prosthesis is simple, its structural research mainly focuses on the realization of special knee functions, such as stance self-locking, redundant actuation, and stair ascent/descent, etc.
Andrysek et al. proposed a uniaxial robotic knee prosthesis (Fig. 3a) with a self-locking function in the stance phase [1]. The pylon can be pushed by the foot support force to rotate the knee lock clockwise around the control axis in the stance phase, thereby locking the thigh component and the robotic knee prosthesis to form a stance self-lock. In the swing phase, the lower limb movement drives the knee lock to rotate counterclockwise around the control axis, thereby disengaging the prosthetic knee from the self-locking state while a friction washer provides the required damping. The robotic knee prosthesis achieves stance self-locking and swing damping control through a purely mechanical structure, offering high walking reliability at the experimental stage. Arelekatti et al. also designed a self-locking/clutching single-axis knee joint [2,3] (Fig. 3b) using a latch mechanism in the stance phase, and Ramakrishnan et al. proposed a stance self-locking/clutching method based on the gear-rack mechanism [4].
Moreover, Liu et al. theoretically proposed a redundantly actuated uniaxial robotic knee prosthesis by designing a locking mechanism with a torsion spring [5]. The robotic knee prosthesis can lock the torsion spring in the stance phase via a cam mechanism to provide redundant control torque. In the swing phase, the cam mechanism leaves the locked state and releases the torsion spring, thereby disengaging the robotic knee prosthesis from the redundant actuation state. In addition, Inoue et al. (Fig. 3c) proposed a uniaxial robotic knee prosthesis for stair ascent [6], which realizes the stance position limit in stair ascent via a crank slider mechanism, a limiter, and a spring, and can assist knee joint extension when load is exerted on the sole.
To improve the overall performance of the uniaxial robotic knee prosthesis, Lenzi et al. implemented a lightweight robotic knee prosthesis with a new hybrid actuating system that can realize passive and active modes of operation [7]. The hybrid knee joint uses a spring-damper system in combination with an electric motor and transmission system to provide stair mobility; it weighs only 1.7 kg (including batteries) and can provide up to 125 Nm of repetitive torque. Similarly, Lovasz et al. designed a hydraulically driven uniaxial robotic knee prosthesis [8] based on a geared inverted crank slider mechanism. A hydraulic cylinder hinged on its shank linkage acts as the slider to drive the geared linkage mechanism, thereby providing actuating torques for the robotic knee prosthesis. Dabiri et al. designed a uniaxial robotic knee prosthesis [9] that can mimic human muscle characteristics; it simulates the movement characteristics of the knee joint via two pneumatic muscles, thereby providing a better walking gait for above-knee amputees. In addition, Hoover et al. theoretically proposed a motor-screw-driven uniaxial robotic knee prosthesis [10,11], in which the motor is hinged to the shank link to actuate the knee joint, achieving 150 W continuous output power and 500 W peak output power.
Fig. 3 Schematic diagrams of uniaxial and multi-axial robotic knee prostheses. (a) Uniaxial knee prosthesis with friction cone for stance self-locking; (b) uniaxial knee prosthesis with latch mechanism for stance self-locking; (c) uniaxial knee prosthesis with limiter and spring for stair ascent; (d) four-bar knee prosthesis with a variable linkage for gait adjustment; (e) nylon anti-parallel four-bar knee prosthesis with adjustable knee elasticity; (f) four-bar knee prosthesis with magnetorheological fluid damper for swing damping.
Multi-axial robotic knee prosthesis
The multi-axial robotic knee prosthesis refers to one with a plurality of hinges or a non-fixed shaft, which can be realized by the multi-bar mechanism or the gear mechanism. Compared to uniaxial robotic knee prostheses, the multi-axial robotic knee prostheses have advantages in bionic performance, prosthetic stability, and net energy expenditure.
Four-bar mechanism
Due to its simple structure and fine bionic and mechanical properties, the four-bar mechanism has been widely used in the design and application of robotic knee prostheses since the 1980s. In recent years, the research on the four-bar mechanism robotic knee prostheses has been mainly focused on the direction of particular locomotion function, integrated actuation, and elastic deformation.
Demsar et al. developed a commercial four-bar robotic knee prosthesis for alpine skiing [12] . It can maintain the stance stability of the amputee while the center of gravity of the patient is maintained in front of the body in a semi-squat state so that the amputee can perform certain sports such as skiing and ice skating. Different from this, Inoue et al. designed a passive four-bar prosthetic knee prototype [13] that can be used in stair ascent. When the prosthetic leg is in contact with the stair ground in a curved state, the hinge of the lower leg linkage is moved into the slot of the side linkage, thereby completing the self-locking of the knee joint under bending in stair ascent. To improve the gait performance of the four-bar knee joint, Awad et al. proposed an experimental prototype of a variable-length four-bar robotic knee prosthesis [14] (Fig. 3d). One of the four-bar linkages is replaced by a motor-screw system. By actuating the screw by the motor, the size of the four-bar mechanism can be adjusted to enable the robotic knee prosthesis to achieve the desired gait. Similarly, Etoundi et al. proposed an experimental model of a deformable four-bar robotic knee prosthesis [15] that can mimic the structure of the human knee joint (Fig. 3e). The robotic knee prosthesis incorporates the shape of the tibia and femur bones and uses an anti-parallel four-bar mechanism made of a stretchable nylon cord to realize the knee function. The force stability of this robotic knee prosthesis is improved, and its flexibility can be adjusted according to the needs of the amputee. In addition, Lu et al. theoretically designed a robotic knee prosthesis based on a flexible four-bar mechanism [16] . The shank linkage is made of a flexible material that can change its shape during prosthetic movement, thus providing stable damping and shock absorption in the stance phase.
To improve the walking performance of the four-bar knee joint, Bulea et al. (Fig. 3f) proposed a four-bar mechanism robotic knee prosthesis controlled by a linear magnetorheological fluid damper [17]. The two ends of the linear magnetorheological fluid damper are installed on the thigh and the shank linkages of the four-bar mechanism, which can theoretically provide the patient with a maximum swing damping of 64.5 Nm. Li et al. designed a four-bar mechanism robotic knee prosthesis integrated with a meniscus [18], so that the robotic knee prosthesis can bear a larger load while exercising and has a shock-absorbing buffer function. Besides, Fu and Zhang et al. also designed four-bar robotic knee prostheses based on parallel springs and dampers [19,20] to stably adjust the swing speed of the prosthetic knee in the swing phase and to absorb vibration and maintain stability in the stance phase. To improve the cost-effectiveness of robotic knee prostheses, Arelekatti et al. designed a low-cost passive four-bar prosthetic knee for the daily use of amputees in developing countries [21]. The mechanism is implemented based on two functional modules: an automatic early-stance lock for stability, and a differential friction damping system for late-stance and swing control. The results of field tests showed that the early-stance locking performance is satisfactory, and the prototype achieves a stable stance-to-swing transition by promptly initiating late-stance flexion.
Multi-axial mechanism
In addition to the four-bar mechanism, other multi-bar mechanisms and non-circular gear mechanisms have gradually been applied to the robotic knee prosthesis. These mechanisms contain more design parameters than the four-bar mechanism and can provide better bionic knee performance after proper mechanism design. For example, Sun et al. designed a geared five-bar mechanism robotic knee prosthesis, which has superior bionic performance compared with the four-bar one [22]. Its Instantaneous Center of Rotation (ICR) can be fine-tuned by adjusting the gear ratio of the prosthetic mechanism, which improves the customizability of the robotic knee prosthesis. Dalessio et al. theoretically designed a robotic knee prosthesis based on non-circular gears [23]. It uses axial plane optimization in the Revolute-Revolute-Spherical-Spherical (RRSS) space to obtain the specific parameters of the non-circular gears, which can closely reproduce the rotation characteristics of the human knee joint. In addition, Wu et al. proposed a robotic knee prosthesis based on an incomplete gear-linkage structure [24]. The robotic knee prosthesis is formed by two rod-connected incomplete gears fixed to the thigh and the shank linkages, which can provide an appropriate ankle trajectory and is as easy to control as a uniaxial knee joint.
Design methodology
Wu et al. proposed an optimization method for the four-bar robotic knee prosthesis [25], which uses the ICR of the human knee joint as the optimization target for the four-bar mechanism parameters, providing patients with better bionic performance than conventional designs. Zhang et al. proposed a four-bar robotic knee prosthesis optimization method based on the ideal human ankle trajectory [26]; the optimized four-bar robotic knee prosthesis can provide a better ankle trajectory to the patient. Furthermore, Ghaemi et al. proposed a force analysis and optimization method for flexible multi-bar prosthetic knee joints [27]. By replacing a rigid hinge in a conventional multi-bar mechanism (such as the four-bar or six-bar mechanism) with a flexible hinge, an optimization objective for the flexible joint can be established, thereby reducing the knee joint control torque. In addition, Pfeifer et al. proposed a four-bar mechanism optimization model based on the motor-screw system [28,29], in which the installation position of the motor-screw is optimized to provide the desired actuating torques for the robotic knee prosthesis.
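To make the ICR-based design target concrete, the following sketch (not taken from any of the cited papers) locates the coupler ICR of a crossed four-bar knee by intersecting its two link lines, which follows from Kennedy's theorem; the pivot coordinates are hypothetical. An optimizer in the spirit of Wu et al. would adjust such pivots so that this computed ICR tracks the anatomical ICR path over the flexion range.

```python
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4 in 2D, via homogeneous coordinates."""
    l1 = np.cross([*p1, 1.0], [*p2, 1.0])  # line through p1 and p2
    l2 = np.cross([*p3, 1.0], [*p4, 1.0])  # line through p3 and p4
    x = np.cross(l1, l2)                   # their intersection point
    return x[:2] / x[2]

def four_bar_icr(thigh_a, thigh_b, shank_a, shank_b):
    """ICR of the shank (coupler) w.r.t. the thigh: the two link lines meet there."""
    return line_intersection(thigh_a, shank_a, thigh_b, shank_b)

# Hypothetical crossed four-bar geometry (mm), in a frame fixed to the thigh.
icr = four_bar_icr(thigh_a=np.array([0.0, 0.0]),
                   thigh_b=np.array([35.0, 5.0]),
                   shank_a=np.array([35.0, -45.0]),
                   shank_b=np.array([5.0, -40.0]))
print("ICR in the thigh frame (mm):", icr)
```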
Summary of the structural design of the robotic knee prosthesis
The qualitative summary of this section is listed in Table 1. To facilitate readers to obtain a clear and in-depth overview of the introduced prosthetic knee structures, the authors also made a quantitative evaluation of the function and performance differences between the uniaxial, the four-bar, and the multi-axial prosthetic knee structures, namely, functionality (refers to the prosthetic function that can be achieved as a substitute for the knee joint), performance (refers to the motion effect that can be achieved as a knee joint), efficiency (refers to the energy efficiency or consumption of its actuation or movement), and mobility (refers to the motion performance limited by its kinematics and dynamics performance, which is also listed in the last column of Table 1). Each aspect of the introduced prosthetic knee structure is scored from low to high via the five-pointed stars.
Actuating techniques of the robotic knee prosthesis
The actuating technique directly affects the mobility functions of the robotic knee prosthesis, and it can be divided into the following three categories according to its characteristics: (1) Non-actuated/passive: This category of robotic knee prostheses is driven by the above-knee amputee via the residual thigh, and can only provide auxiliary functions such as stance self-locking and free/damped swinging in the swing phase. Advanced functions such as shock absorption, stance flexion, and stair ascent can be specially designed by modifying the prosthetic mechanism with limiters or elastic components. When an amputee wears a robotic knee prosthesis of this category, the rehabilitation effect is limited by the amputee's own prosthetic control ability. In addition, since the human knee joint continuously consumes energy when walking, the non-actuated/passive robotic knee prosthesis cannot achieve the desired walking gait.
(2) Rigid actuation: This category of robotic knee prostheses is actuated directly by actuating components such as motors, pneumatic/hydraulic cylinders, and magnetorheological fluid actuators, which can provide the required actuating torque with easy control for amputees under various walking conditions. However, because the motion characteristics of the robotic knee prosthesis are highly nonlinear and unpredictable, rigid actuation may suffer from defects such as slow response, poor actuating linearity, and low driving precision, which might result in unexpected and unsatisfactory robotic actuation. In addition, rigid actuation may not cope with sudden changes in knee joint movement, such as impacts or emergency stops, reducing its walking practicality in daily life.
(3) Elastic actuation: the robotic knee prosthesis is actuated by combining an actuating component with an elastic element. Elastic actuation can realize energy absorption and reuse and offers high control precision, enabling the robotic knee prosthesis to adapt to various motion situations. The core idea behind elastic actuation is to use appropriately designed elastic elements to assist the robotic actuation, thus reducing the required power while improving the actuating accuracy and anti-impact performance. Therefore, elastic actuation has become a hotspot research direction for the robotic knee prosthesis.
Rigid actuation
In recent years, the research of rigid actuation has been mainly focused on the direction of high-performance magnetorheological fluid actuators and special motors to achieve miniaturization, weight reduction, and energy saving.
Guo et al. designed a multi-functional rotary actuator [30] that combines a motor and a magnetorheological fluid damper (Fig. 4a). The rotary actuator is divided into two parts: a motor and a magnetorheological fluid clutch/brake. The motor part is composed of a coil fixed to the housing and a permanent magnet fixed to the rotor formed by the magnetorheological fluid clutch/brake part. When the rotor is energized, the clutching state between the rotor and the output shaft can be manipulated. On the robotic knee prosthesis, the rotary actuator can provide active torque in the swing phase through its motor part, as well as stance self-locking and damping functions in the stance phase using the magnetorheological fluid damper. In the meantime, Guðmundsson optimized a magnetorheological fluid damper with high working output torque and low closed output damping [31]. The authors used a bi-objective optimization method to optimize the specific configuration of the magnetorheological fluid damper and selected appropriate magnetorheological fluid parameters from 22 kinds of magnetorheological fluid materials and their mixtures, resulting in better damping performance than standard commercial magnetorheological fluid dampers. What is more, Solomon et al. designed a magnetorheological damping valve that can control the swing damping of the robotic knee prosthesis, where the damping valve is limited to a required cylindrical volume defined by its radius and height [32]. The results of Response Surface Method (RSM) mapping with Finite Element Analysis (FEA) show that the weight of the damper is reduced by 71% compared with existing magnetorheological dampers while a nearly normal swing phase trajectory can still be guaranteed.
For the motor-based actuators, Furuya et al. designed a high-thrust spiral motor [33] for joint actuation, the stator and rotor of which have a spiral shape (Fig. 4b). When the stator is energized, the rotor is driven to rotate and advances axially along the spiral gap of the stator, thereby generating a theoretical thrust of 101 N. Additionally, Bogert et al. theoretically proposed a rotary hydraulic actuator that utilizes high/low-pressure accumulators for energy storage and release [34,35]. The high/low-pressure accumulator control valves are opened and closed in a particular sequence to provide active power and damping control for the prosthetic knee. The authors also theoretically designed a prosthetic knee actuator that can realize electrical energy storage [36] via a supercapacitor: the mechanical energy released by the robotic knee prosthesis can be gathered by the supercapacitor and reused for prosthetic actuation when needed.
Elastic actuation
Due to its high control/actuation accuracy and energy absorption/reuse features, the elastic actuation has become a research hotspot of the prosthetic actuation. At present, the elastic actuation mainly develops toward the bionic series/parallel elastic actuation, series-parallel/parallel-series hybrid elastic actuation, and the variable stiffness actuation.
Martinez-Villalpando et al. designed an antagonistic active robotic knee prosthesis (Fig. 4c) based on the motor-screw series elastic actuator [37]. Two sets of series elastic actuators are utilized to provide muscle-like agonist-antagonist actuation, thereby achieving a better lower limb gait than conventional actuation. The authors also designed a clutchable series elastic actuated robotic knee prosthesis, which can effectively enhance the energy utilization rate [38,39]. In addition, Hoover et al. theoretically proposed a series elastic actuator based on the variable impedance agonist-antagonist muscle model [40], which can reproduce the actuation and damping characteristics of human muscles to realize knee self-locking and stability maintenance. Moreover, Grimmer and Seyfarth theoretically proposed a series elastic actuated uniaxial knee that can mimic the muscle function of the human lower extremities [41], thereby reducing the peak power and energy consumption of the motor.
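The defining relation of a series elastic actuator — output torque proportional to the deflection of the series spring — lends itself to a simple torque loop. The following minimal sketch is illustrative only (spring stiffness, gear ratio, gains, and sample time are hypothetical, not taken from the cited designs): the controller infers torque from spring deflection and issues a motor velocity command to track a desired knee torque.

```python
# Minimal series-elastic-actuator (SEA) torque loop; all numbers are illustrative.
K_SPRING = 300.0    # N*m/rad, series spring stiffness (hypothetical)
GEAR = 100.0        # motor-to-joint transmission ratio (hypothetical)
KP, KI = 8.0, 40.0  # PI gains on the torque error (hypothetical)
DT = 0.001          # control period, s

def sea_torque(theta_motor: float, theta_joint: float) -> float:
    """Output torque = spring stiffness x spring deflection."""
    return K_SPRING * (theta_motor / GEAR - theta_joint)

class TorqueController:
    """PI controller that turns a torque error into a motor velocity command."""
    def __init__(self) -> None:
        self.integral = 0.0

    def step(self, tau_des: float, theta_motor: float, theta_joint: float) -> float:
        err = tau_des - sea_torque(theta_motor, theta_joint)
        self.integral += err * DT
        return KP * err + KI * self.integral  # motor velocity command, rad/s

ctrl = TorqueController()
print(ctrl.step(tau_des=20.0, theta_motor=0.0, theta_joint=0.0))
```

Because the spring deflection is the measured quantity, force sensing comes for free, and impacts are absorbed by the spring rather than the gearbox — the two properties that motivate SEA designs for prosthetic knees.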
Pfeifer et al. proposed a series-parallel hybrid elastic actuation technique (Fig. 4d) for the robotic knee prosthesis [29] , the lever in which is adjusted via the motor-screw to regulate the working condition of the actuator, thus providing expected knee actuating torque. The parallel spring in the robotic knee prosthesis acts as an auxiliary actuation to reduce the power of the motor. This proposed elastic actuator can realize the function of variable knee stiffness like the human knee, which is biomimetically closer to the human knee joint in terms of knee dynamics. Similarly, Flynn et al. have developed a new type of semi-active actuator with lockable parallel springs (Fig. 4e) for the knee joint of the bionic robot [42] , which is able to provide approximate behavior of a healthy knee during most gait cycles in level-ground walking. The actuator can also be used for functional tasks such as climbing stairs. Compared with passive or variable damping robotic knee prosthesis, the proposed actuator can effectively reduce energy consumption and improve the level-ground walking behavior.
Schuy et al. designed a series elastic actuator (Fig. 4f) with variable torsional stiffness [43,44] , two motors in which are responsible for respective actuation and variable stiffness adjustment. When the meshing position of the counter bearing and the cylindrical elastic component changes, the output torque changes accordingly. The elastic actuator has been experimentally proven to consume less energy in adjusting the elastic output without adjusting the mechanism. Moreover, Wentink et al. also proposed a theoretical model of variable stiffness elastic actuator for the prosthetic joint [45] , demonstrating the superiority of the variable stiffness actuator over a conventional series elastic actuator.
Summary of the actuation techniques of the robotic knee prosthesis
For the actuation of the prosthetic knee joint, the detailed summary of this section is listed in Table 2. To provide readers with an intuitive overview of the introduced actuation techniques for the robotic knee prostheses, this section made a quantitative evaluation of the function and performance differences between the rigid actuation and the elastic actuation. The evaluation was mainly carried out from four aspects, including the implementation (refers to the level of realizability of the actuator), performance (refers to the continuous and the maximum output torques and speeds of the actuator), functionality (refers to whether the actuator can provide the prosthetic knee joint with bionic actuation in natural gaits), and efficiency (refers to the power efficiency or the power consumption of the actuator when providing various knee motion). Each aspect of the introduced actuation technique is scored from low to high via the five-pointed stars.
Control techniques of the robotic knee prosthesis
With the development of computer science, mechatronics, and embedded programming, as well as the progress of signal processing and pattern recognition techniques, the control of robotic knee prostheses has become increasingly volitional and intelligent. According to whether the intention of the above-knee amputee is involved in the control process, the control methods of the robotic knee prosthesis can be divided into volitional ones and non-volitional ones. Volitional control usually uses the surface electromyography (sEMG) signal to recognize the intention of the amputee, thereby manipulating the robotic knee prosthesis to achieve various functions. On the contrary, non-volitional control views the prosthesis as an independent control object and realizes autonomous motion control by sensor-based gait detection and planning; the intention of the amputee does not participate in the control process.
In the meantime, the control method of the robotic knee prosthesis can also be divided into categories of automatic control and non-automatic control [46,47] . The automatic control means that the robotic knee prosthesis can automatically realize functions in terms of gait adjustment, knee torque adaptation, walking phase switching, etc., while the non-automatic control can only realize essential functions of stance self-locking, simple swing damping, etc. Currently, the automatic control of robotic knee prostheses can be divided into two categories. One is the expert control method, such as the finite state machine, the rule-based control, etc. The other one is the neural network/fuzzy control methods that use autonomous learning reasoning for models and environmental changes. At present, the volitional control and the automatic control have gradually merged into the volitional-automatic fusion control, as shown in Fig. 5.
sEMG-based control
The sEMG signal of the residual limb reflects how the amputee intends to move the lower limbs. sEMG-based prosthetic knee control is usually realized by pattern classification or neural networks that identify the motion intention of the amputee, thus achieving volitional control of the robotic knee prosthesis [48-52].
The main flow of sEMG-based prosthetic knee control is as follows: the sEMG signals of the lower limb are first collected by sensors and pre-processed by filtering, denoising, etc.; then pattern classification methods such as Quadratic Discriminant Analysis (QDA) and Linear Discriminant Analysis (LDA) are used to infer the control intention of the amputee (flexion/extension). Finally, the expected knee motion can be obtained by integrating the angular velocity calculated via Principal Component Analysis (PCA).
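As an illustration of this pipeline (with synthetic signals, since no dataset accompanies this review), the Python sketch below band-pass filters two sEMG channels, extracts windowed RMS features, and trains an LDA classifier to separate flexion from extension; the sampling rate, filter band, and window length are common choices rather than values from the cited works.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 1000  # Hz, sEMG sampling rate (assumption)

def preprocess(emg):
    """Band-pass 20-450 Hz, the usual sEMG band (a design choice, not from the paper)."""
    b, a = butter(4, [20 / (FS / 2), 450 / (FS / 2)], btype="band")
    return filtfilt(b, a, emg)

def rms_features(emg, win=200):
    """Root-mean-square amplitude over non-overlapping windows."""
    n = len(emg) // win
    return np.sqrt(np.mean(emg[: n * win].reshape(n, win) ** 2, axis=1))

rng = np.random.default_rng(0)

def synth(active_ch, n=4000):
    """Two-channel synthetic recording: the active channel carries stronger bursts."""
    x = 0.1 * rng.standard_normal((2, n))
    x[active_ch] += 0.5 * rng.standard_normal(n)
    return x

X, y = [], []
for label, ch in (("flexion", 0), ("extension", 1)):
    sig = np.vstack([preprocess(c) for c in synth(ch)])
    feats = np.column_stack([rms_features(c) for c in sig])
    X.append(feats)
    y += [label] * len(feats)
X = np.vstack(X)

clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```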
Mode identification and motion prediction
The state recognition and the motion prediction can provide effective control and feedback parameters for the real-time control of the robotic knee prosthesis, thus enhancing the overall prosthetic control performance.
Mode identification
The mode identification of the robotic knee prosthesis includes prosthetic knee state/phase identification and state/phase switching/transition identification, the latter of which still presents many difficulties. Huang et al. proposed an sEMG-mechanical sensor fusion control of the robotic knee prosthesis [53]. Both the sEMG signal and the mechanical signal are collected via the sEMG sensor and the inertial sensor installed on the residual limb. The Support Vector Machine (SVM) method is used to realize continuous identification of the knee mode with an accuracy of 99% in the stance phase and 95% in the swing phase, which is more accurate than methods that use only sEMG or mechanical sensors. Young et al. proposed a state mode identification method based on pattern training [54]. The authors used a Gaussian mixture model-based state classifier to realize the identification training, which includes steady modes such as walking, stair ascent, etc., and mode switching such as from walking to stair ascent or from stair descent to walking. The trained classifier can identify both the mode and the mode switching with sufficient accuracy.
Motion prediction
The motion prediction can provide significant assistance for the automatic control of the prosthetic knee to improve the lower limb gait of the amputee. Kutilek et al. proposed a lower limb motion prediction method based on the Back-Propagation (BP) neural network and gait angle diagrams [55] . The knee-hip, knee-ankle, and hip-knee-ankle diagrams within one walking cycle are used to train the BP neural network, thereby enabling it to predict any movement of the lower extremities in walking cycles. Vallery et al. proposed a lower limb motion prediction method based on mapping [56] . Combined with residual motions of the amputees, pre-recorded lower limb motion data of healthy people under different motion states are statistically regressed to predict the movement of the lower limbs. Joshi et al. proposed a knee joint angle prediction method based on the contralateral knee joint angle and the trained adaptive fuzzy neural inference system [57] . The real-time input of the knee joint data in the contralateral side is used to infer the knee joint angle of the robotic knee prosthesis, thereby helping the amputee to realize the desired gait at different walking speeds. Moreover, Chen et al. proposed a method for recognizing knee joint torque based on surface EMG signals [58] . A knee joint torque calculation method based on the EMG model is established, where the Support Vector Machine (SVM) method is used to predict the knee joint torque. This method can successfully identify the required actuating torque for the knee joint and can provide theoretical support for the torque control of the robotic knee prosthesis.
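A minimal sketch of contralateral-side prediction in the spirit of Joshi et al. is given below, using a small neural network regressor; the gait signals are synthetic (a phase-shifted sinusoid standing in for inter-limb symmetry), and the network size and feature lags are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic gait data: sound-side knee angle (deg) over 30 s at 100 Hz, with the
# prosthetic-side target modeled as a half-cycle phase shift (a crude surrogate
# for inter-limb symmetry; real work would use motion-capture recordings).
t = np.arange(0.0, 30.0, 0.01)
sound_knee = 35.0 - 30.0 * np.cos(2 * np.pi * t) + rng.normal(0.0, 1.5, t.size)
target_knee = 35.0 - 30.0 * np.cos(2 * np.pi * (t + 0.5))

# Features: the current sample plus two lagged samples of the sound-side angle.
X = np.column_stack([sound_knee, np.roll(sound_knee, 1), np.roll(sound_knee, 2)])
X, y = X[2:], target_knee[2:]  # drop samples corrupted by the roll wrap-around

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 3))
```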
Automatic control
Lawson et al. proposed a prosthetic limb control method based on the Finite State Machine (FSM) [59]. Corresponding prosthetic joint actuating torques under different conditions are provided to enable the stair ascent/descent functions. The authors also realized an adaptive foot stabilization function via the same method [60]. To enhance the control effect of FSM-based approaches, Liu et al. applied a Dempster-Shafer theory-based state transition rule [61] to improve the motion switching effect of the knee joint.
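A minimal FSM impedance controller of the kind these works describe can be sketched as below; the four gait states, transition thresholds, and impedance parameters are hypothetical placeholders, not values from Lawson et al.

```python
from dataclasses import dataclass

@dataclass
class Impedance:
    k: float         # stiffness, N*m/rad
    b: float         # damping, N*m*s/rad
    theta_eq: float  # equilibrium knee angle, rad

# Hypothetical impedance parameters for four gait states (placeholders).
STATES = {
    "early_stance":    Impedance(250.0, 2.0, 0.05),
    "late_stance":     Impedance(150.0, 1.5, 0.20),
    "swing_flexion":   Impedance(20.0, 0.8, 1.10),
    "swing_extension": Impedance(30.0, 1.2, 0.10),
}

def next_state(state: str, load_N: float, theta: float, omega: float) -> str:
    """Threshold-based transitions between gait states (illustrative rules)."""
    if state == "early_stance" and theta > 0.15:
        return "late_stance"
    if state == "late_stance" and load_N < 20.0:       # foot unloading -> swing
        return "swing_flexion"
    if state == "swing_flexion" and omega < 0.0:       # knee reverses into extension
        return "swing_extension"
    if state == "swing_extension" and load_N > 100.0:  # heel strike -> stance
        return "early_stance"
    return state

def knee_torque(state: str, theta: float, omega: float) -> float:
    """Impedance law applied with the parameters of the active state."""
    p = STATES[state]
    return -p.k * (theta - p.theta_eq) - p.b * omega

state = "swing_extension"
state = next_state(state, load_N=250.0, theta=0.08, omega=-0.4)  # heel strike
print(state, knee_torque(state, theta=0.08, omega=-0.4))
```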
In addition to the FSM-based approaches, Xie et al. applied the neural network in the control of the robotic knee prosthesis to improve the walking gait of a four-bar mechanism robotic knee prosthesis actuated via the magnetorheological fluid damper [62] . Moreover, Ekkachai et al. also proposed a robotic knee prosthesis neural network predictive control method based on particle swarm optimization [63] . This method predicted the motion state of the robotic knee prosthesis in the first place. Then the damping compensation amount is provided in advance by the magnetorheological fluid damper prior to the active actuation, where the optimal voltage of the magnetorheological fluid damper is obtained by the particle swarm optimization to increase the gait performance.
Wen et al. also proposed a robotic knee prosthesis control method based on adaptive dynamic programming [64] . Because the parameters of the robotic knee prosthesis controller in each state of the FSM are artificially given and lack versatility, the authors used adaptive dynamic programming to automatically tune the parameters in different working states, thus achieving the adaptive control of the prosthetic knee. In the meantime, Quintero proposed a prosthetic control method that unifies different periods of gait through virtual constraints driven by human-inspired phase variables [65] . The controller is implemented for different walking speeds in the amputee biped walker model, and the feasibility of the control strategy is verified through experiments. What is more, Inoue et al. developed a control method to deal with changes in the gait parameters of the amputee [66] . The algorithm is evaluated using the level-ground gait database of healthy subjects. The results show that the precision and the recall of the proposed method are increased in both the stance phase and the swing phase of the level-ground walking.
Summary of the control techniques of the robotic knee prosthesis
The detailed summary of the control of the prosthetic knee joint is listed in Table 3. According to the simulation or experiment data in the introduced references, this section evaluated and compared the performance differences between mode identification, motion prediction, and automatic control. The evaluation includes four aspects: implementation (refers to the structure and algorithm of the control scheme), precision (refers to the accuracy of the identification, prediction or control of the algorithm), effectiveness (refers to the control effect of the control method on the prosthetic knee joint) and efficiency (refers to the stability and delay of control and other parameters that affect the control performance of the prosthetic knee joint). These four aspects of prosthetic knee joint control were scored respectively, and the performance was characterized as 1-5 five-point stars from low to high.
Development status of the robotic knee prosthesis
In this section, the data in the references regarding the structure, actuation, and control of the prosthetic knee joint are quantitatively analyzed in depth. Because the structural, actuation, and control parameters of the prosthetic knee joint are not uniform across the references, and the implementation schemes and effects of the various prosthetic knee joints also differ, this paper adopted typical comparative analysis methods [67-69] and evaluated the performance indices of the introduced structures, actuation, and control of the prosthetic knee joint.
Structure
According to the quantitative evaluation data of robotic knee structures in section 2, the structural performance indices of uniaxial, four-bar, and multi-axial prosthetic knee joints were drawn in Fig. 6.
It can be summarized from Fig. 6 that most robotic knee prostheses use mechanisms with fixed structural characteristics, such as the single-axis mechanism or the four-bar mechanism, which have only one Degree of Freedom (DOF) and are less conducive to the realization of individual physiological characteristics. Additionally, some structures of current robotic knee prostheses may not fully simulate the knee joint, which might prevent a natural gait for above-knee amputees. However, by carefully analyzing and summarizing the robotic knee structures, it can be inferred from the development trend that the knee prosthesis has gradually become more bionic and lightweight with better mechanical performance. For example, the recent uniaxial robotic knee prosthesis developed by Lenzi et al. weighs only 1.7 kg (including battery), which is far more lightweight than the uniaxial ones proposed in previous years. Similarly, the four-bar knee proposed by Zhang et al. can absorb vibration during walking via dampers, demonstrating better mechanical performance than conventional rigid four-bar knees. In addition, the multi-axial robotic knee prosthesis designed by Sun et al. can also provide subjects with adjustable bionic performance, which outperforms previous four-bar robotic knee prostheses with fixed bionic features. Purposeful structural designs are beneficial to the miniaturization of the actuation and control system, making the robotic knee prosthesis better adapted to the gait diversification and multi-modality of human lower limb movement (such as running, stair ascent/descent, jumping, etc.).
Actuation
According to the quantitative evaluation data of robotic knee actuation techniques in section 3, the actuating performance indices of the rigid actuation and elastic actuation are plotted in Fig. 7.
As discussed in section 3, the purely passive robotic knee prosthesis may not achieve the active swing necessary for a natural walking gait. Some of the semi-active and fully active knee prostheses lack shock absorption and knee energy reuse functions. However, after summarizing the current actuation techniques via Fig. 7, it can be found that elastic actuation or hybrid elastic actuation is theoretically more energy-saving and can provide the robotic knee prosthesis with more stable and smooth movement.
Yet, this emerging actuation technique has not been commercially applied. At present, the energy-saving effect introduced by the elastic actuation may not compensate for the increase in power consumption caused by the additional weight of the elastic actuation component. What is more, the trade-off between the device price and performance might also slow down its commercial applications. From the viewpoint of the authors, the goal of weight reduction should be taken seriously for its practical and commercial application in the design of elastic actuators. Some lightweight elastic actuation approaches such as the dielectric elastomer, the magnetorheological fluid, the electrorheological fluid, the pneumatic actuation, etc., might bring desired performance while the weight of the actuator may be significantly reduced.
Control
According to the quantitative evaluation data of robotic knee control techniques in section 4, the control performance indices of pattern recognition, motion prediction, and automatic control are plotted and in Fig. 8. The purpose of the prosthetic knee control is to realize expected gait movements for the amputee. As the low-level control of the robotic knee prosthesis belongs to the complex nonlinear human-machine interaction, the realization of precise lower limb motion tracking and control still needs improvement in algorithms and hardware. In addition, the transition identification between each locomotion mode still hindered the improvement of gait detection accuracy. However, it can be found from Fig. 8 that the recent automatic control methods based on artificial intelligence (such as machine learning, reinforcement learning, etc.) have gradually shown better control effects. For example, the algorithms proposed by Inoue et al. show a stable level-ground walking with 99.5% accuracy, which means a walking control error will only occur after more than two hundred steps. Certainly, to further enhance the control performance of the robotic knee prosthesis, it is a better (maybe not optimal) approach to collect as many signals as possible (any volitional signals and gait signals) as long as the algorithm complexity is within the limit of the hardware computing power.
New explorations in robotic knee prostheses
By analyzing the research on structures, actuation, and control, it can be summarized that bionic functionality, energy efficiency, and natural gait are three areas on which to focus to enhance the performance of the robotic knee prosthesis. With these three areas in mind, the following feasible research discussions are listed.
Customized lightweight bionic knee joint structure
From the very beginning of research on the passive/semi-active robotic knee prosthesis, the bionic relative rotation/sliding motion characteristics between the human thigh and shank have been the research focus; the fundamental four-bar robotic knee prosthesis and the subsequent five-bar and six-bar ones that could simulate these knee motion characteristics effectively improved joint stability and walking gait performance. With the development of modern micro-electromechanical technology, the active robotic knee prosthesis has become a new high ground for prosthetic research due to its better human-like actuating characteristics.
However, there are still some structural issues that need to be solved.
(1) Space and weight requirements for high-efficiency actuating devices significantly reduce the bionic design space of the knee mechanism itself. (2) The bulky multi-bar mechanism may also not be conducive to the lightweight requirements of the active robotic knee prosthesis.
Designers have to make a trade-off between bionic design and light weight: "active lightweight" or "active bionic"? The actual conflict between them is that the bionic multi-axis knee prosthesis cannot meet the lightweight requirements of the actively actuated knee prosthesis. For some researchers, the weight and battery-life benefits of abandoning the bionic multi-bar mechanism outweigh the less tangible bionic performance. Therefore, to realize the fusion and coexistence of active actuation and bionic structure, the following possible research directions could be considered to realize customized lightweight bionic knee joints for unilateral and bilateral above-knee amputees.
(2) Design a set of configuration adjustment mechanisms suitable for different patients.
Highly efficient compact bionic actuation
The human knee joint can achieve diversified and complex movements due to its extremely high power density and efficient energy recovery mechanism. In daily life, human behavior is unpredictable and switches randomly in real time among movement modes such as slow/normal/fast walking, slopes, stairs, standing up, and sitting down, during which the power of the knee joint alternates between positive and negative power states. When the knee joint is in a negative power state, it releases mechanical energy to the outside instead of consuming actuating energy. This phenomenon brings the following challenges to the actuator of the robotic knee prosthesis.
(1) The negative power state of the knee joint is highly related to the motion state of the subject and has the characteristics of high frequency, short period, and high power.
(2) If the negative power of the knee joint is not handled well, the energy consumption of the actuator will increase sharply due to braking.
In the human body, the mechanical energy released by the negative work of the knee joint is transmitted to the hip and ankle joints via the gastrocnemius, semitendinosus, and rectus femoris muscles, achieving energy absorption/reuse and reducing the actual metabolic cost. Therefore, to achieve efficient active actuation of the robotic knee prosthesis, the following possible research directions could be considered to reduce the hardware requirements on the peak power and torque of the joint actuator during active actuation.
(1) Reduce the weight of the joint and increase the power density of the joint actuator, including applying lightweight materials and topology optimization to reduce weight, installing high power density motors and efficient transmission devices, and using a power supply system with higher power density.
(2) Explore the power characteristics of the knee joint in multi-mode motion from the perspective of energy recovery, storing/buffering energy fluctuations with physical hardware such as elastic elements or power generation systems together with the necessary software algorithms, as illustrated in the sketch below.
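A minimal sketch of the energy accounting behind direction (2): joint mechanical power is P = τω, and integrating the negative-power intervals over a stride bounds the energy an elastic or regenerative element could, in principle, store. The torque and velocity profiles below are synthetic placeholders, not measured gait data.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)             # one 1 s stride
tau = 40.0 * np.sin(2 * np.pi * t)          # knee torque (N*m), synthetic
omega = 3.0 * np.sin(2 * np.pi * t + 0.8)   # angular velocity (rad/s), synthetic

power = tau * omega                          # instantaneous joint power (W)
dt = t[1] - t[0]
e_neg = -np.sum(power[power < 0.0]) * dt     # energy dissipated while braking (J)
e_pos = np.sum(power[power > 0.0]) * dt      # energy the actuator must supply (J)
print(f"negative work per stride: {e_neg:.1f} J "
      f"({100 * e_neg / (e_neg + e_pos):.0f}% of total joint work)")
```

Run on real gait recordings, the same integral quantifies how much of the actuator's peak-power budget an energy-recovery element could offset.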
Personalized daily multi-mode gait adaptation
Nowadays, although gait recognition algorithms can achieve high accuracy in typical walking scenes (including level ground, ramps, and stairs), recognition of highly dynamic movements such as running and jumping in daily life remains challenging. In addition, different patients show different patterns of abrupt gait-signal changes, which reduces the capability of a single gait model. Therefore, the following gait issues still need to be solved: (1) individual differences in gait data between amputees, and time-distortion fluctuations caused by differences in walking speed; (2) transitions between gaits, which have always been a difficult point for current gait recognition and prediction algorithms.
What is more, the hardware sensors required for gait recognition may themselves hinder accuracy improvement: (1) it is difficult to guarantee the orientation of the sensor in practical applications, as large movements can cause the sensor's orientation to be lost; (2) there is a time lag between input and output due to the computation time of gait recognition.
Because the accelerometer is sensitive to direction, uncertainty in the sensor orientation leads to confusion in the collected data. Furthermore, an excessive delay leads to poor user experience and reduced comfort, and can even cause gait disturbances. Moreover, differences in what patients wear, including clothing over the skin, shoes, and load-carrying conditions, also introduce disturbances and affect sensor data accuracy. Therefore, research on daily adaptive multi-mode gait still needs to explore algorithms with higher generalization capability as well as high-precision sensing. The following possible research directions could be considered: (1) deep learning-based approaches, which gradually show their advantages in big-data processing with high generalization (a minimal classifier sketch is given after this list);
(2) exploring new non-contact sensing technologies, which might be a new approach to improve the adaptive ability of gait recognition.
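As an illustration of direction (1), the following minimal sketch (PyTorch; the layer sizes, window length, channel count, and number of modes are illustrative assumptions, not from any cited system) maps a window of multi-axis IMU samples to a locomotion-mode label with a small 1-D CNN.

```python
import torch
import torch.nn as nn

class GaitModeCNN(nn.Module):
    def __init__(self, n_channels: int = 6, n_modes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # length-invariant pooling
        )
        self.classifier = nn.Linear(64, n_modes)

    def forward(self, x):              # x: (batch, channels, window_len)
        return self.classifier(self.features(x).squeeze(-1))

model = GaitModeCNN()
window = torch.randn(8, 6, 200)        # 8 windows of 200 IMU samples, 6 axes
logits = model(window)                 # (8, 5) mode scores
print(logits.argmax(dim=1))            # predicted locomotion mode per window
```

In practice such a model would be trained on labeled multi-subject gait data and pruned or quantized to meet the embedded computing budget noted above.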
Conclusion
The robotic knee prosthesis is a key apparatus for the rehabilitation of patients with above-knee amputations, and its research theories and methods are at the frontier of mechatronics and human-computer interaction technologies. This paper presents an in-depth investigation of the development of robotic knee prostheses over the past ten years and categorizes them based on their functionality and mutual relationships. Detailed qualitative and quantitative analyses of structural design, actuation, and control strategies are presented. In addition, challenges and innovations in the development of robotic knee prostheses are discussed. The structure of the robotic knee prosthesis is gradually becoming more bionic and lighter with better mechanical performance. Elastic actuation and elastic hybrid actuation show better energy-saving effects and bionic actuation features. Control methods based on artificial intelligence are gradually improving the daily usability of robotic knee prostheses. Knee prosthesis-related techniques are essentially a concrete instance of mechatronic wearable devices, which require further breakthroughs and innovations in basic mechanical engineering, material technology, automatic control, and computer science. In the authors' view, the robotic knee prosthesis will gradually become more bionic, intelligent, and energy-efficient, helping amputees regain physical function and psychological well-being. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. | 2021-09-29T16:04:12.201Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "d9c651075fda580f56be05c00b163ea4df54b7b8",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s42235-021-0065-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "56dca474c955f777a070b1118f153e28c0df4f12",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
260669209 | pes2o/s2orc | v3-fos-license | Dynamics of unipolar J-ST elevation coupled to bipolar delayed potentials on the epicardium in Brugada syndrome: a case report
Abstract Background The area of abnormal bipolar potentials in the right ventricular epicardium is recognized as an arrhythmogenic substrate in patients with Brugada syndrome (BrS); however, the correlation between local potentials and Brugada-type surface electrocardiograms (ECGs) remains unclear. Case summary A 49-year-old man with BrS who was hospitalized for refractory ventricular fibrillation underwent an electrocardiographic study with unipolar electrodes with the same bandwidth as surface ECGs. The right ventricular outflow tract epicardium showed abnormal bipolar potentials composed of split sharp and delayed dull components with coved-type J-ST elevation in the unipolar electrodes. The additional stimuli from the atrium gradually decreased the number of unipolar electrodes showing coved-type J-ST elevation along with a shortening of the local bipolar activation time. The pilsicainide provocation test induced a change in unipolar morphology from coved type to convex type and an intermittent local block of the divided and sharp components in bipolar electrodes. Of note, the unipolar J-ST elevation was not changed along with the localized conduction block in bipolar leads. Discussion The unipolar electrode waveforms during sinus rhythm change together with bipolar electrodes, consisting of sharp and blunt components in BrS. However, the convex-type J-ST elevation in unipolar leads persisted irrespective of the local conduction block in bipolar leads after pilsicainide provocation. These findings suggest the complexity of BrS mechanisms.
Introduction
Epicardial substrate ablation targeting fractionated or delayed potentials in the epicardium mainly at the right ventricular outflow tract (RVOT) has been established as a strategy to suppress ventricular fibrillation in patients with Brugada syndrome (BrS). 1 However, the relationship between abnormal bipolar electroactivities and J-ST segment elevation in surface electrocardiograms (ECGs) remains uncertain. We previously reported the correlation between local activation time (LAT) delay in the bipolar recording and J-ST elevation in the unipolar recording using a 0.05-100 Hz bandwidth similar to surface ECGs on the RVOT epicardium in a BrS patient. 2 Haïssaguerre et al. 3 proposed a hypothesis that unipolar ST segment changes occur with a multisite conduction block in the epicardium. Meanwhile, Pannone et al. 4 reported that epicardial high-frequency and low-frequency potentials demonstrated depolarization and repolarization abnormalities, respectively. These papers suggested that delayed bipolar electrograms influence the manifestation of unipolar J-ST segment elevation. However, the response to pacing and sodium channel blocker testing has not yet been studied in detail.
Herein, we present a unique BrS case with epicardial mapping using unipolar electrodes with the above-described setting and bipolar electrodes. 2 The correlation between unipolar J-ST segment morphology and bipolar delayed potentials in response to programmed pacing and the pilsicainide test was investigated.
Case presentation
A 49-year-old man had an episode of aborted cardiac arrest leading to the diagnosis of BrS with spontaneous coved-type J-ST elevation on ECG ( Figure 1A). He was admitted to our institute for frequent appropriate implantable cardioverter defibrillator (ICD) therapies. We decided to perform epicardial catheter ablation due to refractory ventricular fibrillation, which did not respond to quinidine for suppressing Ito channels, bepridil for suppressing Ito channels and increasing sodium current, and cilostazol for suppressing phosphodiesterase III and leading to an increase in calcium current. The three-dimensional mapping system (CARTO UNIV, Biosense Webster, Diamond Bar, CA) was used for substrate mapping with deep sedation using intravenous propofol and dexmedetomidine. The voltage map and the map of late activation time, which was defined as the time from the beginning of QRS in lead V2 to the offset of the latest local bipolar component, were obtained using DECANAV (Biosense Webster, Diamond Bar, CA). A low voltage was defined as ≤1.5 mV in the bipolar amplitude. 2 A local unipolar J-ST morphology was assessed to confirm coved-type J-ST elevation, which was defined as ≥0.2 mV in J-ST together with a negative T wave. 2 The bipolar voltage map of the endocardium showed patchy low-voltage areas in the RVOT ( Figure 1B). In contrast, the epicardial voltage in the right ventricle was generally decreased except in the free wall ( Figure 1C). The area of coved-type J-ST elevation in the unipolar electrodes (circular dots in Figure 1D) corresponded to the most delayed regions of activation, particularly in the RVOT ( Figure 1E). The right ventricular epicardial local electrograms, with electrodes arranged as the lines in Figure 2A, showed split sharp and delayed dull potentials in the bipolar electrodes, which presented a coved-type J-ST elevation in the unipolar electrodes ( Figure 2B). The additional stimuli from the right atrium induced a gradual shortening of the delayed potential duration (arrows in Figure 2B and polygonal lines in Figure 2C). It is noteworthy that the number of unipolar electrodes showing a coved-type J-ST elevation also decreased along with the shortening of the extra stimulus intervals ( Figure 2B and the bar graph in Figure 2C).
The provocation test with pilsicainide (50 mg) induced a change in unipolar morphology from coved-type J-ST elevation to convex-type J-ST elevation, and bipolar electrodes demonstrated an intermittent localized block of the split and sharp delayed potentials ( Figure 3). The dull delayed potentials in bipolar electrodes showed no significant change. In addition, the unipolar J-ST morphology showed notches at the same time as the bipolar delayed potentials, and they disappeared along with the local conduction block (B in Figure 3). It is noteworthy that the convex-type J-ST elevation persisted even though the localized conduction block occurred.
Catheter ablation was performed on the epicardial substrate, showing delayed potentials in bipolar leads and coved-type J-ST elevation in unipolar leads. Following the procedure, the J-ST levels in surface ECGs returned to the baseline and no episodes of ventricular tachyarrhythmias occurred, even after stopping all antiarrhythmic drugs. Genetic testing using a gene panel developed in the National Cerebral and Cardiovascular Center (Osaka, Japan) was performed; however, no pathogenic mutation including SCN5A was identified.
Discussion
Nowadays, the mechanisms of Brugada-type ECGs are debated between the conduction and repolarization abnormality theories. 5 One influential hypothesis for BrS is the 'conduction abnormality theory'. 6 In some cases of BrS, typical conditions of conduction abnormality, such as right bundle branch block or prolonged His-ventricle intervals, are demonstrated. 5 In addition, recently developed three-dimensional electrophysiological mapping techniques have revealed abnormal local potentials that resemble conduction abnormalities in the epicardium. 3 However, the present case demonstrated a unique phenomenon during extra stimuli from the atrium, which could not be explained by the conduction abnormality theory alone. The sharp and split-delayed potentials in bipolar electrodes were blocked and disappeared along with the shortening of extra stimulation intervals. Furthermore, these delayed potentials were intermittently blocked even during sinus rhythm following pilsicainide provocation. These findings are consistent with a link between the delayed potentials and conduction abnormality. However, the LAT significantly decreased along with the shortening of extra stimulation intervals ( Figure 2B). Under the conduction abnormality theory, prolongation of the LAT would usually be expected. Moreover, epicardial planar repolarization heterogeneities, which could be confirmed as differences in the unipolar morphologies, manifested after pilsicainide infusion compared with those before infusion (Figure 3).
A more detailed analysis of bipolar potential morphology is required to interpret these findings. As described in the Figure 2A legend, bipolar electrodes showed split sharp and delayed dull potentials during sinus rhythm. These split-frequency potentials could reflect two different mechanistic theories. The mechanisms of these two frequency potentials were described as Ito blockade in the epicardium with loss of dome in acute ischaemia animal models. 7 Moreover, Pannone et al. 4 reported high-frequency and low-frequency delayed potentials in bipolar electrodes in BrS. Our case also showed two divided frequency components and demonstrated different changes after pilsicainide administration. The high-frequency potentials demonstrated a localized block along with the shortening of extra stimulation intervals and pilsicainide provocation test; however, the low-frequency potentials gradually decreased in amplitude along with the shortening of extra stimulation intervals and demonstrated no change in the duration after pilsicainide administration (Figures 2 and 3). These findings suggest that other mechanisms, rather than the conduction abnormality theory, influence the behaviour of the low-frequency potentials. 8 The relationships between unipolar J-ST levels with the same bandwidth as the surface ECG and bipolar abnormal potentials also remain unclear. In our case, the J-ST level decreased along with the shortening of the extra pacing intervals; however, convex-type J-ST elevation remained unchanged even though the bipolar sharp high-frequency potentials were blocked. The findings might indicate that low-frequency potentials, rather than high-frequency potentials, influence unipolar J-ST elevation. Although the measurement of the LAT during single extra stimuli could not separate the two frequency components due to low amplitudes, the gradual decrease in LAT ( Figure 2B) may reflect the shortening of low-frequency potentials, leading to a unipolar J-ST level decrease. Mapping both endocardial and epicardial unipolar signals will reflect transmural heterogeneity of action potential duration including depolarization and repolarization phases. To better understand the mechanisms of the localized conduction block in bipolar potentials, mapping of both the endocardium and the epicardium, not only before but also after pilsicainide infusion, may be more helpful.
Conclusion
This case demonstrated that the delayed potentials in the bipolar electrodes consisted of sharp and blunt components, accompanied by coved-type J-ST elevation in the unipolar electrodes. Before the pilsicainide test, J-ST levels showed fluctuation along with the two bipolar components. However, after pilsicainide provocation, convex-type J-ST elevation persisted, irrespective of the local conduction block of the sharp components in bipolar electrodes. We believe that these contradictory findings suggest the complexity of BrS mechanisms. | 2023-08-07T15:36:50.277Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "94e27be2da60a4551eaee43b0239e58e184f949f",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "67a1e2a95c3ab7af281de4eca3f796f9d728c673",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11584303 | pes2o/s2orc | v3-fos-license | Psychometric characteristics of the short form 36 health survey and functional assessment of chronic illness Therapy-Fatigue subscale for patients with ankylosing spondylitis
Background We evaluated the psychometric characteristics of the Short Form 36 (SF-36) Health Survey and the Functional Assessment of Chronic Illness Therapy (FACIT)-Fatigue subscale in patients with ankylosing spondylitis (AS). Methods We analyzed clinical and patient-reported outcome (PRO) data collected during 12-week, double-blind, placebo-controlled periods of two randomized controlled trials comparing adalimumab and placebo for the treatment of active AS. The Bath Ankylosing Spondylitis Disease Activity Index, Bath Ankylosing Spondylitis Functional Index, and other clinical measures were collected during the clinical trial. We evaluated internal consistency/reliability, construct validity, and responsiveness to change for the SF-36 and FACIT-Fatigue. Results The SF-36 (Cronbach alpha, 0.74-0.92) and FACIT-Fatigue (Cronbach alpha, 0.82-0.86) both had good internal consistency/reliability. At baseline, SF-36 and FACIT-Fatigue scores correlated significantly with Ankylosing Spondylitis Quality of Life scores (r = -0.36 to -0.66 and r = -0.70, respectively; all p < 0.0001). SF-36 scores varied by indicators of clinical severity, with greater impairment observed for more severe degrees of clinical activity (all p < 0.0001). FACIT-Fatigue scores correlated significantly with SF-36 scores (r = 0.42 to 0.74; all p < 0.0001) and varied by clinical severity (p < 0.05 to p < 0.0001). Conclusions The SF-36 is a reliable, valid, and responsive measure of health-related quality of life and the FACIT-Fatigue is a brief and psychometrically sound measure of the effects of fatigue on patients with AS. These PROs may be useful in evaluating effectiveness of new treatments for AS. Trial Registration ClinicalTrials.gov: NCT00085644 and NCT00195819
Background
Ankylosing spondylitis (AS) is a chronic and progressive inflammatory disorder that primarily affects the axial skeleton, sacroiliac joints of the pelvis, and thoracic cage [1,2]. Patients experience pain, joint stiffness, and the eventual loss of spinal mobility with disease progression. Patients with AS frequently experience impaired physical function and well-being, require time away from work because of disability, and suffer from diminished health-related quality of life (HRQOL) [3][4][5][6][7]. The impact of AS on functioning and everyday life varies by patient, but most patients typically have a broad spectrum of impairments, including the physical, psychological, and social domains of HRQOL.
Patient-reported outcomes (PROs), including HRQOL assessments, symptom scales, and other measures, are increasingly used to evaluate the health-related outcomes of rheumatology treatments from the patient perspective. PROs are incorporated into clinical studies of patients with AS and provide important assessments of functioning and well-being that complement and expand on traditional clinical outcomes in AS [8]. AS impacts multiple HRQOL domains [6], including pain, physical function, fatigue, and psychological well-being [3,4,[7][8][9][10]. Therefore, assessing HRQOL outcomes is important for a comprehensive understanding of the effectiveness of new treatments for AS. HRQOL outcome results from randomized controlled clinical trials permit physicians and patients to better understand and compare the health benefits of various therapies for AS.
Although disease-specific HRQOL measures, such as the AS Quality of Life (ASQoL) Questionnaire, may be more sensitive to treatment effects and changes in clinical status, generic HRQOL measures are useful for evaluating the burden of disease and for normative comparisons with general population samples. Generic HRQOL measures, such as the Short Form 36 (SF-36) Health Survey, have been used to document the impact of chronic diseases, including AS, on patient functioning and well-being [4]. Given the availability of general population samples, these generic HRQOL measures can also be used for normative comparisons with chronic disease groups, such as AS, which can help interpret changes in HRQOL related to treatment or disease progression. Fatigue is associated with rheumatoid arthritis and AS [3,4,[6][7][8][9][10], and generic measures of fatigue may assist clinicians in understanding the experience of patients with AS.
Several PRO measures have been used in studies of patients with AS [8,11]. These measures include the SF-36 Health Survey [12,13]; the AS Quality of Life Questionnaire [14,15]; and the Revised Leeds Disability Questionnaire [16]. For application in clinical trials, it is necessary to demonstrate a measure's reliability, construct validity, and responsiveness [17,18]. The SF-36 has been used in several studies of patients with AS [12,[19][20][21][22]. The developers of the ASQoL provided evidence supporting the reliability and validity of the measure [14,15]. However, additional information is needed to confirm the psychometric qualities of the SF-36 in AS.
The objective of the current analysis was to evaluate the psychometric characteristics of two PRO measures, the SF-36 Health Survey and the Functional Assessment of Chronic Illness Therapy (FACIT)-Fatigue subscale, in a sample of patients with AS. The psychometric characteristics examined included internal consistency/reliability, construct validity, and responsiveness to changes in patients' clinical disease activity. The current analyses are based on data from two completed clinical trials of adalimumab in patients with active AS [23,24]. Evidence supporting the psychometric characteristics of PROs is necessary for clinical trial analyses that compare treatments to support claims of HRQOL benefit [18].
Study design and patients
These psychometric analyses were completed based on clinical and PRO data collected from two Phase III, randomized, double-blind, placebo-controlled clinical trials that assessed the safety and clinical efficacy of subcutaneous injections of adalimumab in patients with AS. The two clinical trials, the Adalimumab Trial Evaluating Long-Term Efficacy and Safety in Ankylosing Spondylitis (ATLAS) [23][24][25] and the M03-606 study [26], were similar in research design. ATLAS was completed in 43 centers in the United States and Europe (ClinicalTrials.gov identifier, NCT00085644), and the M03-606 study was conducted in 11 clinical centers in Canada (ClinicalTrials.gov identifier, NCT00195819).
Briefly, patients were 18 years of age or older with a diagnosis of AS according to the Modified New York criteria [27]. Patients also had to exhibit active disease, defined as meeting at least two of the following three conditions: Bath AS Disease Activity Index (BASDAI) score ≥4 (0-10-cm scale); total back pain score ≥4 on a 0-10-cm visual analog scale (VAS); or morning stiffness ≥1 hour. Inclusion criteria included an inadequate response to or intolerance of one or more nonsteroidal anti-inflammatory drugs or disease modifying antirheumatic drugs and a willingness to self-administer subcutaneous injections of adalimumab (HUMIRA ® ; Abbott Laboratories, Abbott Park, IL, USA) or matching placebo. Patients with radiologic evidence of total spinal ankylosis (bamboo spine) were excluded from participation in the M03-606 study, and enrollment of patients with total spinal ankylosis in ATLAS was limited to ≤10% of the total study sample.
In both studies, eligible patients were randomized to receive adalimumab 40 mg every other week or placebo for the initial 12-week, double-blind period of each study. The psychometric analyses described here were not designed to evaluate the treatment effects of adalimumab compared with placebo. This report describes only the methods and results of a blinded evaluation of the psychometric qualities of selected PROs assessed during the initial 12-week period of both clinical trials. Institutional review boards at participating clinical centers approved the protocol and all patients provided voluntary, written informed consent. Both studies were conducted in accordance with the Declaration of Helsinki.
Patient-reported outcomes
Three PRO measures were included in this psychometric evaluation study: the SF-36 Health Survey, the FACIT-Fatigue scale, and the ASQoL. The SF-36 and ASQoL were included in both ATLAS and the M03-606 study, and were selected to comprehensively measure disease-specific (ASQoL) and generic (SF-36) domains of HRQOL. The FACIT-Fatigue was employed only in M03-606, and was selected to evaluate the impact of AS treatment on fatigue outcomes, as previous studies have demonstrated treatment effects on fatigue in patients with rheumatoid arthritis [28] and AS [23]. For the ASQoL and FACIT-Fatigue, data were collected at baseline, Week 2, and Week 12; the SF-36 was completed only at baseline and Week 12. The ASQoL was included to evaluate the construct validity of the other PRO measures.
Ankylosing Spondylitis Quality of Life Questionnaire
The ASQoL is a disease-specific instrument designed to measure quality of life in patients with AS and was developed using a needs-based model [15]. The instrument contains 18 yes or no items on the impact of AS "at this moment." The ASQoL has a total score ranging from 0 to 18, with lower scores representing better AS-specific quality of life. The instrument has good reliability and construct validity across several different AS populations [8,14,15,22].
Functional Assessment of Chronic Illness Therapy (FACIT)-Fatigue Subscale
The FACIT-Fatigue is a frequently used instrument that measures fatigue and its effect on functioning and daily activities [28]. The FACIT-Fatigue has 13 items answered on a 5-point rating scale based on a 7-day recall period. Scores range from 0 to 52, with lower scores reflecting greater fatigue. The instrument has good reliability and validity based on analyses of the general population in the United States, patients with cancer, and patients with rheumatoid arthritis [28][29][30], but a search of the medical literature indicated no published data on the psychometric qualities of the FACIT-Fatigue in AS patients.
Short Form 36 Health Survey
The SF-36 Health Survey is a generic health status instrument developed for application in primary care and chronic disease populations [13]. The SF-36 Version 1, with a 4-week recall period, was used in this study. The SF-36 contains two summary scores (Mental Component Summary [MCS] and Physical Component Summary [PCS] scores), and the following eight subscales: physical function, bodily pain, role-physical, general health, vitality, social function, role-emotional, and mental health. The SF-36 subscales and summary scores have excellent reliability and good construct validity across the general US population and chronic disease populations [13,31], including in patients with AS [4,[19][20][21][22]32,33].
Bath Ankylosing Spondylitis Disease Activity Index
The BASDAI is a six-item measure of disease activity and includes questions on fatigue, spinal pain, peripheral arthritis, enthesitis (i.e., inflammation at the attachment of ligaments or tendons to bone), and morning stiffness [34]. The BASDAI is a well-established instrument widely used in clinical studies to evaluate AS disease activity. All items are patient reported using a 0-10 VAS, and lower scores indicate less disease activity.
Bath Ankylosing Spondylitis Functional Index
The BASFI consists of 10 questions related to daily activities. The original BASFI used a 0 to 10 VAS for each of the 10 questions, for which 0 indicates that an activity was performed without difficulty and 10 indicates that an activity was impossible to perform. The mean of these yields the final BASFI score of 0-10 [35]. However, in ATLAS and the M03-606 study, patients answered each of the same 10 questions using a 0-100-mm VAS, and the mean gave a final BASFI score of 0-100.
Clinical measures
The Assessment of SpondyloArthritis international Society (ASAS) response criteria [36] were used in these psychometric analyses. The ASAS response criterion [36] is based on the percentage of improvement in at least three of the following four domains: patient's global assessment of disease activity VAS, pain, function (represented as the mean BASFI score [35]), and inflammation (represented as the mean of the two morning stiffness-related BASDAI questions). The ASAS 20% improvement (ASAS20), ASAS50, and ASAS70 response criteria were used in the clinical trials and have been applied by regulatory agencies to evaluate the clinical efficacy of tumor necrosis factor (TNF) antagonists for the treatment of AS. For the psychometric analyses based on ASAS response criteria, patients were categorized into mutually exclusive groups corresponding to at least ASAS20, at least ASAS50, or at least ASAS70 response criteria based on the Week 12 assessment.
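To make the grouping explicit, the following simplified sketch assigns a patient to the highest ASAS threshold whose percentage-improvement requirement is met in at least three of the four domains. The full ASAS criteria also include absolute-change and no-worsening clauses that are omitted here for brevity, and all variable names and values are hypothetical.

```python
def pct_improvement(baseline: float, week12: float) -> float:
    return 100.0 * (baseline - week12) / baseline if baseline else 0.0

def asas_level(baseline: dict, week12: dict) -> int:
    """Return the highest ASAS threshold (70, 50, 20, or 0) met in >= 3 domains."""
    gains = [pct_improvement(baseline[d], week12[d]) for d in
             ("global", "pain", "function", "inflammation")]
    for threshold in (70, 50, 20):
        if sum(g >= threshold for g in gains) >= 3:
            return threshold
    return 0

base = {"global": 70, "pain": 65, "function": 54, "inflammation": 60}
wk12 = {"global": 20, "pain": 28, "function": 30, "inflammation": 15}
print("ASAS responder level:", asas_level(base, wk12))   # -> 50
```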
Statistical analyses
The psychometric qualities of the PROs were assessed to determine reliability, validity, and responsiveness [17,37]. The internal consistency/reliability of the multi-item instruments and subscales was evaluated using the Cronbach alpha coefficient [38]. Reliabilities were estimated at baseline and at Week 12. A Cronbach alpha coefficient of > 0.70 was indicative of acceptable internal consistency/reliability for group comparisons [17]. We examined the item-total correlations (corrected for overlap) as another indicator of reliability for the SF-36 subscales and FACIT-Fatigue. In addition, we replicated the factor analysis of the SF-36 subscales to verify the factor structure underlying the PCS and MCS scores.
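For reference, a minimal sketch of the two reliability statistics used here, Cronbach's alpha and corrected item-total correlations, applied to a hypothetical patients-by-items response matrix; the simulated data are illustrative only, not trial data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Each item correlated with the sum of the remaining items (overlap removed)."""
    rest = items.sum(axis=1, keepdims=True) - items
    return np.array([np.corrcoef(items[:, j], rest[:, j])[0, 1]
                     for j in range(items.shape[1])])

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                 # shared "fatigue" factor
items = latent + 0.8 * rng.normal(size=(200, 13))  # 13 FACIT-like items
print(f"alpha = {cronbach_alpha(items):.2f}")      # > 0.70 is acceptable
print(corrected_item_total(items).round(2))
```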
Validity reflects the extent to which an instrument or subscale actually measures the construct it is intended to measure [17]. Validity of the PROs was examined by specifying and testing hypotheses about the relationships between the measures and clinical assessments or other PRO measures. We evaluated the relationship between SF-36 scores and FACIT-Fatigue scores and selected clinical measures (specifically the BASDAI total score, the BASDAI fatigue question, the BASDAI pain question, total back pain, BASFI, patient's global assessment of disease activity, and the physician's global assessment of disease activity). We hypothesized that the BASDAI, BASFI, BASDAI fatigue question, and patient global assessment of disease activity would have moderate to strong (0.40-0.60) correlations with the FACIT-Fatigue, and SF-36 PCS scores, while the pain and physician global measures would have moderate to strong correlations with the SF-36 PCS, and low to moderate (0.20-0.40) correlations with FACIT-Fatigue. We consider correlations < 0.30 to be low; correlations 0.30 to 0.60 to be moderate; and correlations > 0.60 to be strong. In addition, we examined the relationships between the FACIT-Fatigue and SF-36 subscales and summary scores and the ASQoL scores. Pearson's correlation coefficients were used to evaluate the strength and direction of these associations at baseline and at Week 12.
The responsiveness of the FACIT-Fatigue and SF-36 subscales and summary scores was evaluated by determining the association between the ASAS and the PRO measures. ANCOVA models were used to estimate least-square mean baseline to 12-week change scores for the SF-36 subscales, as well as the PCS, MCS, and FACIT-Fatigue scores. The ANCOVA models included factors for the ASAS response group (i.e. < 20%; ≥20% to < 50%; ≥50% to < 70%; and ≥70%), age, sex, and the relevant baseline PRO score. The clinical responsiveness analyses focused on the baseline to 12-week changes in the clinical and PRO measures. Effect-size estimates were also included for interpretation purposes.
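A minimal sketch of such an ANCOVA using statsmodels; the data file and column names are hypothetical placeholders for the trial variables described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df columns (hypothetical): change (baseline-to-Week-12 PRO change score),
# asas_group ("<20", "20-49", "50-69", ">=70"), age, sex, baseline (PRO score)
df = pd.read_csv("pro_scores.csv")  # placeholder data source

model = smf.ols("change ~ C(asas_group) + age + C(sex) + baseline",
                data=df).fit()
print(model.summary())  # group coefficients give covariate-adjusted mean changes
```

The least-square mean change for each responder group is then read off from the fitted group coefficients relative to the reference (non-responder) category.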
All statistical tests were based on an alpha of 0.05, with no adjustments for multiple statistical tests. The results were interpreted with consideration for the number of statistical analyses performed.
Results
ATLAS enrolled 315 patients and the M03-606 study enrolled 82 patients. The average age of patients with AS who participated in the two clinical trials was 42.0 years (SD, 11.5), and the sample was mostly male (75.8%) and white (95.7%) (Table 1). At baseline, the mean BASFI score was 53.9 (SD, 21.9), the mean BASDAI score was 6.3 (SD, 1.7), and the mean ASQoL score was 10.5 (SD, 4.3). The average duration of AS was 11.3 years (SD, 9.4).
Patient-reported outcome descriptive statistics and reliability
Complete baseline and Week 12 data were available for 98.2% of patients in the two clinical trials. The baseline means, standard deviations, and internal consistency/reliability coefficients for the FACIT-Fatigue and SF-36 subscale scores are summarized in Table 2. The internal consistency/reliability coefficients for the FACIT-Fatigue were 0.82 at baseline and 0.86 at Week 12. All reliability coefficients for the SF-36 subscales exceeded 0.75 at both baseline and Week 12, except for general health at baseline (0.74) (Table 2). For the FACIT-Fatigue, item-total correlations were 0.56-0.88 at both visits. At baseline, item-total correlations for the SF-36 subscales were 0.35-0.74 for physical function and 0.49-0.61 for the role subscales. We replicated the factor analysis of the SF-36 subscale scores and found a factor structure comparable to that reported by Ware et al. [13] (data not shown).
Relationships between patient-reported outcomes
The correlations between the ASQoL and the FACIT-Fatigue were -0.70 (p < 0.0001) at baseline and -0.81 (p < 0.0001) at Week 12. Correlations between the ASQoL, FACIT-Fatigue, and SF-36 subscales and summary scores are reported in Table 3. ASQoL scores were significantly correlated with SF-36 summary scores at both baseline and Week 12 (p < 0.0001), with the greatest baseline correlations between the ASQoL and social function (r = -0.66, p < 0.0001), bodily pain (r = -0.60, p < 0.0001), and physical function (r = -0.59, p < 0.0001). The Week 12 correlations were greater than the baseline correlations (Table 3).
FACIT-Fatigue scale scores were significantly correlated with SF-36 subscale scores at baseline and at 12 weeks ( Table 3). The greatest correlations were between the FACIT-Fatigue score and the SF-36 vitality subscale score (r = 0.74 at baseline and 0.82 at Week 12, both p < 0.0001). FACIT-Fatigue scores were also well-correlated at baseline with social function (r = 0.67, p < 0.0001), physical function (r = 0.58, p < 0.0001), and bodily pain (r = 0.56, p < 0.0001). The correlations for the Week 12 scores were similar but greater. The PCS and MCS scores were both significantly correlated with FACIT-Fatigue scores at baseline (r = 0.54 and r = 0.53, respectively, both p < 0.0001) and Week 12 (r = 0.63 and r = 0.71, respectively, both p < 0.0001).
Relationships between patient-reported outcomes and clinical measures
FACIT-Fatigue scores were correlated with all of the selected clinical outcome measures ( Table 4). The FACIT-Fatigue was most substantially correlated with the BASDAI fatigue item (r = -0.69, P < 0.0001), the BASDAI (r = -0.60, p < 0.0001), and the BASFI (r = -0.56, p < 0.0001). For the SF-36, the greatest correlations were observed between the clinical assessments and the physical function (r = -0.36 to r = -0.72, all p < 0.0001) and bodily pain subscale scores (r = -0.42 to r = -0.64, all p < 0.0001). In general, the PCS was more substantially correlated with the clinical outcome measures than the MCS ( Table 4). The BASDAI was significantly correlated with the PCS (r = -0.47, p < 0.0001) and MCS (r = -0.22, p < 0.0001). The BASFI was more strongly correlated with the PCS (r = -0.65, p < 0.0001) than the MCS (r = -0.15, p < 0.05). ANCOVA models, adjusting for age and sex, were used to evaluate the association between clinical severity and the PRO measures (Table 5). For the FACIT-Fatigue, there were statistically significant differences in mean scores by BASDAI (p < 0.0001), BASDAI fatigue (p < 0.0001), BASDAI pain (p = 0.0002), total back pain (p = 0.013), and BASFI (p < 0.0001). As clinical severity increased, mean FACIT-Fatigue scores decreased (i.e. worse fatigue symptoms). For example, for the BASDAI fatigue item, mean FACIT-Fatigue scores were least for those patients in the most severe group compared with the less severe groups (Table 5). Similar patterns of mean FACIT-Fatigue scores were observed for the other clinical severity measures. Patients who rated their disease activities ≥66 had mean FACIT-Fatigue scores of 20.4 compared with those who rated their disease activities <66 (mean FACIT-Fatigue, 29.5) (p < 0.0001, Figure 1).
We also compared mean PCS and MCS scores by the clinical severity measures (Table 5). For the PCS, there were statistically significant differences in mean scores by BASDAI, BASDAI fatigue, BASDAI pain, total back pain, and BASFI (all p < 0.0001). In all cases, mean PCS scores were worse (i.e. lower indicating impaired physical health status) for those patients reporting greater clinical severities. Mean PCS scores varied significantly by patient's and physician's global assessments of disease severity (p < 0.0001, Figure 2). For the MCS, there were statistically significant differences in mean scores by BASDAI (p = 0.0003), BASDAI fatigue (p < 0.0001), and BASFI (p = 0.028), but not for BASDAI pain (p = 0.289), total back pain (p = 0.076), or the physician's global assessment (p = 0.750). The mean MCS scores were generally better for patients reporting lower clinical severity of symptoms. For example, for BASDAI fatigue, MCS scores were most impaired for the most severe group (mean BASDAI fatigue, 40.2), less impaired for the moderate group (mean BASDAI fatigue, 45.9), and best for the mild group (mean BASDAI fatigue, 48.9). Mean MCS scores varied significantly by patient's global ratings (p = 0.021) but not the physician's global assessment of disease (data not shown).
Responsiveness of patient-reported outcomes
Clinical responsiveness was evaluated by determining the relationships between mean baseline to Week 12 changes in FACIT-Fatigue and SF-36 scores by ASAS response criteria (i.e. non-responders and 20%, 50%, or 70% responders). The ASAS responder groups achieved statistically significant improvements in FACIT-Fatigue scores compared with the nonresponder group (p < 0.0001). Differences in mean baseline to 12-week change scores between the nonresponder group and the responder groups were 9.7 points for ASAS20, 14.6 points for ASAS50, and 15.4 points for ASAS70 (Table 6). FACIT-Fatigue change scores for patients meeting ASAS70 response criteria were fairly similar to those meeting ASAS50 response criteria.
There were statistically significant differences in mean baseline to 12-week changes in PCS scores between the ASAS responder groups (p < 0.0001, Table 6). After 12 weeks of treatment, mean change scores from baseline for SF-36 PCS scores were significantly greater for patients who responded to therapy compared with those who did not respond to therapy (p < 0.001). Changes in SF-36 PCS scores were lowest for ASAS nonresponders and greatest for ASAS70 responders. Mean changes for the ASAS50 responders were almost double the changes for the ASAS20 responders, and mean changes in the ASAS70 responders were more than double the changes in the ASAS20 responders.
There were also statistically significant differences in mean baseline to Week 12 MCS scores across different ASAS responder groups (p < 0.0001, Table 6). The baseline to 12-week change scores for patients achieving the ASAS20 and ASAS70 response criteria were 2.8 and 9.1 points greater, respectively, compared with non-responders (p < 0.0001). The ASAS50 responders actually had mean change scores that were less than those of ASAS20 responders, but greater than those of nonresponders.
We compared the mean baseline to Week 12 changes in the SF-36 subscale scores by ASAS responder status (Table 6). There were statistically significant improvements in all of the SF-36 subscale scores between non-responder and responder groups (all p < 0.0001). For example, for SF-36 physical function scores, we observed a 14.3-point improvement in the ASAS20 responder group, a 16.8-point improvement in the ASAS50 responder group, and a 24.9-point improvement in the ASAS70 responder group, all compared with the non-responder group. These differences in mean change scores were seen across most of the SF-36 subscales, except for mental health and social function, with the greatest effects observed in the ASAS50 and ASAS70 responder groups.
Discussion
Patient-reported outcomes, such as HRQOL, functional status, and fatigue measures, are increasingly used to examine the effectiveness of new therapies for AS [19,23,[39][40][41][42], but there is little documentation of their psychometric properties in this population. The ASQoL is a disease-specific measure of quality of life with evidence supporting its reliability and validity [14,15]. However, recent qualitative research suggests that the ASQoL may not cover all important and frequently mentioned patient concerns about HRQOL [43]. We believe that the addition of the SF-36 and FACIT-Fatigue scales helps provide a more comprehensive assessment of the main health outcomes important to patients with AS.
The SF-36 summary scores and subscales (or domains) were also found to have acceptable reliability and good evidence of validity in this sample of patients with AS. In addition, we found a comparable factor solution for the PCS and MCS using the AS sample. Significant relationships between the SF-36 scores and ASQoL, FACIT-Fatigue, and clinical endpoints were observed. We observed an increase in the correlations among the patient-reported outcomes at the 12-week assessment, and this increase was likely attributable to the more restrictive ranges in patient-reported outcome scores at baseline because of clinical trial entry criteria. Restricted ranges in scores may attenuate the correlation coefficients.
The SF-36 scores, especially those measuring physical function and pain, were responsive to clinical improvements as assessed with the ASAS response criteria. For example, there was a 5.6-point (SD = 6.6) difference in PCS scores between ASAS nonresponders and ASAS20 responders, a 10.6-point (SD = 6.2) difference between nonresponders and ASAS50 responders, and a 13.5-point (SD = 6.1) difference between nonresponders and ASAS70 responders. Less consistent findings were observed for the MCS. ASAS50 and ASAS70 responder groups achieved greater improvements vs. the nonresponders or ASAS20 responders for measures of physical and role function, pain, general health, and vitality. Published clinical trials in AS have found that several SF-36 subscale scores are responsive to treatment effects [11,19,21,[39][40][41][42]. Certainly, 5-point differences or mean changes in PCS or MCS scores are clinically relevant, and lesser changes of 2.5 to 3.0 points are likely to be clinically meaningful.
The AS patients in the current analysis had significant impairment in health status at baseline, consistent with previous studies [4,32]. In the current analysis, mean baseline PCS and MCS scores were 32.6 and 43.4, respectively. The mean scores for these patients with AS are considerably lower than mean scores of the general US population [29], with differences of 1.7 standard deviation units for the PCS and 0.7 standard deviation units for the MCS. The SF-36 subscale scores in the current analysis were also lower than those reported by Dagfinrud and colleagues [4] and Chorus and colleagues [32]. Therefore, the current analysis provides additional evidence of the significant impairment in health status and functioning in AS across multiple domains of physical and role functions, pain, energy, emotional wellbeing, and general health perceptions.
FACIT-Fatigue was also shown to have good reliability and validity in this sample of patients with AS. This measure focuses on fatigue-related problems and concerns and was originally developed to assess fatigue in oncology patients [29,30,44] but has been applied to other chronic diseases, such as rheumatoid arthritis [28,45]. In this AS sample, we found reliabilities exceeding 0.80 and statistically significant relationships between the FACIT-Fatigue and measures of vitality, physical function, role-physical, social function, and clinical severity.
As expected, the FACIT-Fatigue scores were most closely related to similar endpoints, such as the SF-36 vitality score and the BASDAI fatigue item. However, meaningful associations were observed for the other patient-reported and clinical outcomes, supporting the validity of the FACIT-Fatigue. These findings for AS patients support the psychometric qualities of the FACIT-Fatigue for application in clinical studies of other populations with rheumatic diseases, such as patients with rheumatoid arthritis [28,45]. In fact, the mean baseline FACIT-Fatigue scores observed in these patients with AS were much lower than those observed for patients with rheumatoid arthritis (mean, 23.9 vs. 29.2) or the general US population [29]. These results suggest that fatigue should be a focus of attention in the treatment of AS.
FACIT-Fatigue was responsive to changes in clinical status based on the ASAS response criteria. We observed significant improvements in FACIT-Fatigue scores, with the greatest mean changes observed for patients meeting ASAS50 or ASAS70 response criteria. FACIT-Fatigue scores demonstrated a 9.7-point difference in improvement between nonresponders and ASAS20 responders. These differences significantly exceeded the minimum clinically important difference of 3 to 4 points validated for patients with rheumatoid arthritis [28]. For AS, differences of 4 to 5 points in FACIT-Fatigue scores may be clinically meaningful; however, further confirmation is needed.
The results of these psychometric analyses are limited to patients participating in one of these two clinical trials and may not be generalizable to all patients with AS. However, the strengths of this secondary analysis, including comparative clinical measures, a well-defined patient population, and longitudinal data, together provide good evidence supporting the psychometric characteristics of the SF-36 and FACIT-Fatigue for AS patients.
Conclusions
In summary, our analysis provides additional evidence supporting the reliability and validity of the SF-36 and FACIT-Fatigue in patients with AS. The SF-36 has been widely used in rheumatoid arthritis and AS clinical trials, and this analysis demonstrated that this generic health status measure is psychometrically sound and responsive in AS. As AS has broad and extensive impacts on HRQOL, comprehensive measures of patient outcomes are necessary for evaluating the effectiveness of new treatments. The FACIT-Fatigue has not been widely used in AS studies. However, we have provided evidence supporting its validity and, more importantly, its responsiveness. Based on these findings, a PRO battery consisting of the ASQoL, SF-36, and FACIT-Fatigue scale represents a useful, valid, and responsive approach to fully capturing effects of treatment on the health outcomes of AS patients. These PRO data, combined with clinical endpoints, may also assist physicians and their patients in determining the most effective treatments for AS. | 2016-05-12T22:15:10.714Z | 2011-05-22T00:00:00.000 | {
"year": 2011,
"sha1": "98cd4213bc866c96f287c96bc2fb66ee80be982c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/1477-7525-9-36",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3960a43e9cb02ca59f57ec359e416325cc4de7b2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248031219 | pes2o/s2orc | v3-fos-license | Recent Progress in Micro-LED-Based Display Technologies
The demand for high-performance displays is continuously increasing because of their wide range of applications in smart devices (smartphones/watches), augmented reality, virtual reality, and naked-eye 3D projection. High-resolution, transparent, and flexible displays are the main types of display to be used in the future. In this scenario, micro-LED (light-emitting diode) displays, which have outstanding features such as low power consumption, wide color gamut, long lifetime, and short response time, can replace traditional liquid crystal display and organic LED-based display technologies. However, to attain a remarkable position in future display technology, micro-LEDs need to overcome problems associated with mass transfer and their high manufacturing cost. Besides micro-LEDs, another option for future displays is the use of a color conversion medium (phosphor/quantum dots) to convert some of the blue light into other colors. In this review, the various mass transfer display technologies and color conversion strategies being used for the realization of a full-color display are discussed.
Introduction
[3][4] In display technology, the very first cathode ray tube (CRT)-based color television was introduced in the 1940s, leading to the first color broadcast in 1954. The design of a CRT comprises a vacuum tube with an electron gun that generates a guided electron beam, which is directed toward a specially designed screen to emit the primary colors, i.e., red, green, and blue (RGB). [5] Because of their prominent features, such as good response rate and excellent visual depth, CRT-based televisions gained splendid popularity in the display industry for nearly four decades, from the 1950s to the 1990s. In the 2000s, the liquid crystal display (LCD) was successfully introduced as an alternative to CRT technology. [6] LCD is a nonemissive technology in which a backlight unit generates light that propagates through a liquid crystal panel (the polarizer absorbs ≈50% of the incident light), and color filters then convert this white light into red, green, and blue. [7] LCD technology gained significant attention because of its high power efficiency and portability. In CRT technology, portability is one of the major drawbacks. However, LCD technology faced serious issues with color saturation, response time, and conversion efficiency. [8] To improve performance, materials with high color saturation and better response time, together with upgraded backlight units with better conversion efficiency, were used. [9,10] These modifications improved the optical efficiency and color gamut of LCDs; however, two-thirds of the generated light is still wasted. In the 1990s, organic light-emitting diodes (OLEDs) were introduced to display technology. [11] OLEDs are efficient power-saving devices, have a high response rate and broad viewing angle, and do not need an extra backlight unit. [12,13] They have been widely used in smart electronic products such as foldable and curved displays because of their transparency and flexibility. [14] However, OLEDs face the cumbersome issues of shorter device lifetime and inefficient color purity, which limit their application in displays. [15] In short, both technologies (LCDs and OLEDs) have critical issues that need to be addressed for their successful utilization in future smart displays. [16,17] Recently, inorganic material-based light-emitting diodes (LEDs) with lateral dimensions below 100 μm × 100 μm (so-called micro-LEDs) have attracted much attention for display technology [18][19][20] due to their distinguished features, such as lower power consumption (30-40% of LCD), longer life span, quick response time (nanosecond level), and wide RGB color gamut. [21,22] In comparison with existing LCDs, the micro-LED display has a ten times faster response rate, 1.4 times broader color gamut, and 3.5 times higher contrast ratio. [23] Further improvements in the luminescence efficiency of these micro-LEDs were achieved by enhancing the light extraction efficiency; e.g., Choi et al. introduced the concept of micropillar geometry by patterning micropillars on the LED mesa with diameters ranging from 4 to 20 μm. The device with a pillar diameter of 4 μm showed better performance compared with larger diameters. This performance improvement was attributed to strain relaxation (reduction of the piezoelectric field), minimal self-heating (which degrades performance), and proper current spreading across the device. [24,25] Recently, Bower et al.
have demonstrated a variety of prototype passive-matrix and micro-IC-driven active-matrix displays utilizing both lateral and flip-chip micro-LEDs. [26] The micro-LED-based display technology has also attracted significant attention from investors and companies. [27] In 2012, Sony introduced its first micro-LED-based television panel (55 in.), which contained six million micro-LEDs. [23] After that, in 2018, LG successfully presented a large (175 in.) micro-LED-based display. Samsung, also in 2018, launched a 146 in. micro-LED-based television. In the same year, PlayNitride stepped toward micro-LED-based display technology by introducing a 0.89 in. 64 × 64 panel (105 pixels per inch) and another 3.12 in. 256 × 256 panel (116 pixels per inch) full-color display. Another company, X-Celeprint, has also introduced a 5.1 in. micro-LED-based display comprising 8 × 15 μm RGB LEDs at 70 pixels per inch. In 2020, numerous companies, including Lumiode, Jade Bird Display, and Plessey, announced the upcoming launch of their products. [21] Mostly, phosphors have been used as color converters to achieve full-color micro-LEDs. However, recently, the color conversion of micro-LEDs has also been improved using quantum dots (QDs) and nanoparticles. [28,29] Apart from displays, micro-LEDs have been employed to achieve high-speed as well as long-distance visible light communication. [30,31] A sudden efficiency degradation is normally seen when decreasing the dimensions of devices from LEDs to micro-LEDs. In this review, we discuss the problems associated with the epitaxial growth of micro-LEDs that lead to this efficiency degradation and compare the various technologies for the mass production of micro-LEDs. We also discuss phosphor- and QD-based color conversion technologies with their positive and negative aspects for full-color display. Finally, we list some prominent strategies which could be useful to overcome the problems associated with color conversion technology.
Epitaxial Growth and Chip Processing of LEDs
III-V-based LEDs are widely used in a variety of applications. To meet market demand and decrease manufacturing cost, the industry is moving from 2 in. to 4 and 6 in. wafer sizes. [32,33] Sapphire remains the dominant substrate material for the epitaxy of nitride LEDs because it allows lower dislocation densities and the best crystalline quality. [34] But the use of large Si substrates also attracts great interest due to their availability in the market at lower cost. Further, growth on Si could be useful to integrate nitrides with standard silicon electronics. [35,36] However, Si is known to have high chemical reactivity toward other elements; e.g., gallium is reported to form an alloy with Si, which makes GaN-on-Si challenging. [37] Si has also been known to diffuse into GaN, causing n-type conductivity. An additional layer, such as Al or GaAs, has therefore traditionally been used between GaN and Si. [38,39] Another challenge is that opaque silicon leads to around 80% unwanted light absorption. [40] Further, it is also challenging to achieve crack-free GaN-on-Si for efficient micro-LEDs. [41] Normally, the metal-organic chemical vapor deposition technique is utilized for the fabrication of InGaN/GaN-based LEDs. [44] Then, the p-contact layer, which is usually indium tin oxide for better current spreading and to achieve an ohmic contact, is grown over the active region (InGaN/GaN). [45,46] By an etching process, the LED mesa is defined and etched down to the n-doped GaN layer. Finally, the n and p electrodes are deposited utilizing the beam evaporation technique. [47,48] The chip size of conventional LEDs (for traditional solid-state lighting) is of the order of hundreds of microns to a few millimeters. The fabrication processes of micro-LEDs are the same as mentioned above for conventional LEDs, except for the smaller size of the LED mesa. As the size of the LED mesa is reduced to tens of microns, it poses a serious challenge to the LED epitaxy (and leads to more defects). Therefore, the performance of micro-LEDs is more degraded in comparison with conventional LEDs. These defects provide paths for reverse current leakage (surface leakage current) in the device. [49] In conventional LEDs, the external quantum efficiency (EQE) is not affected by these factors because the density of dislocation defects is low (≈10⁸ cm⁻²) compared with micro-LEDs. [50] For successful epitaxial growth of gallium nitride (GaN)-based micro-LEDs, low defect density, single-wavelength light emission, and uniform spreading of the driving current are required. [51] To meet the requirement of large-scale micro-LEDs, a sapphire substrate with a large dimension is also required. A larger substrate dimension leads to higher thermal and lattice mismatch with the epitaxial LED layer. [52,53] The thermal mismatch at the surface can lead to a nonuniform distribution of indium content in the InGaN/GaN multi-quantum well (MQW) wafer. Basically, a temperature difference of 1 °C at the surface can induce a shift in emission wavelength of ≈1.8 nm in the case of blue LEDs and 2.5 nm in the case of green LEDs. [54,55] To address these problems, a number of solutions have been proposed; e.g., Lu et al. reported uniform emission with ≈2 nm deviation by implementing a proper pocket design for 2, 4, and 6 in. wafers. [53] Aida et al. reported GaN on a sapphire substrate with reduced wafer bowing by utilizing laser processing for stress implantation. [56]
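As a quick worked example of the uniformity figures quoted above, a given wafer temperature spread maps directly onto an emission-wavelength spread through the ≈1.8 nm per °C (blue) and ≈2.5 nm per °C (green) sensitivities; a minimal sketch:

```python
# Converts a wafer temperature nonuniformity into the resulting
# emission-wavelength spread, using the sensitivities quoted in the text.
def wavelength_spread(delta_t_c: float, nm_per_degc: float) -> float:
    return delta_t_c * nm_per_degc

for color, sens in (("blue", 1.8), ("green", 2.5)):
    print(f"{color}: +/-{wavelength_spread(3.0, sens):.1f} nm "
          f"for a +/-3 degC wafer temperature spread")
```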
[56] With the decrease in size from LED to micro-LED, both chip fabrication and performance are greatly affected. This is due to the enhanced density of surface defects arising from the sidewall effect; such defects normally cause nonradiative (Shockley-Read-Hall, SRH) recombination. [57]

Figure 1. a) Cross-sectional and magnified TEM image of a pyramidal μ-LED structure; the details of the structure, i.e., n-GaN, MQWs, and p-GaN, are shown in the magnified image with black arrows. b) Comparison of light output power as a function of injection current (inset showing light-emitting images collected at an injection current of 15 μA). Reproduced with permission. [65] Copyright 2015, The Japan Society of Applied Physics. c) Schematic diagram of a μ-LED with sidewall passivation layer deposited by ALD. Reproduced with permission. [66] Copyright 2018, Optical Society of America. d) Effect of chip size on EQE as a function of current density (A/cm²). Reproduced with permission. [49] Copyright 2016, Elsevier publishing.
Due to these SRH recombination centers, the internal quantum efficiency of the device is reported to decrease at low current density, whereas at high current density the effect of current crowding dominates in the device. Such defects can be reduced via thermal annealing.
Generally, InGaN alloys have a lower surface recombination velocity (10²-10⁴ cm s⁻¹) than AlGaInP, and InGaN LEDs therefore have comparatively higher EQE. [58] Sidewall recombination, i.e., SRH nonradiative recombination caused by plasma dry etching, originates from the higher surface-to-volume ratio, and the magnitude of the resulting carrier losses is governed by the surface recombination velocity and the carrier diffusion length. Bulashevich and Karpov found that narrow-bandgap and zinc-blende crystal structures have relatively higher surface recombination velocities than wurtzite crystal structures such as GaN and InGaN. [59] Hence, as the chip size is reduced, the nonradiative recombination at the sidewalls of the active region in AlGaInP-based μ-LEDs is enhanced, which has a detrimental influence on their efficiencies. On the other hand, Li et al. reported nearly size-independent EQE for InGaN-based μ-LEDs due to very low surface recombination. [60] A comparison between chip size and EQE is given in Figure 1d, which shows that decreasing the chip size from 500 μm × 500 μm to 10 μm × 10 μm reduced the EQE from ≈10% to 5%. [49,61,62] Numerous strategies have been reported to solve this problem. [63,64] We include a few of them here; e.g., Chen et al. reported a pyramidal micro-LED in which reverse current leakage was reduced by a factor of two by employing a SiO2 current confinement layer, as shown in Figure 1a. [65] Consequently, the light output power was also enhanced (Figure 1b).
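The size dependence described above can be illustrated with a toy model in which the SRH coefficient acquires a sidewall term proportional to the perimeter-to-area ratio of a square chip (4/L). All parameter values in the sketch below are assumed, order-of-magnitude numbers, not figures from the review; only the trend matters.

```python
# Illustrative sketch (assumed parameters, not from the review): how the
# effective SRH coefficient grows as a square chip shrinks, because the
# sidewall contribution scales with the perimeter-to-area ratio 4/L.

A_BULK = 1e7      # bulk SRH rate, 1/s (typical order for nitride LEDs; assumed)
V_SURF = 1e4      # surface recombination velocity, cm/s (assumed)
B_RAD = 2e-11     # radiative coefficient, cm^3/s (assumed)
N_CARRIER = 5e18  # carrier density in the QWs, 1/cm^3 (assumed)

def iqe(side_um: float) -> float:
    """Crude internal quantum efficiency for a square chip of side `side_um`."""
    side_cm = side_um * 1e-4
    a_eff = A_BULK + V_SURF * 4.0 / side_cm  # bulk + sidewall SRH term
    radiative = B_RAD * N_CARRIER
    return radiative / (a_eff + radiative)

for side in (500, 100, 50, 10):
    print(f"{side:>4} um chip: IQE ~ {iqe(side):.2f}")
```

With these assumed numbers the sidewall term is negligible for a 500 μm chip but dominates below ≈50 μm, qualitatively mirroring the EQE drop in Figure 1d.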
Technology | Transfer yield and rate | Advantages | Disadvantages | Chip size | Company
Laser-based | Rate > 100 M per hour | No impurity transfer onto the substrate surface | Laser source can damage transfer stability | n/a | Uniqarta [78]
Electrostatic | n/a | Flexible to use; perfect repeatability | Electrostatically induced charges can degrade device performance | 1-100 μm | Apple/LuxVue [84]
Fluidic-based assembly | Yield ≈65%; rate ≈50 million per hour | Economical, easy to operate, minimal parasitic effect [21] | Less efficient; high probability of pixel damage during transfer | n/a | n/a
Elastomer stamp (van der Waals) | Yield ≈99.99% | Efficient and economical transfer owing to the stamp's stickiness | Poor repeatability, since stamp adhesion is set by peeling speed; optimized with a magnetorheological stamp [88] | n/a | X-Celeprint [89]
Roll-to-roll (R2R) | Yield 99.99%; rate ≈10 000 per second | Economical, high efficiency, and high throughput | High probability of device damage | < 100 μm | KIMM [90,91]
Assembling Technologies for Mass Production of Micro-LED Displays
Despite its tremendous performance, this technology has faced many obstacles on the way to high-volume commercial production.
[69] Normally, in inorganic LEDs, wafer dimensions are usually 4, 6, and 8 in. [70] Because of this, mass transfer technology is indispensable for large-area micro-LED displays: the LED array must be shifted to a receiver substrate. In parallel with mass transfer technology, pick-and-place, selective release and transfer, and self-assembly have also been utilized for shifting LED arrays onto a substrate. [21,71,72] Each technology has its own advantages and disadvantages and can impact the performance of the display. For example, to realize a 3840 × 2160 × 3 (4K) full-color micro-LED display, even a 99.99% reliable transfer technology leaves nearly 2488 defective pixels after transfer. [63,73] Manufacturing companies have introduced approaches such as electrostatic, [74,75] laser-based, [76][77][78] van der Waals, [3,79,80] and fluidic-based assembly [81][82][83] for efficient transfer. Table 1 compares the different mass transfer technologies in terms of their advantages/disadvantages, transfer yield, and transfer rate.
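The defective-pixel figure quoted above follows directly from the subpixel count; a one-line check:

```python
# Quick check of the figure quoted above: a 3840 x 2160 RGB micro-LED display
# needs 3840*2160*3 transferred subpixels; even a 99.99% transfer yield
# leaves thousands of defects that must be repaired or replaced.

subpixels = 3840 * 2160 * 3          # 24,883,200 transferred chips
transfer_yield = 0.9999              # 99.99% reliable transfer
defective = subpixels * (1 - transfer_yield)
print(f"total subpixels : {subpixels:,}")
print(f"expected defects: {defective:,.0f}")  # ~2,488, matching the text
```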
The electrostatic-based mass transfer technology was introduced by the Apple Co.-owned company "LuxVue" in 2012, which successfully realized a micro-device array using this technology. [84] The transfer of LED chips from a host substrate to a receiving substrate is done with an electrostatically charged transfer stamp (target substrate). On applying a voltage, the electrostatic transfer head array picks up the micro-device array from the host substrate via charge adsorption force. The receiving substrate is then brought into contact with the micro-device array, and the applied voltage on the electrostatic transfer head is removed. The transfer is flexible: because the pitch of the electrostatic head array can be matched to the pitch of the micro-LEDs on the receiving substrate, each component can be transferred selectively. However, the voltage applied to the head during electrostatic induction can produce a charging phenomenon that can damage the micro devices (e.g., micro-LEDs). [21] Therefore, careful control of the voltage is very important during the transfer.
The laser-based transfer technique (laser-induced forward transfer) utilizes laser beams to detach the micro-LEDs from the carrier substrate and shift them toward the receiver substrate. [85] As in the laser lift-off technique, laser irradiation causes ablation at the substrate/LED interface to detach the chip from the carrier substrate and, at the same time, generates a force that pushes the chip toward the receiver substrate. In 2013, Uniqarta introduced massively parallel laser-enabled transfer (MPLET) for large-scale transfer of micro-LEDs. [78] The micro-LEDs were deposited on a glass substrate coated with a dynamic release layer (DRL). When a UV beam irradiates the targeted area of the substrate, bubbles are generated between the substrate and the DRL; under the action of bubble expansion and gravity, the micro-LEDs are shifted toward the receiver substrate with a pitch of 10-300 μm. MPLET uses a diffractive optical element to split a single beam into multiple sub-beams, each of which transfers one micro-LED. Using MPLET, Marinov successfully transferred micro-LEDs with an average placement error of 1.8 μm at a high transfer speed of more than 100 M units h⁻¹. [78]

Rogers' group used an elastomer stamp to develop the original micro-transfer printing (TP) technology. [79,92] In TP, micro-LEDs are transferred from one substrate to another by exploiting the difference in adhesion between the stamp and the substrates. The peeling speed is critical both for picking from the donor substrate and for placing on the receptor substrate: a peeling speed of ≈10 cm s⁻¹ is generally used to pull the stamp away from the donor substrate, which generates an effective adhesive force between the stamp and the micro-LEDs, whereas a peeling speed of ≈1 mm s⁻¹ is usually used to place the micro-LEDs on the receptor substrate (decreasing the peeling speed reduces the adhesive force). [86] The stamp used to pick and transfer the micro-LEDs is made of soft, elastic materials, typically polydimethylsiloxane and shape-memory polymers. [93] Because of its excellent stickiness and flexibility, this approach is well suited to transfer for curved screens and wearable applications. In 2017, Radauscher et al. used this technology to develop active and passive color micro-LED arrays with a high transfer yield of ≈99.99%. [94] There are also reports of poor repeatability, attributed to the dependence of the adhesion force on the peeling speed. [88] To address this problem, Kim et al. in 2019 used magnetorheological stamps and controlled the adhesion force by varying the magnetic field during pick and place. [88] In 2020, Lu et al. optimized stamp transfer using a support vector model. [95] A placement rate of ≈1 M units h⁻¹ was achieved with a 150 mm stamp, which is adequate for small-area or fine-pitch micro-LED displays but not for large-area displays. [86]

Fluidic-based assembly is a cost-effective technology that can be used to transfer large-pitch micro-LEDs. It can easily handle microstructures (e.g., micro-LEDs) over a large-area substrate with high throughput, and its interconnection parasitic effect is low compared with the other available transfer methods.
[21,96] In this technology, the target substrate, micro-LEDs, and transfer components are immersed in a liquid (e.g., isopropanol, acetone, or distilled water). The liquid then serves as the medium that connects the chips to the target substrate electrically and mechanically. Fluidic assembly uses gravity and surface tension as driving forces to move and capture the micro-LEDs on the target substrate. After the chips are positioned on the substrate, the anode and cathode electrodes are bonded to the driver ICs for electrical connection. [27,87] In 1994, Yeh and Smith transferred trapezoidal GaAs LED devices from a growth wafer to a Si substrate using this technology. [82] Later, in 2007, Saeedi et al. used it to transfer micro-LEDs grown on AlGaAs to a flexible substrate with a yield of ≈65%. [97] In 2017, Sasaki et al. reported fluidic assembly for the massively parallel assembly of micro-LEDs. [81] In 2019, Cho et al. demonstrated high-yield fluidic assembly of GaN microchips for display applications, precisely assembling more than 19 000 blue GaN microchips 45 μm in diameter at 99.90% yield within 1 min. [98] Despite these successes, the technology is not yet mature and needs further development (e.g., the low-melting-temperature alloys used can degrade chip performance) before commercial use.
Another transfer approach, roll-to-roll (roll-to-plate), was developed by the Korea Institute of Machinery and Materials (KIMM). [90,91] It can be used to shift micro-LEDs with chip size below 100 μm and thickness below 10 μm, and can provide a high transfer rate of ≈10 000 devices per second for lightweight, flexible, and stretchable displays. The chips are first transferred to a roller, and the rotation between rollers then imprints the chips onto the target substrate. An advantage of this approach is that the same method can pick up both thin-film transistors (electronic components) and LED components and place them on the desired substrate, which increases production speed. [99] However, it cannot selectively transfer micro-LEDs, and precision and reliability are also difficult to guarantee.
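Taking the headline rates quoted above and in Table 1 at face value, one can compare how long each technology would need to populate a single 4K RGB panel. This ignores yield loss, repair, and stamp parallelism, so it is a rough orientation only; all rates are as quoted in the text.

```python
# Back-of-envelope comparison of transfer times for one 4K RGB display
# (~24.9 M subpixels) at each technology's headline rate, as quoted in the
# text/Table 1. Real throughput also depends on yield, repair, and
# parallelism, so these are rough orientation figures only.

CHIPS = 3840 * 2160 * 3  # ~24.9 million subpixels

rates_per_hour = {
    "laser (MPLET, >100 M/h)": 100e6,          # Marinov, quoted above
    "fluidic (~50 M/h)":        50e6,          # Table 1
    "roll-to-roll (10,000/s)":  10_000 * 3600, # Table 1 -> 36 M/h
    "elastomer stamp (~1 M/h)": 1e6,           # Lu et al., quoted above
}

for name, rate in rates_per_hour.items():
    print(f"{name:<28}: {CHIPS / rate:6.1f} h per display")
```

The roughly 25 h per display for a single 150 mm stamp makes clear why stamp transfer suits fine-pitch, small-area products, while R2R and laser approaches target large panels.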
Among the mass transfer technologies discussed above (Table 1), the elastomer stamp (van der Waals interaction) is the most frequently used [68,89,101] because of its unique stickiness, which enables transfer to the receiver substrate with a 99.99% transfer yield. [102] By comparison, laser-based and electrostatic technologies have good repeatability, which greatly reduces their mass transfer cost, whereas poor repeatability was the weak point of the elastomer technology; this can be improved by introducing a magnetorheological (electromagnetically controlled) stamp. In a nutshell, the elastomer stamp (van der Waals interaction) is the most promising technology for small devices. [99] Moreover, replacing the stamp head with an electromagnetic-based head is considered a prominent route to better cost-effectiveness and transfer reliability. However, for the realization of large micro-LED displays, roll-to-roll technology has the advantage of high throughput. The schematic working diagrams of the mass transfer technologies are shown in Figure 2a-d.

Figure 2. Schematic working diagrams of the mass transfer technologies: a) laser-based transfer (reproduced with permission. [85] Copyright 2015, Elsevier publishing), b) fluidic-based assembly, c) elastomer stamp, and d) roll-to-roll/R2R assembly. Reproduced with permission. [100] Copyright 2017, John Wiley and Sons. Reproduced with permission. [1] Copyright 2018, MDPI publishing.
Color Conversion Technology for RGB Display
In a full-color micro-LED display, each pixel contains three primary colors (red, green, and blue) that can be combined to produce the required color. To achieve the different RGB combinations, different biases (currents) are applied to control each red/green/blue LED. In LCD technology, by contrast, each pixel contains color filters and liquid crystals (light switches), and the required color is generated by passing light from the backlight unit through each pixel. [105] However, a critical challenge for researchers is to develop an approach that achieves a full-color micro-LED display from single-color LEDs. [106,107]

The emission wavelength of InGaN/GaN MQW LEDs can be changed from blue to red by varying the indium content. [108] Compared with blue GaN-based LEDs, the luminescence efficiency of green and red LEDs is not only very low [105,109] but also decreases abruptly as the operating current density increases. [110] This is mainly due to the lattice mismatch between the InGaN and GaN layers, which leads to misfit strain. [113] The biaxial compressive strain hinders the incorporation of In atoms into the InGaN lattice, causing the so-called compositional pulling effect. [111,112] Accumulation of large strain in InGaN/GaN QWs can also generate defects, which act as nonradiative recombination centers and reduce the EQE. [113,114] To achieve emission in the green and red spectral regions, one can try to exploit the quantum-confined Stark effect (QCSE) that occurs in c-plane InGaN/GaN QWs. [115] Due to the presence of the built-in electric field in these structures,
the bandgap decreases with increasing QW width, which leads to a redshift of the light emission. However, this effect reduces the probability of interband optical transitions and lowers the EQE because of the reduced electron-hole wave-function overlap. [116,117] Additionally, the QCSE weakens with increasing injection current owing to screening of the built-in electric field by free carriers. [118] The other option is to use GaP/GaAs-based LEDs for red-light emission. [119] However, the difference in LED materials (InGaN/GaN for blue and green, GaP/GaAs for red) leads to differences in their key operating parameters (current, voltage, temperature behavior, and device lifetime). [61,120] Consequently, the performance of the RGB display can be affected. Therefore, for the ideal realization of a micro-LED-based RGB display, the same material system is required for all three primary colors. [121,122] To address this challenge (red InGaN LEDs), different solutions including customized substrates, bandgap engineering, and optimized growth conditions have been reported. [123] For example, using metamorphic InGaN buffer layers or InGaN pseudo-substrates (InGaNOS), which reduce the strain in the InGaN QWs, can lead to more efficient indium incorporation and better material quality in the active layer of the LEDs. [124-128] As a result, efficient light emission in the spectral range from 482 to 617 nm was achieved in InGaN-based LEDs grown on InGaNOS (as shown in Figure 3b). [1,128] More recently, porous GaN pseudo-substrates have been developed, and red InGaN micro-LEDs emitting at 632 nm have been demonstrated. [129,130] Further progress in InGaN pseudo-substrates will push the emission wavelength of InGaN LEDs toward the IR spectral region. [131-134] Looking ahead, one can expect InGaN pseudo-substrates to make it possible to reach terahertz emission from InN/InGaN QWs and close the energy gap in these heterostructures, leading to a topological phase transition. [135-142]
Color Conversion Method
The excitation source (blue/UV micro-LED) and color converters (phosphors/quantum dots) are required for a full-color display. Red, green, and blue color converters are needed with a UV micro-LED excitation source, whereas only red and green converters are required with a blue micro-LED. However, the precise deposition of the color converter on the LED pixels is a challenge. Therefore, different printing technologies, such as aerosol jet, inkjet, and stamp printing, and coating techniques, such as spin-coating, pulse spray, and mist coating, have been attempted for the deposition of color-conversion layers.

Figure 4. SEM images of phosphors with mean particle diameters ranging from 4 to 26 μm. Reproduced with permission. [148] Copyright 2014, The Nonferrous Metals Society of China and Springer-Verlag Berlin Heidelberg.
[145,146] Among them, aerosol jet printing is preferable because it requires neither contact nor a mask, gives very precise deposition, and easily handles viscous inks.
Phosphor-Based Color Conversion
The prominent features of phosphor-based conversion technology are: i) high quantum yield (typically above 80%); ii) high thermal stability (above ≈150 °C); iii) high resistance to moisture (chemical stability); and iv) fast luminescence decay and a very stable emission spectrum under continuous light flux. [149,150] Normally, Ca1−xSrxS:Eu2+, Sr2Si5N8:Eu2+, and CaSiN2:Ce3+ are used to generate phosphor-based red emission; SrGa2S4:Eu2+ and SrSi2O2N2:Eu2+ for green light; and LiCaPO4:Eu2+ and Sr5(PO4)3Cl:Eu2+ for blue light. [150] Another conversion route is also possible, in which phosphor-based conversion from a blue micro-LED to white light is carried out and color filters are then used for the full-color display. In this approach, a portion of the light is absorbed by the color filters, and light scattering leads to crosstalk among the sub-pixels. Hence, an alternative, efficient conversion route is required, and different deposition techniques have been explored to coat the phosphor layer on each pixel. [151,152]

For the successful realization of a micro-LED-based full-color display using a phosphor color-conversion layer, the phosphor particle size is very important. The particle size depends on the preparation method: the solid-state reaction method normally gives particles larger than 5 μm; spray methods, 100 nm to 2 μm; combustion, 500 nm to 2 μm; hydrothermal, 10 nm to 1 μm; sol-gel, 10 nm to 2 μm; and co-precipitation, 10 nm to 1 μm. [147] This means that phosphor particles in the nanosize range can be achieved with these methods, which is useful for color uniformity. On the other hand, the light-conversion efficiency is then reduced, [150] because the conversion efficiency decreases with decreasing particle size.
To overcome this trade-off between light efficiency and color uniformity, Chen et al. introduced small (4 μm) and large (22 μm) phosphor particles together, in a 3:2 ratio, in the color-conversion layer [148] (Figure 4). In summary, color uniformity can be enhanced by a phosphor-based color-conversion layer with small (nano-sized) particles because of reduced light scattering; however, using such particles effectively reduces the light-conversion efficiency and degrades the performance of a micro-LED-based full-color display. It is therefore necessary to explore other conversion materials to replace phosphors. With their prominent features of small size and high light-conversion efficiency, QDs are promising candidates for future display applications.
QD-Based Color Conversion
QDs are compound semiconductor nanomaterials, mostly from the III-V (InP), II-VI (CdSe), or I-III-VI (CuInS2) families. [153,154] They are normally synthesized by chemical solution-based methods. [155] QDs are a better replacement for phosphors as a conversion layer because of their tunable optical properties, color purity (narrow emission), shorter emission lifetime, high quantum yield, and strong absorption in the visible region. The emission wavelength of QDs can be controlled by the size and composition of the QD particles. [153,156,157] Approximately, a CdSe QD with a diameter of 2 nm emits blue light, whereas an 8 nm QD emits red. Similarly, CdTe (with an energy bandgap of 1.5 eV) can emit at a longer wavelength (≈827 nm). [154,158,159] Numerous advanced techniques have been proposed for patterning QDs on micro-LEDs, e.g., photolithography, [160] electron-beam lithography (EBL), [161] jet printing, [143] and 3D printing. [162] In selecting a QD patterning method, parameters such as the resolution, throughput, and defect tolerance of the target application (display) should be kept in mind. For example, a 55 in. TV with 4K resolution uses 10 μm × 10 μm micro-LED chips with ≈3 μm RGB subpixels; for 8K resolution, the subpixel size is below ≈3 μm. With EBL, QD patterns can be scaled down to sub-μm sizes, which is sufficient for micro-LEDs, [23] and Manfrinato et al. reported nm-resolution patterning using EBL. [163] By photolithography, QD patterns ≈2-5 μm in width and ≈50 μm in thickness have been achieved, [160] while Richner et al. achieved QD features of ≈250 nm using printing technology. [164] A comparison of the various patterning techniques is given in Table 2. Consequently, any of the above-mentioned techniques can in principle be used to deposit QDs for micrometer-sized LEDs. Nevertheless, each technique has its own limitations. For example, transfer printing needs a template, which complicates the procedure, and inkjet printing has its own problems, such as the coffee-ring effect, nonuniform film thickness, and rough surfaces of the printed film.
Similarly, in photolithography and EBL, QD films are easily damaged, which directly impacts the production cost.
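As a quick cross-check of the bandgap-to-wavelength figures quoted above, the Planck relation λ(nm) ≈ 1239.84/E(eV) reproduces the CdTe value; the blue and red wavelengths used below are illustrative assumptions, not values from the review.

```python
# Cross-check of the bandgap <-> emission-wavelength figures quoted above,
# using the Planck relation lambda(nm) ~ 1239.84 / E(eV). The CdTe bandgap
# of 1.5 eV quoted in the text corresponds to ~827 nm, as stated.

HC_EV_NM = 1239.84  # h*c in eV*nm

def ev_to_nm(e_ev: float) -> float:
    return HC_EV_NM / e_ev

def nm_to_ev(lam_nm: float) -> float:
    return HC_EV_NM / lam_nm

print(f"CdTe, Eg = 1.5 eV -> {ev_to_nm(1.5):.0f} nm")  # ~827 nm (matches text)
print(f"blue 465 nm       -> {nm_to_ev(465):.2f} eV")  # illustrative values
print(f"red  630 nm       -> {nm_to_ev(630):.2f} eV")
```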
Table 3. Requirements for micro-LEDs in different applications. [1]

Comparatively, the printing technologies can so far meet most market demands, because display pixel sizes are still above a micrometer. Therefore, from the perspective of simple and rapid fabrication steps, printing techniques are likely to become mainstream in future industry.
Light from the excitation source (blue/UV LED) is guided through a light-guide plate, which incorporates a diffuser/distributed Bragg reflector (DBR) for uniform light distribution across the display area. The light is then converted into the primary colors (RGB) with a wide color gamut after passing through the color-conversion layer. Three geometries are normally used for depositing the QD color-conversion material (shown in Figure 5a): i) on-chip, in which the QDs are deposited on the LED; ii) on the edge, in which the QD material is deposited at the edge of the light-guide plate; and iii) on the surface, in which a thin QD layer covers the whole surface of the light-guide plate, known as a quantum-dot enhancement film (QDEF). [17,165,166] On the application side, the requirements for micro-LEDs vary with chip size, panel size, and pixels per inch (PPI); details for several applications are given in Table 3.
For color conversion, MQW structures (from the II-VI and III-V groups) can also be used with micro-LEDs. [167] In 2015, Santos et al. demonstrated color conversion by an MQW structure, using a 450 nm micro-LED to excite a
ZnCdSe/ZnCdMgSe MQW structure, successfully realizing a hybrid micro-LED emitting 540 nm light. [168] In an early demonstration, Han et al. reported an RGB QD-based micro-LED display using a UV LED (395 nm emission, 35 μm × 35 μm chip size) as the excitation source. [143] Aerosol spray technology was used to deposit the RGB QDs on top of the excitation source. The deposition comprised three major steps: first, dispensing a specific volume of QD particles, regulated by the applied voltage/gas pressure; then placing the QD particles at the required positions; and finally spreading and drying them over the surface. The structural geometry of the proposed device is shown in Figure 6a. A DBR was used to recycle the UV light for color conversion, which improved the luminous efficiency by ≈194% for blue, ≈173% for green, and ≈183% for red light (the relative PL intensity with and without the DBR is shown in Figure 6b). Moreover, the color gamut was 1.52 times the standard NTSC gamut (shown in Figure 6c). However, optical crosstalk remained a problem to be addressed.
In 2017, Lin et al. presented an approach to decrease optical crosstalk by introducing a photoresist (PR) window prepared by a lithographic technique. [169] The PR window acts as a light-blocking wall between pixels, as shown in Figure 7a. Consequently, optical crosstalk was significantly reduced compared with the conventional structure (without a PR window), as is evident from Figure 7b. They also used a DBR to recycle the otherwise wasted light. As a result of both the PR window and the DBR, the light-emission efficiency of the QDs was improved by ≈23%, ≈32%, and ≈5% for red, green, and blue light, respectively.
Gou et al. simulated micro-LED display structures incorporating a funnel-tube arrangement. [170] With a funnel tube, the phosphor remains confined within each pixel because it is deposited on the top surface of the LED, with the tube aligned to each sub-pixel. The internal region of the tube can be absorptive or reflective. For white-light conversion, the phosphor was placed inside the tube, and RGB color filters on the top surface separated the different colors. A schematic of the arrangement is shown in Figure 7c. Crosstalk was found to be effectively reduced in this arrangement.
In addition to crosstalk, the coffee ring is another problem that affects the luminescence uniformity of QD-based micro-displays. [171] It arises from the formation of ring-shaped deposits at the droplet edges, induced by the differential evaporation rate between the edge and the interior of the QD droplet. This must be addressed to improve the thickness uniformity of the QD color-conversion layer.
The coffee-ring problem can be mitigated by controlling the concentration of QD particles, the dwell time and flow rate of the gas,
the applied voltage, and the size of the QD-ejecting nozzle. The PR window, applied above for the reduction of optical crosstalk, is also effective in minimizing the ring effect: with a resist window, the QD solution within each window can dry uniformly. Figure 8 compares the two cases, with and without a PR window. The coffee-ring effect has also been addressed by adding (hydro-soluble) polymers, which enhance the viscosity and the Marangoni effect. [171,172] In other reports, Liu et al. controlled key parameters, namely the viscosity, three-phase contact line, and contact angle, by using a composite QD ink, and consequently obtained a QD film with uniform thickness. Similarly, Sun et al. tuned the viscosity, surface tension, and evaporation rate of a perovskite (CsPbBr3) QD ink by changing the volumes of dodecane and toluene (solvents) to create a suitable Marangoni flow, obtaining uniform perovskite microarrays. [173,174]
Another problematic issue in QD-based micro-LED displays is leakage of blue light from the excitation source. Strategies such as a DBR or a color filter (for the primary RGB colors) have been used to solve this problem. [169,175] However, both DBRs and color filters add fabrication complexity and increase the overall fabrication cost. Blue-light leakage can also be reduced by a simpler approach, e.g., by adjusting the concentration of the QDs. In the literature, the relation between absorption and QD concentration is represented by Beer's law [176]

A = εCt

where A is the absorbance, ε is the extinction coefficient, C is the QD concentration, and t is the optical path length. It has been reported that green and red QD films ≈5 μm thick can absorb nearly 99% of the blue light. [177] Similarly, Lee et al. fabricated an ≈10 μm thick InP/ZnS QD film by inkjet printing [178] to achieve >95% absorption of blue light; their film showed good stability under high humidity (95% relative humidity) and temperature (65 °C).
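The Beer's-law relation above can be turned into a simple thickness estimate. In the sketch below, the product εC is calibrated from the quoted data point (a ≈5 μm film absorbing ≈99% of blue light); treating A as a decadic absorbance is our assumption, and the outputs are estimates, not reported values.

```python
# Sketch applying the Beer's-law relation A = eps*C*t quoted above. The
# product eps*C is calibrated from the text's data point (a ~5 um film
# absorbing ~99% of blue light, i.e., decadic absorbance A = 2), then used
# to estimate thicknesses for other absorption targets. Assumption: A is a
# decadic absorbance and the film behaves ideally (no scattering).

import math

def eps_c_per_um(absorbed_fraction: float, thickness_um: float) -> float:
    """eps*C (1/um) from one measured film: A = -log10(1 - absorbed)."""
    a = -math.log10(1.0 - absorbed_fraction)
    return a / thickness_um

def thickness_for(absorbed_target: float, eps_c: float) -> float:
    """Film thickness (um) needed to absorb `absorbed_target` of the pump."""
    return -math.log10(1.0 - absorbed_target) / eps_c

eps_c = eps_c_per_um(0.99, 5.0)  # ~0.4 per um for the quoted film
print(f"eps*C ~ {eps_c:.2f} /um")
print(f"t for 95%   absorption: {thickness_for(0.95, eps_c):.1f} um")
print(f"t for 99.9% absorption: {thickness_for(0.999, eps_c):.1f} um")
```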
Many reports address performance upgrades of micro-LED displays. For example, Chu et al. reported an RGB micro-LED display using CdSe/ZnS QDs as the color-conversion material and GaN-based blue micro-LEDs as the excitation source. [179] To improve the color contrast ratio, they confined the light of each QD with a black matrix (which transmits almost nothing in the visible region), as shown in Figure 9a. A hybrid Bragg reflector (HBR) and a DBR were deposited on the back and top sides, respectively, to improve the reflection of light from the substrate side and the color purity. As a result, the light output intensities and the color purity of both the green and red QDs were improved (Figure 9b-d). In the traditional design of a QD-based micro-LED display, the QD layer is deposited on top of the encapsulation layer. In this architecture, a portion of the light is trapped between the encapsulation and QD layers by total internal reflection, which reduces the radiative coupling between the MQWs and the QDs and directly impacts display efficiency. Various strategies have been reported to improve the radiative coupling efficiency, e.g., surface roughening of the LEDs and etched nanostructures such as photonic crystals. [180,181]
Krishnan et al. developed a hybrid photonic quasi-crystal LED to increase the radiative coupling between the MQWs and the QDs; in this device geometry, the QD emitters are placed in close proximity to the active region (Figure 10). [182] As a result, the color-conversion quantum yield of a single QD (monochromatic conversion) was enhanced by ≈123%, and for the QD-based white display it was increased by ≈110%. From the color-conversion efficiency (CCE), the energy transfer from the MQW (active region) to the QDs via both radiative and nonradiative (resonant energy transfer) pathways can be estimated. The CCE relates the QD emission intensity from the hybridized structure to the MQW emission quenched by the QDs, [183] and can be written as

CCE = I_QD(hybrid) / [I_MQW(bare) - I_MQW(hybrid)]

where the denominator is the difference between the MQW luminescence intensity without QDs (bare structure) and with QDs (hybrid structure).
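A hypothetical worked example of this definition (all intensity values below are invented for illustration, not measured data):

```python
# Hypothetical worked example of the CCE definition above. The intensities
# are made-up, spectrally integrated values, not data from ref. [182].

i_mqw_bare = 100.0  # MQW emission without QDs
i_mqw_hyb = 40.0    # MQW emission after QD hybridization (quenched)
i_qd_hyb = 45.0     # QD emission from the hybrid structure

cce = i_qd_hyb / (i_mqw_bare - i_mqw_hyb)
print(f"CCE = {cce:.2f}")  # 0.75: 75% of the quenched MQW emission reappears as QD light
```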
[186] In 2017, Wang et al. proposed a nanoring (NR) structure, instead of nanohole/nanorod structures, to enhance Förster resonance energy transfer (FRET) (shown in Figure 11a). [187] In nanohole and nanorod structures only one sidewall, either inner or outer, contacts the QDs; in the NR structure both sidewalls can contact the QDs, which can effectively enhance the CCE. They fabricated NR-LEDs with sidewall widths ranging from 40 to 120 nm using nanosphere lithography. With reduced width, the strain induced in the active region by lattice mismatch was relaxed, as confirmed by photoluminescence (PL) and time-resolved PL measurements. Further, they observed a considerable blue shift (from green to blue), whose magnitude increased as the sidewall thickness decreased (Figure 11b). Hence, the NR is a promising structure for realizing nanometer-scale RGB LEDs for display applications. In 2019, Chen et al. proposed a feasible route to RGB micro-LED displays using the NR structure. [188] They fabricated NRs on a micro-LED wafer emitting in the green regime and then exploited strain relaxation, with its accompanying blue shift, to tune the emission wavelength from green to blue. Moreover, the PL intensity was enhanced by ≈143% by depositing Al2O3 (a passivation layer) on the sidewalls of the NR micro-LEDs. [66] Finally, red QDs were deposited on the NR-LED. As shown in Figure 11c, they achieved nonradiative resonant energy transfer (NRET) of ≈53% without the passivation layer and ≈66% with it in the QD-based NR micro-LED. NRET plays a very important role because it is strongly correlated with the exciton recombination lifetime. [189] Hence, they presented a strategy for displays with a wider color gamut (overlapping ≈104% of the National Television Standards Committee (NTSC) gamut and ≈78% of Rec. 2020), as shown in Figure 11e.
The light absorption and emission properties of QDs also need improvement, because dried QD films suffer serious challenges of color purity and low quantum yield (QY) and are hard to pattern into stable structures. [190-194] Kang et al. introduced nanoporous (NP) GaN to improve the light absorption, as shown in Figure 12a. [195] They varied the porosity of the NP GaN (25%, 55%, and 75%) and studied its effect on light scattering and the transport mean free path (TMFP). Normally, a short TMFP is desirable because it promotes multiple light scattering, which lengthens the optical path. The diffusion equation was used to estimate the light scattering and TMFP. [199,200] The
TMFP can be evaluated from the diffuse transmittance

T_d = (l_s + z_e) / (L + 2 z_e)

where T_d, z_e, L, and l_s are the diffusion transmittance, extrapolation length, total thickness of the (scattering) medium, and mean free length of the scattering path, respectively; details are provided in refs. [197,201]. In NP GaN with 75% porosity, the light extinction coefficient increased 11-fold at 370 nm due to multiple scattering. As a result, the CCE of green and red light was enhanced by 96% and 100%, respectively (Figure 12b). Kang and Han also reported another strategy in which QDs were embedded in NP GaN composed of both vertical and horizontal nanopores (scanning electron microscope (SEM) images are shown in Figure 12d). [196] The light absorbance of QDs on NP GaN was much higher than on a planar (non-porous) surface owing to the increased optical path length, since the nanopores effectively scatter the light in the perpendicular direction. [202] In this structure (with both vertical and horizontal nanopores), most of the light at low incident angles is scattered by the horizontal nanopores, making the structure promising for attaining efficient light conversion. Recently, in 2020, Mei et al. introduced a facile route, the sacrificial-layer-assisted patterning (SLAP) approach, as an alternative to the previous photolithographic techniques for fabricating QD-based LED displays. [203] In the photolithographic technique, both the device performance and the purity of the emission color suffer severely, whereas in SLAP the presence of a sacrificial layer (SL) enhances the contact and stability of the QDs. Using both a negative PR layer and an SL for QD deposition (Figure 13a), they reported a 500 pixels-per-inch prototype of a full-color QD-based LED. With SLAP they improved the color purity of the light and achieved a wider color gamut, ≈114% of NTSC (Figure 13d). This methodology can be applied to the mass production of high-resolution displays at reduced cost; to date, this is the first report of a high-resolution (500 ppi) full-color QD-based LED display.
In the photolithography process, the difficulty in fabricating a full-color (QD-based LED) display mainly comes from the interface
(QD/substrate or QD/PR) in such direct approaches, because the interface requires the two materials to have compatible characteristics. In SLAP (an indirect approach), the critical interface is the PR/sacrificial layer (SL), which can be managed through careful selection of the SL material; notably, the SL material need not be photo-responsive. SLAP can be combined with various other patterning technologies, i.e., photolithography, transfer printing, laser writing, and so on, to attain easy QD patterning. Hence, this innovative patterning technique may open a new horizon for future display technologies and lead to innovative change in the display industry.
Conclusion and Future Directions
This article reviewed the recent development of display technologies ranging from LCD to RGB full-color LED-based displays. Since the beginning of the 21st century, high-performance displays have played a key role, from the generation of internet-enabled computers to smartphones and today's IoT (Internet of Things) devices. To meet market demand, cost-effective, efficient, and environmentally friendly smart display products with high picture quality are needed. Several emerging technologies have been developed to replace traditional LCDs, including OLEDs and micro-LEDs. Among the established technologies, OLED-based products have been successfully commercialized and are widely used in mobile phone displays; however, it is hard to attain high resolution (more than 600 ppi) using evaporation/inkjet printing techniques. Micro-LEDs, on the other hand, have an exceptionally high manufacturing cost because of the low yield of chip transfer, and their efficiency degrades considerably as the chip shrinks from LED to micro-LED dimensions. We reviewed several technologies with remedies to overcome these obstacles to realizing full-color displays. Integration of micro-LEDs of various emission colors and materials on the same substrate through mass transfer and bonding is the main pathway toward future RGB full-color micro-LED displays; we therefore compared various mass transfer technologies in terms of their performance. Overall, the yield of mass transfer needs to be improved while its cost decreases. Besides mass transfer technology, the other option is color-conversion technology, where no separate wafer is needed for each color: only an excitation source (i.e., blue/
UV) and color-conversion layers for multi-color (RGB) emission are needed. QDs are the most prominent medium for color conversion, and their deposition methodology is relatively mature, although their CCE is not yet optimal and degrades under illumination. In this review article, we discussed the key features required for QD-based color conversion, including i) structural geometries for the deposition of QDs, ii) the utilization of DBRs and HBRs, iii) PR layers to minimize optical crosstalk and also reduce the coffee-ring effect, iv) structures (nanorings, nanorods, and nanoholes) for efficient light coupling from the active region to the QDs, and v) the development of new methodology (SLAP) compared with simple photolithography. In a nutshell, various strategies were reviewed toward the best solution for RGB full-color displays by LED integration. It is logical to predict breakthroughs in the coming years; we therefore believe that micro-LEDs will serve as the future display technology because of their lifetime, low energy consumption, high resolution, and wide, high-purity color gamut.
Figure 3 .
Figure 3. a) General mechanism of an RGB-based micro-LED full-color display. b) Illustration of InGaNOS substrate and PL spectra of InGaN-based LEDs grown on InGaNOS. Reproduced with permission. [1] Copyright 2018, MDPI publishing.
Figure 5 .
Figure 5. a) Schematic diagram of the various geometries of QD color converters: on-chip (top), on edge (middle), and on the surface (bottom).b) A typical LCD system with QDEF (QD on the surface of the light guided plate).
Figure 6 .
Figure 6.a) Flow process of full color QD-based μ-LED display.b) Comparison of relative PL intensity with and without DBR.c) The CIE 1976 color space chromaticity diagram of the QD display technology and NTSC.Reproduced with permission.[143]Copyright 2015, Optical Society of America.
Figure 7 .
Figure 7. a) The flow process used for the reduction of optical crosstalk in the full-color micro-LED display. b) Microscopic images comparing the effect of the PR mold on optical crosstalk. Reproduced with permission. [169] Copyright 2017, Chinese Laser Press. c) Schematic diagram of a full-color micro-LED display with funnel-tube array. Reproduced with permission. [170] Copyright 2019, MDPI publishing.
Figure 8 .
Figure 8. a) Mechanism of coffee ring effect (deposition at edges during evaporation) with/without PR mold.b) Comparison of the optical microscopic image with or without PR mold.Reproduced with permission.[169]Copyright 2017, Chinese Laser Press.
Figure 9 .
Figure 9. a) Schematic diagram of RGB micro-LED with HBR and DBR.b,c) EL spectra of green and red QD-based micro-LEDs.d) International commission on illumination (CIE) chromaticity coordinates of device structures with and without HBR and DBR.Reproduced with permission.[179]Copyright 2020, MDPI publishing.
Figure 10 .
Figure 10.a) Systematic diagram of photonic quasi-crystal LED hybridized with QD color converter.b) SEM image of cross-sectional area.Reproduced with permission.[182]Copyright 2016, Optical Society of America.
The intensity of the emitted light is denoted by I. Moreover, the efficiency of color conversion, i.e., the effective quantum yield (EQY), can be estimated by taking the ratio of the photons emitted at the QD wavelength (from the hybrid structure) to the quenched photons emitted at the MQW wavelength (after hybridization of the structure).
Figure 11 .
Figure 11.a) SEM images of NR LEDs with different wall widths(120, 80, and 40 nm).b) PL spectra with an excitation power of 10 mW.Reproduced with permission.[187]Copyright 2017, The Authors, published by Springer Nature.c) PL spectra of NR-LEDs with and without ALD passivation layer.d) EL spectra of RGB hybrid QD-based NR micro-LED.e) Color gamut of RGB hybrid QD-based NR micro-LED, NTSC and Rec.2020.Reproduced with permission.[188]Copyright 2019, Chinese Laser Press.
Figure 12 .
Figure 12. a) Schematic diagram of QD-based micro-LED (NP GaN embedded with QDs).b) PL spectra of green and red NP QD LED.c) Visualization of light scattering by pristine GaN and NP GaN.Reproduced with permission.[195]Copyright 2020, American Chemical Society.d) Cross-sectional SEM images of vertical and hybrid NP GaN electrochemically etched at 22 V. Reproduced with permission.[196]Copyright 2019, The Society for Information Display.
Figure 13 .
Figure 13. a) Illustration of patterning QDs of different colors via the photolithographic technique. b) PL image of patterned RGB QDs. c) Emission spectra and d) color gamut of RGB subpixels. Reproduced with permission. [203] Copyright 2020, American Chemical Society.
Table 1 .
Comparison of various mass transfer technologies. [63] Details on the performance (transfer rate) of each mass transfer technology are given in refs. [27,86] and chip sizes in ref. [63].
Table 2 .
Comparison of different patterning techniques for QD deposition | 2022-04-09T15:19:37.844Z | 2022-04-07T00:00:00.000 | {
"year": 2022,
"sha1": "f9a17ae291b5d7a09633766b0212119df6414ac0",
"oa_license": "CCBY",
"oa_url": "https://openresearch.lsbu.ac.uk/download/b300ddda8aab9ef952ca8a6a3b54c017b33e5a80beb2b99d3036dcfec8fd6976/6730626/Laser%20%20%20Photonics%20Reviews%20-%202022%20-%20Anwar%20-%20Recent%20Progress%20in%20Micro%E2%80%90LED%E2%80%90Based%20Display%20Technologies.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7aa2c1d149f7f0cd84c3475ee6fb2cd1fd7a55d8",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
24722857 | pes2o/s2orc | v3-fos-license | Viral infection in adults hospitalized with community-acquired pneumonia: prevalence, pathogens, and presentation.
Background The potential role of respiratory viruses in the natural history of community-acquired pneumonia (CAP) in adults has not been well described since the advent of nucleic amplification tests (NATs). Methods From 2004 to 2006, adults with CAP who were admitted to five hospitals were prospectively enrolled in the study, and clinical data, cultures, serology, and nasopharyngeal swabs were obtained. NATs from swabs were tested for influenza, human metapneumovirus (hMPV), respiratory syncytial virus (RSV), rhinovirus, parainfluenza virus 1–4, coronaviruses (OC43, 229E, and NL63), and adenovirus. Results A total of 193 patients were included; the median age was 71 years, 51% of patients were male, and 47% of patients had severe CAP. Overall, 75 patients (39%) had a pathogen identified. Of these pathogens, 29 were viruses (15%), 38 were bacteria (20%), 8 were mixed (4%), and the rest were “unknown.” Influenza (n = 7), hMPV (n = 7), and RSV (n = 5) accounted for most viral infections; other infections included rhinovirus (n = 4), parainfluenza (n = 3), coronavirus (n = 4), and adenovirus (n = 2). Streptococcus pneumoniae was the most common bacterial infection (37%). Compared with bacterial infection, patients with viral infection were older (76 vs 64 years, respectively; p = 0.01), were more likely to have cardiac disease (66% vs 32%, respectively; p = 0.006), and were more frail (eg, 48% with limited ambulation vs 21% of bacterial infections; p = 0.02). There were few clinically meaningful differences in presentation and no differences in outcomes according to the presence or absence of viral infection. Conclusions Viral infections are common in adults with pneumonia. Easily transmissible viruses such as influenza, hMPV, and RSV were the most common, raising concerns about infection control. Routine testing for respiratory viruses may be warranted for adults who have been hospitalized with pneumonia.
Community-acquired pneumonia (CAP) is one of the most clinically important diseases in adults, affecting 5 to 20 per 1,000 adults per year.1 Of these, at least 20 to 40% will require hospitalization for the treatment of their pneumonia. CAP management guidelines3 have been influenced by older CAP etiology studies, which helped to direct empiric antimicrobial choices for therapy against bacterial pathogens such as Streptococcus pneumoniae, Haemophilus influenzae, and "atypical" bacteria, including Chlamydophila pneumoniae, Mycoplasma pneumoniae, and Legionella pneumophila. Although CAP guidelines3 acknowledge respiratory viruses as a "cause" of pneumonia, few recommendations are made regarding management, largely due to the paucity of data regarding prevalence, clinical presentation, and outcomes. Furthermore, viral etiology studies in pneumonia are difficult to interpret, as noninvasive viral detection methods are often considered to be only markers of infection rather than the cause of pneumonia. Clearly, much better knowledge of the potential role of respiratory viruses in patients with pneumonia is needed.
Most published studies of respiratory viruses have relied on tests with relatively poor sensitivity, such as serology and direct fluorescent antigen (DFA) tests. Such tests are limited in the sample types to which they can be applied and are not suitable for a broad range of respiratory viruses. More recently, the introduction of highly sensitive nucleic acid amplification tests (NATs) has dramatically improved our ability to detect multiple viral pathogens such as influenza, respiratory syncytial virus (RSV), rhinovirus, parainfluenza, and adenovirus. Such tests can be undertaken using a single small sample of respiratory secretions, with results available within rapid turnaround times.7-12 In addition, these tests have allowed us to detect emerging respiratory viruses such as human metapneumovirus (hMPV) and coronaviruses, viruses that are difficult to grow in cell culture.13-15 To date, there have been few studies5,7,9-11,16,17 of patients with pneumonia using NATs to detect viral infection, and these studies have either not included clinical data7,9,11 or have not tested for all potentially important respiratory viruses in a comprehensive manner.
Better knowledge of the role of infection with respiratory viruses in adults with pneumonia may lead to better management. Thus, we performed a prospective study in consecutive adults who had been admitted to the hospital with CAP, and sought to describe their pathogens, clinical presentation, and outcomes.

*From the Department of Medicine (Drs. Johnstone, Majumdar, and Marrie), Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada; and the Department of Microbiology (Dr. Fox), Provincial Laboratory for Public Health, Calgary, AB, Canada. Dr. Majumdar was supported by the Alberta Heritage Foundation for Medical Research (Health Scholar) and the Canadian Institutes of Health Research (New Investigator). This project was funded in part by an establishment grant (to Dr. Marrie) from the Alberta Heritage Foundation for Medical Research. Funding sources had no role in study design, data collection, data analysis or interpretation, or writing of the report. All authors participated in the study conception, design, analysis, interpretation of results, and revision of the manuscript, and approved the final version of the manuscript. Dr. Johnstone drafted the initial manuscript. Dr. Marrie acquired the data, obtained funding for the study, and will act as guarantor. The authors have reported to the ACCP that no significant conflicts of interest exist with any companies/organizations whose products or services may be discussed in this article.
MATERIALS AND METHODS
From January 2004 to January 2006, consecutive adults (≥18 years of age) who had been admitted to five hospitals in Edmonton, AB, Canada, with CAP were enrolled in a prospective study of pneumonia. Patients were excluded from the study if they had received antibiotics or been hospitalized within the prior 2 weeks, were unable or unwilling to provide informed consent, or had any of the following conditions: immunocompromise (ie, receipt of > 10 mg of prednisone per day for > 1 month, other immunosuppressives, cancer with recent chemotherapy, or HIV with a CD4 count of < 250 cells/μL), tuberculosis, bronchiectasis, cystic fibrosis, or pregnancy. All patients gave written informed consent, and the Health Research Ethics Board of the University of Alberta approved the study. We did not record data on patients who were unable to provide consent or who did not meet the enrollment criteria.
Data Collection
Pneumonia was defined as an acute lower respiratory tract illness with two or more of the following symptoms or signs: cough; productive cough; fever; chills; dyspnea; pleuritic chest pain; crackles; and bronchial breathing, plus an opacity or infiltrate seen on a chest radiograph that was interpreted as pneumonia by the treating physician. To characterize the severity of the pneumonia itself, we calculated the pneumonia severity index (PSI) using the methods of Fine et al.18 Clinical, radiographic, and laboratory data and short-term outcomes were collected by a trained research nurse; the nurse was masked to microbiology results at the time of data collection. Patients were followed up throughout their hospital stay until discharge.
Diagnostic Tests Undertaken
Routine blood cultures, sputum specimens, nasopharyngeal (NP) swabs, and serum samples were processed for each patient according to the study protocol. NP swabs submitted for the detection of viral pathogens first underwent DFA testing for influenza A and B, RSV, and parainfluenza virus 1-3 (Imagen; Dakocytomation Ltd; Ely, UK). In addition, expanded testing of NP samples was undertaken for a range of respiratory pathogens by NATs, using extraction and amplification methods that have been described previously.11 Briefly, NATs were designed to amplify and detect influenza A and B, hMPV, RSV, rhinovirus, parainfluenza 1-4, coronaviruses (OC43, 229E, and NL63), and adenoviruses. All the NATs utilized in this study have been published, and the assay parameters evaluated.11,12,19,20 Laboratory validation of these assays confirmed a limit of detection of ≤ 100 copies (cloned target or synthetic RNA) or one or fewer 50% tissue culture infectious doses (for culturable viruses). The specificity of all assays was confirmed using samples and spiked materials containing high loads of alternative respiratory pathogens. (Further details on viral NATs are available from J.D.F. on request [also see references 11, 12, 19, and 20].) Bacterial infections were identified using standard laboratory protocols. Acute and convalescent serum samples were collected on the day of hospital admission and 4 to 6 weeks later. Serum samples were tested for the presence of C pneumoniae and Chlamydia psittaci IgM and IgG by a microimmunofluorescence assay,21 M pneumoniae IgM by enzyme immunoassay (Platelia; BioRad; Hercules, CA),22 Coxiella burnetii phase I and phase II titers by indirect immunofluorescence,23 and L pneumophila titers by indirect immunofluorescence.24 M pneumoniae and L pneumophila were also tested using NATs, validated as above but with a limit of detection of ≤ 100 copies (cloned target or synthetic RNA) or ≤ 1 CFU.11
Criteria to Establish Presence of Respiratory Pathogens
A diagnosis of respiratory viral infection was made if a virus was detected by NAT or DFA and a coexisting bacterial pathogen was not identified. A diagnosis of bacterial infection was made if a viral pathogen was not detected and the following criteria were met: (1) isolation of a respiratory pathogen from purulent sputum (defined as an adequate-quality sputum sample with > 25 leukocytes and < 10 epithelial cells per ×100 magnification field) or blood culture1,2; (2) a fourfold rise in IgG titers for C pneumoniae (> 1:32) and C psittaci (> 1:32)1; (3) a single increased IgM titer for M pneumoniae (> 1:64) or C pneumoniae (> 1:16)1; (4) an antibody titer of > 1:1,024 to L pneumophila in a serum specimen obtained during either the acute or convalescent phase1; (5) a fourfold rise in antibody titer to > 1:128 or a fourfold rise in antibodies to C burnetii1; (6) a single titer of > 1:128 to a phase II C burnetii antigen1; or (7) the detection of M pneumoniae or L pneumophila by NAT.11 A mixed infection was defined as the presence of both respiratory virus and bacteria, as defined above. Last, if no pathogens were detected based on the tests used in the study protocol, we classified the etiology as "unknown."
Statistical Analysis
Patient characteristics and outcomes according to pathogen were compared using the χ² test, Fisher exact test, Student t test, or Mann-Whitney U test, as appropriate. Although we present data for all pathogen categories, our primary analyses compare viral infection to bacterial infection. The few viral cases (n = 29) in our sample precluded attempts at multivariable analyses. All data were analyzed using a statistical software package (SPSS, version 15.0; SPSS Inc; Chicago, IL).
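A minimal sketch of these comparisons using SciPy; the counts and values below are illustrative, not study data:

```python
# Minimal sketch of the comparisons described above; the choice of test
# depends on the variable type and its distribution.
import numpy as np
from scipy import stats

# Categorical outcome (e.g., chest pain yes/no) across two pathogen groups:
table = np.array([[2, 27], [30, 52]])            # illustrative 2x2 counts
chi2, p, dof, _ = stats.chi2_contingency(table)  # chi-squared test
odds, p_fisher = stats.fisher_exact(table)       # Fisher exact (small cells)

# Continuous outcome, roughly normal -> t test; skewed -> Mann-Whitney U:
viral_age = np.array([76, 81, 70, 79])           # illustrative values
bact_age = np.array([64, 58, 71, 60])
t, p_t = stats.ttest_ind(viral_age, bact_age)
u, p_u = stats.mannwhitneyu(viral_age, bact_age)
print(p, p_fisher, p_t, p_u)
```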
Patient Characteristics
Three hundred patients were enrolled into the study, and 193 patients (64%) had evaluable NP swabs. The reasons for nonevaluable NP specimens included insufficient sample (n = 68) and missed collection (n = 39). Because the primary purpose of the study was to evaluate for the presence of viral pathogens, we excluded those patients without NP swabs. There were essentially no differences between those with evaluable NP swabs and those without for either clinical characteristics or outcomes, with the following exceptions: impaired functional status (35% vs 21%, respectively; p = 0.02); lobar pneumonia seen on a chest radiograph (72% vs 57%, respectively; p = 0.009); and median length of stay (7 vs 6 days, respectively; p = 0.02).
We considered the 193 patients with evaluable NP swabs to be our final study sample. Sputum and blood cultures were requested for all patients, but these were not performed in some patients due to their inability to produce a sputum specimen (n = 106), their refusal of a blood draw (n = 61), or death (n = 5). Convalescent serum samples were not obtained in 153 patients because they did not return for follow-up blood work (n = 146) or died (n = 7).
Clinical Presentation of Pneumonia in Patients With Viral Infection
Patients with viral infections were older than those without viral infections (median age, 76 vs 64 years, respectively; p = 0.01), were more likely to have underlying cardiac disease (66% vs 32%, respectively; p = 0.006), and tended to be more frail (eg, 48% had severely limited ambulation vs 21% of those with bacterial pneumonia; p = 0.02). Other differences included the presence of chest pain, which was far less common in those patients with a viral infection than in those with a bacterial infection (7% vs 37%, respectively; p = 0.004) [Table 2]. In terms of laboratory findings, those with viral infections were far more likely to have a normal leukocyte count than those without viral infection (74% vs 14%, respectively; p < 0.001). All cases of viral infection occurred between the months of October and May, with one exception (one episode of rhinovirus infection occurred in July), whereas bacterial infections occurred year round (Fig 1).
Outcomes According to Pathogen
There were no significant differences in outcomes according to the pathogens identified. Specifically, there were no differences in median length of hospital stay (patients with viral infection, 7 days [IQR, 6 to 10 days]; patients with bacterial infection, …).

Discussion

Although the importance of S pneumoniae and atypical bacterial pathogens is well understood in patients with CAP, in this prospective cohort study we have now demonstrated the significant potential contribution of respiratory viruses in patients presenting with pneumonia. Indeed, fully one sixth of all cases (15%) in this cohort of adults who were hospitalized with pneumonia had a respiratory virus identified; alternatively, more than one third of those patients (39%) with a pathogen identified had a respiratory viral infection. Influenza, hMPV, and RSV comprised almost two thirds of all cases of viral infection and, in our study, were acquired during the influenza season. There were some differences in presentation between those patients with a viral infection and those without, including the following: older age; presence of cardiac disease (but in the near absence of chest pain on presentation); and greater frailty. Of note, patients with a viral infection were far more likely than patients with bacterial pneumonia to have a normal leukocyte count. Despite these apparent differences, given that the majority of our cohort never had a respiratory pathogen identified, it is obviously very difficult to distinguish the presence or absence of viral infection in patients with pneumonia. This is further borne out by the fact that outcomes were virtually identical irrespective of the pathogens involved.
Adult CAP etiology studies conducted prior to the use of NATs estimated viral involvement in 0.3 to 30% of all CAP cases. Testing was generally based on serologic conversion or positive DFA test results for influenza A or B; RSV; parainfluenza virus 1, 2, and 3; or adenovirus. In our cohort, viral infection without evidence of bacterial coinfection was detected 15% of the time, which falls within the commonly reported range.26 A study by Marcos et al,10 which used NATs, reported a similar prevalence of viral infection in Spain. However, the study by Marcos et al10 differed from ours in several noteworthy ways, as follows: they did not test for hMPV; and they included immunocompromised patients in their study. Our results also differ from those of Jennings et al,5 as follows: they documented a viral infection 29% of the time in their cohort of adults with CAP, but, surprisingly, more than a third of infections were attributed to rhinovirus, and almost one fifth were mixed infections. The impact of rhinoviruses in our study may be underestimated as data27 have indicated that the picornavirus family of viruses is much more variable than originally thought. It is extremely difficult to design and validate assays to pick up all divergent rhinoviruses, and the original assay design that we utilized in this study would not identify all those that have been reported. This is an inherent limitation in the type of study undertaken; as we identify more novel respiratory pathogens and variants, it is inevitable that some will have been missed.
Most older etiology studies have reported influenza infection in patients with pneumonia 4 to 19% of the time, followed by RSV. Influenza was the most common virus identified in our study, affecting 4% of patients; however, we found hMPV to occur as commonly as influenza, and more frequently than RSV. This important finding has not been widely documented as most respiratory virus studies7,10,29,30 have not included testing for hMPV, largely due to the difficulty in its identification in the past. To our knowledge, only two previous etiology studies used NATs for detecting hMPV. One study, which was restricted to COPD patients with pneumonia, found hMPV as a pathogen in 4.1% of cases; another study5 from New Zealand found no cases of hMPV.
Strengths and Limitations
The strengths of this study include its prospective nature and the thorough collection of data from a cohort of consecutive patients who had been admitted to the hospital with CAP. There are also several limitations to the study. First and foremost, despite our best efforts and a detailed study protocol, a number of bacterial investigations (ie, blood culture, sputum culture, and convalescent serum specimens) were missed, thereby potentially underestimating the number of cases of bacterial pneumonia and (potentially) underestimating the number of mixed infections. The number of missed bacterial investigations may be the reason for our 61% rate of unknown infections, although our rate of recovery is similar to other studies22,31-34 that have reported 47 to 60% unknown infections. Second, we excluded patients without evaluable NP swabs from our analyses. Not obtaining specimens for conducting a NAT was a study protocol violation in 13% of patients (39 of 300 patients). We speculate that either NP swabs were not collected when patients transitioned from the emergency department to the wards, or that there was a miscommunication with the reference laboratory regarding when or where to send the study-related swabs. That said, there were few important clinical differences between patients with and without evaluable NP swabs, with two exceptions. Those patients without evaluable NP swabs were more likely to have lobar pneumonia, which, according to our data, would bias the results toward bacterial infection. Those patients without evaluable NP swabs were also more likely to be functionally impaired, which would bias the results toward viral infection. Third, there is potential for both false-positive and false-negative NP results, although testing with NATs has been reported to have excellent sensitivity and specificity. As noted above, sequence divergence for the rhinoviruses (and, potentially, for the other virus groups) may also have led to some underestimation of the number of viral infections. Fourth, we detected viruses in the upper respiratory tract using NP specimens, which does not necessarily equate with the causation of pneumonia. However, the purpose of this study was to describe the potential role of respiratory viral infection in those patients with pneumonia; a study describing "confirmed" viral pneumonia would require lung tissue samples from all enrolled patients. Last, our overall sample size might be considered small by some, and our cohort was drawn from only one health region in Canada, which might limit the generalizability of the results to some degree.
Clinical Implications
In our study, patients with pneumonia and respiratory viral infection were older and more frail than those without evidence of viral infection. Differentiating between patients with viral infection and those without based on clinical findings and routine laboratory test results remains a challenge. Indeed, although we were unable to perform a multivariable logistic regression analysis due to the small sample size, it seems unlikely that any constellation of symptoms, signs, and routine laboratory findings will ever reliably differentiate between the presence or absence of a virus.3,10,29,30 Current guidelines recommend empiric antibiotic therapy targeted against common bacterial pathogens for patients who are admitted to the hospital with pneumonia. How to manage patients with pneumonia and a respiratory viral infection, without a documented coexisting bacterial pathogen, is far less clear. Future research, similar to that found in the pediatrics literature, is needed to help answer whether empiric therapy with antibiotics can be discontinued in this clinical scenario. Perhaps most importantly, the inability to identify patients with a respiratory virus without comprehensive respiratory viral testing is a concern from the perspective of infection control. The presence of a respiratory viral infection can result in nosocomial outbreaks. For instance, outbreaks due to influenza have been well documented, and cases of RSV and hMPV nosocomial transmission are increasingly recognized. The nosocomial spread of respiratory viruses among adults poses the biggest threat to immunocompromised patients, including frail elderly patients. The current infection control guidelines recommend placing patients with a suspected respiratory viral infection in private rooms or cohorting them with patients with the same viral infection as a way to prevent transmission. Given the 15% prevalence of viral infection in adults in our study, and the indistinguishable presentation from typical bacterial pneumonia, our results suggest that routine isolation (with droplet and contact precautions) of all adults with pneumonia, from the time of hospital admission until respiratory viral infection is ruled out, should be considered to help prevent the nosocomial transmission of respiratory viruses. This suggested approach should become logistically feasible when the turnaround time for NAT results is < 24 h and as the price of testing with NATs decreases over time. This will be facilitated by emerging commercial viral identification assays that are both accurate and relatively inexpensive.

Conclusion

Infections with respiratory viruses are common in patients who are hospitalized with pneumonia, comprising 39% of all identified pathogens and 15% of all patients in our study. Influenza, hMPV, and RSV were the most common respiratory viruses identified. In patients presenting with pneumonia, it remains difficult to differentiate patients with viral infection from those without viral infection. Our findings suggest that routine testing for common respiratory viruses may be warranted for all adults hospitalized with pneumonia. | 2018-04-03T05:23:02.665Z | 2008-12-01T00:00:00.000 | {
"year": 2008,
"sha1": "e9c65f04e88a69473c95eb16998428b17df1fa5c",
"oa_license": null,
"oa_url": "http://journal.chestnet.org/article/S0012369209600118/pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "151a439b6428f86aa27817330f60e8577f7a5580",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248057888 | pes2o/s2orc | v3-fos-license | COVID-19 and the Elderly’s Mental Illness: The Role of Risk Perception, Social Isolation, Loneliness and Ageism
For almost two years, populations around the globe faced precariousness and uncertainty as a result of the COVID-19 pandemic. Older adults were highly affected by the virus, and the policies meant to protect them have often resulted in ageist stereotypes and discrimination. For example, the public discourse around older adults had a paternalistic tone framing all older adults as “vulnerable”. This study aimed to measure the extent to which perceived age discrimination in the context of the COVID-19 pandemic, as well as the sense of loneliness and social isolation, fear and perception of COVID-19 risks, had a negative effect on older adults’ mental illness. To do so, a self-report questionnaire was administered to 1301 participants (average age: 77.25 years old, SD = 5.46; 56.10% females, 43.90% males). Descriptive and correlational analyses were performed, along with structural equation modelling. Results showed that perceived age discrimination in the context of the COVID-19 pandemic positively predicts loneliness and also indirectly predicts mental illness. In addition, loneliness is the strongest predictor of mental illness together with fear of COVID-19 and social isolation. Such results highlight the importance of implementing public policies and discourses that are non-discriminating, and that favour the inclusion of older people.
Introduction
The "new coronavirus disease" better known as COVID-19 [1] originated in Wuhan, China at the end of 2019 and then expanded globally. On 11 March 2020, the World Health Organization declared it a pandemic status, a pandemic that is still ongoing. During these past two years, we have observed various phases of the pandemic with different degrees of propagation, virulence and lethality, due, on the one hand, to the multiplication and spread of different variants of the virus and, on the other hand, to the roll out of vaccines and the implementation of sanitary measures. The various measures to contain the spread of the virus, such as social isolation, the adoption of appropriate hygiene measures, the use of masks, physical distancing, temperature measurement, the suspension of activities in cultural centres, etc., were used at varying times and with varying rigor. From the outset, it was understood that to reduce the propagation of COVID-19, contact between people had to be reduced as much as possible, so much so that the Italian government, in the first phase of the virus's spread, decided to adopt a strict containment measure, i.e., the closure of all non-essential or strategic production activities [2]. From 9 March to 4 May, only supermarkets, pharmacies, shops selling necessities and essential services remained open. Almost two months of complete physical isolation led to many psychological problems such as increased stress and depression [3].
The segment of the population most affected by the virus was older individuals; because of the high morbidity and mortality rates in old age, the COVID-19 pandemic was considered a 'geropandemic' [4]. Concurrently, older people were also those most affected by the restrictions. In fact, a message that was constantly transmitted, in Italy and other countries, in the communiqués of heads of government and by the media in general, was the recommendation that the elderly, or those suffering from chronic conditions or immunodepression, leave the house only if necessary and avoid frequenting crowded places [5]. Thus, the COVID-19 pandemic accentuated the social isolation of older individuals, resulting in their exclusion from social life [6]. While the restrictions were intended to be protective, such policies have often resulted in paternalistic public communication describing all older people as "vulnerable" members of society [7]. This has led to the re-emergence of already existing stereotypes of the elderly, who were treated as one homogeneous segment of the population. Governments, health professionals, the media and social networks have portrayed the elderly as a burden on society; ageing has been associated with decline, worthlessness and dependence on care. Such attitudes have reinforced intergenerational conflicts, prejudice and discrimination [8].
The general objective of the present study is to investigate the mental illness of older people during the COVID-19 pandemic. Mental illness refers not only to the presence or absence of disease but to all conditions that affect cognition, emotion and behaviour. Furthermore, we want to investigate the role that different factors may have in the mental illness of older people, such as the perception of being discriminated against because of one's age and/or the feeling of loneliness, social isolation, the fear of COVID-19 and the perception of risks related to COVID-19. Understanding the role played by these factors is very important for shaping the sanitary measures, and the related messaging, that will be adopted in the later phases of this pandemic, as well as in future emergencies.
Theoretical Background
One of the most important challenges facing society today is the ageing of the population, a phenomenon that is most pronounced in industrialised countries. The demographic revolution that has taken place in recent decades has generated a strong focus on the bio-psycho-social factors associated with the ageing process.
Numerous studies have shown that social isolation and loneliness have strong negative consequences for physical and mental health [9][10][11][12]. Physical problems include cardiovascular diseases [13], lung diseases and arthritis [14], Alzheimer's [15], as well as an association with mortality after many years [16,17]. Among the mental problems, there are higher levels of depression and psychological distress [18], despair [19] and thoughts of suicide [20]; more generally, loneliness interferes negatively with the quality of life of older people [21]. In addition, an association with poorer cognitive functioning has also been established [22]. While social isolation and loneliness have often been used and reported in an undifferentiated way, they refer to different aspects [23] and, as numerous studies have shown, are also only weakly related [24,25]. Social isolation refers to the structure of the social network, which reflects the objective state of lack of social relationships [26,27]. Loneliness, however, is the feeling of lack or loss of companionship; it is, therefore, a subjective phenomenon that reflects the quality of a person's social interactions [28]. Therefore, loneliness develops when one's social relationships are not accompanied by the desired degree of intimacy [27,28]. According to the discrepancy perspective of loneliness, loneliness occurs when there is a discrepancy between the quality and/or quantity of the social relationships that people have and those they desire [29]. Consequently, a person may feel lonely despite having a dense network of social relations, just as, on the contrary, he or she may be socially isolated and not feel lonely [30]. At the same time, social isolation does not lead to loneliness when the desired level of social relationship is low [31]. Several risk factors are associated with loneliness, such as: not being married/not having a partner and loss of a partner; a limited social network; a low level of participation in social activities; poor perceived health; and depression/depressed mood [32]. While little investigated, there is another important risk factor for loneliness in old age: ageism [30].
The term ageism refers to stereotypes, prejudice and/or discrimination against people based on their age. Most studies have focused on the manifestations affecting older people, although it can manifest itself against younger people as well. Iversen, Larsen and Solem [33] defined ageism as "negative or positive stereotypes, prejudice and/or discrimination against (or to the advantage of) older people on the basis of their chronological age or on the basis of a perception of them as being 'old' or 'elderly'. Ageism can be implicit or explicit and can be expressed on a micro, meso, or macro-level" (p. 15). Middle-aged adults possess higher levels of ageism [34] and have a greater sense that life is coming to an end, as well as higher levels of anxieties related to ageing, dying and death [35]. In Italy [36], males and young people have higher levels of ageism than women and older people. Moreover, recent studies have shown that the basis of negative stereotypes is mainly the poor knowledge of the ageing process. Lack of knowledge and a high level of anxiety about ageing are antecedents of stereotypes, which in turn, together with age, influence ageism [37].
Regarding the relationship between mental health and ageism, there are different studies available. A review [38] of the relationship between ageism and health revealed a significantly positive association between perceived ageism and mental health, physical/functional health and quality of life. Perceived age discrimination is generally described as one of the major experiences of discrimination in life [39]. This perception is subjective, as people's behaviour can be interpreted differently depending on the situation. An individual is more likely to perceive discrimination in a situation where they expect to be stereotyped negatively than in a situation where they expect to be stereotyped positively. Consequently, particularly negative age stereotypes provide a guideline for situations in which age-discriminatory behaviour is likely to occur [40]. Perceived age discrimination is also correlated with negative age stereotypes, as these can influence the behaviour of older people themselves, who perceive themselves to be the object of such stereotypes. According to Levy's Stereotype Embodiment Theory [41], stereotypes present in culture are internalised, resulting in self-definitions which, in turn, influence functioning and health. Self-perceptions of ageing, that is, the endorsement of stereotypes about older people by people as they age, significantly predict negative effects on physical and mental functioning, sometimes even decades later [41]. Similarly, positive self-perceptions of ageing are correlated with better functional health more than two decades later [42]. This means that the stereotypes that people assimilate from the surrounding culture and identify with can act as self-fulfilling prophecies [17]. Furthermore, Sutin et al. [43], as part of the American Health and Retirement Survey, investigated the relationship between perceived discrimination and subsequent loneliness in a nationally representative sample of adults aged 50 years and over. The 7622 participants (mean age 67.5) took part in a longitudinal study, and their responses showed that perceptions of age discrimination significantly predicted feelings of loneliness at five-year intervals. It has also been shown that stereotypes of loneliness in later life can become self-fulfilling prophecies, with the study by Pikhartova et al. [44] showing that both expectations and stereotypes of loneliness in old age predicted feelings of loneliness several years later.
In addition to ageism, loneliness and poor social relationships, other factors that may contribute negatively to the mental health of older adults include fear of contracting COVID-19 and perceived COVID-19 risk.
Fear of COVID-19 and feeling personally at risk of contracting the virus may also contribute to the mental stability of older people during this pandemic period. The available literature on past viral epidemics has already highlighted the role played by fear and its negative psychosocial consequences in exacerbating the damage of an infectious disease [45]. Indeed, fear can lead people to denial or phobia, as well as stigmatisation of citizens perceived as the source of the disease [45,46]. Moreover, an association between fear and other psychological disorders such as anxiety and depression has emerged, further affecting people's quality of life in a negative way [47]. These consequences are even more relevant in the context of the current pandemic, where social isolation caused by government policies to contain the virus has already been the cause of a strong increase in symptoms of anxiety and depression, including amongst older populations [48,49]. Related to the fear of COVID-19 is the perception of COVID-19 risk [50]. Risk perception refers to people's intuitive assessments of the dangers to which they are or might be exposed [51]. It involves subjective judgements that individuals develop based on the characteristics, severity and manner in which risk is managed and includes individuals' psychological assessments of the likelihood and consequences of an adverse outcome [52]. Recent studies have shown that older people estimated the risk of COVID-19 to be less dangerous than younger people [53,54]. In addition, public understanding of risk could be a determinant of community mental and physical health [55,56].
The Current Research
The current study aimed to investigate the relationships between mental illness, social isolation, loneliness, perceived age discrimination in the context of the COVID-19 pandemic, fear of COVID-19 and COVID-19 risk perception. Specifically, the following hypotheses were formulated:

Hypothesis 1. We expected that the restrictions imposed by the government resulted in an impoverishment of the network of meaningful social relationships (H1a); that the impoverishment of the network of meaningful social relationships was only partially mitigated by the possibility of using technology to contact family and friends (H1b); and that the social isolation index would be correlated with the loneliness measure precisely because of the impoverishment of social relationships (H1c).
Hypothesis 2. We expected that mental illness was positively correlated with measures of perceived age discrimination in the context of the COVID-19 pandemic, loneliness, social isolation, COVID-19 risk perception and fear of COVID-19 (H2).
Hypothesis 4. Finally, we expected that in addition to the direct effect of perceived age discrimination in the context of the COVID-19 pandemic on mental illness, there was also a mediating effect of loneliness (H4).
Procedure of Recruitment and Participants
Participants were contacted through snowball sampling. A group of university students were asked to contact their older acquaintances, and through word of mouth other participants were reached. The only criterion for inclusion was being over 65 years old. Participants were asked to respond to a questionnaire about their perceptions during the COVID-19 pandemic. Data collection was initiated in the middle of May and ended at the start of June 2021. Participants completed a self-report survey that took approximately 30 min of their time. They were invited to fill out an online questionnaire by connecting to a weblink associated with the Google Forms platform. Participation in the study was voluntary and anonymous, and participants could drop out of the study at any time.
The study protocol was approved by the Local Ethics Committee of the institution of the principal investigator, and the study was conducted according to APA ethical standards. The study also conformed to the ethical principles of the 1995 Helsinki Declaration. The first page of the survey asked for informed consent. The next page of the survey consisted of a presentation of different instruments: Loneliness Scale, General Health Questionnaire, Age discrimination in the COVID-19 management, Social Isolation Index, COVID-19 Risk Perception and Fear of COVID-19 (for a detailed description of the instruments, see the following section). The survey also included a short demographic section in order to collect information regarding the participant's age, sex, marital status, religious faith and practice, educational level as well as a series of questions concerning the extent and frequency of the social relations network and participation in recreational activities.
Catholic participants made up 94.40% and stated that they practise their faith always (44.20%) or sometimes (44.00%); only 11.80% stated that they never practise their religious faith. The level of education was low, with 49.80% holding a primary school qualification, 25.00% a junior high school qualification and 18.70% a high school qualification; a small percentage had an undergraduate (5.70%) or postgraduate (0.80%) degree.
Instruments
The various measures used in the survey are described below: General Health Questionnaire-12 (GHQ) [57,58]. The General Health Questionnaire is aimed at detecting, even in the older population, common symptoms which are indicative of the various syndromes of mental illness (e.g., "Have you recently been thinking of yourself as a worthless person?"), differentiating individuals with psychopathology from those considered normal. The scale consists of 12 items with a four-point rating scale, ranging from 1 (not at all) to 4 (much more than usual). Subscales are somatic symptoms and social dysfunction, but for this study, only the overall measurement was considered. The internal reliability was 0.84 in this study.
Age discrimination scale in the context of the COVID-19 pandemic (ADCo). Perceived age discrimination about the management of COVID-19 was measured using five items assessed on a Likert scale from 1 (strongly disagree) to 5 (strongly agree). The items were adapted from Garstka's scale [59] and are as follows: "I feel I am victim of the government's Coronavirus policies because of my age"; "In this pandemic period, members of my age group have been discriminated against more than members of other age groups"; "During this pandemic period, members of my age group did not receive the same care as members of other age groups because of their age"; "In this pandemic period I feel that I have been deprived of the opportunities for others because of my age"; "I feel that social media (news, newspapers, etc.) has discriminated against me and members of my group because of the way the pandemic and its effects have been presented". The mean of these five items was calculated, with higher values representing greater perceived discrimination. In the current study, the Cronbach's α of the scale is 0.83.
UCLA Loneliness Scale-version 3 (UCLA) [60,61]. The UCLA Loneliness Scale consists of 20 items for the global measurement of loneliness (e.g., "I feel isolated from others"). The items are evaluated on a four-point Likert scale, ranging from 1 (never) to 4 (always). Nine items are positively formulated and reversed. The scale consists of three dimensions (Isolation, Relational Connectedness and "Trait" Loneliness), but for this study, only the overall measure was considered. In this study, the internal consistency reliability of the scale is 0.88 (Cronbach's α).
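A minimal sketch of how such Likert scales are scored, including the reverse-coding of positively worded items; the item indices and responses below are illustrative, not taken from the instruments:

```python
# Minimal sketch of scoring the self-report scales: items are averaged, and
# reverse-worded items are recoded first. For a 1-4 Likert item, the
# reverse score is (1 + 4) - x; item lists here are illustrative.
import numpy as np

def score_scale(responses, reverse_items=(), low=1, high=4):
    r = np.array(responses, dtype=float)
    for i in reverse_items:                 # recode reverse-worded items
        r[i] = (low + high) - r[i]
    return r.mean()

adco = score_scale([4, 5, 3, 4, 2], low=1, high=5)   # ADCo: five 1-5 items
ucla = score_scale([3, 1, 4, 2] * 5,                 # UCLA: twenty 1-4 items,
                   reverse_items=range(0, 18, 2))    # nine reversed (indices
print(adco, ucla)                                    # here are illustrative)
```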
Social Isolation Index (SII). In line with the work of Wister et al. [62], to obtain a measure of social isolation, an index was calculated by averaging the following variables: social network quantity, attendance in presence in the last month, attendance through technology in the last month and community participation (Table 1). In turn, these variables were calculated as follows: social network quantity corresponds to the average of the answers given to items a, b, c, d, e, f and g (for frequencies see (a) in Table 1); the variable attendance in presence in the last month is the result of the average of the answers to items h, i, l, m, n and o (see (b) in Table 1); the variable attendance through technology in the last month derives from the average of items p, q, r, s, t and u (see (c) in Table 1); and the variable community participation was calculated by averaging items v, w, x, y and z (see (d) in Table 1). The scores were inverted so that high values correspond to high levels of social isolation.

Fear of COVID-19 Scale (FCV-19S) [63]. The FCV-19S is a seven-item self-report questionnaire for which respondents must provide their degree of agreement on a five-point Likert-type scale (from 1 = "strongly disagree" to 5 = "strongly agree"). The scale, with a single-factor structure, is designed to assess the emotional, cognitive, physiological and behavioural manifestations of fear related to COVID-19 (e.g., "I am very afraid of coronavirus-19", "When I look at news and stories about coronavirus-19 on social media, I get nervous or anxious"). Higher values indicate a greater fear of COVID-19. In this study, the FCV-19S has a high internal consistency (Cronbach's α = 0.89).

COVID-19 Risk Perception [50]. The perception of COVID-19 risk was measured using four items (e.g., "Are you worried about getting diseased with COVID-19 yourself?"; "Are you worried about your family getting infected with COVID-19?") assessed on a five-point Likert scale (from 1 = strongly worried to 5 = not worried at all). The items were then reversed so that high scores correspond to high levels of risk perception. In the current study, Cronbach's α is 0.78.
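A minimal sketch of the SII computation described above, assuming each sub-index is the mean of its items and the composite is then inverted; the response ranges and values are illustrative, since the underlying items use varied scales:

```python
# Minimal sketch of the Social Isolation Index: each sub-index is the mean
# of its items, the four sub-indices are averaged, and scores are inverted
# so that higher values mean more isolation. Ranges are illustrative.
import numpy as np

def social_isolation_index(network, in_person, via_tech, community,
                           low=1, high=5):
    composite = np.mean([np.mean(network), np.mean(in_person),
                         np.mean(via_tech), np.mean(community)])
    return (low + high) - composite   # invert: high score = high isolation

print(social_isolation_index(network=[3, 4, 2, 3, 3, 4, 2],   # items a-g
                             in_person=[2, 1, 3, 2, 2, 1],    # items h-o
                             via_tech=[1, 2, 1, 1, 2, 1],     # items p-u
                             community=[1, 1, 2, 1, 1]))      # items v-z
```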
Statistical Analysis
The survey data were entered into SPSS 22.0 and M-Plus 6.1 software. Cronbach's alpha was used to calculate the reliability of the scales. For a psychological scale, internal consistency should be greater than 0.70 (however, an alpha between 0.60 and 0.69 is considered acceptable [64]).
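For reference, Cronbach's alpha can be computed directly from an items-by-respondents matrix; a minimal sketch (the data below are illustrative):

```python
# Minimal sketch of Cronbach's alpha for a respondents-by-items matrix:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
import numpy as np

def cronbach_alpha(items):                    # shape: (n_respondents, k)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

data = np.array([[3, 4, 3, 4], [2, 2, 3, 2], [4, 4, 4, 3], [1, 2, 1, 2]])
print(round(cronbach_alpha(data), 2))
```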
Descriptive statistics were first performed on the data, including the Wilcoxon test for paired samples to assess differences between groups. Pearson's correlation coefficients were also calculated to determine associations between the variables under study. Second, Structural Equation Modeling (SEM) [65] was conducted to test the hypothesised model. To assess the goodness-of-fit of that model, the following indicators were used: the chi-squared statistic over the degrees of freedom (χ²/df ≤ 3), although this index is sensitive to sample size; Standardised Root Mean Square Residual (SRMR ≤ 0.09); Comparative Fit Index (CFI > 0.90); and Tucker-Lewis Index (TLI > 0.90). Results of the Root Mean Square Error of Approximation (RMSEA) are considered good if ≤ 0.05 and reasonable if ≤ 0.08, with a cut-off value of 0.06. In order to assess the goodness of the model, several indices were considered simultaneously, as the different indices assess different aspects of the goodness-of-fit [66][67][68][69]. Satisfactory models should show consistently good-fitting results on many different indices.
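A minimal sketch that applies these cut-offs jointly; note that only the χ² value and df below come from the Results section, while the other index values are illustrative placeholders:

```python
# Minimal sketch of checking SEM goodness-of-fit against the cut-offs
# listed above; several indices are evaluated jointly, since chi2/df
# in particular is sensitive to sample size.
def fit_checks(chi2, df, srmr, cfi, tli, rmsea):
    checks = {
        "chi2/df <= 3": chi2 / df <= 3,
        "SRMR <= 0.09": srmr <= 0.09,
        "CFI > 0.90": cfi > 0.90,
        "TLI > 0.90": tli > 0.90,
        "RMSEA <= 0.08": rmsea <= 0.08,   # <= 0.05 good, <= 0.08 reasonable
    }
    return checks, all(checks.values())

checks, all_pass = fit_checks(chi2=269.651, df=53, srmr=0.05,
                              cfi=0.93, tli=0.91, rmsea=0.06)
print(all_pass, checks)   # chi2/df > 3 here, so the joint verdict is mixed
```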
The Network of Social Relations
Participants were asked to indicate how often they saw members within their network of significant relationships before COVID-19 and during the last month, that is, during the COVID-19 outbreak period (Table 2). Comparison of the averages using the Wilcoxon test showed that there was a significant reduction in attendance, especially of friends (before COVID = 2.40 vs. last month = 1.46), brothers/sisters (before COVID = 1.92 vs. last month = 1.21) and nieces and nephews (before COVID = 3.41 vs. last month = 2.77).
Interactions in presence during the last month and attendance through the use of technology were then compared and again significant differences emerged (Table 3).
Means, Standard Deviations and Correlation Analysis
Means, standard deviations and correlations are shown in Table 4.
Testing of the Hypothesised Conceptual Model
We used structural equation modeling to test the hypothesised relationships between the variables under study (see Figure 2). Results suggest an acceptable fit between the theoretical and the empirical models: χ²(df) = 269.651 (53)
In contrast to the hypothesis, social isolation does not predict COVID-19 risk perception and fear of COVID-19. Furthermore, perceived age discrimination in the context of the COVID-19 pandemic and COVID-19 risk perception do not predict mental illness.

While perceived age discrimination in the context of the COVID-19 pandemic does not directly affect mental illness (dashed lines in Figure 2), two indirect effects emerged. Specifically, perceived age discrimination in the context of the COVID-19 pandemic affects mental illness through the mediation of loneliness (H4; β = 0.14 **) and fear of COVID-19 (β = 0.08 **).
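A mediated (indirect) effect of this kind is the product of the two component paths; the following worked form is a generic illustration — the component values shown are placeholders chosen only to reproduce the reported product, not estimates from this study:

```latex
% Indirect effect in a simple mediation model:
%   X = perceived age discrimination, M = loneliness, Y = mental illness
\beta_{\mathrm{indirect}} \;=\; \beta_{X \rightarrow M} \times \beta_{M \rightarrow Y}
% Illustrative placeholders: 0.35 \times 0.40 = 0.14, matching the
% reported mediated effect of \beta = 0.14.
```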
Discussion
The COVID-19 pandemic has been keeping humanity in a precarious and uncertain situation for close to two years. Based on the different phases that this pandemic has undergone and the scientific knowledge that has accumulated, governments have defined the various measures that they consider most appropriate to contain the spread of the virus. The persistence of the emergency has made it increasingly necessary to find a balance between containment measures and the need to continue the social and economic life of citizens. In this respect, studies that try to capture the psychological effects of these measures are of fundamental importance, so that these effects are also taken into account when defining the future actions of governments, both in this pandemic and in any subsequent emergencies. The aim of this study was precisely to investigate the effects of certain variables on the mental illness of older people, namely social isolation, loneliness, perceived age discrimination in the context of the COVID-19 pandemic, fear of COVID-19 and risk perception of COVID-19.
The hypotheses formulated have been partially confirmed. Regarding the first hypothesis, it emerged that the COVID-19 containment measures generated an impoverishment of the network of significant social contacts for older people. Living in a technological era has only partially attenuated this condition of distancing from social life, which accompanies a feeling of loneliness already particularly felt in this section of the population. In fact, in line with the existing literature [24,25], social isolation is moderately correlated with loneliness, although to a greater extent than in previous studies [70,71]. Concerning the second hypothesis, both the positive correlation between mental illness, loneliness and social isolation [11,72] and the positive correlation between mental illness, COVID-19 risk perception and fear of COVID-19 [73] were confirmed. In this regard, a particularly interesting finding is the positive correlation between mental illness and the measure of perceived age discrimination in the context of the COVID-19 pandemic, as well as between the latter and loneliness. The tested model shows good fit indices. The hypotheses we formulated regarding the relationships were partially confirmed. The model shows that the strongest antecedent of mental illness is loneliness, confirming a link that has already emerged in the literature [19]. Social isolation is also an antecedent of mental illness, but to a lesser extent than loneliness, confirming that these are two different but related constructs [26]. Most of the contacted older people do not live in social isolation, as they mostly live with other family members, but at the same time, the impoverishment of extra-familiar relationships caused by the anti-COVID-19 norms may have contributed to exacerbating a feeling of loneliness which is already normally strongly felt among older people. In addition, the feeling of loneliness, and not the lack of social relationships, puts older people in a more vulnerable position, with increased fear of contracting the virus and a higher perception of the risks associated with it. Analyses also show that feelings of loneliness are stronger in those who have perceived policies and discourses to protect the older population as discriminatory [30]. These perceptions did not directly affect people's mental health, but did so indirectly, through the mediation of loneliness and fear of COVID-19. The perception of having received differential treatment due to one's age also contributed to an increase in the perception of being at greater risk of contracting the virus, which then correlates with a greater fear of falling ill with COVID-19. Therefore, as in previous pandemics, fear hurt people's mental health in the case of the coronavirus pandemic [45]. However, the same cannot be said of the perception of risk, which does not appear to be an antecedent of mental illness. This finding needs to be better investigated because, on the one hand, the perception of risk is correlated with fear and, on the other hand, it does not predict mental illness either negatively, as had been hypothesised, or positively in its traditional protective function for health with the concomitant implementation of self-protective behaviours.
One of the merits of this study is that it simultaneously considered several variables that had previously been considered separately, providing an overview of the relationships between the constructs investigated. Furthermore, it provided a clear picture of the factors that affected the mental health of older people during the COVID-19 pandemic, providing useful indications for policymakers. At the same time, there are limitations. Firstly, it is a cross-sectional study and this prevented the examination of causal relationships over time. Second, it is a single-method study, since only self-report instruments were used, and this may have led to the inflation of an observed association. Thirdly, the snowball sample, in addition to not being representative of the population, may have led to finding too homogeneous a group of subjects.
Conclusions
The COVID-19 pandemic confronted all humanity with an unknown evil, and all available resources were used to combat it. Sometimes it was necessary to take drastic decisions such as social isolation, which served to limit the spread of the virus and contain the number of deaths, especially among the most fragile people. At the same time, however, this measure also had negative repercussions on people's mental health [3]. Indeed, older people with higher levels of mental illness were those who experienced higher levels of loneliness, fear of COVID-19 and social isolation. The invitation to reduce social contacts was perceived by the elderly as detrimental to their person, a real form of discrimination linked to the generalisation of the condition of frailty to all those over 65. While there is a link between the presence of chronic illnesses and age, being chronologically old is not the same as being vulnerable and in a precarious state of health [74]. The other discrimination perceived by older individuals relates to not having had the same opportunities for care as younger people, especially when the rate of hospitalisation was particularly high and the health system was not able to meet all the demands for care from the population. In addition to this, we need to consider the role of social media (news, newspapers, etc.) in presenting the pandemic and its effects. The media played a central role in propagating age stereotypes and negative attitudes towards older people [75], and older people's perception of stereotypes and prejudices significantly predicted their feelings of loneliness [43]. An important contribution to the mental illness of older people was also made by the feeling of fear generated by an unknown disease that has claimed so many lives, especially among older people. Doctors, politicians and the media have continually emphasised the risk of COVID-19 for older individuals in particular, sowing an almost paralysing fear. The perception of being discriminated against because of one's age affects COVID-19 risk perception, which normally plays a central role in motivating health protection behaviour in general [76,77] and during pandemics [78]. At the same time, however, it can also feed the feeling of fear through a process of reciprocal influence, which affects mental illness.
Therefore, it is clear that, while the COVID-19 outbreak strategy served to reduce the spread of the virus by protecting the physical health of older people, it also greatly undermined their mental health. The feeling of loneliness, caused partly by social isolation and partly by the perception of being discriminated against in the management of COVID-19 because of their age, is the main contributor to their mental illness, along with fear. Future policies should take this into account and provide for interventions aimed at preventing loneliness both in normal and emergency conditions, including interventions aimed at developing social skills, enhancing social support, expanding opportunities for social interaction and recognising maladaptive social cognition [79]. A strategy can also be to promote inclusivity in social media and to make older people's voices heard [80]. The participation of older people in social media, on the one hand, can increase their involvement in social interactions and, on the other hand, can be a way to recognise and give voice to the diversity that exists in this age group. Finally, reliable and relevant information about older people should be conveyed, and stigmatisation and labelling of older people should be avoided [81]. Combating ageism requires a concerted effort by all stakeholders to convey positive messages associated with ageing and to create an environment of respect, empathy and solidarity toward older people, especially during the COVID-19 pandemic [80].

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available from the corresponding author upon request. The data are not publicly available due to the continuation of the project. | 2022-04-10T15:16:15.508Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "83c77e6791940ea413fe07c86db0751a7af4bd3e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/8/4513/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "cc0e0d4e82cc9853626783a61b859a684534ecdd",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257447492 | pes2o/s2orc | v3-fos-license | Fungal Secondary Metabolites/Dicationic Pyridinium Iodide Combinations in Combat against Multi-Drug Resistant Microorganisms
The spread of antibiotic-resistant opportunistic microbes is a huge socioeconomic burden and a growing concern for global public health. In the current study, two endophytic fungal strains were isolated from Mangifera Indica roots and identified as Aspergillus niger MT597434.1 and Trichoderma lixii KU324798.1. Secondary metabolites produced by A. niger and T. lixii were extracted and tested for their antimicrobial activity. The highest activity was noticed against Staphylococcus aureus and E. coli treated with A. niger and T. lixii secondary metabolites, respectively. A. niger crude extract was mainly composed of Pentadecanoic acid, 14-methyl-, methyl ester and 9-Octadecenoic acid (Z)-, methyl ester (26.66 and 18.01%, respectively), while T. lixii crude extract’s major components were 2,4-Decadienal, (E,E) and 9-Octadecenoic acid (Z)-, and methyl ester (10.69 and 10.32%, respectively). Moreover, a comparative study between the fungal extracts and dicationic pyridinium iodide showed that the combination of A. niger and T. lixii secondary metabolites with dicationic pyridinium iodide compound showed a synergistic effect against Klebsiella pneumoniae. The combined formulae inhibited the bacterial growth after 4 to 6 h through cell wall breakage and cells deformation, with intracellular components leakage and increased ROS production.
Introduction
The emergence of multidrug-resistant microorganisms has increased the urgency of finding effective new antimicrobials to treat bacterial, fungal, and viral illnesses in humans and animals [1]. The emergence of β-lactam resistance in Gram-negative bacteria has been a major concern and has become an obstacle in the treatment of infectious diseases, especially those caused by Klebsiella pneumoniae. Miryala et al. [2] studied the role of the SHV-11 gene in drug resistance mechanism patterns in a K. pneumoniae strain. It was concluded that the SHV-11 gene, along with its functional partners, was not only responsible for the drug resistance mechanism, but also helped in maintaining genomic integrity through the DNA damage repair mechanism.
On the other hand, endophytic fungi are a broad collection of microorganisms that live either entirely or partially inside the cells of their host plants, invading healthy tissues with no outward sign of illness [3]. More and more chemicals with diverse biological functions are being extracted from endophytic fungi [4]. Natural products with biological functions are called secondary metabolites, and endophytic filamentous fungi are among their most prolific producers [4]. Many useful bioactive chemicals with antibacterial, insecticidal, cytotoxic, and anticancer activities have been isolated in the last two decades from endophytic fungi [5]. Alkaloids, terpenoids, steroids, quinolones, isocoumarins, lignans, phenylpropanoids, phenols, and lactones are some of the most common classes of fungal bioactive chemicals [6]. Antimicrobial action against a wide variety of microorganisms has been shown by Trichoderma sp., a fungal genus present in many habitats [7]. Moreover, Aspergillus niger is one of the most well-known fungi, and has been isolated from several niches (soil, nuts and food). Extracellular enzymes and citric acid produced from A. niger are Generally Recognized As Safe (GRAS) for human consumption by the FDA because of their usage in several industrial settings [8]. Hence, A. niger has been considered a valuable resource for the biotechnological sector due to the abundance of secondary metabolites with immunomodulatory and cytotoxic properties against cancer cells [9].
On the other hand, chemically synthesized compounds, namely ionic liquids (ILs), are one of the most interesting scientific and technological advancements for their various applications over the last few decades. There have been significant developments regarding the relevance of these types of unique molecules with adjustable biological and industrial properties [10,11]. Initially, ionic liquids were identified as a combination of inorganic counter anions and organic counter cations. During the synthesis of ionic liquids, the generation of nitrogen-containing heterocyclic molecules contributes significantly [12]. In contrast, hydrazones have become significant molecules in modern chemical synthesis, garnering considerable interest. They were used in a variety of pharmaceuticals and chemotherapeutic drugs [13]. Their attachment to organic molecules plays a crucial role in essential biological processes and in the formulation of medications with a wide range of biological characteristics, including antibacterial [14], anticancer [15], anti-inflammatory [16], antifungal [17], and antitubercular [18] activities. Recently, dicationic ionic liquids (DiILs), a new category of the ILs family, has attracted a great amount of researchers' attention as it represents an interesting variation of the cationic partner. DiILs consist of two head groups (cations) linked by a rigid or flexible spacer and two anions [19].
Hence, the aim of the present study was to synthesize a dicationic pyridinium iodide compound, and characterize and combine it with a biologically active natural product for its potential synergistic effect.
Molecular Identification of Fungal Isolates
In the current study, two endophytic fungal strains were isolated from Mangifera Indica roots. The isolates were identified using ITS4 and ITS5 rRNA sequencing. The sequences obtained were compared with the nucleotide sequences of the international database. The isolated fungal strains were Aspergillus niger with GenBank accession number MT597434.1 (100% similarity) and Trichoderma lixii with GenBank accession number KU324798.1 (98.18% similarity). Furthermore, the phylogenetic tree was generated by performing a distance matrix analysis (Figure 1).
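The reported similarities (100% and 98.18%) are BLAST percent-identity values; a minimal sketch of the underlying idea follows (real identification uses full alignments with gaps, which this toy comparison ignores):

```python
# Minimal sketch of the percent-identity comparison underlying the BLAST
# matches reported above: identical positions divided by the compared
# length. Sequences here are short illustrative fragments, not real ITS data.
def percent_identity(seq_a: str, seq_b: str) -> float:
    length = min(len(seq_a), len(seq_b))
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / length

print(percent_identity("ATCGGATCCA", "ATCGGTTCCA"))  # -> 90.0
```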
Antibacterial Activity of Fungal Bioactive Secondary Metabolites
Data in Table 1 revealed that the inhibition zones (IZ) diameter of T. lixii and A. niger crude extracts ranged from 8.0 to 20.0 mm and from 7.5 to 21.0 mm, respectively, against the tested pathogens. Staphylococcus aureus and E. coli were the most susceptible organisms against A. niger and T. lixii crude extracts, respectively.
Quang et al. [20] stated that A. niger metabolites have been considered a promising source of antibiotics that inhibit the growth of the Gram-positive bacterium E. faecalis, with MIC values ranging from 32 to 64 mM, and of Candida albicans, with MIC values ranging from 64 to 128 mM. Meanwhile, Padhi et al. [21] revealed that A. niger metabolites showed antifungal activity against Candida albicans with an IC50 of 31 mg/mL, and antibacterial activity against Pseudomonas aeruginosa, Escherichia coli and Staphylococcus aureus with IC50 values of 160 mg/mL, 47 mg/mL and 135 mg/mL, respectively. Chigozie et al. [22] reported that the fungal extract of Aspergillus sp. isolated from fresh leaves of Mangifera indica exhibited antibacterial activity against P. aeruginosa and E. coli.
GC-MS Analysis of Fungal Secondary Metabolites
Data in Figure 2 proved that the A. niger crude extract was mainly composed of Pentadecanoic acid, 14-methyl-, methyl ester and 9-Octadecenoic acid (Z)-, methyl ester (26.66 and 18.01%, respectively). However, the T. lixii crude extract's relatively major components were 2,4-Decadienal, (E,E)- and 9-Octadecenoic acid (Z)-, methyl ester (10.69 and 10.32%, respectively) (Table 2). Venice et al. [23] stated that a GC-MS analysis of T. lixii crude extract identified the presence of 1,3,3-Trimethyl-Diepoxyhexadecane and 3-Octadecenoic acid compounds. An analysis of endophytes' diversity has determined relationships among host plants and the endophytic fungi, through determining various secondary metabolites biosynthesized from the culture extract of the endophytic fungal isolates [23].
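The percentages above are relative peak-area abundances; a minimal sketch of the computation follows (the peak areas are illustrative, chosen only to reproduce the two reported A. niger values):

```python
# Minimal sketch of how GC-MS relative abundances (area %) are obtained
# from integrated peak areas; areas here are illustrative placeholders.
def area_percent(peak_areas: dict) -> dict:
    total = sum(peak_areas.values())
    return {name: round(100.0 * a / total, 2) for name, a in peak_areas.items()}

peaks = {"Pentadecanoic acid, 14-methyl-, methyl ester": 2666.0,
         "9-Octadecenoic acid (Z)-, methyl ester": 1801.0,
         "other peaks (summed)": 5533.0}
print(area_percent(peaks))  # -> 26.66% and 18.01% for the two major peaks
```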
Molecular Docking Study
Among the most common resistance mechanisms, the extended-spectrum β-lactamases (ESBLs) are widely reported [24]. One of the main concerns is that resistance caused by these enzymes may reduce the efficacy of antimicrobial therapy or lead to treatment failure [25]. The reported findings demonstrated that ESBL variants of the SHV type were the most frequent mechanisms of resistance in ESBL-producing K. pneumoniae isolates implicated in bacteremia. Hence, the SHV enzyme was chosen in the present investigation to assess the possible mechanistic action of the synthesized dicationic pyridinium iodide compound, as well as of the naturally extracted compounds.
In the current study, molecular docking was performed to predict the binding affinity of the naturally extracted and chemically synthesized compounds toward the target ESBL enzyme SHV-1 (Table 3). The results of the docking studies showed excellent binding within the active site of the target macromolecule in comparison to the co-crystallized reference ligand LN1-255. The naturally extracted compounds showed higher binding scores than the dicationic pyridinium iodide compound (−6.38 kcal/mol): Pentadecanoic acid, 14-methyl-, methyl ester and 9-Octadecenoic acid (Z)-, methyl ester (extracted from Aspergillus niger) scored −6.51 and −6.50 kcal/mol, respectively. Trichoderma lixii's most potent compounds were 9-Octadecenoic acid (Z)-, methyl ester, 1,2-15,16-Diepoxyhexadecane and Heptadecane, 9-hexyl-, showing binding affinities of −6.96, −6.56 and −6.99 kcal/mol, respectively. The binding interactions revealed that the dicationic pyridinium iodide compound was well oriented inside the enzyme pocket and showed hydrophobic interactions with Arg244, Ala280 and Tyr105. Table 3. Binding scores (kcal/mol) of the investigated molecules with the target SHV-1 enzyme.
Co-crystallized ligand LN1-255 −6.44

Table 4 revealed that the combined action of the A. niger crude extract with the dicationic pyridinium iodide compound was synergistic against all the tested pathogens except S. aureus, while the combined action of the T. lixii crude extract with dicationic pyridinium iodide was synergistic only against K. pneumoniae (Figure 3). Hence, K. pneumoniae was selected for further analyses.
Checkerboard Dilution Technique
Data in Table 5 showed that the combined actions of the dicationic pyridinium iodide compound with the A. niger and T. lixii crude extracts were synergistic against K. pneumoniae, with FICI values of 0.35 and 0.4, respectively. The observed antibacterial effect was further investigated against K. pneumoniae on the basis of the FICI and MIC values.
Mechanistic Action of the Combined Formula
A transmission electron microscopy (TEM) study was applied to K. pneumoniae cells treated with the combined drugs (A. niger/dicationic pyridinium iodide and T. lixii/dicationic pyridinium iodide). Figure 4 revealed breakage of the cell wall and deformation of the cells, with leakage of the intracellular components leading to cell death. Moreover, the A. niger/dicationic pyridinium iodide and T. lixii/dicationic pyridinium iodide combined drugs showed potent antibacterial activity, inhibiting bacterial growth after 6 and 4 h, respectively (Figure 5). Furthermore, the reactive oxygen species (ROS) study of the treated bacterial cells revealed that ROS increased with increasing formula concentration, which aggravated the cell membrane damage and reduced the viability of the bacterial cells (Figure 6).
Guo et al. [26] studied the antibacterial activity of Aspergillus niger crude extract (fraction B10) against Agrobacterium tumefaciens T-37, reporting an inhibition percentage of 98.22% and a dose required to achieve 50% inhibition of 0.035 ± 0.018 mg/mL. The antibacterial mechanism was evaluated using electric conductivity, the release of proteins and nucleic acids, sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) and the detection of reactive oxygen species (ROS). An increase in the relative electric conductivity of the supernatant was noticed upon the addition of the B10 fraction, indicating electrolyte transfer from the intracellular to the extracellular matrix. In contrast to the control group, the B10-treated group showed a marked increase in the amounts of extracellular nucleic acid and protein between 0 and 18 h, and damage to the cytoplasmic membranes was noticed. SDS-PAGE analysis showed that the amounts of extracellular protein and nucleic acid were consistent with the lower levels of total protein inside the cells. It was demonstrated that B10 was responsible for the rise in ROS.

On the other hand, Qiao et al. [27] revealed that aspermerodione, extracted from the endophytic fungus Aspergillus sp. TJ23, showed a synergistic effect with the β-lactam antibiotics oxacillin and piperacillin as a potent antibacterial combination against MRSA. It was reported that combination therapy can be used as a promising strategy for combatting MRSA by extending the lifespan and efficacy of currently employed antibiotics. The present investigation may pave the way to combating microbial infections through natural/synthetic combinations.
Tested Pathogens
Pseudomonas aeruginosa, Acinetobacter baumannii, Proteus vulgaris, Staphylococcus aureus, Escherichia coli, Klebsiella aerogenes and Klebsiella pneumoniae were kindly provided and identified by El-Shatby pediatric hospital using the Vitek 2 automated system (bioMerieux, Marcy l'Etoile, France) at the Medical Research Center, Faculty of Medicine, Alexandria University. The tested pathogens were kept in brain-heart infusion glycerol broth at −4 °C for further investigations, with monthly transfer into fresh media. The tested pathogens were identified as multi-drug resistant according to CLSI guidelines (Table S1).
Endophytic Fungal Isolation
Fungal samples were isolated from the roots of fully matured and healthy plants of Mangifera Indica at El Nubaria, Alexandria (30°41′57″ N, 30°40′1″ E), with firm, well-formed leaves, fruits and root systems. The root samples were rinsed with running tap water followed by deionized water, dipped in 70% ethanol (1-2 min) and then sterilized in 0.1% sodium hypochlorite (2-3 min). They were further dipped in 70% ethanol and finally rinsed with distilled water. The roots were allowed to dry and were cut aseptically into small pieces (1 cm²) and patched onto potato dextrose agar (PDA) (Himedia, Mumbai, India) plates containing streptomycin (SRL, Mumbai, India) at a concentration of 250 µg/mL to prevent bacterial contamination [28].
Molecular Identification of the Fungal Isolates
Fungal isolates were identified through ITS-based DNA sequencing, using the conserved ITS region of fungal gDNA amplified with the general primers ITS4 (5′-TCCTCCGCTTATTGATATGC-3′) and ITS5 (5′-GGAAGTAAAAGTCGTAACAAGG-3′). The ITS sequences of the identified fungi were submitted to GenBank for the retrieval of their accession numbers [28]. The percentage identity of the aligned sequences was studied using a Kolmogorov-Smirnov statistical test in GeneDoc (version 2.7). Using the obtained sequences, phylogenetic analysis was performed and a phylogenetic tree was constructed in MEGA (v10.1.8) by the maximum likelihood bootstrap (MLBS) method.
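The tree in the paper was built with MEGA's maximum-likelihood method; as an illustration only, the sketch below shows the same general workflow (alignment in, distance matrix, tree out) using Biopython's neighbor-joining constructor as a stand-in algorithm, not the method actually used. It assumes Biopython is installed and that "its_alignment.fasta" is a hypothetical pre-aligned FASTA file of the isolates' ITS sequences plus GenBank references.

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# "its_alignment.fasta" is a hypothetical pre-aligned FASTA file.
alignment = AlignIO.read("its_alignment.fasta", "fasta")
distance_matrix = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)  # neighbor-joining stand-in
Phylo.draw_ascii(tree)  # quick text rendering of the tree topology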
Seed Culture Preparations and Extraction of Fungal Secondary Metabolites
Spore suspension seed cultures were prepared according to CLSI guidelines [29]. Fungal isolates were inoculated (mycelial plugs, 1 × 1 cm²) into 300 mL of potato dextrose broth and incubated for 21 days at 25 °C under shaking conditions (140 rpm). At the end of the incubation period, the mycelia were harvested by filtration, and the filtrate was extracted with chloroform/methanol (2:1, v/v) for 4 h. The crude fungal extract containing the bioactive compounds was stored at 4 °C for further experimental processes [28].
Antibacterial Activity of Fungal Secondary Metabolites
Antibacterial activity was assessed using the disc-diffusion method; the discs were saturated with 25 µL of each fungal extract (20 mg/mL) and placed on the surface of inoculated Mueller-Hinton agar plates [30]. Further evaluation of antibacterial activity was carried out by determining the minimal inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) values [31]. MIC and MBC determinations were performed by mixing 80 µL of sterile Mueller-Hinton broth, 20 µL of Tween 80 and 100 µL of the fungal secondary metabolites, one extract at a time. The mixture was then serially diluted two-fold in a 96-well microtiter plate, and 100 µL of the tested bacterial suspensions, adjusted to 0.5 McFarland, was inoculated into each well. The MIC is the minimum concentration of the tested drug that inhibits bacterial growth, while the MBC is the minimum concentration needed to completely kill the microbial cells [32].
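As an illustration of the two-fold dilution read-out just described, the sketch below returns the MIC as the lowest concentration in the series showing no visible growth; the starting concentration and growth pattern are hypothetical examples, not measured data.

def mic_two_fold(start_conc, growth):
    """Read off the MIC from a two-fold dilution series: well i holds
    start_conc / 2**i, growth[i] is True when the well shows visible
    growth. Returns the lowest concentration with no growth, or None
    if every well grew."""
    clear = [start_conc / 2 ** i for i, grew in enumerate(growth) if not grew]
    return min(clear) if clear else None

# Hypothetical read-out: a 20 mg/mL extract diluted across 8 wells,
# with growth reappearing from the fifth well onward.
print(mic_two_fold(20.0, [False, False, False, False, True, True, True, True]))
# -> 2.5 (mg/mL)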
GC-MS Analysis of Fungal Secondary Metabolites
For the GC-MS analysis, 2 µL of sample was injected into the GC-MS device, equipped with a splitless injector and a PE AutoSystem XL gas chromatograph interfaced with a Turbo-mass spectrometric mass selective detector system. The MS was operated in EI mode (70 eV) with helium as the carrier gas (flow rate 1 mL/min) and an HP analytical column (30 m × 0.20 mm, 0.11 µm film thickness). The MS was operated in total ion current (TIC) mode, scanning from m/z 30 to 400. The bioactive compounds were identified by comparing their retention times (RT in min) and mass spectra with the library of the National Institute of Standards and Technology (NIST), USA [33].
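Library matching of the kind described (comparing an unknown spectrum against NIST entries) is often scored with a cosine-type similarity between stick spectra. The sketch below is a minimal version of that idea; both spectra are hypothetical placeholders, not actual NIST records.

import math

def spectral_cosine(spec_a, spec_b):
    """Cosine similarity between two stick mass spectra given as
    {m/z: intensity} dictionaries; 1.0 means identical patterns."""
    all_mz = set(spec_a) | set(spec_b)
    dot = sum(spec_a.get(m, 0.0) * spec_b.get(m, 0.0) for m in all_mz)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical fragment patterns, not actual NIST library entries:
unknown = {74: 100.0, 87: 65.0, 143: 20.0, 270: 12.0}
library_hit = {74: 100.0, 87: 60.0, 143: 25.0, 270: 10.0}
print("match score: %.3f" % spectral_cosine(unknown, library_hit))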
Chemical Synthesis of Dicationic Pyridinium Iodide Compounds
A mixture of 4-pyridinecarboxaldehyde (10 mmol) in ethanol (30 mL) and isonicotinic acid hydrazide (1) (10 mmol), with a few drops of hydrochloric acid, was heated under reflux for 1 h. The solid obtained after solvent evaporation under reduced pressure was recrystallized from ethanol to furnish the desired Schiff base 2.
4-(2-Iodoethoxy)benzene (10 mmol) was added under stirring to a solution of the dipyridine Schiff base 2 (5 mmol) in acetonitrile (30 mL). The reaction mixture was then heated under reflux for 8 h, until consumption of the starting material was indicated by TLC (silica gel, hexane-ethyl acetate). The solvent was reduced by evaporation under reduced pressure, and the product formed was collected by filtration to afford the desired dicationic pyridinium iodide (3) (Scheme 1) [19].
Molecular Docking
The crystal structure of SHV-1 β-lactamase (PDB: 3D4F), available at the RCSB Protein Data Bank, was used as a template for constructing the 3D models [34].
Database Generation and Optimization
The test compounds were drawn in ChemDraw and collected into an MOE database. Optimization of the database involved three steps: displaying hydrogens, computing partial charges and applying the default energy minimization. The triangular matcher algorithm was applied for ligand placement, and the default scoring function was employed to retain the top five non-redundant poses with the lowest binding energy for each test compound. Docking of the optimized database was carried out using the induced-fit methodology in order to record the most favourable molecular interactions. The docking score, expressed in kcal/mol, combined the results of two scoring functions, namely alpha hydrogen bonding and London dG forces. The results were ranked by S-score, retaining poses with an RMSD value of less than 2. The reliability of the software depends heavily on the training set, and molecular docking results may be validated against a training set of experimental ligand-protein complexes. To guarantee a genuine and dependable docking strategy, the software used must be able to reproduce the binding mode of an established reference inhibitor of the target enzyme. The co-crystallized ligand LN1-255 was therefore chosen as the positive control (reference values) for the docking study. Finally, the conformers with the highest binding scores and the best ligand-enzyme interactions were identified and examined [35].
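As a minimal sketch of the pose-selection step just described (retain poses with RMSD below 2, then keep the lowest-energy ones), consider the following; the pose tuples are invented examples, not the study's actual docking output.

# (pose_id, score in kcal/mol, RMSD in angstroms); all values hypothetical
poses = [
    ("p1", -6.96, 1.4), ("p2", -6.50, 2.3), ("p3", -6.56, 1.1),
    ("p4", -6.38, 0.9), ("p5", -6.99, 1.8), ("p6", -6.44, 1.6),
]
valid = [p for p in poses if p[2] < 2.0]          # RMSD cut-off of 2
top_five = sorted(valid, key=lambda p: p[1])[:5]  # most negative score first
for pose_id, score, rmsd in top_five:
    print("%s: %.2f kcal/mol (RMSD %.1f)" % (pose_id, score, rmsd))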
Combination Study between the Fungal Extracts and the Synthesized Dicationic Pyridinium Iodide Compound
Combination studies were carried out according to White et al. [36]. The disc diffusion method was used to assess possible differences in the inhibition zone diameter upon mixing the fungal extracts and the synthesized dicationic pyridinium iodide compound (1:1 w/w). Furthermore, the broth microdilution checkerboard technique was employed to study the synergistic effect between the fungal extract (Agent A) and the dicationic pyridinium iodide compound (Agent B). Two-fold serial dilutions of the fungal extract and the dicationic pyridinium iodide compound were dispensed in a 96-well microtiter plate at sub-MIC concentrations. A 100 µL quantity of the bacterial suspension (1.5 × 10⁶ CFU/mL) was dispensed into each well and incubated for 24 h at 35 ± 2 °C. The fractional inhibitory concentration index (FICI) was computed with the following equation: FICI = FIC(A) + FIC(B) = [MIC of Agent A in combination/MIC of Agent A alone] + [MIC of Agent B in combination/MIC of Agent B alone]. On the basis of the FIC and FICI values, the most susceptible bacterial strain (K. pneumoniae) was treated with the combined drugs. Samples were fixed using a universal electron microscope fixative, and a series of dehydration steps was carried out using ethanol and propylene oxide. The samples were then embedded in labelled beam capsules and polymerized. Thin sections of cells exposed to the extracts were cut using an LKB 2209-180 ultramicrotome and stained with a saturated solution of uranyl acetate for half an hour and lead acetate for 2 min [31]. Electron micrographs were taken using a transmission electron microscope (JEM-100 CX, JEOL).
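A minimal sketch of the FICI computation defined above; the MIC values are hypothetical, chosen only so that the index lands on the 0.35 reported for the A. niger combination. By the common convention, FICI of 0.5 or below is read as synergy.

def fici(mic_a_alone, mic_a_combo, mic_b_alone, mic_b_combo):
    """Fractional inhibitory concentration index for a checkerboard
    assay: FIC(A) + FIC(B), each the MIC in combination divided by
    the MIC alone. FICI <= 0.5 is commonly read as synergy."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# Hypothetical MICs chosen so the index lands on the reported 0.35:
print(round(fici(mic_a_alone=8.0, mic_a_combo=1.0,
                 mic_b_alone=16.0, mic_b_combo=3.6), 2))  # -> 0.35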
Time-Kill Curve
A time-kill curve was investigated to estimate the optimum time required to inhibit the bacterial vegetative cells. Fungal secondary metabolites combined with the dicationic pyridinium iodide compound (at the FIC and FICI values of each) were added, one combination at a time, to 10 mL of Mueller-Hinton broth containing 1 × 10⁶ CFU/mL of bacterial cells. Aliquots were withdrawn to assess bacterial growth at different incubation times (0, 2, 4, 6, 8, 12 and 24 h) at OD 600 nm [37].
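One simple way to extract an inhibition time like the 6 h and 4 h figures reported earlier is to find the first sampling point at which OD600 no longer exceeds the inoculum reading. The sketch below does this under that assumption; the readings and the 5% tolerance are hypothetical choices, not the study's stated criterion.

def inhibition_time(timepoints_h, od600, tolerance=1.05):
    """Return the first sampled time at which OD600 no longer exceeds
    the starting value by more than 5%, taken here as the point where
    growth is inhibited; None if growth continues throughout."""
    baseline = od600[0]
    for t, od in zip(timepoints_h[1:], od600[1:]):
        if od <= baseline * tolerance:
            return t
    return None

# Hypothetical readings at the sampling times used in the assay:
times = [0, 2, 4, 6, 8, 12, 24]
treated = [0.10, 0.14, 0.10, 0.09, 0.08, 0.08, 0.07]
print(inhibition_time(times, treated))  # -> 4 (hours)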
Reactive Oxygen Species (ROS) Study
The reactive oxygen species (ROS) generation assay was performed according to Almotairy et al. [38] and Bhuvaneshwari et al. [39] using 2′,7′-dichlorofluorescin diacetate (DCFH-DA) dye, by comparing the extracellular ROS of the treated and control bacterial cells.
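A minimal sketch of the dose-response read-out this assay produces, expressing treated-cell DCF fluorescence as fold change over the untreated control; all readings are hypothetical placeholders.

# Hypothetical DCF fluorescence readings (arbitrary units):
control_fluorescence = 120.0
treated = {0.5: 150.0, 1.0: 210.0, 2.0: 340.0}  # {multiple of MIC: reading}

for conc in sorted(treated):
    fold = treated[conc] / control_fluorescence
    print("%.1f x MIC: %.2f-fold ROS increase over control" % (conc, fold))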
Conclusions
In the current study, two endophytic fungal strains with antimicrobial activities were isolated from Mangifera Indica roots and identified as Aspergillus niger MT597434.1 and Trichoderma lixii KU324798.1. A dicationic pyridinium iodide compound was synthesized and then evaluated for its potential synergistic effect with the extracted fungal crude extracts. The molecular modeling study revealed that the synthesized dicationic pyridinium iodide compound and the extracted fungal secondary metabolites showed promising inhibitory effects against the SHV-1 enzyme. The combination of A. niger and T. lixii secondary metabolites with the dicationic pyridinium iodide compound showed a synergistic effect against K. pneumoniae. The combined drugs inhibited bacterial growth after 6 and 4 h, respectively, through cell wall breakage and cell deformation with leakage of intracellular components and increased ROS production, which led to bacterial cell death. This study proved the importance of combining fungal secondary metabolites with synthetic drugs against multi-drug resistant microbial cells through several modes of action, which may pave the way to more readily available naturally derived options. | 2023-03-12T15:48:11.492Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "2f897ca01590e5da2383ac3a691b8fb108e281ff",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/28/6/2434/pdf?version=1678183268",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c22c04f7b077dd826368de12c875b299d4c05e17",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
53001471 | pes2o/s2orc | v3-fos-license | Meals, Preparation Environment and Hands of Food Handlers - Microbiological Status in Hospital Kitchens
The aim of this study was to assess the microbiological conditions of ready-to-serve meals, the preparation environment and the hands of food handlers at two public hospitals located in São Paulo State, Brazil. A total of 480 samples of food, equipment, utensils, drains and hands of the staff were evaluated from three kitchen sectors of the two hospitals. Total coliforms were found in 116 (24.17%) samples, Escherichia coli in nine (2.08%) samples and coagulase-positive Staphylococcus in 17 (3.54%) samples; two of these strains carried genes encoding classic enterotoxins. Coagulase-negative Staphylococcus (CNS) occurred in 98 (20.41%) samples, among which 19 genes encoding classic enterotoxins were detected, and Listeria monocytogenes occurred in four (0.83%) samples. Salmonella was not detected. The microbiological quality of most samples evaluated was considered satisfactory; however, the presence of L. monocytogenes and other microorganisms, even at low frequency and with low counts, represents a risk of cross contamination of the food items, which can transmit pathogens to the patients, as well as of biofilm formation. The great concern is that Listeria and CNS are not included in the sanitary microbiological standards for foods in Brazil.
Introduction
Enteric pathogens are harmless to most of the population; nevertheless, they may cause illness and even death in susceptible individuals, particularly those who are immunocompromised. A total of 1022 outbreaks of nosocomial infections in the United States, United Kingdom, France, Canada, Germany, the Netherlands and Spain were reported from 1966 to 2002, and only 3.32% of these were identified as foodborne diseases [1].
However, the data in the literature do not reveal the true incidence of foodborne diseases in hospital units, since most cases are not reported. The main factors that contribute to the occurrence of foodborne diseases are poor personal hygiene habits of the food handlers, the cooking and storage of food at inappropriate temperatures, the acquisition of raw materials from unreliable sources and the use of poorly sanitized equipment. Since patients have a greater risk of becoming ill when exposed to potential foodborne pathogens, and given that food services need to provide a wide variety of foods, it is essential that appropriate food handling practices are maintained [2].
The aim of this study was to evaluate the hygiene-sanitary quality of food prepared in two kitchens at public hospitals in São Paulo State, Brazil, as well as to study the dynamics of the contamination of ready-to-serve meals from utensils, equipment, the environment and the food handlers involved in the process.
Hospital kitchens
Sample collections were carried out at two public hospitals. Hospital A (HA) had 467 beds and served approximately 2000 meals/day; Hospital B (HB) had 318 beds and served around 1000 meals/day. At each hospital, three specific areas of food preparation were assessed:
• Milk dispensary: an isolated area, with access restricted to staff using special clothing and hand sanitation. Utensils and equipment were restricted to this environment;
• General kitchen: the meals for patients without dietary restrictions, staff and students were prepared here. At HA the equipment and utensils were for exclusive use in this area, while at HB they were shared with the Special Diet Kitchen. The salads served at HA were acquired ready-to-serve and only divided into portions in this environment, while at HB the salads were sanitized, prepared and divided into portions in this environment; and
• Special Diet Kitchen, where the meals were prepared for patients with some dietary restriction (diabetic, hypertensive, etc.).
Study protocol
A prospective study was carried out over 10 months, with a monthly collection at each hospital, totalling 240 samples per hospital and 480 samples overall. Around 100 mL or 100 g of each food product (meat, rice, soup, beans, chicken meat, potato and other cooked food available for patients) was collected. For the equipment and utensils, a smear of the surface area delimited by moulds (templates) was carried out with swabs, which were transferred to tubes containing 10 mL of Letheen broth. Smears were collected from the drains using sterile sponges soaked in 10 mL of Letheen broth, which were then transferred to Whirl-Pak® bags containing 100 mL of Letheen broth.
The area of the staff's hands was measured as shown in Figure 1, and the hands were rinsed for around 1 min in Whirl-Pak® bags containing 100 mL of Letheen broth. At each visit to the Milk Dispensary, a sample of milk or substitute in a feeding bottle, two swabs of utensils, two swabs of equipment and two rinses of the hands of the staff were collected. At the General Kitchen, a hot meal, a cold meal, two swabs of utensils, two swabs of equipment, two rinses of the hands of the staff and a swab of a drain were collected; in the Special Diet Kitchen the same protocol was followed, except for the cold meal (which was the same). At HB the General and Special Diet Kitchens shared the utensils and equipment, so the swabs were sampled from the same places. The samples were stored in isothermal plastic boxes containing recyclable ice and transported to the laboratory, where they were analysed on the same day as the collection.
Microbiological analysis
Salmonella spp. [3] and Listeria monocytogenes [4] were assessed in selected samples of food, utensils, equipment, drains and hands of the food handlers. These samples (with the exception of the drains) were also tested for coagulase-positive and coagulase-negative Staphylococcus [5,6], and for total coliforms and Escherichia coli by Petrifilm™ EC (3M®).
To test for Salmonella spp., variable volumes of 1% buffered peptone water were added to test bags or tubes, observing a ratio of 1:9 (sample/diluent). The same procedure was used to test for L. monocytogenes, in this case employing Listeria enrichment broth. For the enumeration of coagulase-positive and coagulase-negative Staphylococcus, total coliforms and E. coli, tenfold dilutions were carried out in 0.85% saline solution.
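For reference, a plate count obtained from such a tenfold dilution series is converted back to the concentration in the original sample as colonies multiplied by the dilution factor and divided by the volume plated. A minimal sketch with hypothetical numbers:

def cfu_per_ml(colonies, dilution_exponent, plated_volume_ml):
    """Back-calculate CFU/mL of the original sample from a plate
    count: colonies x dilution factor (10**exponent) / volume plated."""
    return colonies * 10 ** dilution_exponent / plated_volume_ml

# Hypothetical count: 45 colonies on the 10^-2 plate, 0.1 mL plated.
print("%.1e CFU/mL" % cfu_per_ml(45, 2, 0.1))  # -> 4.5e+04 CFU/mL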
All culture media were of the brand Difco®, with the exception of Petrifilm TM EC (3M®) and the agar Cromo Cen Listeria (base Agar Listeria Ottaviani Agosti; Biocen®).
Complementary analysis
Testing for the genes encoding staphylococcal enterotoxins A, B, C, D and E was carried out through the polymerase chain reaction (PCR). The Ilustra blood genomicPrep Mini Spin Kit (GE Healthcare®) was used for the extraction and purification of the genetic material, and the PCR reactions were performed with 500 nmol of each primer (Table 1), one unit of Taq DNA polymerase (Invitrogen®), 2 µmol of MgCl₂, 200 nmol of dNTP (Invitrogen®), 2.5 µL of PCR buffer (10×) (Invitrogen®), 3 µL of DNA sample and the quantity of nuclease-free water (USB®) required to give a final volume of 25 µL.
Table 1: Oligonucleotides and their properties used in the detection of coagulase-positive and coagulase-negative Staphylococcus genes, producers of toxins A, B, C, D and E.
For the amplification, the GeneAmp PCR System 9700 (Applied Biosystems®) was used, with the following program: 94 ºC/7 min (initial denaturation), followed by 30 cycles of 94 ºC/30 s, 50 ºC/30 s and 72 ºC/30 s, with a reduction of 1 ºC per cycle in the annealing phase until reaching 45 ºC. For the final extension, 72 ºC was applied for 5 min. In all of the reactions, the strains ATCC 13565 (sea), ATCC 14458 (seb), ATCC 19095 (sec), FRI 361 (sed) and ATCC 27664 (see) were used as positive controls, and ultrapure nuclease-free water was used as the negative control. Universal primers derived from 16S rRNA, forming a product of 371 bp, were used as the internal control [9].
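The touchdown program described above can be written out as an explicit step list. The sketch below mirrors the stated parameters (94 ºC/7 min start; 30 cycles with annealing dropping 1 ºC per cycle from 50 ºC to a 45 ºC floor; 72 ºC/5 min finish); the helper name and data layout are mine, not from the paper.

def touchdown_program(cycles=30, anneal_start=50, anneal_floor=45):
    """Build the step list for the cycling program described above:
    94 C / 7 min denaturation; then `cycles` rounds of 94 C / 30 s,
    annealing for 30 s dropping 1 C per cycle until the floor, and
    72 C / 30 s extension; finally 72 C / 5 min."""
    steps = [("initial_denaturation", 94, 420)]  # (step, temp C, seconds)
    for cycle in range(cycles):
        anneal = max(anneal_start - cycle, anneal_floor)
        steps += [("melt", 94, 30), ("anneal", anneal, 30), ("extend", 72, 30)]
    steps.append(("final_extension", 72, 300))
    return steps

program = touchdown_program()
print(program[1:7])  # first two cycles: annealing at 50 C, then 49 C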
The products of the PCR reactions were submitted to electrophoresis (Electrophoresis Power Supply Model EPD 600, Amersham-Pharmacia Biotech Inc.) in 1.5% agarose gel (Prodinasa®) in Tris-borate-EDTA 1X (TBE 1X) buffer and stained with 1 µL of SYBR® Safe (Invitrogen®) per 10 mL of agarose gel. Comparative analyses were carried out against a 50 bp ladder (LGC Biotecnologia®), and photographs of the DNA fragments were taken with an image analyzer (AlphaImager, AlphaEase FC Software, Alpha Innotech Corporation®). The amplification of the internal control verified the good performance of the PCR and the absence of inhibitory agents in the reaction and extraction.
The L. monocytogenes strains were tested with API Listeria® (bioMérieux®), serotyped [10] and typed through pulsed-field gel electrophoresis (PulseNet protocol) at the Pharmaceutical Sciences Department of the University of São Paulo (Faculdade de Ciências Farmacêuticas da Universidade de São Paulo).
All the results were interpreted according to the limits established by Brazilian legislation (Table 2). The foods contaminated by L. monocytogenes were classified as unsafe products.
Ethical aspects
Considering that the hands of the food handlers were rinsed with culture broth, before collecting the samples the subjects read the 'free and informed terms of consent', which highlighted the test conditions and the physical risks to which they would be subjected by participating in the project. After the reading and the clarification of any queries, if the handler agreed to participate in the study, the terms were signed by the food handler and by the researcher. The University's Ethics Committee approved this procedure.
Results and Discussion
Total coliforms were found in 116 (24.17%) samples. At HA, 52.58% of the samples were contaminated; the Milk Dispensary accounted for 13.79% of them, in the range 1.0×10⁰ to 8.8×10³ CFU/mL or cm². One feeding bottle, containing 1.3×10² CFU/mL of total coliforms, was inappropriate [11] for consumption by premature babies or by children under and over one year of age, since it represents an infant food. At the General Kitchen, 19.83% of the samples showed 1.0×10¹ to 1.1×10⁴ CFU/g or cm², and at the Special Diet Kitchen 18.97% of the samples showed 7.0×10⁰ to 7.0×10³ CFU/g or cm². At the HB Milk Dispensary, 8.62% of the samples were contaminated with 2.0×10⁰ to 2.0×10³ CFU/mL or cm²; at the General Kitchen, 11.21% showed <1.0×10¹ to 2.3×10³ CFU/g or cm²; at the Special Diet Kitchen, 7.76% of the samples showed 1.0×10¹ to 9.2×10³ CFU/g; and among the utensils and equipment used in both the General and Special Diet Kitchens, 19.83% showed 1.0×10⁰ to 1.7×10⁴ CFU/cm². Nine (2.08%) samples were contaminated by E. coli, none of which originated from the Milk Dispensaries of HA or HB. At HA, the General Kitchen accounted for 33.33% of these samples, with counts of <1.0×10¹ to 4.0×10¹ CFU/g, and the Special Diet Kitchen for 22.22%, with 1.1×10¹ to 2.0×10¹ CFU/g; one of the latter, a hot meal, was inappropriate for consumption, since it was contaminated with 2.0×10¹ CFU/g of E. coli [11]. Furthermore, 44.44% came from the utensils and equipment used in the General and Special Diet Kitchens at HB, with 2.0×10⁰ to 2.5×10¹ CFU/cm².
The results obtained for the total coliform and E. coli counts of the samples collected from the hands of the food handlers reveal that good hand hygiene practices were adopted at both units. In another study, of 180 samples analyzed, 8% were contaminated with E. coli [12].
Coagulase-positive Staphylococcus occurred in 17 (3.54%) samples, two strains of which carried genes encoding classic enterotoxins. At HA, 12.5% of these samples came from the hands of food handlers at the Milk Dispensary (1.0×10¹ to 3.0×10² CFU/cm²), 18.75% from meals and the hands of food handlers at the General Kitchen (1.0×10¹ to 1.0×10² CFU/g or cm²) and 18.75% from the Special Diet Kitchen (9.5×10¹ to 1.1×10² CFU/g or cm²).
At HB, of the samples contaminated by coagulase-positive Staphylococcus, 12.5% originated from the Milk Dispensary (the hand of a food handler, with 1.0×10² CFU/cm²), 12.5% from the General Kitchen (<1.0×10² to 1.0×10² CFU/g or cm²), 18.75% from the Special Diet Kitchen (4.3×10¹ to 2.0×10² CFU/g or cm²) and 6.25% from the utensils and equipment used in the General and Special Diet Kitchens (a piece of equipment with 4.3×10³ CFU/cm²).
The strains carrying genes encoding classic enterotoxins were detected on a blender (sea and sec) from the General Kitchen of HA and on the hands of a food handler (seb and sec) from the General Kitchen of HB.
Regarding the presence of coagulase-positive Staphylococcus, a low frequency of this microorganism was found, and when it was present it did not reach a significant count, considering national legislation [11] and the number of viable cells required for the production of toxins, which is over 10⁵ CFU/g of food [13].
Other authors, who evaluated 70 samples of salads to be served to hospitalized individuals in Turkey, obtained results of greater concern, with eight (11%) of the samples contaminated by coagulase-positive Staphylococcus at counts ranging from 1.0×10³ to 1.0×10⁴ CFU/g [12]. Coagulase-negative Staphylococcus (CNS) occurred in 98 (20.41%) samples, among which 19 genes encoding classic enterotoxins were detected (Table 3). At HA, 14.17% of the samples contaminated with CNS came from the Milk Dispensary, none of them feeding bottles, with counts of 3.1×10¹ to 1.3×10⁴ CFU/cm². The General Kitchen accounted for 18.9% of the contaminated samples (1.8×10¹ to 1.6×10⁵ CFU/g or cm²) and the Special Diet Kitchen for 18.11% (5.1×10¹ to 6.8×10⁴ CFU/g or cm²), totalling 51.18% of the samples contaminated by coagulase-negative Staphylococcus. At HB, 14.96% of the contaminated samples originated from the Milk Dispensary; as observed at HA, none of them were feeding bottles, and the counts ranged from 1.1×10¹ to 1.4×10⁴ CFU/cm². The General Kitchen accounted for 11.02% of the samples (<1.0×10² to 4.1×10⁴ CFU/g or cm²), the Special Diet Kitchen for 11.81% (3.1×10¹ to 5.0×10⁴ CFU/g or cm²), and the utensils and equipment used in the General and Special Diet Kitchens for 11.02% (3.3×10¹ to 1.4×10⁴ CFU/g or cm²).
The high counts of coagulase-negative Staphylococcus should not be disregarded, since these microorganisms are potential producers of staphylococcal enterotoxins (SE). This is of great concern, especially because this pathogen is not included in the national legislation [11] for hospital food, staff hands or the kitchen environment, which hinders the implementation of corrective/preventive actions. However, the production of toxins was not tested in this study.
Listeria monocytogenes occurred in four (0.83%) samples. It was detected in a drain at HA (serotype 1/2b, 3b, 7), and in two drain samples (serotypes 4a, 4c and 1/2b, 3b, 7) and on a piece of equipment (a blender; serotype not identified) at HB. In addition, a piece of equipment (a vase) at HA and a drain at HB were contaminated with L. innocua.
No ready-to-serve meal was contaminated by L. monocytogenes. In another study, 29 of 950 sandwiches prepared in a hospital in the United Kingdom were contaminated by L. monocytogenes, and for one sample the count was 1.2×10³ CFU/g [14].
The detection of Listeria on the equipment and in the environment is of great concern, especially because the absence of this pathogen is not required by the national legislation (Table 2) for the hospital kitchen environment. An improvement in the sanitation and disinfection of the equipment and drains is strongly recommended, since the presence of L. monocytogenes and its marker (L. innocua) is unacceptable, particularly in an environment that prepares meals for hospitalized individuals.
No sample was contaminated by Salmonella spp. The absence of pathogens in the final meals, as well as the low counts of E. coli, detected only in the cold meals and within the standards established by legislation [11], leads to the conclusion that the units exercise good hygiene-sanitary control over their operations. Thus, the microbiological quality of most samples evaluated was considered satisfactory; however, the presence of L. monocytogenes and other microorganisms, even at low frequency and with low counts, represents a risk of cross contamination of the food items, which can transmit pathogens to the patients, as well as the possible formation of biofilms.
Figure 1: Measurement of the area of the hands of food handlers (cm²).
| 2018-08-09T23:00:10.886Z | 2017-06-21T00:00:00.000 | {
"year": 2017,
"sha1": "1a23abcde7cc3512a89a1e249972602a1248f3fb",
"oa_license": "CCBYSA",
"oa_url": "http://www.heraldopenaccess.us/fulltext/Food-Science-&-Nutrition/Meals-Preparation-Environment-and-Hands-of-Food-Handlers-Microbiological-Status-in-Hospital-Kitchens.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1a23abcde7cc3512a89a1e249972602a1248f3fb",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
109801431 | pes2o/s2orc | v3-fos-license | THE FRENCH CONNECTION: A MULTI-LINGUAL LITERATURE REVIEW OF OHS IN THE SMALL BUSINESS SECTOR
There is growing recognition that Occupational Health and Safety (OHS) regulations and initiatives applied in the small business sector are frequently ineffectual. However, most of the discourse on OHS in the small business sector is ethnocentric, with little or no insight into the relationship between the Anglo and French literature. This has meant that the models of OHS intervention and prevention adopted by and/or adapted to the New Zealand small business sector are those that have been, by and large, written up in English. Excluding OHS intervention and prevention models or research on the basis that they are reported in a language other than English creates the potential for a myopic view of the subject. Moreover, given that workplaces are becoming more ethnically diverse, it is imperative that OHS models and research located in non-English-speaking discourses are given due consideration. The purpose of this paper, therefore, is to provide a multilingual bridge between the English-speaking and the French-speaking research. In particular, the OHS literature written in French, English and other languages will be reviewed in order to identify the major determinants of OHS risks in small businesses, thus providing a starting point for more comprehensive, preventative programmes aimed at all small businesses.
Introduction
The weight of small and medium-sized enterprises (SMEs) in the economy has continued to grow since the end of the 1970s. On average, more than 50% of the workforce in industrialised countries is employed by these kinds of organisations. As a result of large companies outsourcing some of their activities, and of a creed that 'small is beautiful', these organisations are seen as flexible and innovative in a context of international competitiveness.
Over the same period, efforts with regard to occupational health and safety (OHS) risk prevention increased, and the management of these risks in such organisations became a major concern for prevention services.
In 1996, the Health and Safety Executive declared it almost impossible to improve health and safety at work in British companies without taking into account the specific issue of small and medium-sized enterprises (SMEs). The fact that there is clearly a gap between these organisations and large organisations in terms of occupational injuries, diseases and exposure of workers to OHS risks cannot, indeed, go unnoticed. Across all the member states of the European Union in 2000 (European Commission, 2003), companies with fewer than 250 employees accounted for 77.7% of occupational injuries and 81% of fatal injuries. In France, 98% of occupational injuries and 40% of occupational diseases occurred in organisations with fewer than 200 employees (DARES, 2004). This higher accident rate in SMEs, and the over-exposure of workers to delayed risks such as dangerous chemical agents and carcinogenic, mutagenic and reprotoxic (CMR) products, have caused a strong growth of occupational diseases in these organisations and are thus a main concern for the institutions of prevention.
Paradoxically enough, while SMEs became a great focus of attention as an object of study in the sciences of management in the 1980s (Julien, 1997), the problems of safety and health in these organisations remained a marginal subject until very recently. Hasle and Limborg (2006), in a review of the literature on OHS prevention in SMEs, counted 284 relevant references, of which 184 were in English, and the vast majority of these references (71%) were published between 1995 and 2004. Although the literature on the subject is not very abundant, it nonetheless cannot be said that there is a deficit of scientific research. It appears, however, that there are few French studies and none explicitly referring to the Anglo-Saxon literature and, more particularly, to New Zealand and Australian research. Moreover, as has been noted (Favaro, 1996; Hasle and Limborg, 2006), research on health and safety in SMEs has suffered from many repetitions and antagonisms. This article thus aims to present the problems SMEs must face with regard to OHS in the international context of the growth of these companies. It also aims to classify the notions and concepts which can constitute a theoretical basis for future empirical research as well as for the implementation of suitable preventive actions.
The research problem
The problems SMEs have to face with regard to OHS issues are interdependent with their economic contexts. This first part presents, on the one hand, the main causes that have led to the growth of SMEs in industrialised countries, as well as the features that mainly characterise them, and, on the other, their poor OHS performance.
The authors talk about a rebirth of SMEs from the 1970s (Julien, 1997). This rebirth took place in industrialised countries, but it also extends to the ex-socialist countries in transition towards the market economy (Koradecka, 2001), which for a long time privileged large organisations, and to the developing countries, whose economies are largely based on the informal sector. In the United States and Japan, more than two-thirds of employees are employed by organisations of fewer than 250 people. In Europe, SMEs represented 99.8% of all organisations in the 25 member states of the European Union in 2003 (EC, 2002), that is to say, two-thirds of the workforce. Refining these figures by branch of industry, manufacturing represents 10% of total organisations and 14% of total employment, the construction sector represents 13% of total organisations and 9% of total employment, and the retail sales sector represents 20% of total organisations and 14% of total employment.
With regard to the commercial non-financial economy, SMEs could generate more than half of the total gross domestic product (GDP), broken down as follows: 'micro-businesses' 20.5%, small companies 19.1% and medium-sized companies 17.8% (EC, 2002). In France, independent SMEs employing 0 to 249 people represented 55% of the total workforce in 2004 (employees and self-employed workers). Two-thirds of the workforce are employed in SMEs of fewer than 20 people, contributing to 66% of the GDP.
The reasons for this rebirth result, first of all, from a questioning of what had founded the myth of the large organisation: economies of scale and the effects of experience and learning. Until very recently, large organisations were regarded as the economic model to attain, and SMEs were seen as small 'large companies' destined to grow. The 1970s and the oil crises modified the perception one had of this model.
The reasons are, however, not only economic, but also political and sociological.
Facing the employment crisis, the creation of new organisations is a belief one finds in many industrialised countries, and the creation of small organisations was, in this context, supported by legislative easing and suitable financing. However, as will be seen later, work contracts in small organisations are more precarious than in large ones. On a sociological level, the questioning of large organisations, arising from the search for a more convivial work environment, can also explain this passion for these organisations (Torres, 1999).
It would, however, be debatable to draw up a report based only on the model of 'small is beautiful'. Many authors are severe about small organisations and curb the enthusiasm about their size. They are economically vulnerable and depend too much on their environment: a minor event for a large company will become a considerable hindrance or an important problem in an SME (the loss of a customer who represents most of the sales turnover, for example). This is the case for SMEs sub-contracting to a large organisation. This economic vulnerability therefore plays an important role in the level of prevention of OHS hazards.
SMEs show very different characteristics according to their contexts, raising problems and debates over the definition of the object of study. Traditionally, one distinguishes quantitative from qualitative definitions.
When we talk about SMEs, the first criterion to be taken into account is the independence of the company. A small establishment controlled by a large company is not an SME (Julien, 1997). The problem particularly lies in certain forms of subcontracting.
The second criterion to be taken into account is the size of the workforce or the sales turnover. The notion of size can be very different according to the location of the company: fewer than 499/500 employees in some countries, like the United States and France, and fewer than 100 people in other countries, such as Austria and Norway.
In a recommendation dated May 6th, 2003, the European Union provides a quantitative definition of SMEs and classifies them according to sales turnover and workforce size: an SME is a company whose workforce is lower than 250 people and whose sales turnover is lower than 50 million euros. This quantitative definition proves insufficient for capturing the complexity of SMEs as an object of research, and a certain number of authors have highlighted qualitative criteria that refine the quantitative definitions mentioned above. Julien (1997) synthesised them into six criteria (Table 1), among which:
• a high degree of employee polyvalence;
• the implementation of an intuitive strategy, owing to the short-sightedness of SMEs, whose visibility is reduced in the long term;
• an internal information network that gives way to an informal mode of communication;
• an external information network made up of market contacts, e.g. main clients.
These characteristics thus highlight as many criteria of reliability as of vulnerability inherent in SMEs when facing hazards, and they must be taken into account to implement suitable preventive actions.
Concerning the vulnerability of these organisations, the statistics on occupational diseases and injuries reveal a weak performance of SMEs compared to large organisations (Favaro, 1996; Lamm, 1999; Walters, 2001; Fabiano, 2004; Sorensen and Hasle, 2007). While all the authors admit that it is difficult to establish comparisons between large and small organisations (Sorensen and Hasle, 2007), the international and French studies all report a higher risk of serious occupational injuries and fatal injuries in SMEs. The European and French statistics highlight the relation between the size of the organisation and the frequency of occupational injuries and fatal injuries: the more the size decreases, the higher the frequency rate, except for very small companies, which show a lower frequency rate of occupational injuries but a higher percentage of fatal injuries. This is true for every country or group of countries considered.
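The frequency rates referred to in these statistics are conventionally expressed as lost-time injuries per million hours worked (the French 'taux de frequence'). A minimal sketch with hypothetical figures, included only to make the unit concrete:

def frequency_rate(lost_time_injuries, hours_worked):
    """Conventional injury frequency rate: lost-time injuries per
    million hours worked (the French 'taux de frequence')."""
    return lost_time_injuries * 1_000_000 / hours_worked

# Hypothetical comparison between a small and a large organisation:
print(frequency_rate(3, 60_000))       # SME of ~30 workers -> 50.0
print(frequency_rate(40, 2_000_000))   # large firm -> 20.0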
One notes, however, significant differences between studies in the risks identified. While the risks related to handling and to exposure to chemicals are more often highlighted in small organisations, it seems that these organisations are less subject to psychosocial risks. This analysis is found in both the English and the French studies on the subject, according to the ISAST.

Prevention of risks in SMEs: internal and external factors

The approach to the prevention of risks in SMEs can be considered from both the internal and the external factors of the organisation. The comparison of these various factors makes it possible to draw up a general representation of the determinants of the level of prevention of occupational hazards in SMEs.
As seen in a preceding paragraph, SMEs are strongly conditioned by the actors of the organisation (the manager and the employees), but they also largely depend on their economic, social and legal environment. Research concerned with the undertaking and implementation of measures related to occupational hazards in SMEs first focused on the actors of the organisation, more particularly on the manager, before highlighting the links between the level of prevention and the environment of the company.
The main focus of attention when considering the internal factors with regard to prevention is the manager. As seen previously, the manager's personality plays a key role in the management of the organisation. With regard to health and safety at work, the manager's perception of hazards, as well as the responses they bring, is at the core of every author's preoccupation. Eakin (1992) carried out one of the major studies on the matter. The first observation is that the majority of SME managers invest little in the management of health and safety at work. Most of the time, they consider that occupational hazards are related to the behaviour of employees and, more particularly, to their lack of prudence vis-a-vis certain substances or dangerous situations. However, this attitude of the employer is not monolithic, and the results of the interviews reveal that the managers' positions hinge around two very contrasting poles.
A minority pole (1/5 of the leaders questioned) includes owners who state that they make considerable efforts to take health and safety at work into account and claim they do not hesitate to take action against illegal or dangerous behaviour. This is the so-called 'coming down hard' approach. A second pole, the most widespread, considers the prevention of risks as the worker's problem. This is the 'leaving it up to the workers' approach.
Knowing that this stance relates to two-thirds of the managers interviewed, Eakin tried to understand their reserved behaviour and reluctance to handle OHS matters. Two main explanations may justify this behaviour.
One explanation lies in the way social relations are woven in SMEs, which appears completely different from what may be observed in large organisations. The employer-employee relationship has a family dimension and is less polarised than in large organisations.
The other explanation lies within the perception of the manager's responsibility, which could be considered either as personal or as merely bureaucratic. Employers are barely aware, if at all, that they are legally responsible, and they are not aware of their duties as far as OHS is concerned. They tend to think they cannot call some of their employees' practices into question without casting doubt over their skills. Therefore, as stated by the author, the peculiarity of the SME context calls for a management of occupational health and safety in which the employee is considered as being at the core of the problem, perhaps the problem in itself.
This vision of the small organisation of fewer than 50 people is confirmed by a more recent survey by Champoux and Brun (2000), who set out to describe employers' representations of OHS management in SMEs of fewer than 50 employees. This study made it possible to make three principal observations.
The first is that, contrary to widespread belief, there exists a management of occupational hazards in SMEs, even if it is not at the level expected by the legal authorities.
The second observation concerned the managers' perception of the level of prevention in their company. On the whole, managers say they are satisfied with the level of prevention in their company; yet, after cross-checking all the information, the survey shows that managers are cut off from the prevention supply and do not know the statutory regulations. The third echoes the conclusions of Eakin (1992): managers blame their employees for two-thirds of the accidents.
Taking into account the internal factors of SMEs, and more particularly the head of the undertaking, is thus an important avenue of research. It cannot, however, exclude the organisation's environmental factors, which play a determining role in the level of prevention.
The approach by the external factors is found in the research of Favaro (1997) and Walters.
Far from denying the role of decision-making, Favaro starts from the hypothesis that prevention practices depend mostly on factors other than those linked to OHS, such as SMEs' independence, their financial and legal structure, but also their economic situation and the links they may have with quality management. A very detailed questionnaire taking into account numerous variables allowed Favaro to obtain a very accurate overview of the situation in SMEs and their dealings with occupational health and safety risks.
The first observation made is that of a chasm between organisations that have implemented various OHS management practices and those that show a total lack of preventive action. The most widespread group is that of inactive organisations. The effect of size, combined with organisational inactivity, clearly shows that inactivity is the rule for organisations of fewer than 40 workers, and that the situation improves for organisations of about one hundred employees.
As for the link between an organisation's degree of independence and its inactivity with regard to hygiene and safety matters, the results of the study show that structural independence is closely correlated with a higher rate of inactivity. The contrary also holds true: the less independent a company is, the more active it is. Therefore, the degree of independence strengthens inactivity in terms of hygiene and safety, as demonstrated by Favaro.
"A company with a small workforce, technically non-complex and a fortiori structurally independent shows a configuration particularly unfavourable to the implementation of preventive actions" (Favaro, 1997: 79).
With regard to the relationship between the economic context of the organisation and its professional environment, the study shows that the more favourable the position of the organisation, the better the level of safety. The existence of a relationship with quality management also plays a significant role in an organisation's commitment to the process of prevention of occupational risks. This may be explained by the similarity of practices between the method of quality certification and the assessment of occupational risks. The authors, however, remain evasive about this last statement.
The general conclusion of the study is that the external determinants, which hold with the context of the company (the economic situation in particular), are more decisive than the internal factors (technology, profile of the leader, type of organisation) in understanding the various levels of the measures undertaken in OHS matters.
The determination of a level of risk prevention by contextual elements is an assumption also raised by Walters (2001), who, in a European investigation, presents the various approaches to prevention in SMEs.
After having pointed out the internal factors which support prevention, Walters presents the contextual elements which, according to him, play a fundamental role in exerting external pressure on SMEs: namely, the large organisations of the sector, the organisation's customers, the consumers, the prevention services, the health and safety consultants, and the wage earners' associations, which both support and put pressure on employers. He confirms research which had pleaded for an approach to organisations complementing the prevention services through the role of intermediaries. These act as translators of the regulation for the leaders of SMEs; it is in particular the case of the chartered accountants in the side-door approach (Lamm, 1997) and of the chambers of commerce as an actors' relay (Grosjean, 2003).
In addition, many preventive actions are based on the proximity between large organisations, which have already implemented tools and methods for the prevention of occupational hazards, and the small organisations. Walters (2001) gives examples of this type in various countries of the European Union, particularly in the United Kingdom, where a programme entitled "Good Neighbourhood" was launched in the 1990s by the Health and Safety Executive (HSE). Its aim was to encourage the large organisations to deal with their smaller counterparts by means of common seminars, sharing of experience and provision of expertise in this field.
This positive role of the large organisations towards the small ones partly confirms Favaro's analysis (1997). However, it is not shared by the whole community.
In France, the low level of compliance of subcontractors is deplored by many clients, particularly in the building industry. The latter report the little influence they have on the small organisations in a sector where a vast majority of activities are carried out through subcontracting. This observation is confirmed by many researchers who report the negative relation existing between the level of prevention of OHS risks in small organisations and their dependence with respect to large organisations (Mayhew and Quinlan, 1997). This deterioration of health and safety at work finds its source in the economic pressure which prevails in this type of relations: one of the main reasons for subcontracting is, indeed, the reduction of production costs.
The corporate network thus has a positive impact on the manager and on the level of prevention of OHS risks when the institutional context supports information exchanges. However, the dependence of subcontractors on their clients can be at the origin of a degradation of prevention when large companies outsource their risks.
The research presented above makes it possible to draw up a general representation of the determinants of health and safety at work in this type of organisation.
Internally, the manager, whose management profile is determined by their level of education, their expectations as a contractor, their insertion in a network and their financial resources, adopts an attitude which can range from the lowest level of conformity to the fullest (the most common attitude being to leave it to the employee) (Eakin, 1992; Lamm, 2001).
The attitude of the employees varies according to their qualifications, their skills and their employment situation. Precarious employment is a key factor in understanding the level of risk prevention, but this precariousness also depends on the economic context of the organisation (Quinlan, 1997).
In addition, the specific context of employer/employee social relations in small organisations must be taken into account and does not allow the same thinking on prevention as in large organisations. The positioning of employers with respect to their employees does not enable them to use the means of pressure available to the leader of a large organisation (Eakin, 1992).
Externally, a weak bond with the institutions and a relation of dependence on the organisation's customers can maintain a low level of prevention. In other cases, the relation of dependence can lead to an appropriation of the risk management measures implemented by the client, who decides the best way of taking OHS matters into account (Walters, 2001).
Herein, the legal, social and economic environments play a determining role and must therefore be taken into account in order to understand how they interact with these internal determinants. These interactions differ according to the "identity card" of the company (Favaro, 1997) (workforce, legal independence and the market in which it operates) and the attitude of the manager facing regulation (Lamm, 2001).
Conclusion
The different international research presented in this article gives an overview of the results which, independently of the economic and cultural contexts in which SMEs are embedded and develop, may be regarded as international factors in the level of prevention in SMEs. All the research carried out so far is nonetheless largely localised in the Anglo-Saxon countries and in Northern Europe, and it seems, in the light of the publications, that the international collaborations have been fertile.
The relative French isolation in this field and the scarcity of references on the subject should encourage a greater interest in international research. Moreover, all the research which has been carried out in France confirms the results of the international research (Favaro, 1997; ISAST, 2006). A recent qualitative research project, carried out during the assessment of an experimental preventive action by Martin and Guarnieri (2007), has confirmed in French SMEs the international invariants presented in this paper. The invariants are classified into five categories: the attitude of the manager, the perception of the role of the institutions dealing with prevention, the impact of the organisation's environment, and its stage of economic development. They should be used as a basis to develop other preventive actions which take into account the avenues opened by Lamm (1997), Hasle (2000), Antonsson (2001), and Walters (2001). | 2019-04-12T13:54:19.412Z | 1970-01-01T00:00:00.000 | {
"year": 1970,
"sha1": "7e03eb2b35b7ec70dd7443c0a647962c7338904f",
"oa_license": null,
"oa_url": "https://ojs.victoria.ac.nz/LEW/article/download/1649/1492",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ccf5ba67ab59cfdcd24b7fb6783baf36e99b15f5",
"s2fieldsofstudy": [
"Business",
"Linguistics"
],
"extfieldsofstudy": [
"Engineering"
]
} |
251287960 | pes2o/s2orc | v3-fos-license | Fungi associated with dead branches of Magnolia grandiflora: A case study from Qujing, China
As a result of an ongoing survey of microfungi associated with garden and ornamental plants in Qujing, Yunnan, China, several saprobic fungal taxa were isolated from Magnolia grandiflora. Both morphological study and combined SSU, LSU, ITS, tef1, and rpb2 multi-locus phylogenetic analyses (maximum-likelihood and Bayesian analyses) were carried out to identify the fungal taxa. Three new species are introduced in Pleosporales, viz., Lonicericola qujingensis (Parabambusicolaceae), Phragmocamarosporium magnoliae, and Phragmocamarosporium qujingensis (Lentitheciaceae). Botryosphaeria dothidea, Diplodia mutila, and Diplodia seriata (Botryosphaeriaceae) are reported from Magnolia grandiflora for the first time in China. Angustimassarina populi (Amorosiaceae) is reported for the first time on M. grandiflora from China, and this is the first report of a member of this genus outside Europe. Shearia formosa is also reported for the first time on M. grandiflora from China.
Introduction
Discovering missing taxa in the fungal tree of life is one of the popular topics among taxonomists. Recent species estimation studies have predicted that tropical regions harbor higher fungal diversity than previously expected (Hawksworth and Lücking, 2017; Hyde et al., 2020). Several studies based on DNA sequence analyses have described numerous fungal species during the last decade from tropical countries such as Thailand and India (Chaiwan et al., 2020; Calabon et al., 2021; Rajeshkumar et al., 2021). A large number of fungal species have also been described from subtropical China, especially in the Yunnan and Guizhou provinces (Luo et al., 2018; Lu et al., 2021; Ren et al., 2021; Wang et al., 2021; Wijayawardene et al., 2021). These studies noted that a large number of fungal species await discovery in Southwestern China, including the Guizhou and Yunnan Provinces. Magnolia grandiflora (Southern magnolia) is an evergreen tree that is widely used as an ornamental plant in landscaping (Liu et al., 2019), and Magnolia is a fungal-rich host plant genus, with over 1,000 records of taxa listed in Farr and Rossman (2022). Seventy-two (72) records have been listed mainly from different substrates of Magnolia alba, M. delavayi, M. denudata, and M. grandiflora from China (Farr and Rossman, 2022). Recently, Wanasinghe et al. (2020) studied fungi associated with Magnolia species in the Kunming Botanical Garden and predicted a rich fungal diversity.
In this study, we collected ascomycetous fungi (both sexual and asexual morphs) that occur on different substrates of M. grandiflora from Qujing Normal University garden, Qujing, Yunnan province, China.
Materials and methods
Sample collection, isolation, and identification

Samples were collected from aerial and ground litter (i.e., leaves, branches, and stems) of Magnolia grandiflora from September 2019 to June 2021 from Qujing Normal University garden in Yunnan, China. The specimens were stored in paper bags and transferred to the laboratory. The samples were examined with a stereomicroscope, and microscopic images of the samples were taken using a Canon EOS700D digital camera (Canon Inc., Ota, Tokyo, Japan) mounted on a Nikon ECLIPSE Ni (Nikon Instruments Inc., Melville, NY, United States) compound microscope. Microcharacters were observed using a digital camera fitted onto a Nikon ECLIPSE 80i compound microscope. Measurements were performed with Tarosoft (R) Image Frame Work (v.0.9.7). More than 20 asci and ascospores (in sexual fungi) and more than 30 conidia and conidiogenous cells (in asexual fungi) were measured. Plates were prepared using the Adobe Photoshop CS6 (Adobe Systems, San Jose, CA, United States) software. Single-spore isolation was carried out as described in Chomnunti et al. (2014), using water agar as the medium. A spore suspension was prepared from conidiomata or ascomata, and the suspension was then transferred with a sterile pipette onto the surface of a Petri dish with water agar. Germinated spores (approximately 12 h later) were transferred to fresh potato dextrose agar (PDA) medium for purification. Dried specimens and living cultures were deposited at the herbarium and culture collection of Guizhou Medical University, Guizhou Province, China.

TABLE 1 | Primers used in this study.

Gene region | Primers | References
ITS | ITS5/ITS4 | White et al. (1990)
LSU | LR0R/LR5 | Vilgalys and Sun (1994)
SSU | NS1/NS4 | White et al. (1990)
tef1 | EF1-728F/EF-2; 983F/2218R | Rehner (2001)
rpb2 | fRPB2-5F/fRPB2-7cR | Liu et al. (1999)
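A measurement summary of the kind described above (at least 20 asci/ascospores in sexual morphs, at least 30 conidia/conidiogenous cells in asexual morphs) is easy to make reproducible with a short script. Below is a minimal, hypothetical Python sketch; the CSV file name and column names are assumptions introduced for illustration, not part of the original workflow. It enforces the replicate thresholds from the text and reports the mean, standard deviation, and range for each dimension.

    import csv
    import statistics

    def summarize(values, min_n):
        """Return n, mean, sample SD, and range for one spore dimension.

        Raises if fewer than min_n measurements were taken (the protocol
        requires >= 20 asci/ascospores and >= 30 conidia).
        """
        if len(values) < min_n:
            raise ValueError(f"need at least {min_n} measurements, got {len(values)}")
        return {
            "n": len(values),
            "mean": statistics.mean(values),
            "sd": statistics.stdev(values),
            "min": min(values),
            "max": max(values),
        }

    # Hypothetical input: one row per spore, columns "length_um" and "width_um".
    with open("conidia_measurements.csv", newline="") as fh:
        rows = list(csv.DictReader(fh))

    lengths = [float(r["length_um"]) for r in rows]
    widths = [float(r["width_um"]) for r in rows]

    print("length:", summarize(lengths, min_n=30))
    print("width:", summarize(widths, min_n=30))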
DNA extraction, polymerase chain reaction amplification, and sequence analysis
The total genomic DNA of the microfungi was extracted from fresh mycelia grown on PDA at 25–27 °C using the Biospin Fungus Genomic DNA Extraction Kit (BioFlux®, Hangzhou, People's Republic of China) according to the manufacturer's instructions (Dai et al., 2019).
The primers used for amplification are listed in Table 1. PCR amplification conditions followed Dai et al. (2017). PCR products were sent for sequencing to Shanghai Sangon Biological Engineering Technology & Services Co. (Shanghai, People's Republic of China). All newly generated sequences were deposited in GenBank, and accession numbers were obtained (Table 2).
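The locus-to-primer mapping of Table 1 can also be kept as a small data structure when assembling PCR worksheets or submission metadata. The Python sketch below is a hypothetical helper: only the published primer names from Table 1 are used, primer sequences are deliberately omitted, and the worksheet format is an assumption for illustration.

    # Primer pairs from Table 1, keyed by locus.
    PRIMERS = {
        "ITS": [("ITS5", "ITS4")],                          # White et al. (1990)
        "LSU": [("LR0R", "LR5")],                           # Vilgalys and Sun (1994)
        "SSU": [("NS1", "NS4")],                            # White et al. (1990)
        "tef1": [("EF1-728F", "EF-2"), ("983F", "2218R")],  # Rehner (2001)
        "rpb2": [("fRPB2-5F", "fRPB2-7cR")],                # Liu et al. (1999)
    }

    def worksheet(isolate: str) -> list[str]:
        """One PCR reaction line per primer pair for a given isolate."""
        return [
            f"{isolate}\t{locus}\t{fwd}/{rev}"
            for locus, pairs in PRIMERS.items()
            for fwd, rev in pairs
        ]

    for line in worksheet("GMBCC1177"):
        print(line)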
Sequencing and sequence alignment
Sequences generated from the different primers of non-translated loci and protein-coding regions were analyzed together with other sequences retrieved from GenBank (Table 2). Sequences with high similarity indices were determined by a BLAST search to find the closest matches with taxa in Dothideomycetes and from recently published data (e.g., Thambugala et al., 2015; Tibpromma et al., 2017; Wanasinghe et al., 2018, 2020; Hyde et al., 2020). Multiple alignments of all consensus sequences, as well as the reference sequences, were automatically generated with MAFFT v. 7 (Katoh et al., 2017) and improved manually when necessary using BioEdit v. 7.0.5.2 (Hall, 1999).
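As a rough illustration of how the per-locus alignment step could be automated, the following Python sketch shells out to a locally installed MAFFT binary with the --auto strategy, standing in for the MAFFT v. 7 server used in the text. The per-locus FASTA file names are assumptions.

    import subprocess
    from pathlib import Path

    LOCI = ["SSU", "LSU", "ITS", "tef1", "rpb2"]

    for locus in LOCI:
        fasta = Path(f"{locus}.fasta")            # consensus + GenBank reference sequences
        aligned = Path(f"{locus}.aligned.fasta")
        # "mafft --auto" picks an appropriate strategy (FFT-NS-2, L-INS-i, ...),
        # much like the MAFFT v. 7 web server; the alignment is written to stdout.
        with aligned.open("w") as out:
            subprocess.run(
                ["mafft", "--auto", str(fasta)],
                stdout=out,
                check=True,
            )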
Phylogenetic analyses
Analysis 1 (SSU, LSU, ITS, tef1, and rpb2 multi-sequence analyses of Amorosiaceae, Botryosphaeriaceae, Lentitheciaceae, Longiostiolaceae, and Parabambusicolaceae)

Single-locus datasets were examined for topological incongruence among loci for members of the relevant families. Conflict-free alignments were combined into the final multigene dataset using BioEdit and concatenated into a multi-locus alignment that was subjected to maximum-likelihood (ML) and Bayesian (BI) phylogenetic analyses. The CIPRES Science Gateway platform (Miller et al., 2010) was used to perform the RAxML and Bayesian analyses. ML analyses were performed with RAxML-HPC2 on XSEDE v. 8.2.10 (Stamatakis, 2014) using a GTR + I + G model with 1,000 bootstrap repetitions. Evolutionary models for the Bayesian analysis were selected independently for each locus using MrModeltest v. 2.3 (Nylander et al., 2008) under the Akaike Information Criterion (AIC) implemented in PAUP v. 4.0b10, and GTR + I + G was selected as the best-fit model for all three analyses. MrBayes analyses were performed setting GTR + I + G, 2 M generations, sampling every 100th generation, and ending the run automatically when the standard deviation of split frequencies dropped below 0.01 with a burn-in fraction of 0.25.
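The concatenation step — combining the conflict-free single-locus alignments into one multi-locus supermatrix, padding any taxon that lacks a locus with gaps — can be sketched as follows. This is a simplified Python stand-in for the manual BioEdit workflow described above; the file names and the minimal FASTA reader are assumptions.

    from collections import defaultdict

    def read_fasta(path):
        """Minimal FASTA reader: returns {taxon: sequence}."""
        seqs, name = {}, None
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line.startswith(">"):
                    name = line[1:].split()[0]
                    seqs[name] = []
                elif name:
                    seqs[name].append(line)
        return {k: "".join(v) for k, v in seqs.items()}

    def concatenate(paths):
        """Concatenate per-locus alignments; missing loci become gap runs."""
        per_locus = [read_fasta(p) for p in paths]
        taxa = sorted(set().union(*(d.keys() for d in per_locus)))
        supermatrix = defaultdict(str)
        for aln in per_locus:
            width = len(next(iter(aln.values())))  # rows in one alignment share a length
            for taxon in taxa:
                supermatrix[taxon] += aln.get(taxon, "-" * width)
        return supermatrix

    matrix = concatenate(["SSU.aligned.fasta", "LSU.aligned.fasta",
                          "ITS.aligned.fasta", "tef1.aligned.fasta",
                          "rpb2.aligned.fasta"])
    with open("combined.fasta", "w") as out:
        for taxon, seq in matrix.items():
            out.write(f">{taxon}\n{seq}\n")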
Analysis 2 (ITS and tef1 sequence analyses of Botryosphaeria sensu stricto)
The ITS and tef1 datasets were examined for topological incongruence among loci for selected members of Botryosphaeria. Conflict-free alignments were concatenated into a multi-locus alignment that was subjected to maximum-likelihood (ML) phylogenetic analysis. The CIPRES Science Gateway platform (Miller et al., 2010) was used to perform RAxML. ML analyses were performed with RAxML-HPC2 on XSEDE v. 8.2.10 (Stamatakis, 2014) using the GTR + I + G model with 1,000 bootstrap repetitions. A Bayesian analysis was performed using SYM + I + G for ITS and GTR + I for tef1 in the final command, with 1 M generations. Sampling was conducted every 100th generation, ending the run automatically when the standard deviation of split frequencies dropped below 0.01 with a burn-in fraction of 0.25.
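For readers who run these searches locally rather than on CIPRES, the RAxML settings described above correspond roughly to a standard RAxML v8 command line, shown below wrapped in Python. The GTRGAMMAI model and 1,000 rapid-bootstrap replicates mirror the text; the input alignment and run names are assumptions.

    import subprocess

    # RAxML v8 rapid bootstrap + best-tree search. "-f a" runs rapid
    # bootstrapping and a thorough ML search in a single pass.
    subprocess.run(
        [
            "raxmlHPC", "-f", "a",
            "-m", "GTRGAMMAI",      # GTR + gamma-distributed rates + invariant sites
            "-p", "12345",          # parsimony random seed
            "-x", "12345",          # rapid-bootstrap random seed
            "-N", "1000",           # 1,000 bootstrap replicates, as in the text
            "-s", "combined.phy",   # concatenated alignment (assumed file name)
            "-n", "botryo_its_tef1",
        ],
        check=True,
    )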
Analysis 3 (ITS and tef1 sequence analyses of Diplodia sensu stricto)
The ITS and tef1 datasets were examined for topological incongruence among loci for selected members of Diplodia. Conflict-free alignments were concatenated into a multi-locus alignment that was subjected to maximum-likelihood (ML) phylogenetic analysis. The CIPRES Science Gateway platform (Miller et al., 2010) was used to perform RAxML. ML analyses were performed with RAxML-HPC2 on XSEDE v. 8.2.10 (Stamatakis, 2014) using the GTR + I + G model with 1,000 bootstrap repetitions. A Bayesian analysis was performed using GTR + I + G for ITS and HKY + G for tef1 in the final command, with 1 M generations. Sampling was conducted every 100th generation, ending the run automatically when the standard deviation of split frequencies dropped below 0.01 with a burn-in fraction of 0.25.
Analysis 4 (ITS, LSU, SSU, and tef1 sequence analyses of Amorosiaceae)
The ITS, LSU, SSU, and tef1 datasets were examined for topological incongruence among loci for selected members of Amorosiaceae. Conflict-free alignments were concatenated into a multi-locus alignment that was subjected to maximum-likelihood (ML) phylogenetic analysis. The CIPRES Science Gateway platform (Miller et al., 2010) was used to perform RAxML. ML analyses were performed with RAxML-HPC2 on XSEDE v. 8.2.10 (Stamatakis, 2014) using the GTR + I + G model with 1,000 bootstrap repetitions. MrBayes analyses were performed setting GTR + I + G, 1 M generations, sampling every 100th generation, and ending the run automatically when the standard deviation of split frequencies dropped below 0.01 with a burn-in fraction of 0.25.
Analysis 5 (SSU, LSU, tef1, and ITS sequence analyses of Lentitheciaceae)
Raw sequences were assembled with SeqMan and subjected to BLAST searches in GenBank. The SSU, LSU, tef1, and ITS sequence data closely related to our taxa were retrieved from NCBI GenBank and are listed in Table 2. Single-gene sequence alignments were generated with the MAFFT v. 7 online program (http://mafft.cbrc.jp/alignment/server/) (Katoh et al., 2017). FASTA alignments were converted to PHYLIP and NEXUS formats with AliView 2.11. The single-gene datasets were examined for topological incongruence among loci, and the conflict-free alignments were concatenated into a multi-locus alignment that was subjected to ML and BI phylogenetic analyses. The CIPRES Science Gateway platform (Miller et al., 2010) was used to perform RAxML. ML analyses were conducted with RAxML-HPC2 on XSEDE v. 8.2.10 (Stamatakis, 2014) using the GTR + I + G model with 1,000 bootstrap repetitions. MrBayes analyses were performed setting GTR + I + G; two parallel runs were conducted using the default settings, six simultaneous Markov chains were run for 1 M generations, and trees were sampled every 100th generation. The run ended automatically when the standard deviation of split frequencies dropped below 0.01 with a burn-in fraction of 0.2. Phylograms were visualized with the FigTree v1.4.0 program (Rambaut, 2012).
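The MrBayes settings described in this paragraph (GTR + I + G, two parallel runs, six chains, 1 M generations, sampling every 100th generation, stopping once the average standard deviation of split frequencies falls below 0.01, burn-in fraction 0.2) correspond approximately to the MrBayes command block below, here written into the NEXUS alignment from Python so that MrBayes can run non-interactively. The NEXUS file name is an assumption.

    MRBAYES_BLOCK = """\
    begin mrbayes;
      lset nst=6 rates=invgamma;                [GTR + I + G]
      mcmcp nruns=2 nchains=6 ngen=1000000
            samplefreq=100
            relburnin=yes burninfrac=0.20
            stoprule=yes stopval=0.01;
      mcmc;
      sump;
      sumt;
    end;
    """

    # Append the command block to the NEXUS alignment (assumed file name),
    # then run: mb lentitheciaceae.nex
    with open("lentitheciaceae.nex", "a") as fh:
        fh.write(MRBAYES_BLOCK)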
Results

FIGURE 1 | RAxML tree based on a combined dataset of partial SSU, LSU, ITS, tef1, and rpb2 DNA sequence analyses. Bootstrap support values for ML and Bayesian posterior probabilities (BYPP) above the reporting thresholds are shown as ML/BI above the nodes. Type strains are in bold, and new isolates are in blue. The scale bar represents the expected number of nucleotide substitutions per site.
Analysis 1 (SSU, LSU, ITS, tef1, and rpb2 multi-sequence analyses): The Shearia formosa strain (GMBCC1172) isolated in this study nested in a well-supported clade (100% ML/1.00 BYPP) with the other isolates of S. formosa (MFLUCC 20-0017, MFLUCC 20-0018, and MFLUCC 20-0019) used by Wanasinghe et al. (2020) to describe the species, thereby confirming the identification of the studied species. The family Amorosiaceae is composed of four clades, which correspond to the known genera Alfoldia, Amorocoelophoma, Amorosia, and Angustimassarina. Our new strain, GMBCC1177, grouped with another 12 Angustimassarina strains with 97% ML and 1.00 BYPP statistical support.

Analysis 2 (ITS and tef1 sequence analyses of Botryosphaeria sensu stricto): The concatenated dataset (ITS and tef1 loci) contained 33 isolates, and the tree was rooted to Macrophomina phaseolina (CBS 227.33). The final alignment contained 761 characters used for the phylogenetic analyses, including alignment gaps. The RAxML analysis of the combined dataset yielded a best-scoring tree with a final ML optimization likelihood value of −1,861.242225. The matrix had 161 distinct alignment patterns, with 6.84% undetermined characters or gaps. Parameters for the GTR + I + G model of the combined amplicons were as follows: estimated base frequencies A = 0.210798, C = 0.292947, G = 0.259679, and T = 0.236576; substitution rates AC = 0.435178, AG = 1.502643, AT = 1.277448, CG = 0.440383, CT = 4.352625, and GT = 1; proportion of invariable sites I = 0.701032; gamma distribution shape parameter α = 0.87371. In the combined sequence data analyses of the ITS and tef1 loci, our new strain, GMBCC1179, clustered with 16 other strains of Botryosphaeria dothidea (Supplementary Figure 1). Two strains of Botryosphaeria auasmontanum (MFLUCC 15-0923 and MFLUCC 17-1071) also grouped within B. dothidea. However, the Botryosphaeria dothidea clade is not statistically well supported.
Analysis 3 (ITS and tef1 sequence analyses of Diplodia sensu stricto): The concatenated ITS and tef1 loci contained 67 isolates, and the tree was rooted to Lasiodiplodia lignicola (MFLUCC 11-0656). The final alignment contained 882 characters used for the phylogenetic analyses, including alignment gaps. The RAxML analysis of the combined dataset yielded a best-scoring tree with a final ML optimization likelihood value of −3,628.405258. The matrix had 321 distinct alignment patterns, with 12.65% undetermined characters or gaps. Parameters for the GTR + I + G model of the combined amplicons were as follows: estimated base frequencies A = 0.206552, C = 0.298475, G = 0.261435, and T = 0.233538; substitution rates AC = 1.148622, AG = 3.343902, AT = 1.039052, CG = 1.718372, CT = 4.817617, and GT = 1; proportion of invariable sites I = 0.441349; gamma distribution shape parameter α = 0.65846. In our analysis of selected Diplodia species, the new strain GMBCC1173 clustered with Diplodia mutila (MFLUCC 15-0918, CBS 230.30, CBS 112553, CBS 136014, and MFLUCC 15-0917). In particular, GMBCC1173 has a close phylogenetic affinity to MFLUCC 15-0917, which was introduced by Dissanayake et al. (2017) from Italy on Acer negundo. The two collections (GMBCC1175 and NW04) isolated in this study formed a basal terminal lineage in the Diplodia seriata clade that includes thirteen strains. This Diplodia seriata clade also did not receive strong phylogenetic support (Supplementary Figure 2).
Analysis 4 (ITS, LSU, SSU, and tef1 sequence analyses of Amorosiaceae): Twenty-five strains were included in the sequence analysis, comprising 2,106 characters with gaps. Single-gene analyses were carried out and compared for each locus to assess the topology of the tree and clade stability. Botryosphaeria dothidea (CBS 115476 and AFTOL-ID 946) was used as the outgroup taxon. The tree topology of the ML analysis was similar to that of the Bayesian analysis. The best-scoring RAxML tree, with a final likelihood value of −5,144.567766, is presented. The matrix had 247 distinct alignment patterns, with 18.63% undetermined characters or gaps. Estimated base frequencies were as follows: A = 0.243552, C = 0.248710, G = 0.270808, and T = 0.236930; substitution rates AC = 0.682136, AG = 1.407781, AT = 1.275132, CG = 0.809575, CT = 6.927148, and GT = 1; gamma distribution shape parameter α = 0.654596 (Figure 2). The family Amorosiaceae is composed of four clades, which correspond to the known genera Alfoldia, Amorocoelophoma, Amorosia, and Angustimassarina. Our new strain, GMBCC1177, grouped with other Angustimassarina strains with low statistical support values (Figure 2). However, the interspecific relationships of these Angustimassarina species have not received clear phylogenetic resolution. The strain GMBCC1177 showed a close phylogenetic affinity with Angustimassarina populi strains, which clustered together with A. arezzoensis (MFLUCC 13-0578) and A. sylvatica (MFLUCC 18-0550).
Analysis 5 (SSU, LSU, tef1, and ITS sequence analyses of Lentitheciaceae): Twenty-nine strains were included in the sequence analysis, comprising 3,086 characters with gaps. Single-gene analyses were carried out and compared for each locus to assess the topology of the tree and clade stability (Table 4). The new collections GMBCC1180 and GMBCC1041 clustered with the ex-types of P. hederae and P. platani with low bootstrap values (indicated in blue in Figure 3). The ex-types of P. hederae and P. platani lack ITS sequences in GenBank; thus, we suggest that including more gene regions of these two species will facilitate a better understanding of their segregation. However, here we follow the morphological evidence to introduce a novel species (i.e., P. magnoliae) (see below under the Section Taxonomy).
GMBCC1176 and GMBCC1044 grouped as the basal clade to the clade that comprises the ex-types of P. hederae, P. magnoliae, and P. platani (Figure 3). Both strains were generated from morphologically similar collections and are thus introduced as a new species, i.e., Phragmocamarosporium qujingensis. The strain named Phragmocamarosporium hederae (KUMCC 18-0165) clustered with the ex-type of P. rosae (MFLUCC 17-0797) but was distinct from the ex-type of P. hederae. This strain was mistakenly named Phragmocamarosporium hederae and thus needs an extensive study to confirm whether it warrants a novel species.
Taxonomy
In this section, we introduce three new pleosporalean species from M. grandiflora collected in the Qujing Normal University garden, Qujing, Yunnan, China.
Notes
Botryosphaeriaceae (Botryosphaeriales) is an important family that comprises a broad range of life modes, such as saprobes, pathogens, and endophytes, and shows a worldwide distribution (Phillips et al., 2013). Wijayawardene et al. (2022) accepted 22 genera in Botryosphaeriaceae. During our collecting, we isolated three botryosphaeriaceous taxa from M. grandiflora (see below).

FIGURE 2 | Phylogram generated from maximum likelihood analysis based on combined SSU, LSU, tef1, and ITS partial sequence data. Bootstrap support values for ML and BYPP values above the reporting thresholds are given above the nodes. The newly generated sequence is in red.
Culture characteristics
Ascospores germinating on PDA within 24 h, germ tubes produced from both sides. Colonies growing fast on PDA, reaching 6 cm in 1 week at 28 °C, effuse, velvety to hairy, circular, white in the first week, brown to dark brown after 1 week from above and below.
Notes
Our new collection of B. dothidea from M. grandiflora morphologically resembles the type collection described in Phillips et al. (2013). In the phylogenetic analyses, our collection (GMBCC1179) groups with B. dothidea s. str. (Figure 1). According to Deng (1963), Tai (1979), and Farr and Rossman (2022), B. dothidea has not previously been reported from Magnolia species in China. Zlatkovic et al. (2018) reported B. dothidea from M. grandiflora as a pathogenic species (a causal agent of stem and shoot dieback) in Serbia. However, we did not notice any disease symptoms on the host plant that we sampled. Nevertheless, it is essential to collect more samples to confirm the impact of B. dothidea on Magnolia species, since it is an important ornamental plant in China. Here, we report B. dothidea from M. grandiflora in China for the first time.
Notes
Montagne (1834) introduced Diplodia with D. mutila (Fr.) Mont. as the type species. Currently, 28 species are accepted (Wu et al., 2021). Members of Diplodia are distributed worldwide and occur in different life modes, such as pathogens, saprobes, and endophytes (Phillips et al., 2013). Over 60 records of Diplodia species have been reported from China according to Xiao et al. (2021) and Farr and Rossman (2022). Nevertheless, Diplodia species have not been reported from Magnolia species in China. Here, we report Diplodia mutila and D. seriata from M. grandiflora for the first time in China. To our knowledge, Diplodia species have not been reported from M. grandiflora so far.
Culture characteristics
Conidia germinating on PDA within 24 h, germ tubes produced from the rear side. Colonies growing fast on PDA, reaching 9 cm in 1 week at 28 °C, effuse, velvety to hairy, circular, white in the first week, brown to dark brown after 1 week from above, and black in the central and outermost circles with dark brown in the middle from below.
Culture characteristics
Ascospores germinating on PDA within 24 h, germ tubes produced from one side. Colonies growing on PDA at 28 °C under normal light, reaching 2.5 cm diam. after 2 weeks, dense, irregular, umbonate, surface smooth, with entire edge, cottony, white to gray from above; white at the margin and dark brown at the center from below; not producing pigmentation in PDA.
Notes
In morphology, our new collection closely resembles Angustimassarina populi (holotype MFLU 14-0588). Angustimassarina populi was introduced by Thambugala et al. (2015) from dead branches of Populus sp. collected in Italy and is characterized by erumpent, globose to subglobose ascomata with a crest-like ostiole, cylindrical asci, and fusiform, 1(–3)-septate ascospores that are constricted at the central septum and surrounded by a mucilaginous sheath; the asexual morph is hyphomycetous. Based on our phylogenetic analyses of combined SSU, LSU, tef1, and ITS sequence data (Figure 2), our strain (GMBCC1177) clusters with the strains of A. populi, basal to A. arezzoensis and A. sylvatica. Thus, we identify our fresh collection as A. populi. This is the first report of A. populi from China and on M. grandiflora. We also compared the morphology of the sexual morph of our new collection with Angustimassarina species that are morphologically and phylogenetically closely related (Table 3). However, the phylogenetic affinities of Angustimassarina species are not well resolved (Figure 2). Therefore, to resolve the current status of Angustimassarina, further research is needed, together with asexual morphs and protein-coding genes.
Culture characteristics
Ascospores germinating on PDA within 24 h, germ tubes produced from all sides. Colonies growing slowly on PDA, reaching 4 cm diam. after 1 week at 28 °C, dense, irregular, umbonate, surface smooth, with entire edge, cottony, white in the first week, white to gray from above; white at the margin, black at the center, and dark brown in the middle from below. Mycelium semi-immersed in PDA, branched, septate, smooth-walled; hyphae brown.
Notes
Currently, the genus comprises three species (including the new collection), and all species have been reported from Yunnan, China. Interestingly, both Lonicericola hyaloseptispora and L. fuyuanensis have been reported from the same host family, i.e., Caprifoliaceae. Nevertheless, the new collection was made from decaying branches of M. grandiflora (Magnoliaceae). In our phylogenetic analyses (Figure 1), our new collection formed a distinct clade in Lonicericola s. str. with high support values (99% ML and 1.00 BYPP). This result is also supported by morphological characters (see the taxonomic key). Based on the current data, we assume that Lonicericola species are restricted to subtropical regions in China but may be distributed across different host families. The taxonomic key below can be used to distinguish the Lonicericola species based on ascospore and ascus morphology.

Notes

Zhang et al. (2009) introduced this family with Lentithecium, Katumotoa, and Keissleriella. Currently, Lentitheciaceae comprises 14 genera (Wijayawardene et al., 2022). Members of the family occur in both terrestrial and aquatic environments and are common as saprobes.
Culture characteristics
Conidia germinating on PDA within 24 h, germ tubes produced from the middle. Colonies growing slowly on PDA, reaching 2.5 cm diam. after 1 week at 28 °C, circular, zonate, with uneven margin, cottony, white from above, with thin mycelium; dark brown at the margin and yellowish-brown at the center from below. Mycelium semi-immersed in PDA, branched, septate, smooth-walled; hyphae brown.
Notes
In the phylogenetic analyses (Figure 3), Phragmocamarosporium magnoliae groups with P. hederae and P. platani (Figure 8), but they differ in morphology (Table 4). Besides, P. hederae and P. platani lack sequences of the ITS and tef1 loci. Species resolution of this subclade will improve with more genes and more collections.
Culture characteristics
Ascospores germinating on PDA within 24 h, germ tubes produced from both sides. Colonies growing slowly on PDA, reaching 3 cm in 1 week at 28 °C, effuse, velvety to hairy, oval, with uneven margin, white in the first week, white at the margin and gray at the center from above, and white at the margin and dark brown at the center from below.
Notes
In conidial morphology and dimensions (mean values), the new collection resembles the neotype of Shearia formosa (which was reported on Magnolia denudata) but is distinct in conidiomatal and conidiogenous cell characters (Table 5). However, in the phylogenetic analyses, the new collection is accommodated in Shearia s. str. and clustered with MFLUCC 20-0019, the ex-neotype (Figure 1). Hence, we confirm our collection as Shearia formosa. S. formosa has previously been reported on M. grandiflora from the United States (Miller, 1990; Schubert, 1991; Farr and Rossman, 2022). However, to our knowledge, S. formosa has not been reported on M. grandiflora from China. Hence, this is the first host record of S. formosa on M. grandiflora from China.
Discussion
Tropical and subtropical regions are rich in biodiversity. Several studies have concluded that some regions in Asia have not been properly studied; thus, a large number of fungal species are yet to be discovered. In China, the southwestern region has higher biodiversity, including higher floral, faunal, and microbial diversity. The Guizhou and Yunnan provinces are important in this region, as a large number of research studies have confirmed their rich fungal diversity (Farr and Rossman, 2022).
In southwest China (i.e., the Guizhou and Yunnan Provinces), Magnolia species are widely used in gardening (as ornamental plants), horticulture, and traditional Chinese medicine. In this study, we focused on M. grandiflora in the Qujing Normal University garden, Qujing city, Yunnan province. We recognized that this species has been widely used in Qujing for gardening purposes. Thus, we selected the university garden as a preliminary collecting site to assess the fungal diversity of M. grandiflora.
Botryosphaeriaceae species are common in southwest China and in other regions as well (e.g., Wijayawardene et al., 2016; Xiao et al., 2021). Our new collections of Botryosphaeriaceae taxa from M. grandiflora resided in Botryosphaeria sensu stricto and Diplodia sensu stricto (Figure 1). Among the taxa, one species is confirmed as B. dothidea, while two other strains are confirmed as D. mutila and D. seriata. This is the first report of both Diplodia species on M. grandiflora. Botryosphaeria dothidea has been reported as a pathogen of a broad range of hosts in China, including gardening plants (e.g., as a causal agent of trunk disease extending up to the branches of Acer platanoides; fide Wang et al., 2015) and agricultural crops (e.g., as the causal agent of apple ring rot; fide Tang et al., 2012). According to Farr and Rossman (2022), Botryosphaeria dothidea has not been reported from Magnolia species from China; thus, this is the first report. Nevertheless, Botryosphaeria dothidea was reported as a pathogen of M. grandiflora from Serbia (Zlatkovic et al., 2018).

TABLE 5 | Conidiomata dimensions, conidiogenous cells, and conidia dimensions of Shearia formosa (neotype, on Magnolia denudata) and the new collection.
However, none of the new collections was observed in association with any disease symptoms, such as cankers or leaf spots. Besides, in a recent genomic study, Yan et al. (2018) predicted that some Botryosphaeriaceae species (e.g., Lasiodiplodia theobromae) could be opportunistic pathogens of woody plants under changing environmental conditions. Hence, it is essential to expand the sample number to confirm whether these taxa adversely impact M. grandiflora populations in Qujing.
Angustimassarina populi was introduced as a saprobe on dead branches of Populus sp. from Italy (Thambugala et al., 2015). The genus Angustimassarina comprises twelve species epithets (Index Fungorum, 2022), including A. populi, and all species have been reported from Europe. In this study, we report Angustimassarina populi on M. grandiflora from Qujing, Yunnan. This is the first report of a member of Angustimassarina outside Europe. Moreover, this is the first report of Angustimassarina populi from China and on M. grandiflora and, thus, the first country and host records, respectively. This collection confirms that Angustimassarina species could have a broader distribution and, apparently, are not host-specific. It is necessary to promote biogeographic studies of this kind of genus, previously reported from only one geographic region but recently found in other countries. Based on this result, we predict that more novel species will be reported from China, as Angustimassarina was originally reported from temperate countries, e.g., Italy.
The novel species of Lonicericola, L. qujingensis, is the third member of the genus. Interestingly, all the species have been reported only from Yunnan Province, China. However, the previous species (i.e., L. hyaloseptispora and L. fuyuanensis) were reported from the host family Caprifoliaceae. Since the new collection was made from decaying branches of M. grandiflora, we predict that members of Lonicericola could occur in a broad range of host families. However, their geographical distribution is not clear and thus needs further collections from other regions in Yunnan.
Currently, the genus Phragmocamarosporium comprises three species, reported from Germany, the United Kingdom, and Guizhou Province, China. In this study, we introduce two more species of Phragmocamarosporium from M. grandiflora, viz., Phragmocamarosporium magnoliae and P. qujingensis. Phragmocamarosporium platani, the type species of Phragmocamarosporium, was reported from Platanus species in Guizhou. Species resolution in the subclade comprising Phragmocamarosporium magnoliae, P. hederae, and P. platani is not clear, as the latter two species lack ITS and protein-coding loci in GenBank (Wijayawardene et al., 2015). Hence, here, we used morphological characteristics as a supporting factor to differentiate the species. Besides, we predict that the Guizhou–Yunnan region could harbor more Phragmocamarosporium species. Moreover, the strain named KUMCC 18-0165 (as Phragmocamarosporium hederae) in GenBank must represent a novel lineage in Phragmocamarosporium s. str. (Figures 1, 10; Table 4). Wanasinghe et al. (2020) reported S. formosa on Magnolia denudata from Yunnan, China. In this study, we report S. formosa on M. grandiflora for the first time.
Our findings suggest that it is essential to check the fungal diversity on extensively studied host genera that occur in biodiversity-rich regions. Hence, we suggest expanding future studies to extensively studied host genera that occur in Yunnan, such as Eucalyptus, Magnolia, and Quercus. Besides, these host genera could be species-rich and thus could harbor different fungal taxa. Hence, precise host identification is also important in this type of broad future study. This is essential to reveal the hidden fungal diversity in biodiversity hotspots.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding authors.
Author contributions
NW and D-QD designed the study, performed the morphological study, and wrote the manuscript. DW and ST performed the phylogenetic analyses. JK, H-HC, and D-QD reviewed and edited the manuscript. T-TZ, G-QZ, and M-LZ made the plates. L-SH performed the fungal DNA extraction. All authors approved the final version of the manuscript.
Funding
The research was supported by the National Natural Science Foundation of China (Nos. NSFC 31860620, 31950410558, 31760013, and 32150410362), High-Level Talent | 2022-08-04T13:29:26.290Z | 2022-08-04T00:00:00.000 | {
"year": 2022,
"sha1": "303a0a063c82aba87fdece2f3fbb1052bbc894c6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "303a0a063c82aba87fdece2f3fbb1052bbc894c6",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |