Achenbach syndrome as a rare cause of painful, blue finger
Paroxysmal finger hematoma, also known as Achenbach syndrome, is an underdiagnosed condition that causes apprehension in patients owing to its alarming appearance. It usually presents as a blue-purple discoloration of the volar aspect of one or more digits and can be associated with pain and paresthesia. The condition is benign and usually self-limiting.
A purple or blue finger or fingers can present a diagnostic dilemma for the vascular medicine and surgery specialist. Raynaud disease is a common cause of pale and blue digits. Unusual causes such as vasculitis, acrocyanosis, trauma, and subclavian artery aneurysm with thrombi should be considered in the differential diagnosis. Achenbach syndrome, also known as paroxysmal finger hematoma, is exceedingly rare and can be diagnosed on clinical suspicion, supported by a dampened arterial waveform. In this case series, we describe three patients with Achenbach syndrome. All three included patients provided written informed consent for the report of their case. (Fig 2). A cold immersion test was performed, which was negative for any changes. The patient was prescribed clopidogrel (Plavix) 75 mg and was followed up 3 weeks later, by which time her symptoms had resolved (Fig 3).
The clinical presentation of our three patients was not suggestive of thoracic outlet syndrome or vasculitis. The erythrocyte sedimentation rate, antinuclear antibody, antithrombin III, anticardiolipin antibodies, C-reactive protein, complete blood count, platelet count, and basic metabolic panel were normal in all three patients, negating any concerns for vasculitis. Also, none of our three patients had a history of trauma or signs of thoracic outlet syndrome.
DISCUSSION
Paroxysmal finger hematoma (Achenbach syndrome) is an underdiagnosed vascular syndrome. It usually presents as recurring episodes of unexplained, sudden-onset painful swelling associated with deep ecchymosis of the volar aspect of the finger. 1,2 First described by Achenbach in 1955, it usually affects women aged 40 to 60 years. The exact etiology remains unclear; however, the hypothesis of a vasomotor disorder is widely favored. An association exists with Raynaud phenomenon and a history of chilblains. 3 Hormonal factors can also play a role in the causation of this underreported condition. 4 Most patients will seek medical attention owing to the alarming purple discoloration that occurs with this condition. Owing to the self-limiting nature of this disease, invasive studies are not necessary and would be negative if performed. 5 Multiple capillary hematomas have been described with this condition. An association with acrocyanosis, gastrointestinal disorders, migraines, and gallbladder disease has been reported. 2 Achenbach syndrome is a self-limiting condition in which the skin discoloration resolves within an average of 4 days. 6 Achenbach syndrome can be confused with other conditions, including embolic digital artery occlusion, dermatitis artefacta, and Raynaud disease. Embolic digital artery occlusion will frequently involve multiple digits. A common cause is arterial thoracic outlet syndrome in a relatively healthy young patient; it is usually associated with post-stenotic dilation or aneurysm formation in the subclavian artery, commonly due to a cervical rib. The diagnostic workup of a patient presenting with purplish discoloration of a finger or fingers, with or without a diminished radial pulse, should include a Doppler arterial study and duplex ultrasound scan of the subclavian artery to exclude a subclavian artery aneurysm in association with thoracic outlet syndrome, as demonstrated by Khaira et al.
7 Dermatitis artefacta will usually present as a superficial erosion or a hyperpigmented macule on the face or hand in patients with chronic dermatitis such as acne or alopecia. 8 These patients will frequently have an underlying depression or anxiety disorder, and the lesions will be self-inflicted by the patient. The appearance of a digit with intact skin is not consistent with the diagnosis of dermatitis artefacta.
Raynaud disease will typically present with episodes of the patient's fingers turning pale, followed by a purple discoloration. It will be accompanied by a cold sensation and numbness and will usually be bilateral. The fingers will then turn warm and red as the vasospasm subsides. We have described three patients with a similar presentation of Achenbach syndrome. Two of our three patients did not have a history of smoking. Physiologic testing was performed for each patient at presentation, with each showing dampened digital waveforms corresponding to the affected fingers. In the third patient, a cold immersion test was performed, with negative findings. Achenbach syndrome can be confused with Raynaud disease; however, a cold immersion test can help differentiate the two disorders. Each of our patients was treated with an antiplatelet agent, and all symptoms pertaining to finger discoloration subsequently resolved. Although not previously reported, a short course of an antiplatelet agent, such as clopidogrel or aspirin, could be beneficial for patients presenting with a purple or blue finger, possibly by improving arterial flow and preventing thrombosis in the affected digits. Because the symptoms resolve spontaneously, however, the benefit of antiplatelet agents cannot be ascertained with certainty. Two of our patients were treated with clopidogrel, and one was treated with aspirin because clopidogrel was not covered by insurance. The diagnosis requires a high clinical suspicion and knowledge of other diagnoses with presentations similar to that of Achenbach syndrome. A possible diagnostic approach to a patient with a purple digit or digits is shown in Fig 4. 9
CONCLUSIONS
Achenbach syndrome is a rare condition that can be confused with other more concerning disorders. 10 It is important to be familiar with this syndrome because most cases will be diagnosed clinically, and invasive studies are not necessary for diagnosis.
Length-Based Growth Estimation of Sea Cucumbers (Holothuria verrucosa and Holothuria pardalis) (Holothuroidea: Echinodermata) Collected from Coastal Areas of Karachi, Pakistan (Northern Arabian Sea)
Non-seasonal von Bertalanffy and Hoenig seasonal von Bertalanffy models were fitted to the length-frequency data of Holothuria pardalis and Holothuria verrucosa sampled from the coastal areas of Karachi between January and December 2018 to estimate the growth parameters. The Hoenig seasonal von Bertalanffy growth parameters were estimated as L∞ = 18.0 cm total length (TL) and K = 1.00 year⁻¹ for H. pardalis, and as L∞ = 18.0 cm TL and K = 0.86 year⁻¹ for H. verrucosa. H. verrucosa individuals reached 68.9% of their maximum total length at the one-year-old age class; for H. pardalis it was calculated as 54.2%. The seasonal oscillation in growth rate for H. pardalis (C = 0.90) was larger than that for H. verrucosa (C = 0.18). The slowest period of growth corresponded to June in H. verrucosa and February in H. pardalis and may result from extended reproduction and poor-nutrition periods due to the high-rainfall regime. The relatively high growth-rate parameters calculated for both species may be important for their survival under conditions of biological stress, and may also increase their potential as candidate species for aquaculture.
INTRODUCTION
Sea cucumbers belong to the class Holothuroidea and so are also referred to as holothurians. Holothurians are found throughout all oceans and seas, at all latitudes, from the shore down to the abyssal plains (Purcell et al., 2012). The adult stages are benthic (living on the sea bottom); some species live on hard substrates such as rocks and coral reefs, but most inhabit soft bottoms, on the sediment surface or buried in the sediment (Purcell et al., 2012; WoRMS, 2020). Holothurian species such as Holothuria pardalis and Holothuria verrucosa are commercially important and are distributed at some localities in the Western Pacific, parts of Asia, and the Indian Ocean, including the Red Sea and the Comoros, and along the Pacific coast of Central America (Purcell et al., 2012; WoRMS, 2020). These species live on rocky, sandy, and muddy bottoms from shallow to deeper waters (Pawson, 1976), and also in crevices between boulders (Ahmed et al., 2020). According to Lane et al. (2000), H. pardalis and H. verrucosa live down to 306 m and 30 m water depth, respectively.
Knowledge of the fisheries biology and population dynamics of marine fauna such as sea cucumbers is an important tool for marine biologists. Crucial biological information on commercially important species, including reproductive biology and growth parameters, is therefore necessary for the management of global sea cucumber fisheries. In the scientific literature there are some works on the growth of sea cucumbers: weight-length relationships (WLRs) and condition factor (CF) based growth features have been reported for holothurian species such as Ohshimella ehrenbergii, H. arenicola, H. atra, H. pardalis, and H. verrucosa from the northern Arabian Sea coasts of Pakistan (Siddique et al., 2014; Ahmed et al., 2018a, b). In addition, one detailed study on the population dynamics of sea cucumbers has so far been carried out, on the H. arenicola stocks of the Manora and Buleji rocky shores in the northern Arabian Sea, Pakistan (Siddique & Ayub, 2015).
Growth parameters can be used as a tool in stock assessment, fish biology, population dynamics, and fisheries research. The parameters of the von Bertalanffy growth function curve are affected by biotic and abiotic factors such as sea water temperature, salinity, primary and/or secondary productivity (phytoplankton and zooplankton abundance), reproduction and/or spawning season, and food and feeding activity, and also by animal size and age, gonad activity and maturity stage, and the quantity and quality of food. In Pakistan, monsoon winds carry moisture from the Indian Ocean and bring heavy rains during the monsoonal period between May and September. More than fifty percent of annual rainfall occurs in the monsoon season, mostly from July to August (Hussaina et al., 2010). The pre- and post-monsoon seasons directly or indirectly affect marine flora and fauna, including sea cucumbers such as H. pardalis and H. verrucosa. In the scientific literature there is no knowledge of the seasonal and/or non-seasonal growth parameters of H. pardalis and H. verrucosa. Our aim was to obtain the first growth parameters from length-frequency data of the two species, H. pardalis and H. verrucosa, inhabiting the Sunehri and Buleji coasts (northern Arabian Sea, Pakistan), by fitting different growth models: the non-seasonal von Bertalanffy and the Hoenig seasonal von Bertalanffy models. Collected specimens were kept alive in water-filled containers, transported to the laboratory, and transferred to well-aerated aquaria. For taxonomic studies and identification, morphological features were examined and microscopic studies were conducted. Ossicles were taken from three positions (dorsal and ventral body walls, and tentacles); wet mounts were prepared by placing a small piece of skin tissue on a slide and adding a few drops of 3.5% bleach, and the slides were then rinsed with drops of distilled water. The slides were examined under a Nikon LABOPHOT-2 microscope at 10×10 magnification. Microphotography was performed with a Fujifilm 16 MP digital camera (see Ahmed et al., 2018b for more details). Length (cm) data were collected for each sea cucumber after allowing it to relax in water for 5 min. Total length from mouth to anus was measured with a flexible ruler.
Von Bertalanffy Growth Function Parameter Estimation: Growth in length was described using the von Bertalanffy (1938) growth function, based on either observed or back-calculated lengths at age. The length frequency distribution analysis (LFDA) package is a PC-based computer package for estimating growth parameters from length-frequency distributions; version 5.0 of LFDA includes methods for estimating the parameters of both non-seasonal and seasonal versions of the von Bertalanffy growth curve (Kirkwood et al., 2003).
The standard (three-parameter) or non-seasonal von Bertalanffy (1938) growth function (VBGF) is:

Lt = L∞ [1 − e^(−K(t − t0))]

Seasonal growth, the five-parameter von Bertalanffy growth model (5-parameter VBGF), was described using Somers's (1988) version of the VBG equation:

Lt = L∞ [1 − e^(−K(t − t0) − S(t) + S(t0))], where S(t) = (CK/2π) sin(2π(t − tS)),

and where Lt is length at age t, L∞ is the asymptotic length toward which the sea cucumber grows, K is the growth-rate parameter, t0 is the nominal age at which length is zero, C is the relative amplitude (0 < C < 1) of the seasonal oscillations, and tS is the phase of the seasonal oscillations (−0.5 < tS < 0.5), denoting the time of year corresponding to the start of the convex segment of the sinusoidal oscillation.
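As a minimal sketch, the two growth functions can be evaluated directly. The L∞ and K values below are the Hoenig seasonal estimates reported in this study; t0 and tS are illustrative placeholders, not fitted values from the paper:

```python
import math

def vbgf(t, Linf, K, t0):
    """Non-seasonal von Bertalanffy growth function: length at age t."""
    return Linf * (1.0 - math.exp(-K * (t - t0)))

def vbgf_seasonal(t, Linf, K, t0, C, ts):
    """Somers's (1988) seasonally oscillating VBGF; C is the relative
    amplitude and ts the phase of the seasonal oscillation."""
    def S(x):
        return (C * K / (2.0 * math.pi)) * math.sin(2.0 * math.pi * (x - ts))
    return Linf * (1.0 - math.exp(-K * (t - t0) - S(t) + S(t0)))

# Length at age 1 for H. verrucosa-like parameters (t0 = 0, ts = 0.2 assumed)
print(vbgf_seasonal(1.0, 18.0, 0.86, 0.0, 0.18, 0.2))
```

With C = 0 the seasonal form reduces exactly to the non-seasonal curve, which is a quick sanity check on any implementation.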
The time of year when the growth rate is slowest, known as the winter point (WP), was calculated as WP = tS + 0.5. Seasonal VBG curves were fitted to the length distributions after first indicating a range of values of L∞ and K and iteratively reducing the range to maximize the goodness of fit (Rn) of the curves to the data. Rn was calculated as:

Rn = 10^(ESP/ASP) / 10,

where ASP is the available sum of peaks, computed by adding the best values of the available peaks, and ESP is the explained sum of peaks, computed by summing all the peaks and troughs hit by the VBGF curve.
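These two bookkeeping steps can be sketched in a few lines; the Rn expression below assumes the usual ELEFAN scoring convention, Rn = 10^(ESP/ASP)/10:

```python
def winter_point(ts):
    """Winter point (time of year of slowest growth): WP = tS + 0.5, wrapped into [0, 1)."""
    return (ts + 0.5) % 1.0

def elefan_rn(esp, asp):
    """ELEFAN goodness-of-fit score, assuming the convention Rn = 10**(ESP/ASP) / 10."""
    return 10.0 ** (esp / asp) / 10.0

# The winter points reported in this study imply these phase values:
print(winter_point(-0.07))  # H. verrucosa, reported WP = 0.43
print(winter_point(-0.36))  # H. pardalis, reported WP = 0.14
```

Note that Rn = 1 only when the fitted curve explains every available peak (ESP = ASP), so larger Rn means a better fit.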
The length data were grouped into 2 cm total-length size classes and fitted to length-frequency distributions using the electronic length frequency analysis (ELEFAN) procedure in the PC-based computer package LFDA version 5.0 (Kirkwood et al., 2003).
The ELEFAN procedure first restructures the length frequencies and then fits a VBGF curve to the restructured data. Both seasonal and non-seasonal VBGF curves were fitted to the seasonal length distributions after providing a range of values for the parameters to be estimated and then iteratively reducing the range until the goodness of fit of the curve to the data was maximized.
Reliability of growth parameter estimates: Having estimated a set of growth parameters, one would like to evaluate their reliability. A possible test is the so-called phi-prime test (Φ'), known as the growth performance index. This test is based on the finding by Pauly and Munro (1984) that Φ' values are very similar within related taxa. Growth performance comparisons were therefore made using the growth performance index (Φ'), which is preferred over using L∞ and K individually (Pauly and Munro, 1984), and is computed as: Φ' = log (K) + 2 log (L∞).
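The index is simple to compute with base-10 logarithms; for instance, using the seasonal estimates reported here (L∞ = 18.0 cm TL for both species, K = 1.00 and 0.86 year⁻¹):

```python
import math

def phi_prime(K, Linf):
    """Growth performance index of Pauly and Munro (1984): phi' = log10(K) + 2 log10(Linf)."""
    return math.log10(K) + 2.0 * math.log10(Linf)

print(phi_prime(1.00, 18.0))  # H. pardalis
print(phi_prime(0.86, 18.0))  # H. verrucosa
```

Because Φ' varies little within related taxa, a value far outside the range of congeners flags a questionable pair of L∞ and K estimates.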
Age structure and von Bertalanffy Growth Parameters:
The seasonal and non-seasonal VBGF curve parameters obtained from the LFDA for each species are summarized in Table 1. The Rn value of the non-seasonal growth curve for H. verrucosa did not improve when a seasonal growth curve was fitted (Table 1), suggesting that, at least for our data, H. verrucosa does not exhibit a seasonal growth pattern. This was also apparent in the results for the relative amplitude of the seasonal oscillations (C = 0.18) and in Figure 3B, where no sinusoidal pattern could be observed in the seasonal von Bertalanffy growth curve. In H. pardalis, on the other hand, the Rn value of the non-seasonal VBGF curve improved by 36.18% after fitting the seasonal VBGF curve (Table 1). This result was also supported by the relative amplitude of the seasonal oscillations (C = 0.90) and by Figure 4B, where a sinusoidal pattern could be observed in the seasonal VBGF curve for H. pardalis.
The slow growth period started at the beginning of June for H. verrucosa (WP = 0.43; Figure 5). For H. pardalis, however, the start of the slow growth period was at the end of February (WP = 0.14; Figure 5). The calculated growth performance index (Φ´) values are given in Table 1. Estimated age-length keys calculated from the seasonal VBGF curve parameters for both H. pardalis and H. verrucosa are shown in Figure 6. Length at the one-year-old class was estimated by the LFDA method as 10.3 ± 0.39 cm for H. verrucosa and 13.1 ± 0.49 cm for H. pardalis. The calculated mean total lengths at age showed that H. verrucosa individuals reached 68.9% of their maximum total length (Lmax = 19 cm) at the one-year-old class (mean: 10.3 ± 0.39 cm); for H. pardalis it was calculated as 54.2%. This fast-growth characteristic of small individuals of these two species was also apparent in the growth curves in Figures 3-5, where the shallower slope for larger individuals indicates that small individuals (mean < 4.6-6.2 cm) grew faster than large ones.
DISCUSSION
To the best of our knowledge, this is the first study to calculate non-seasonal and Hoenig seasonal VBGF curve parameters and an age-length key for the two holothurians H. pardalis and H. verrucosa. When there is a seasonal growth pattern for holothurian species belonging to the same family in a geographical region, the estimates of L∞ and K may differ significantly between the seasonal and non-seasonal models. The first function, the non-seasonal VBG model, provided realistic results; however, when seasonality was included (with the Hoenig model), more reliable values were obtained, which confirmed the seasonality in the growth of H. pardalis and H. verrucosa. Our results for L∞ and K, obtained with both seasonal and non-seasonal models, showed that the two sea cucumbers examined have fast-growth characteristics and that young and/or juvenile individuals (e.g., 0 and 1 years old) grow faster than older ones. These growth rates indicate that the two species achieve asymptotic size quickly, even faster than other holothurians such as Isostichopus badionotus (K = 0.2), Isostichopus fuscus (K = 0.18), Stichopus vastus (K = 0.55), Stichopus quadrifasciatus (K = 0.34 year⁻¹), Holothuria arguinensis (K = 0.88 year⁻¹), Holothuria atra (K = 0.11 year⁻¹), Holothuria scabra (K = 0.52 year⁻¹), and Holothuria pulla (K = 0.24 year⁻¹) (Poot-Salazar et al., 2015; Herrero-Pérezrul et al., 1999; Sulardiono et al., 2012; Sulardiono & Muskananfola, 2019; Olaya-Restrepo et al., 2018; Ebert, 1978; Pauly et al., 1993). Furthermore, the growth parameters (L∞ and K) reported here for the two holothurians are not similar to those reported for other species (Table 2). These growth differences among sea cucumber species belonging to different families may be affected not only by latitude but also by other biotic (e.g., prey availability, predators, genetic variation) and abiotic factors (e.g., salinity, habitat structure).
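One way to make "achieves asymptotic size quickly" concrete is the time needed to reach 95% of L∞ under the non-seasonal VBGF, which works out to ln(20)/K ≈ 3/K years after t0. A quick comparison with some of the K values quoted above:

```python
import math

def years_to_95pct(K):
    """Years after t0 to reach 95% of Linf under the non-seasonal VBGF:
    solve 0.95 = 1 - exp(-K * t)  =>  t = ln(20) / K."""
    return math.log(20.0) / K

# K values (1/year) quoted in the text, for illustration
for species, K in [("H. pardalis", 1.00), ("H. verrucosa", 0.86),
                   ("S. vastus", 0.55), ("I. fuscus", 0.18), ("H. atra", 0.11)]:
    print(f"{species}: {years_to_95pct(K):.1f} yr")
```

The two species studied here approach L∞ within roughly 3 to 3.5 years, versus well over a decade for the slow-growing comparators.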
The Rn and C values, together with the visual growth curves (Figures 3B and 4B), showed that H. pardalis exhibited more marked seasonality in growth than H. verrucosa. A seasonal growth pattern has also been reported for other holothurians, such as H. arguinensis from southern Portugal (Olaya-Restrepo et al., 2018), Isostichopus badionotus off the northwest coast of Yucatan state, Mexico (Poot-Salazar et al., 2015), Isostichopus fuscus at Espiritu Santo Island, Gulf of California, Mexico (Herrero-Perezrul & Reyes-Bonilla, 2008), and Cucumaria pseudocurata at Shell Beach, Sonoma County, California (USA) (Rutherford, 1973). Since there is no information on either seasonal or non-seasonal VBGF curve parameters for the H. pardalis and H. verrucosa populations along the Karachi coast, Pakistan (northern Arabian Sea), we were unable to compare our findings with other studies. However, the major factors affecting the seasonal growth of marine organisms such as marine invertebrates have been reported to be photoperiod, variation in water temperature and salinity over the year, seasonal change in nutrient quality and availability, and energy input into reproduction during the breeding season (Bilgin et al., 2009a, b; Poot-Salazar et al., 2015; Olaya-Restrepo et al., 2018). Unfortunately, detailed studies of productivity along the northern Arabian Sea coasts, and of the reproductive biology of these sea cucumber species, such as maturation and spawning time and size at sexual maturity, do not yet exist (except for the spawning time of H. arenicola). The coast of Pakistan is for most of the year influenced by high-salinity surface water (36 to 38 ppt); the sea surface temperature (SST) during summer (May to September) is 28-30 °C, while during winter (November to February) it is 21-24 °C, and rainfall is < 150 mm annually (Siddique & Ayub, 2015).
Furthermore, there are variations in nutrient concentration among the pre-monsoon (January to May), monsoon (June to August), and post-monsoon (September to December) periods due to fluctuations in rainfall. Such variations may also be related to productivity and the availability of food, and to the reproductive cycle and growth, of sea cucumber species such as H. verrucosa and H. pardalis in the northern Arabian Sea, Pakistan. Variations in the timing of the WP, the period of slowest growth, are generally related to environmental factors, the physiological condition of the animal, stomach fullness, and gonad stage (Bilgin et al., 2009a, 2009b; Ahmed et al., 2016a, 2016b). Temperature also plays an important role in reproductive events and in the abundance of food, and therefore indirectly affects the WP of marine animals. As mentioned above, the maturity and reproduction times of these sea cucumbers, based on gonad examination, are not yet known along the northern Arabian Sea coasts (except for H. arenicola; Siddique & Ayub, 2015). However, fluctuations of the condition index, which are related to reproduction time, have been reported for different sea cucumbers from these regions. For example, seasonal variations in the mean condition factor (CF) of sea cucumber species such as Ohshimella ehrenbergii, H. arenicola, H. atra, H. pardalis, and H. verrucosa have been reported from the coasts of Karachi, Pakistan (Siddique et al., 2014; Ahmed et al., 2018a, 2018b), and fluctuations of the gonad index (GI) of these species were deduced from the seasonal distribution of the condition factor as higher during summer (monsoon) and lower during winter (post-monsoon). These GI fluctuations are also compatible with the findings of Siddique & Ayub (2015): a high GI of H. arenicola was observed during spring and early summer, followed by a decrease in autumn and winter, indicating spawning followed by a resting phase. Moreover, the GI values showed a significant negative correlation with salinity and a non-significant correlation with temperature (Siddique & Ayub, 2015). The period of slowest growth for H. verrucosa corresponded to the monsoon, when the highest GI value occurs (i.e., June). The slowest growth period for H. pardalis, however, was estimated to be February, a period of relatively low water temperature. Since the growth rate of holothurians depends on the effects of climatic events (e.g., monsoon rain, temperature fluctuations) on food availability and habitat quality, and also on reproductive events, the slow growth of H. pardalis in winter may result from extended reproduction and poor-nutrition periods caused by the high-rainfall monsoon regime (which lowers coastal seawater salinity) and by water temperatures.
In conclusion, seasonal growth was more pronounced in H. pardalis, probably due to a long spawning period and fluctuations in rainfall, which decreased the productivity and availability of food and therefore caused individuals to grow seasonally. In H. verrucosa, growth was dictated by climatic events rather than by reproductive activity. The effects of climatic events such as monsoon rain, and of reproductive activity, on growth should be studied in detail in order to provide data for holothurian fisheries management in the northern Arabian Sea, Pakistan.
Author Correction: Structural evolution of tunneling oxide passivating contact upon thermal annealing
We report on the structural evolution of the tunneling oxide passivating contact (TOPCon) for high-efficiency solar cells upon thermal annealing. The evolution of doped hydrogenated amorphous silicon (a-Si:H) into polycrystalline silicon (poly-Si) by thermal annealing was accompanied by significant structural changes. Annealing at 600 °C for one minute produced an increase in the implied open-circuit voltage (Voc) due to hydrogen motion, but the implied Voc decreased again after five minutes at 600 °C. At annealing temperatures above 800 °C, the a-Si:H crystallized into poly-Si and the thickness of the tunneling oxide slightly decreased. The thickness of the interface tunneling oxide gradually decreased further, and pinholes formed through the tunneling oxide, at annealing temperatures up to 1000 °C, which deteriorated the carrier selectivity of the TOPCon structure. Our results indicate a correlation between the structural evolution of the TOPCon passivating contact and its passivation properties at different stages of the structural transition from a-Si:H to poly-Si, as well as changes in the thickness profile of the tunneling oxide upon thermal annealing. Our results suggest that there is an optimum thickness of the tunneling oxide for a passivating electron contact, in the range of 1.2 to 1.5 nm.
conventional solar cell process without cost overruns such as the atomic layer deposition for Al2O3 passivation and the local contact formation using laser ablation.
A lot of research on the TOPCon passivating contact is ongoing, and there are two different pictures of its working principle [11][12][13][14] . Direct tunneling of carriers through the very thin oxide is one popular explanation for the carrier selectivity 8,11,12 , while localized carrier transport through pinholes in the interfacial oxide has been proposed based on the practical observation of the interfacial oxide breaking up upon thermal annealing 13,14 .
In spite of the many research results, a detailed study on the structural evolution of the passivating contact layer stack upon thermal annealing is still missing. In this work, we aimed to study the structural evolution of the tunneling oxide electron contact upon thermal annealing at various temperatures. In particular, we examined the evolution of a doped a-Si:H thin film into crystallized poly-Si by thermal annealing with significant phase changes. In addition, the thickness profile of the tunneling oxide was also changed by thermal annealing. Since the thickness of the tunneling oxide plays a crucial role in carrier selectivity, our results suggest that there is an optimum tunneling oxide thickness in the passivating tunneling contact structure, which is in the range of 1.2 to 1.5 nm for electron contacts. The structural evolution of the TOPCon electron contact upon annealing at different temperatures was analyzed by spectroscopic ellipsometry (SE) and its modeling. The SE measurement and modeling results are further supported by comprehensive analysis using quasi-steady-state photoconductance (QSSPC), secondary ion mass spectrometry (SIMS), and transmission electron microscopy (TEM).
Results and Discussion
The microstructural and electrical properties of the TOPCon layer stack depend greatly on the annealing conditions. The evolution of the microstructure of the TOPCon layer stack plays a crucial role in solar cell properties such as back-surface passivation, carrier selectivity, and carrier transport. Therefore, in this work, the evolution of the microstructure of the TOPCon layer stack upon thermal annealing was studied in detail. Figure 1 shows the evolution of the implied Voc after annealing under various conditions of the TOPCon electron contact fabricated on both sides of a 190 μm thick n-type solar-grade wafer substrate. The particular annealing times of one and five minutes were chosen for two technological reasons. First, the annealing time should not be very long, in order to prevent the diffusion of phosphorus from the doped a-Si:H into the silicon substrate. Second, a short thermal annealing enables the TOPCon contact formation to be implemented in a conventional solar cell manufacturing process, such as firing, without additional process steps or extra equipment cost.
The implied Voc of the as-deposited sample was found to be 556 mV, and it increased to 651 mV after annealing at 600 °C for one minute. The implied Voc decreased to 583 mV after annealing at the higher temperature of 700 °C for one minute, but increased to 615 mV at 800 °C. The implied Voc then continued to increase, to 683 mV after annealing at 900 °C and 697 mV after annealing at 1000 °C. It is interesting that, for the samples annealed for one minute, there was a sudden increase in the implied Voc after annealing at 600 °C, but degradation after annealing at the higher temperatures of 700 and 800 °C. It is also of note that the implied Voc of the samples annealed at temperatures above 800 °C tended to ramp up until the temperature reached 1000 °C. There was a drastic change in the implied Voc of the samples annealed for longer than five minutes. At 600 °C, the implied Voc showed a large increase, to 651 mV, after annealing for one minute; however, further annealing for five minutes led to a drop in the implied Voc to 570 mV. The reproducibility of these drastic changes in the implied Voc of the TOPCon electron contact was tested with more than four independent test series using a-Si:H films deposited under different process conditions. It should also be pointed out that the samples annealed at the higher temperatures of 700, 800, and 900 °C showed a continuous increase of their implied Voc with annealing time. In the case of the sample annealed at 900 °C, the implied Voc increased to 702 mV; however, the implied Voc decreased to 657 mV after annealing at the higher temperature of 1000 °C for five minutes. A more detailed study of the physical origins of the evolution of the implied Voc upon annealing of the TOPCon electron contact is therefore necessary.
Figure 2 shows a depth profile analysis of the hydrogen concentration of the TOPCon electron contacts, measured by secondary ion mass spectrometry, in the as-deposited state and after annealing at 600 °C for one and five minutes, respectively; the depth scale in Fig. 2 corresponds to the sputter depth (or sample thickness). The measured hydrogen concentration profiles were normalized at the bulk wafer level. In the as-deposited state, the 50 nm thick a-Si:H film shows a hydrogen concentration of 1700 arb. units. In quantitative terms this corresponds to 5.5 × 10²¹ cm⁻³, indicating a hydrogen content of the a-Si:H film of about 10~15 at.%, as in ordinary cases. After annealing at 600 °C for one minute, there is a striking change: the hydrogen concentration of the a-Si:H film decreased to a level of 160 arb. units, while the hydrogen concentration at the a-Si:H/SiOx interface increased to 950 arb. units, compared with 710 arb. units in the as-deposited state. After annealing at 600 °C for five minutes, a further reduction of the hydrogen concentration in the a-Si:H film, as well as at the interface, was observed, down to the level of 50 arb. units. As mentioned above, PECVD-deposited a-Si:H usually contains around 10~15 at.% hydrogen, and the hydrogen effuses out of the film at temperatures above 400 °C 15 . Compared with the hydrogen level detected in the crystalline silicon in Fig. 2, it is believed that all of the hydrogen in the film effused out after annealing at 600 °C for five minutes. In other words, upon annealing at 600 °C, the hydrogen bonded to silicon in the thin films became mobile and effused out, and was eventually depleted after five minutes of annealing.
There is general consensus that, in the case of a PECVD-deposited SiN x layer, the firing process releases mobile hydrogen, which diffuses and passivates electrically active defects 16 . Although direct observation of hydrogen motion is difficult and study is limited to indirect analysis through the performance improvement of solar cells, such a beneficial effect of hydrogen release and interface passivation is generally accepted 17 . One may notice that a large concentration of hydrogen is already detected at the interface in the as-deposited state, yet the as-deposited sample shows a low implied V oc . There are two reasons for this. The first is the rearrangement of Si-H bonding, often found in silicon heterojunction technology, where it changes the surface passivation properties 18 . Even though the changes in the hydrogen content are small, the rearrangement of the Si-H bonding configuration upon thermal annealing can result in significant enhancement.
The second reason is a measurement artifact of the SIMS measurement. Although a high hydrogen concentration at the interface is detected in the as-deposited state, the interface signal could have been masked by the high hydrogen concentration of the a-Si:H film and the tailing effect. One of the most important measurement artifacts in SIMS is known as cascade mixing. It originates from primary ions striking sample atoms and displacing them from their lattice positions, leading to the homogenization of all atoms within the depth affected by the collision cascade. Lightweight impurity atoms are especially affected and redistributed throughout this "mixing depth" as sputtering proceeds, so the impurity profile shows a deeper distribution, introducing the so-called "tailing effect" 19,20 . Therefore, one should be careful in interpreting the hydrogen depth profile of the as-deposited sample: the hydrogen concentration at the interface could have been hidden by the high hydrogen concentration in the a-Si:H film and its profile tail.
Figure 2. (a) Depth profile analysis of the hydrogen concentration of the thin-film back surface field measured by secondary ion mass spectrometry in the as-deposited state and after annealing at 600 °C for one and five minutes, respectively, and (b) a zoom-in of the interface region between the a-Si:H and SiO x layers.
Figure 3(b-d) shows the real (ε 1 ) and imaginary (ε 2 ) parts of the pseudo-dielectric function of the film measured by SE, together with the modeling results, for the as-deposited sample and the samples annealed at 600 °C for one and five minutes, respectively. Table 1 summarizes the Tauc-Lorentz modeling parameters of the results in Fig. 3. Not only the pseudo-dielectric functions measured by SE, but also the material parameters deduced from T-L modeling, are presented.
The SE spectra of the samples can be modeled using a layer described by T-L dispersion (a-Si:H) 21 with surface roughness on a Cauchy dielectric (interface oxide). The modeling results reveal significant changes in the material properties upon annealing at 600 °C, while the doped a-Si:H remains uncrystallized. One of the most notable changes in the fitting parameters is the optical bandgap (E opt ), as summarized in Table 1. The E opt of the a-Si:H decreased from 1.68 eV in the as-deposited state to 1.47 eV after annealing for one and five minutes. The E opt of amorphous silicon depends on its hydrogen content and decreases as the hydrogen content decreases 22,23 . Therefore, the result suggests that the hydrogen content of the film continued to decrease from the as-deposited state through the one- and five-minute annealed states. This agrees with the SIMS result in Fig. 2, and both results support the idea that annealing at 600 °C effuses hydrogen. Another interesting point in the SE modeling is that the interface oxide thickness increased after annealing at 600 °C. However, one should be cautious in interpreting this: SE measurement is sensitive to refractive index contrast, so the apparent increase in oxide thickness could be an effect of the formation of an additional porous layer. In order to cross-verify the SE modeling result, detailed observation was done using TEM cross-section analysis. Figure 4(a),(c) and (e) shows cross-sectional TEM images of the as-deposited state and the states annealed at 600 °C for one and five minutes, respectively. Figure 4(b),(d) and (f) shows zoomed images of the interface regions of Fig. 4(a),(c) and (e), respectively. The TEM cross-section images verify that all the samples are in an amorphous phase, in agreement with the SE modeling result in Fig. 3.
However, the interface oxide thickness appears unchanged in the TEM cross-section images. Instead, a porous interface region (marked by arrows) is observed underneath the c-Si wafer surface, i.e., at the c-Si/SiO 2 interface, for the samples annealed at 600 °C for both one and five minutes. Such a porous region is microstructural damage that is often observed on the c-Si wafer surface after exposure to atomic hydrogen (hydrogen plasma) 24 . The observation of the porous region once again supports hydrogen effusion during annealing at 600 °C, as discussed above. The TEM cross-section result also suggests that the porous region would have introduced a contrast in the refractive index, resulting in the apparent change of the interface oxide thickness in the SE model of Fig. 3. Therefore, it is of note that the increase in the interface oxide thickness after annealing at 600 °C, shown in Fig. 3 and Table 1, should be an artifact of the optical modeling of the SE result, originating from the formation of porous microstructural damage after hydrogen effusion. We also made a modified optical model of the SE results of the samples annealed at 600 °C, including the porous layer underneath the interface oxide, but this model did not show any significant improvement in reliability because of the introduction of additional modeling parameters (void fraction and thickness of the porous layer).
The material properties of the samples annealed at 800, 900, and 1000 °C, respectively, were also analyzed by SE. Figure 5(b-d) shows the real (ε 1 ) and imaginary (ε 2 ) parts of the pseudo-dielectric functions of the film measured by spectroscopic ellipsometry, together with the modeling results, for the as-deposited state and the states annealed at 800, 900, and 1000 °C, respectively, for five minutes. The measured spectra of the samples annealed at temperatures above 800 °C cannot be modeled with T-L dispersion. This suggests that the a-Si:H thin films crystallized within a short annealing time at annealing temperatures above 800 °C. The imaginary parts of the pseudo-dielectric functions of these samples show two distinct peaks at 3.4 and 4.2 eV, which represent the first two direct transition energies in the dispersion relation of crystalline silicon 25 . Therefore, the SE spectra of the samples annealed at 800, 900, and 1000 °C were modeled using the Bruggeman effective medium approximation 26,27 . This approach is based on the assumption that the optical response of a mixed-phase material is linear in its constituents, and it enables the film structure to be expressed in terms of the volume fractions of those constituents. The modeling consists of rebuilding the measured pseudo-dielectric function by combining the optical responses of a mixture of materials with known dielectric functions 26,28 . This approach has been shown to be well adapted to modeling complex multilayer structures consisting of amorphous and nanocrystalline silicon. For a material with small crystallite size, such as nanocrystalline silicon, the SE spectrum shows a broadening of these peaks 29 . The measured spectra were modeled by a medium based on the relative fractions of large-grain (LG) poly-Si, small-grain (SG) poly-Si, and void.
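To make the BEMA mixing rule concrete, the two-phase Bruggeman equation can be solved in closed form; the three-phase (LG/SG/void) model used above extends the same idea with one more term. This is a minimal sketch with illustrative dielectric constants, not the fitted values from this work.

```python
import cmath

def bruggeman_two_phase(eps1, eps2, f1):
    """Effective dielectric function of a two-phase Bruggeman mixture.

    Solves f1*(eps1-eps)/(eps1+2*eps) + (1-f1)*(eps2-eps)/(eps2+2*eps) = 0,
    which reduces to the quadratic 2*eps**2 - b*eps - eps1*eps2 = 0 with
    b = eps1*(3*f1 - 1) + eps2*(3*(1 - f1) - 1). For absorbing media, the
    physical root is the one with non-negative imaginary part.
    """
    b = eps1 * (3 * f1 - 1) + eps2 * (3 * (1 - f1) - 1)
    disc = cmath.sqrt(b * b + 8 * eps1 * eps2)
    root = (b + disc) / 4
    if root.imag < 0:          # pick the physical branch
        root = (b - disc) / 4
    return root

# Illustrative check: 80% of a phase with eps = 4 mixed with 20% void (eps = 1)
eps_eff = bruggeman_two_phase(4.0, 1.0, 0.8)
```

Setting the fraction of either phase to 1 recovers that phase's dielectric constant, which is a quick sanity check on the quadratic.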
Note that large-grain poly-Si refers to an average grain size of about 100 nm, while small-grain poly-Si refers to an average grain size of about 10 nm 30 . Figure 6(a-d) shows the BEMA modeling results of the samples in the as-deposited state and annealed at 800, 900, and 1000 °C, respectively, for five minutes. Table 2 summarizes the BEMA modeling parameters of the results in Figs 4 and 5. Comparing the samples annealed at 800 and 900 °C for five minutes, the LG fraction increased from 73.2% to 76.3%, while the SG fraction decreased from 23.4% to 19.9%. The LG fraction increased further to 84%, while the SG fraction decreased to 12%, after annealing at the higher temperature of 1000 °C for five minutes. In other words, at annealing temperatures of 800, 900, and 1000 °C, both the crystalline volume fraction and the crystallite size increased with increasing annealing temperature, while the thickness of the surface oxide increased. Another important point, along with the crystallization of the thin film, is that at higher temperatures the thickness of the tunnel oxide is reduced. In order to observe the thickness of the tunnel oxide in more detail, TEM cross-section measurements were done on the samples. Figure 7(a) is a cross-sectional TEM image of the as-deposited sample, showing that the structure of the silicon film is clearly amorphous. Figure 7(b)-(d) shows the TEM images of the TOPCon electron contacts annealed at 800, 900, and 1000 °C, respectively, for five minutes. At annealing temperatures above 800 °C, the a-Si:H thin films crystallized. The TEM cross-section images also reveal that the thickness of the tunnel oxide is reduced after annealing at temperatures above 800 °C. In particular, the tunnel oxide thickness was found to be 1.4, 1.2, and 1.0 nm after annealing at 800, 900, and 1000 °C, respectively. These results are consistent with the SE modeling.
Assuming an ideally low interface defect density and a good-quality oxide, the tunneling oxide thickness and the uniformity of its thickness profile play a crucial role in the carrier selectivity. The tunneling current density through the tunneling oxide depends on the oxide thickness and can be expressed as

J = A ε_ox² exp(−B/ε_ox), (1)

where ε_ox, A, and B are given by

ε_ox = V_ox / t_ox, (2)

A = q³ m / (8π h m_ox Φ_B), (3)

B = 8π (2 m_ox)^{1/2} Φ_B^{3/2} / (3 h q), (4)

with V_ox the oxide voltage, t_ox the oxide thickness, q the electron charge, h the Planck constant, Φ_B the effective barrier height, m_ox the effective electron mass in the oxide, and m the free electron mass 31 . Eq. 1 suggests that the barrier height (Φ_B), the effective mass (m_ox), and the oxide thickness (t_ox) are the dominant parameters of the tunneling current density.
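The role of Eq. 1 can be sketched numerically. The bias, barrier height, and effective mass below are illustrative assumptions (not values extracted in this work); the point is only that, at fixed bias, the tunneling current density falls off steeply as the oxide gets thicker.

```python
import math

Q = 1.602e-19    # electron charge (C)
H = 6.626e-34    # Planck constant (J*s)
M0 = 9.109e-31   # free electron mass (kg)

def fn_current_density(t_ox, v_ox, phi_b_ev=3.1, m_ox_rel=0.5):
    """Tunneling current density J = A*eps_ox**2*exp(-B/eps_ox) (Eq. 1).

    t_ox: oxide thickness (m); v_ox: oxide voltage (V).
    phi_b_ev (barrier height, eV) and m_ox_rel (relative effective mass
    in the oxide) are assumed values chosen only for illustration.
    """
    phi_b = phi_b_ev * Q
    m_ox = m_ox_rel * M0
    eps_ox = v_ox / t_ox                                     # oxide field (V/m)
    a = Q**3 * M0 / (8 * math.pi * H * m_ox * phi_b)
    b = 8 * math.pi * math.sqrt(2 * m_ox) * phi_b**1.5 / (3 * H * Q)
    return a * eps_ox**2 * math.exp(-b / eps_ox)

# At fixed bias, a thicker oxide means a lower field and exponentially smaller J
j = [fn_current_density(t * 1e-9, 0.5) for t in (1.0, 1.5, 2.4)]
```

The exponential factor dominates: even a few tenths of a nanometer of extra oxide suppresses the current by many orders of magnitude.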
Since the effective mass of electrons is lighter than that of holes in silicon, the tunneling current of electrons is higher than that of holes 32 . In addition, the tunneling current density decreases as the tunneling length, in our case the thickness of the tunneling oxide, gets longer. (The interface oxide thickness analyzed by TEM cross-section is also presented in Fig. 3.) Therefore, the difference in the effective mass of the carriers and the thickness of the tunneling oxide layer appear to be the major sources of carrier selectivity. The carrier selectivity of the TOPCon passivating contact can be expressed by the ratio of the tunneling probabilities for electrons, T_e, and holes, T_h. Disregarding the barrier shape and the effects of the image force, the ratio is given by

T_e / T_h = exp{(2 d_ox/ħ) [(2 m_t,h Φ_h)^{1/2} − (2 m_t,e Φ_e)^{1/2}]}, (5)

where d_ox is the oxide thickness, m_t,e (Φ_e) and m_t,h (Φ_h) are the tunneling masses (barrier heights) for electrons and holes, respectively, and ħ is the Planck constant divided by 2π 11,13 . Eq. 5 suggests that the oxide thickness is the main parameter in the ratio of the tunneling probabilities. For a 1.5 nm thick oxide, Eq. 5 yields an electron/hole transmission probability ratio of about 6.31, while for a 2.4 nm thick oxide a value of 19.04 is obtained 13 . The calculation suggests that the carrier selectivity grows exponentially with the oxide thickness. There should be an optimum tunnel oxide thickness that provides significant carrier selectivity while ensuring good carrier transport; otherwise, the carrier selectivity of the passivated tunnel contact becomes insignificant when the tunneling oxide is too thick or too thin. One can expect the carrier selectivity in the TOPCon passivating contact to disappear if the oxide is too thin, as predicted in the numerical simulation by Steinkemper et al. 11 .
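Because Eq. 5 is exponential in d_ox, the two quoted ratios must be mutually consistent: ln(T_e/T_h) scales linearly with thickness, so the value at 2.4 nm follows from the value at 1.5 nm. A quick arithmetic check:

```python
import math

# Electron/hole transmission ratio at 1.5 nm, as quoted in the text
ratio_15 = 6.31

# Eq. 5 implies ln(T_e/T_h) is proportional to d_ox, so extract the per-nm slope
slope = math.log(ratio_15) / 1.5

# Predicted ratio at 2.4 nm; close to the quoted 19.04
ratio_24 = math.exp(slope * 2.4)
```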
In the case of a too-thick tunneling oxide, even though the carrier selectivity increases, the tunneling currents of both electrons and holes are simultaneously suppressed. Thus, the carrier transport of both electrons and holes is reduced in a thicker oxide 11 . Given that the thickness of the tunneling oxide gradually decreases at higher annealing temperatures, e.g., 800, 900, and 1000 °C, the tunneling current density through the tunneling oxide would vary after thermal annealing at the various temperatures. In our work, the optimum thickness of the tunneling oxide observed by TEM was also found to be between 1.2 and 1.5 nm, which is consistent with results found in the literature elsewhere. Both numerical simulation and experimental results suggest that the ideal oxide thickness for the TOPCon electron contact should be around 1.5 nm 8,[11][12][13]33 . Last, but not least, in the case of high-temperature annealing at 1000 °C, the thickness profile of the tunnel oxide became rough. The rough profile shown in Fig. 7(e) suggests that at the high annealing temperature of 1000 °C the stoichiometry of the tunnel oxide is broken and silicon suboxide forms 12 . The locally thinned tunnel oxide introduces direct contact between the c-Si wafer substrate and the n+ poly-Si, which transports both electrons and holes and deteriorates the carrier selectivity of the TOPCon passivating contact.
Conclusion
In this work, we studied the structural evolution of the TOPCon passivating contact upon thermal annealing. We demonstrated that the implied V oc of the TOPCon electron contact depends on hydrogen motion and on the crystallization of the doped a-Si:H. After annealing at 600 °C for one minute, the implied V oc showed a significant increase due to hydrogen motion: hydrogen effusing from the doped a-Si:H chemically passivated electrically active defects. However, upon further annealing at 600 °C for five minutes, the implied V oc dropped again because all of the hydrogen had effused out and was depleted. Therefore, the improvement in the implied V oc after annealing at 600 °C for one minute may be difficult to implement in real devices.
At annealing temperatures above 800 °C, crystallization of the a-Si:H formed P-doped poly-Si. Spectroscopic ellipsometry analysis revealed that the grain size of the crystallized poly-Si increased with annealing at higher temperatures, which further resulted in an increase in the implied V oc . After annealing at 1000 °C for five minutes, the implied V oc of the sample decreased. TEM analysis revealed that this decrease is due to a reduced thickness of the tunnel oxide, which deteriorates the carrier selectivity of the TOPCon structure.
Methods
The TOPCon electron contact was fabricated on n-type, 200 μm thick solar-grade wafers with a resistivity of 0.6 ohm·cm for measuring the implied open circuit voltage (implied V oc ) upon thermal annealing. 650 μm thick semiconductor-grade wafers with a resistivity of 3 ohm·cm were also used for measurements by spectroscopic ellipsometry (SE), secondary ion mass spectrometry (SIMS), and transmission electron microscopy (TEM) after thermal annealing. The saw damage removal (SDR) of the solar-grade wafers was done using KOH solution and deionized water at 80 °C. About 10 μm was etched from both sides of the damaged wafer surfaces. All wafers were cleaned using the following sequence: 10% HF dip, deionized water + H 2 O 2 + NH 4 OH (RCA1) at 80 °C for 10 min, deionized water + H 2 O 2 + HCl (RCA2) at 80 °C for 10 min, and 10% HF dip. Then, a thin silicon oxide was grown using a nitric acid (HNO 3 ) solution. The temperature and time of the wet chemical oxidation process were 120 °C and 15 minutes, respectively. The thickness of the silicon oxide layer was found to be about 2.5 and 1.4 nm, as deduced by the SE optical model and TEM, respectively.
On the thin oxide layer, hydrogenated amorphous silicon (a-Si:H) thin films were deposited by the capacitively-coupled-plasma (CCP) radio-frequency (RF, 13.56 MHz) glow discharge PECVD method at substrate temperatures ranging from 175 to 200 °C. P-doped a-Si:H films were deposited under carefully controlled plasma conditions using hydrogen-diluted silane gas mixtures. Silicon films deposited under such conditions show various controllable material properties; for example, they can contain a small fraction of silicon nanocrystals, which work as seeds for crystalline growth [34][35][36][37][38] . In this work, our a-Si:H layers were deposited at a p·d product (pressure × inter-electrode distance) ranging from 1 to 3 Torr·cm and an RF power density in the range of 30 to 100 mW/cm 2 .
Scientific Reports | 7: 12853 | DOI:10.1038/s41598-017-13180-y
After the a-Si:H deposition process, the thermal annealing of the wafers was done at temperatures ranging from 600 to 1000 °C for one and five minutes, respectively, in a quartz tube furnace under a nitrogen atmosphere. At an elevated temperature above 600 °C, a-Si:H was crystallized in a solid phase 39 .
The implied V oc represents the potential V oc of the solar cell, and it can be deduced from the carrier concentrations under illumination 40 :

implied V oc = (kT/q) ln(np/n i ²),

where q is the unit charge, k is the Boltzmann constant, T is the temperature, n i is the intrinsic carrier concentration, n is the electron concentration, and p is the hole concentration. After annealing, the implied V oc of the solar-grade wafers was determined by the quasi-steady-state photoconductance (QSSPC) method using a WCT-120 from Sinton Instruments. Semiconductor-grade wafers were used for secondary ion mass spectrometry (SIMS) measurement and SE analysis. SIMS is a technique used to analyze the composition of solid surfaces and thin films by sputtering the surface of the specimen with a focused primary ion beam and collecting and analyzing the ejected secondary ions. The hydrogen profiles of the TOPCon electron contact under different thermal annealing conditions were analyzed by an IMS-6f magnetic sector SIMS from CAMECA. All wafers were measured with the following analysis conditions: primary ion = Cs+; impact energy = 15 keV; current = 10 nA; analysis area = 30 μm (Φ).
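The implied V oc relation can be evaluated directly; the carrier densities and intrinsic concentration below are round illustrative numbers, not measured values from this work.

```python
import math

K_B = 1.381e-23   # Boltzmann constant (J/K)
Q = 1.602e-19     # elementary charge (C)

def implied_voc(n, p, n_i, temp=300.0):
    """implied V_oc = (kT/q) * ln(n*p / n_i**2); densities in cm^-3."""
    return (K_B * temp / Q) * math.log(n * p / n_i**2)

# Illustrative: symmetric excess-carrier density of 1e16 cm^-3, n_i = 1e10 cm^-3
v = implied_voc(1e16, 1e16, 1e10)
```

With n·p equal to n_i², the logarithm vanishes and the implied V oc is zero, as expected at equilibrium.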
Spectroscopic ellipsometry (SE) is a non-destructive method used to analyze the optical properties of a sample. Depending on the modeling method, it can also provide information such as film thickness, crystallinity, and bandgap 41 . The measured SE spectra of the TOPCon electron contact annealed at various temperatures were analyzed with the Tauc-Lorentz (T-L) dispersion model. The ε 2 of the T-L dispersion can be expressed as 42 :

ε 2 (E) = A E 0 C (E − E g )² / {[(E² − E 0 ²)² + C²E²] E} for E > E g , and ε 2 (E) = 0 for E ≤ E g ,

where E g is the optical band gap, A is the amplitude, E 0 is the peak transition energy, and C is the broadening term. Since T-L dispersion considers a single transition of a direct-bandgap material, the poly-Si crystallized after thermal annealing cannot be analyzed by T-L dispersion. The poly-Si samples were instead analyzed using the Bruggeman effective medium approximation (BEMA), which is widely used to determine the volume fractions of mixed-phase materials such as nano- and micro-crystalline materials 30 .
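The T-L expression for ε 2 can be implemented directly; the parameter values below are typical a-Si:H numbers chosen only for illustration, not the fitted values of Table 1.

```python
def tauc_lorentz_eps2(e, a=200.0, e0=3.6, c=2.3, eg=1.68):
    """Imaginary part of the Tauc-Lorentz dielectric function.

    eps2(E) = A*E0*C*(E-Eg)**2 / {[(E**2-E0**2)**2 + C**2*E**2] * E} for E > Eg,
    and 0 otherwise. Parameter values are illustrative, not fitted.
    """
    if e <= eg:
        return 0.0
    return (a * e0 * c * (e - eg) ** 2) / (
        ((e * e - e0 * e0) ** 2 + c * c * e * e) * e
    )

# eps2 vanishes below the optical gap and peaks near E0
vals = [tauc_lorentz_eps2(e) for e in (1.0, 2.5, 3.6, 5.0)]
```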
The detailed microscopic structure of the samples was also analyzed using the cross-section measurements from the transmission electron microscopy (TEM). The sample for the cross-section was prepared using a focused ion beam.
Equity in health and healthcare in Malawi: analysis of trends
Background Growing scientific evidence points to the pervasiveness of inequities in health and health care and the persistence of the inverse care law, that is, the availability of good quality healthcare seems to be inversely related to the need for it in developing countries. Achievement of the Millennium Development Goals is likely to be compromised if inequities in health/healthcare are not properly addressed. Objective This study attempts to assess trends in inequities in selected indicators of health status and health service utilization in Malawi using data from the Demographic and Health Surveys of 1992, 2000 and 2004. Methods Data from the Demographic and Health Surveys of 1992, 2000 and 2004 are analysed for inequities in health/healthcare using quintile ratios and concentration curves/indices. Results Overall, the findings indicate that in most of the selected indicators there are pro-rich inequities and that they have been widening during the period under consideration. Furthermore, vertical inequities are observed in the use of interventions (treatment of diarrhoea and acute respiratory infections (ARI) among under-five children), in that the non-poor, who experience less burden from these diseases, receive more of the treatment/interventions, whereas the poor, who bear a greater share of the disease burden, use less of the interventions. It is also observed that the publicly provided services for some of the selected interventions (e.g. child delivery) benefit the non-poor more than the poor. Conclusion The widening trend in inequities, in particular in healthcare utilization for proven cost-effective interventions, is likely to jeopardize the achievement of the Millennium Development Goals and other national and regional targets. To counteract the inequities it is recommended that coverage in poor communities be increased through appropriate targeting mechanisms and effective service delivery strategies.
There is also a need for studies to identify which service delivery mechanisms are effective in the Malawian context.
Background
There has been increased attention to issues of equity in health and healthcare with the renewed commitment of governments and international organizations to improve the health status of the poor and marginalized [1,2]. Equity is one of the basic principles of the Primary Health Care approach [3] and features implicitly or explicitly in the health policies of most countries [4].
Growing scientific evidence points to the pervasiveness of inequities in health and healthcare both between and within countries at different stages of development [5]. Despite achievements in the second half of the 20 th Century in improving life expectancy and child survival, inequities in health have persisted and in some cases have even widened [6].
It is now a well established fact that the poor and marginalized segments of society have a greater need for health care than their rich counterparts. However, access to healthcare still follows the inverse care law -the availability of good quality healthcare seems to be inversely related to the need for it [7].
Despite the commitment of governments to pursue propoor health policies and interventions vigorously, in sub-Saharan Africa the level of inequity in health status and access to basic health care interventions remains high. Benefit-incidence studies in a number of African countries have unequivocally shown that government expenditures on health tend to benefit the richest of society in absolute terms. On average the richest 20% receive more than twice the financial benefit than the poorest 20% of the population from overall government health spending [8].
Monitoring trends in equity in health and access to essential health interventions is important in order to target scarce public resources to those who have more needs, i.e. the poor. Poor countries in sub-Saharan Africa face many constraints in collecting and processing relevant information for gauging trends in equity. This, however, should not be a cause for inaction. It is possible, even in the poorest countries with the least resources, to do much more with the existing data and resources than what is being done currently [9]. Many countries in Africa have conducted various studies such as the demographic and health surveys (DHS) and household income and expenditure surveys. The availability of data for different time intervals makes it possible to review changes in equity in health and healthcare.
The objective of this report is to assess the trends in equity in Malawi for the various indicators of health and healthcare using data from the Malawi Demographic and Health Surveys of 1992, 2000 and 2004.
Brief country profile
Malawi, a landlocked country in Southern/Central Africa, has an area of about 118,484 square kilometers, one-third of which is made up by Lake Malawi [10]. Based on its Human Development Index (HDI) of 0.404, the country ranks 165 th out of 177 countries and is classified as one of the low human development countries. Furthermore, the HDI has declined from its level of 0.412 in 1995 to a level of 0.404 in 2003 [11], indicating a drop in society's welfare.
The per capita GDP in 2003 was US$ 156 with an annual growth rate of 0.9% during the period 1990-2003. The GDP per capita for Malawi is much lower than the average values for low income and sub-Saharan African countries.
According to the 2004/2005 Integrated Household Survey (IHS), about 52% of Malawi's population is classified as poor, i.e. below a national poverty line of MWK 16,165 per person per year -the equivalent of US$ 147 at that time. The median per capita income of the richest decile is about eight times that of the poorest decile [12].
Health and development indicators of Malawi are those typical of other low-income countries in sub-Saharan Africa, as depicted in Table 1.
During the period 1990-2004, infant and under-five mortality rates declined by an annual average of 5%. This is a significant decline compared to that in many countries in the region and exceeds the average annual reduction rate of about 4.3% required to achieve the target of the Millennium Development Goal related to reducing child mortality by two-thirds between 1990 and 2015 (MDG 4). However, population averages do not always represent the reality. The average annual reduction rates among the poorest 20% of the population for infant and under-five mortality in Malawi are in the order of 2.2% and 2.7% respectively, much lower than the population average. Hence, although it appears potentially feasible to achieve the targets of MDG 4 at the current population-average annual reduction rates, disaggregation by wealth quintile indicates that the poorest 20% are unlikely to achieve them.
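The quoted requirement of about 4.3% per year follows from MDG 4 itself: a two-thirds reduction over the 25 years from 1990 to 2015 means (1 − r)^25 = 1/3, which can be solved directly for r.

```python
# Average annual reduction rate needed to cut child mortality by two-thirds
# between 1990 and 2015: (1 - r)**years = 1/3.
years = 2015 - 1990
required_rate = 1 - (1 / 3) ** (1 / years)   # about 0.043, i.e. ~4.3% per year
```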
The greatest proportion of the disease burden is composed of infectious and parasitic diseases and nutritional disorders. However, like most developing countries undergoing demographic and epidemiological transition, Malawi is also seeing non-communicable diseases on the increase, thus posing an additional problem to a health system that is grappling with communicable diseases that sometimes assume epidemic proportions.
The per capita total expenditure on health is one of the lowest in sub-Saharan Africa and is critically short of the US$ 34 recommended by the WHO Commission on Macroeconomics and Health to provide a basic package of services [17]. The total expenditure on health amounts to about 9.8% of the GDP. Government expenditure on health constitutes only 41% of the total health expenditure. Furthermore, expenditure on health constitutes only 9.7% of total government expenditure. This is far below the Abuja target -a resolution by the African Heads of State to allocate 15% of the national budget to health.
The country's health service delivery system is four-tiered, consisting of community, primary, secondary and tertiary care levels [18]. At the community level, service is provided through health surveillance assistants. The focus is on preventive interventions. Primary care is delivered through clinics and health centres. District and central hospitals provide secondary and tertiary care services respectively. The private not-for-profit sector plays a significant role in service provision.
In order to address the enormous health problems effectively with very limited resources, the country has designed an essential healthcare package (EHP) as part of its health Sector-wide Approach (SWAp) adopted in 2004. The EHP being delivered at community, primary and secondary levels of the healthcare delivery system is provided free of charge. The EHP addresses the most common causes of morbidity and mortality and focuses mainly on health problems that disproportionately affect the poor [18].
Equity: concept and measurement
Health-related equity may be viewed from three perspectives: (i) equity in health; (ii) equity in health service delivery; and (iii) equity in health financing. Operational definitions of the first two are given below, as they constitute the focus of this study.
Equity in health is defined as minimizing avoidable inequalities in health and its determinants -including but not limited to healthcare -between groups of people who have different levels of underlying social advantage or privilege [19]. Inequities exist when there are disparities in health and its determinants that are deemed to be avoidable, unfair and unjust [20]. Hence not all health inequalities between population groups are regarded as inequities. Inequities in health specifically refer to disparities between groups of people related to their social position as measured by such characteristics as income/ wealth, occupation, education, geographic location, gender and race/ethnicity [9]. Health inequalities due to inevitable and unavoidable conditions (e.g. biological/genetic variations) do not constitute inequities.
The focus of equity in healthcare provision is to ensure that all people have access to a minimum standard of healthcare according to need and not any other criteria, such as ability to pay. In this case, equity may therefore be defined as equal access for equal need, where access refers to the absence of barriers -mainly geographical and financial barriers; and need refers to the capacity to benefit or severity of illness. Equity in service provision takes two forms: horizontal equity and vertical equity. While horizontal equity implies equal treatment for equal need, vertical equity implies that individuals with unequal needs should be treated unequally according to their differential need.
The measurement of equity in health and healthcare entails three important steps: (i) classifying people by socio-economic status; (ii) measuring health status/healthcare; and (iii) quantifying the degree of inequality.
Measuring household economic status in developing countries is a difficult exercise. This is because data on two frequently used indicators of wealth -household income and expenditure -are often scarce and unreliable [21]. In developing countries, studies have shown a close relationship between asset ownership and consumption expenditure [22] and that household assets are a good indicator of the long-run economic status of households [21]. Asset indices are established to classify households into wealth quantiles (e.g. quintiles, deciles) using the method of Principal Components Analysis (PCA). Analysis of Demographic and Health Surveys of many countries conducted by the World Bank demonstrates the use of PCA to compute asset indices from data on durable consumer goods (e.g. ownership of radio, television etc.), housing quality (e.g. floor type), water and sanitary facilities and other amenities [21]. This categorization of households into wealth quintiles is used in this report to analyze inequities.
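A minimal sketch of the asset-index construction described above, on synthetic data (the asset list and sample are illustrative, not DHS data): standardize the indicators, take the first principal component as the wealth score, and cut the score into quintiles.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Synthetic 0/1 asset indicators (e.g. radio, TV, improved floor, piped water)
latent = rng.normal(size=n)                         # hidden "wealth"
probs = 1 / (1 + np.exp(-(latent[:, None] + rng.normal(size=5))))
assets = (rng.random((n, 5)) < probs).astype(float)

# Standardize indicators and take the first principal component as the index
z = (assets - assets.mean(axis=0)) / assets.std(axis=0)
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
score = z @ eigvecs[:, -1]                          # largest-eigenvalue component

# Orient the score so that owning more assets means a higher score
if np.corrcoef(score, assets.sum(axis=1))[0, 1] < 0:
    score = -score

# Assign wealth quintiles (0 = poorest, 4 = richest) by rank
ranks = score.argsort().argsort()
quintile = ranks * 5 // n
```

Rank-based cutting guarantees equal-sized quintiles, which is the usual convention in DHS-style analyses.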
The next step in assessing equity is to devise appropriate measures of health and healthcare. Having decided on the attribute of health/healthcare to be compared among individuals/population groups, it is then important to find an appropriate technique to quantify the degree of the existing inequality. Several methods have been in use to date. Some have their origin in research on income inequality (e.g. Lorenz curve and the associated Gini coefficient) [23,24] or from modifications of these (e.g. concentration index) [25]. Other methods are based on measures of association (index of dissimilarity, slope index of inequality) [26]. This report is based on the measurement of inequities using the concentration index and corresponding concentration curve.
Source of data
The information used in this study is based on findings from the Malawi Demographic and Health Surveys of 1992, 2000 and 2004.
Data analysis
Inequities are represented by concentration curves, which are easier to interpret than concentration indices. The concentration curve plots the cumulative proportion of the individuals under consideration ranked by wealth against the cumulative proportion of the health/healthcare variable (e.g. stunting, under-five mortality rate, use of modern contraception etc.) being measured. To demonstrate the use of the concentration curve, the case of underweight (low weight-for-age) in under-five year old children is presented in Figure 1.
If there is no wealth-related inequality in the rate of severe underweight, the concentration curve would coincide with the diagonal line (line of equality). This implies that there are no inequities in severe underweight. However, if severe underweight has disproportionately higher prevalence among the poor, the concentration curve lies above the line of equality. The above example reveals that the poor are more likely to experience a greater burden of severe underweight in association with their socio-economic disadvantage (pro-rich inequity).
If the health indicator under consideration is an undesirable outcome such as severe underweight as in the above example, a concentration curve that lies above the line of equality signifies inequity disfavouring the poor and is bad from the equity point of view. If the indicator being considered is a desirable one (e.g. immunization coverage), a concentration curve that lies above the diagonal (line of equality) shows inequity favouring the poor -a situation that is desirable from the equity point of view. A point worthy of note is that the degree of inequity becomes more when the concentration curve is further from the line of equality.
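A minimal sketch of how the ordinates of such a curve are built from grouped data follows; the quintile shares used here are illustrative, not the Malawi DHS figures:

```python
def concentration_curve(pop_shares, health_shares):
    """Cumulative ordinates of a concentration curve from grouped data.

    pop_shares:    population share of each group, ordered poorest to richest
    health_shares: each group's share of the health variable (e.g. underweight cases)
    Returns (p, L): cumulative proportions, both starting at 0 and ending at 1.
    """
    p, L = [0.0], [0.0]
    for pop, health in zip(pop_shares, health_shares):
        p.append(p[-1] + pop)
        L.append(L[-1] + health)
    return p, L

# Underweight concentrated among the poor: the curve lies above the
# 45-degree line of equality (L >= p at every point).
p, L = concentration_curve([0.2] * 5, [0.30, 0.25, 0.20, 0.15, 0.10])
```

Plotting L against p and comparing with the diagonal reproduces the graphical reading described above: the further the curve from the diagonal, the greater the inequity.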
In this study, the concentration curves for the different indicators of health/healthcare from the Malawi Demographic and Health Surveys of 1992, 2000 and 2004 are presented in the same figure so as to observe changes in inequities very easily.
The concentration index that is computed from the concentration curve assumes values between -1 and +1. Its value is negative when the concentration curve is above the diagonal and positive when the curve is below the diagonal. In the absence of inequities (the concentration curve coinciding with the diagonal), the value of the concentration index is zero.
From grouped data, the concentration index (C) is computed in a spreadsheet programme using the following formula [27]:

C = sum over t = 1 to T-1 of [ p_t L(p_{t+1}) - p_{t+1} L(p_t) ]

where p is the cumulative percent of the sample ranked by economic status, L(p) is the corresponding concentration curve ordinate and T is the number of socio-economic groups. To test for the statistical significance of the concentration index, standard errors can be computed using the formula given in Kakwani et al [28].
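Assuming the grouped-data formula above, the spreadsheet computation can be replicated in a few lines of Python; the quintile ordinates below are illustrative:

```python
def concentration_index(p, L):
    """Grouped-data concentration index: C = sum_t (p_t * L_{t+1} - p_{t+1} * L_t).

    p: cumulative population proportions per group, poorest first, ending at 1
    L: corresponding cumulative concentration curve ordinates
    """
    return sum(p[t] * L[t + 1] - p[t + 1] * L[t] for t in range(len(p) - 1))

# A bad outcome concentrated among the poor puts the curve above the
# diagonal, so the index comes out negative, as described in the text.
p = [0.2, 0.4, 0.6, 0.8, 1.0]
L = [0.30, 0.55, 0.75, 0.90, 1.00]
C = concentration_index(p, L)   # negative: pro-poor concentration of the burden
```

When the curve coincides with the diagonal (L equal to p everywhere), every term cancels and the index is exactly zero, matching the no-inequity case described in the text.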
Results
This section presents the findings categorized into two groups: (i) indicators of health status; and (ii) indicators of health service use.
Figure 1: The concentration curve.
Health status
Indicators of health status employed in this study include: infant mortality rate (IMR), under-five mortality (U5MR), under-five child malnutrition (represented by stunting - low height-for-age - and underweight), prevalence of diarrhoea and acute respiratory infections (ARI), total fertility rate (TFR) and low body mass index (BMI) in women, which is an indicator of adult undernutrition. A summary of the distribution of the indicators is depicted in Table 2 below.
As can be observed from Table 2, the population averages of indicators such as infant and under-five mortality showed significant improvement, while in others there was little or no improvement. The quintile ratios indicate the presence of inequalities in all indicators that favour the rich. For most of the indicators (namely infant mortality rate, under-five mortality rate, total fertility rate, prevalence of ARI and diarrhoea in under-five children) widening of inequalities between the two extreme wealth quintiles (poorest 20% and richest 20%) was observed. For example, while there was 33% more infant mortality in the poorest quintile as compared to the richest one in 1992, the excess mortality in the poorest quintile increased to 52% in 2000 and 65% in 2004. Thus, even though there were slight improvements in the population averages, the improvements accrued more to the non-poor. A caveat is, however, in order here. The quintile ratios compare only the two extreme wealth quintiles (quintiles 1 and 5) and therefore disregard the situation of the three middle quintiles (quintiles 2, 3, and 4). Hence, the information provided does not give an overall measure of inequities in the entire population. It is for this reason that a summary measure - the concentration index/curve - is needed.
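The quintile-ratio comparison quoted above reduces to a one-line calculation. The mortality rates below are hypothetical, chosen only so that the ratios reproduce the 33%, 52% and 65% excess figures cited in the text:

```python
def excess_burden_pct(rate_poorest, rate_richest):
    """Percent excess of an outcome in the poorest quintile relative to the richest."""
    return (rate_poorest / rate_richest - 1) * 100

# Hypothetical IMR values (per 1000 live births), not the actual DHS rates.
excess_by_year = {
    year: excess_burden_pct(q1, q5)
    for year, (q1, q5) in {1992: (133, 100), 2000: (152, 100), 2004: (165, 100)}.items()
}
```

As the text notes, this ratio uses only quintiles 1 and 5, which is why the concentration index is needed as the summary measure over all five groups.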
As can be seen from Figure 2(a and b), there was an increase in the levels of pro-rich inequity in infant and under-five mortality rates in 2004 compared with the base period 1992. This implies that the burden of infant and under-five mortality was becoming disproportionately higher among children from poor households than from non-poor households. For IMR, it is observed that the curve for 1992 is below those of 2000 and 2004 and closer to the line of equality, although it is not dominant (at some point there is intersection of the two curves). When the concentration curve for one year does not dominate the other one, it is helpful to resort to the corresponding summary measure, the concentration index, in order to have a clearer picture of the inequity. The concentration index (C) for 1992 was -0.03448 (standard error [SE] = 0.0341). This is not statistically significant, implying that in 1992 there was no inequity in IMR. However, it increased to -0.0434 (SE = 0.0006) in 2000, a statistically significant inequity that disadvantages the poor. The same trend was observed in under-five malnutrition (Figure 2c and 2d) as measured by the levels of underweight and stunting. Thus, the interventions that were designed during this period to improve child survival and nutritional status did not benefit children from the poorest segments of society. The indicators of women's health status in Table 2 tend to move in a different direction, as can also be seen from Figure 3 below.
It can be observed from Panel 3a that the concentration curve for BMI in 2000 and 2004 was closer to the line of equality than it was in 1992. The pro-rich inequity in BMI in 1992 has diminished significantly over the years (The curve for 1992 was farther from the line of equality than the one for 2000; and the one for 2004 is the closest to the line of equality). The concentration index (C = 0.0802, SE = 0.0514) is also testimony to this. However, with respect to the TFR, a progressive increase in pro-rich inequities is observed -the TFR concentration curve for 2004 is farther from the line of equality than that of 2000 and of the base period, 1992.
Health service use
Utilization rates of various mother and child health interventions are employed as indicators of health service use (Table 3).
As can be discerned from Table 3, in most of the service use indicators, there was no improvement in inequities that were in favour of the richest quintile. The degree of inequity increased substantially in the two ARI service indicators. On the other hand, a remarkable reduction in inequity was seen in ORT use and the proportion treated for diarrhoea in a public facility. It should also be noted from the above table that the inequity in the proportion of births taking place at home favours the poorest quintile. This implies that home delivery is mainly practiced by the poor compared to the non-poor. Furthermore, it is observed that the magnitude of the pro-poor inequity in home delivery has widened during the period under consideration, implying that the poor increasingly resorted to home delivery. As discussed earlier, comparison of the two extreme quintiles (quintiles 1 and 5) excludes the situation of the middle quintiles. It is therefore essential to use the concentration curve and index to have a summary measure that takes into account the situation of all five quintiles.

Figure 4(a and b) indicates that no improvements were seen in equity in the use of child health services related to immunization coverage and ARI treatment during the period under consideration. With respect to immunization coverage, the pro-rich inequity has increased. In 1992, there was no inequity in ARI treatment, as observed from Figure 4(b), where the concentration curve is very close to the line of equality. However, caution should be exercised here. As discussed in Section 5.1, pro-rich inequity is observed in the prevalence of ARI, that is, there is a high concentration of the ARI burden among children from the poorest households. If equity is to prevail, the principle of vertical equity (unequal treatment for unequal need) demands that those with greater need should receive more of the treatment.
However, what is observed in the current case is equal treatment for unequal need, which clearly violates the requirements of vertical equity. Hence, there is inequity, as the poor, who have a greater need for treatment compared to the non-poor, are not getting the treatment according to their need. Furthermore, Figure 4(b) shows that the concentration curve for 2000 has deviated from the line of equality significantly. This implies that use of public sector facilities has become more inequitable - the non-poor using the public sector healthcare resources more than the poor and out of proportion to their need.

Other indicators of use of child health services include interventions related to the treatment of diarrhoea. Figure 5 below depicts this information. Figure 5(a) shows that the use of ORT among those who reported diarrhoea had a significant pro-rich orientation in both time periods, despite a slight reduction in the levels of inequity. No improvement was observed in equity in ORT use. If equity prevailed in the use of ORT, then the concentration curves should have been located above the line of equality. In other words, the poor should have used ORT more than the non-poor, as they bear the greatest burden of diarrhoeal disease, as discussed in Section 5.1. The same trend was also observed in the case of those who sought medical attention for diarrhoea - the non-poor sought care more than the poorest. The status quo was maintained during the two time periods. There was no distinction between the poor and non-poor in terms of seeking care in a public facility. Caution should, however, be exercised here. The fact that there is no difference in use of public facilities for diarrhoea treatment among the poor and non-poor does not accord with the principle of vertical equity. Scarce public resources for the treatment of diarrhoea should be used more by the poorest, who have more need for them as indicated by the relatively high prevalence of the condition among them.
The other indicators of health service use employed to assess trends in equity are related to maternal health. These include use of antenatal and delivery services as depicted in Figure 6 below.
As can be observed from Figure 6(a), there was no significant difference in the state of pro-rich inequity in the use of antenatal services by medically trained personnel. The inequity in antenatal care use was, however, of a lesser magnitude compared to that of child delivery services. With respect to delivery services, the same trend of inequity was observed. As can be observed from Panel 6b, the degree of pro-rich inequity was greater than that observed in antenatal care services, as the concentration curves are relatively far from the line of equality.
Panel 6c clearly depicts the fact that publicly provided services for child delivery are utilized more by the nonpoor. This implies that the non-poor benefit from public subsidies more than the poor -contrary to stated intentions of public policies. Panel 6d demonstrates that home-deliveries have pro-poor orientation. The poor utilize home delivery services excessively compared to the non-poor.
Summary of findings
The findings described above are summarized in Table 4 using a framework for evaluating health equity changes [31].
Discussion
This paper attempts to assess trends in inequities in selected health status and health services utilisation indicators in Malawi by using quintile ratios and concentration curves and indices. The analysis is based on data from the Demographic and Health Surveys of 1992, 2000 and 2004. This time period allows for analyzing trends in inequities of health indicators that often change gradually and over a longer period of time.
By and large, the findings indicate that in most of the selected indicators of health and healthcare, increases in pro-rich inequities have occurred. This is an undesirable trend in light of the government's explicit policy commitment to equity in health and healthcare. Interventions intended to lessen inequities disfavouring the poor have not borne the expected results.
Figure 5: Concentration curves for selected health service use in children: ORT and treatment of diarrhoea.

The quintile ratios for infant and under-five mortality rates indicate progressive inequities between the two extreme quintiles, i.e. wealth quintiles 1 and 5, during the period considered. This is also corroborated by the concentration curves in Figure 2, where the respective concentration curves for the year 2004 have moved further away from the line of equality. Thus, there was no improvement in inequities in these indicators, and the improvement in the population averages was primarily due to marked improvements in the rates for the relatively wealthy segments of the population.
Although child mortality rates are influenced by a host of factors, many of which lie outside the health sector, they are often regarded as a proxy for overall disease conditions [17]. Infant and under-five mortality rates are closely related to economic growth and distribution of economic and social resources. Studies have shown that countries whose IMR rates are relatively lower enjoy better economic growth rates than those otherwise [17]. This significant correlation between child mortality rates and economic growth implies that, addressing inequities in infant and under-five mortality should be multi-sectoral and that beyond the biomedical solutions, there is a need to also address the underlying social determinants through concerted and complementary efforts of all sectors of the economy. This is also in line with the principles of the Primary Health Care strategy.
The main direct causes of mortality in under-five children are infectious diseases occurring because they were neither prevented (e.g. vaccine-preventable diseases) nor successfully treated (e.g. ARIs, diarrhoeal diseases) [32]. Diarrhoea, ARIs, measles, malaria and malnutrition account for at least 70% of childhood diseases [32]. The underlying causes are related to socio-economic factors. Thus, from the health sector's perspective, the immediate response to reducing infant and under-five mortality is improving access of the poor to preventive, curative and rehabilitative interventions that are geared towards addressing the major direct causes of childhood mortality. Improving coverage of the interventions through the Integrated Management of Childhood Illness (IMCI) programme may go a long way to bridge the inequity gaps, as 70% of the direct causes are related to the diseases and conditions covered in the IMCI strategy. In addition to improving access to health facilities, improving coverage of IMCI interventions also necessitates outreach services and an increase in community level activities [33]. Widening inequities may imply that the poor's access to the appropriate preventive, curative and rehabilitative interventions has not improved or has even declined.
With respect to child malnutrition (stunting and underweight), there has been an increase in inequities between 1992 and 2004. After a significant increase in inequities in 2000 from the 1992 levels, there was a marginal but statistically insignificant decline in 2004. Thus, no change was observed in the inequity levels in child malnutrition between 2000 and 2004.
According to the WHO cutoffs used to identify nutrition problems of public health significance, the population averages of both stunting and underweight in Malawi fall under the categories of severe stunting (cutoff ≥ 40%) and moderate underweight (cutoff 20-29%). Although the rate of stunting is high even in the non-poor wealth quintile (Quintile 5), there is a marked difference in comparison to that of the poorest quintile (Quintile 1). Stunting, which is an indicator of chronic malnutrition, has adverse long-term consequences for economic productivity. Hence, strategies aimed at reducing poverty and income inequalities need to also tackle the problem of stunting in the overall population and in particular among the poorest of society.
Inequities in total fertility rate (TFR) have been increasing progressively over the given period of time despite a marginal decrease in the population average. The average TFR for Malawi is one of the highest in countries of the Southern African Development Community. Widening inequities suggest that the marginal decline in TFR observed is due to a decrease in TFR among the non-poor. This implies that health sector-specific interventions to curb high fertility rates (e.g. uptake of contraceptives) are not benefiting the poor due to a number of reasons, including problems of access and cultural barriers. High TFR has far-reaching effects in that it adversely affects child survival and household welfare, particularly among the poor. Policies aimed at improving household welfare therefore need to boost coverage of the poor with the available effective interventions. Furthermore, barriers to accessing those interventions need to be identified and addressed appropriately.
A remarkable achievement has been recorded in low BMI (body mass index) of mothers, an indicator of maternal undernutrition. Pro-rich inequity that was observed during the earlier years (i.e. 1992 and 2000) was reversed in 2004. Hence, there are no inequities in this indicator; maternal undernutrition does not vary systematically with socio-economic status. The DHS data also indicate that overweight and obesity are less of a problem among women from poor households [14].
The BMI, which is an indicator of chronic energy deficiency among adults, is less of a biomedical problem than it is socio-economic. It is influenced by a host of factors including household socio-economic status, household feeding patterns and seasonal factors [34]. It can therefore be discerned that improvement in those influencing factors among the poor was registered over the years, thus bridging the inequity gap. Reduction in the rate of low BMI in women is beneficial, as low pre-pregnancy BMI is an established risk factor for low birth weight [35], which in turn affects child survival negatively. It is therefore essential to identify the measures that effectively resulted in abolishing pro-rich inequities so as to replicate them in other related areas and avert any future relapses of inequity in BMI.
Inequities in the prevalence of diarrhoea and ARI among under-five children have also increased over the years significantly. These two conditions are among the major killers of children in sub-Saharan Africa and amenable to low-cost preventive and curative interventions. The fact that pro-rich inequities have widened may imply that environmental conditions (including biological, physical and social environments) that are necessary for the propagation of these diseases among the poor have been deteriorating. Many of the enabling factors for diarrhoeal diseases and ARIs are related to household and community-level socio-economic conditions. Therefore, preventing the disproportionately higher burden of diarrhoea among the poor needs a multi-sectoral strategy beyond the bounds of the health sector (e.g. provision of safe water supply; sanitation, decent housing etc).
The population average for immunization coverage in 2004 has declined by about 17 percentage points from the levels in 1992. Moreover, the inequities in immunization coverage seem to have widened over the years, implying that immunization coverage among the poor has continuously declined. It is a well-established fact that effective and equitable health systems are a pre-requisite for achieving the MDGs and other health goals [36]. Therefore, the current trend is likely to slow down or even reverse the achievement of the Millennium Development Goal aimed at reducing child mortality.
With respect to diarrhoea and ARI interventions, it has to be noted that an equitable condition demands that those with a higher burden of illness receive more of the treatment according to their need. Hence, the concentration curves should lie above the diagonal (line of equality). Equal use is not equitable in this case. As discussed earlier, diarrhoeal diseases and ARIs are among the major causes of morbidity and mortality among under-five children. It is therefore necessary to identify the barriers to the utilization of these interventions by the poor so that the poor make use of these interventions more than the non-poor, who have less need for them. The current situation of inequity may potentially affect progress towards the aforementioned MDG.
Although there is no inequity in antenatal care, delivery by medically trained personnel favours the non-poor. Moreover, delivery in public facilities is inequitable and to the advantage of the non-poor. This implies that the poor get less of the benefits of publicly financed/subsidized services, contrary to the government's policy objectives. Not unexpectedly, child delivery at home has a pro-poor orientation, which implies that the poor deliver at home proportionately more than the non-poor. The fact that government services are utilized more by the non-poor implies that the poor have a constrained access to child delivery services. This may be related to physical distance, low perceived quality or cultural barriers to name but a few. The definitive contributing factors should be identified by means of further studies. By and large, this trend is likely to jeopardize the pace of reducing maternal mortality and thereby achieving the MDG 5 target, that is reducing maternal mortality.
The inverse equity hypothesis proposed by Victora et al [37] states that new interventions will initially benefit those of higher socio-economic status and only later reach the poor. This results in an initial increase in inequity ratios for coverage, morbidity and mortality [36]. Policy makers should, therefore, take this phenomenon into account and counteract the widening of inequities through appropriate service delivery strategies. Increasing coverage in poor communities through targeting of those interventions that mainly benefit the poor, as well as universal coverage of interventions that address conditions that significantly affect the poor, is needed [38].
Overall, pro-rich inequities in health and healthcare are widespread in Malawi and in some cases are widening despite the concerted efforts of government and its development partners. Improvements in population averages of the indicators should not be taken at face value, as the widening disparities imply that the MDG targets may be achieved by the non-poor, but the poor segments of society might not be able to reach them. The fact that the non-poor benefit more from the publicly provided services, which are highly subsidized, is also a point of concern that calls for effective means of targeting the scarce resources. Initiatives such as the sector-wide approach (SWAp) [39] and the design of an essential healthcare package are not inherently equitable if not complemented with policies and strategies that uphold the principles of equity. It is therefore important to assess interventions/initiatives not only in terms of their efficiency, but also their impact on equity through an appropriate equity gauge [40].
Local Control by Radiofrequency Thermal Ablation Increased Overall Survival in Patients With Refractory Liver Metastases of Colorectal Cancer
Abstract
Radiofrequency thermal ablation (RFA) is widely used for local solitary liver tumor control. However, the benefit of RFA for colorectal cancer with liver metastases, which is refractory to chemotherapy, remains unknown. We retrospectively enrolled 70 consecutive colorectal adenocarcinoma patients, who had synchronous liver metastases, who were refractory to chemotherapy, and whose life expectancy was >6 months, into this study to investigate the outcomes of RFA and associated prognostic factors. RFA was offered to all of these patients during enrollment. The time interval from RFA to recurrence of liver metastases and overall survival was recorded. Age, sex, carcinoembryonic antigen level, primary tumor location, postoperative adjuvant chemotherapy regimens, and the size and number of metastatic liver lesions were recorded. Cox regression analysis was used to determine the prognostic significance. Thirty-nine patients accepted RFA during chemotherapy, whereas 31 chose to receive chemotherapy alone. Patients with ≤5 and >5 liver metastases had median survival durations of 28 and 17 months, respectively (P = 0.018). The dominant liver tumor size (<5 vs ≥5 cm) was significantly associated with median survival (30 vs 17 months, respectively; P = 0.038), as was the carcinoembryonic antigen level (35 vs 16 months for ≤200 vs >200 ng/mL, respectively; P = 0.029). Moreover, radiofrequency thermal ablation plus chemotherapy was associated with a better median overall survival than chemotherapy alone (29 vs 12 months, respectively; P = 0.002). In multivariate analysis, only radiofrequency thermal ablation treatment and number of liver tumors were significant prognostic factors for survival. Our results further revealed that patients treated with radiofrequency thermal ablation had longer progression-free intervals than those treated with chemotherapy alone (18 vs 9 months, respectively; P = 0.001).
Hence, radiofrequency thermal ablation is a safe and effective adjunct treatment to chemotherapy.
INTRODUCTION
Colorectal cancer (CRC) is a devastating disease. Approximately 25% of CRC patients have synchronous liver metastases at diagnosis, whereas another 50% eventually develop recurrent disease within the liver. 1,2 Complete resection of liver metastases remains the criterion standard of treatment for CRC with liver metastases. Unfortunately, 80% to 90% of these patients are ineligible for complete resection because of either extensive liver lesions or multiple medical comorbidities. 3,4 Palliative chemotherapy has been the standard of care for metastatic CRC patients who are ineligible for complete resection of liver metastases. 5 Nevertheless, there is growing evidence that regional treatments, including intrahepatic arterial infusion pumps, cryotherapy, chemoembolization, and radiofrequency thermal ablation (RFA), may provide some benefit to patients who have inoperable liver metastases.
RFA is widely used for local control of primary and secondary liver tumors. During RFA, heat generated from a high-frequency alternating current is applied to induce cellular death. Several studies [6][7][8] showed that RFA is safe and feasible in patients with solitary metastatic liver tumors. They further showed that overall survival (OS) rates were not significantly different between patients who underwent RFA and those who underwent surgical resection of liver metastases. [6][7][8] For cases involving multiple liver lesions treated with curative intent, however, it remains unclear whether surgical resection of liver metastases or RFA provides better OS. Some studies [9][10][11][12][13] reported that RFA-treated patients have similar OS compared with surgically treated patients, whereas other studies showed better OS rates in surgically treated patients.
Although the benefits of RFA for patients with resectable solitary liver metastases have been demonstrated, the outcomes of RFA in patients with unresectable CRC liver metastases who have received only palliative chemotherapy remain unknown. Therefore, the aim of this study was to evaluate the potential benefits of RFA plus chemotherapy compared with chemotherapy alone for CRC with metastatic lesions confined to the liver. Additionally, we evaluated various factors that may predict survival in these patients.
MATERIAL AND METHODS
We retrospectively enrolled consecutive patients with histologically proven adenocarcinoma of the colon or rectum and synchronous liver metastases who were referred to the National Taiwan University Hospital between January 2007 and December 2009. Their treatments were evaluated and decided by a multidisciplinary team, which included colorectal surgeons, liver surgeons, oncologists, radiologists, and pathologists. The reasons for not performing complete hepatic resection of liver metastases included the number and location of liver lesions, insufficient hepatic reserve, and patients' comorbidities. Patients were considered potential candidates for RFA treatment if they met the following 2 conditions and responded poorly to chemotherapy (determined by growing or new liver masses identified by computed tomography [CT] or by elevated serum carcinoembryonic antigen [CEA] levels): they still had liver metastases after their primary cancers had been resected and they had received at least 2 different regimens of chemotherapy (generally consisting of FOLFOX or FOLFIRI ± bevacizumab), and they remained in good performance status (Eastern Cooperative Oncology Group-World Health Organization scores of 0 or 1). Patients who had either a life expectancy of <6 months or disease progression outside the liver were excluded. All of the patients who agreed to receive RFA continued to receive chemotherapy during and after RFA. The regimens during RFA treatment in both groups were FOLFOXIRI or high-dose infusional 5-FU/leucovorin ± targeted therapy.
RFA was performed percutaneously for hepatic tumors <5 cm under ultrasonographic guidance. All patients received RFA under general anesthesia in the operating room. A single 17-G internally cooled electrode (Cool-tip TM RF ablation system, COVIDIEN, Mansfield, Massachusetts, USA) was used for each tumor. The radiofrequency current was applied for 12 minutes per tumor. Either 1 or 2 tumors were ablated in each RFA session, depending on the total number of tumors. Follow-up CT scanning of the ablated tumors was performed 1 month after RFA. Patients underwent additional RFA sessions if residual tumor could not be ablated completely in the first attempt or if viable tumor remained identifiable on CT after ablation. All patients who received chemotherapy alone were followed up with abdominal CT scanning every 3 months. The numbers and sizes of metastatic hepatic tumors were reviewed on CT by a single radiologist. The response to treatment was scored retrospectively as partial response, stable disease, or progressive disease (PD) according to the revised Response Evaluation Criteria in Solid Tumors (RECIST) guideline. 14 Recurrence from a previously ablated tumor was evaluated and reablated by the same surgeon.
A number of potential prognostic variables were analyzed. All patient data were obtained and managed in accordance with the approved guidelines of the Institutional Review Board of the National Taiwan University Hospital. The progression-free time from RFA to recurrence of liver metastases was measured from RFA application to PD. Progression-free time and OS were analyzed with Kaplan-Meier curves. The difference between RFA plus chemotherapy and chemotherapy alone was evaluated with the log-rank test. The associations of prognostic variables with progression-free time and OS were analyzed with the Cox proportional-hazards model. Data were analyzed using SPSS 19.0 statistical software (SPSS Inc, Chicago, IL). P values <0.05 were considered statistically significant.
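The Kaplan-Meier analysis described above can be illustrated with a short pure-Python sketch (a toy re-implementation on synthetic follow-up data, not the SPSS procedure used in the study): at each event time the running survival probability is multiplied by (n_at_risk − deaths) / n_at_risk, and the median survival is read off as the first time the curve falls to 0.5 or below.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier curve as (time, S(t)) pairs.

    times  -- follow-up time for each patient (e.g., months)
    events -- 1 if the endpoint (death/progression) occurred, 0 if censored
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = removed = 0
        # group all patients sharing this follow-up time
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:  # censored-only times do not change S(t)
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed
    return curve


def median_survival(curve):
    """First time at which S(t) drops to 0.5 or below (None if it never does)."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None
```

With real grouped data, the two resulting curves would then be compared with a log-rank test, as in the study.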
Demographics
Thirty-nine patients were enrolled in the RFA plus chemotherapy group and 31 in the chemotherapy-alone group. Patient characteristics are presented in Table 1. The 2 groups had similar mean ages, serum CEA levels, distributions of primary and metastatic tumor locations, frequencies of comorbid medical conditions, numbers of liver metastases, largest sizes of hepatic metastases, and frequencies of receiving targeted therapy.
The surgical intervention for the primary cancer was also analyzed. In the RFA plus chemotherapy group, among the 27 patients with a primary colon cancer, 5 received anterior resection, 10 left hemicolectomy, and 12 right hemicolectomy; the other 12 patients, with primary rectal cancer, all received low anterior resection. In the chemotherapy-alone group, 7 received anterior resection for sigmoid colon cancer, 7 left hemicolectomy, 10 right hemicolectomy, and 7 low anterior resection for rectal cancer. Eighty-five percent (33/39) of operations in the RFA plus chemotherapy group were performed laparoscopically, as were 87% (27/31) of operations in the chemotherapy-alone group (P = 0.77).
RFA Result
A total of 113 RFA sessions for 135 tumors were performed in 39 patients. The mean number of RFA sessions per patient was 2.89 (range: 1-11, SD: 2.26). The mean size of an ablated tumor was 2.96 cm (range: 0.7-4.8 cm, SD: 1.12). The complete ablation rate was 91.9% (124/135). Seven patients received additional RFA because of 9 recurrent tumors; the median time to recurrence of these 9 tumors was 6 months. Most patients who received more than one RFA session did so because multiple tumors could not be ablated in the first attempt, not because of recurrent tumors. Post-RFA complications were assessed and graded according to a previously described classification of surgical complications. 15 The complication rate was 6.19% (7/113); complications included 3 hepatic abscesses not requiring drainage (Grade I), 2 pleural effusions (Grade IIIa), 1 hemoperitoneum (Grade II), and 1 hepatic abscess requiring radiological drainage (Grade IIIa). There were no cases of post-RFA mortality during the same admission. Patients in the chemotherapy-alone group continued to receive further cycles of chemotherapy; their common morbidities were neutropenia, anemia, and bacteremia. Mortality in these patients was caused by progression of liver disease.
Overall and Progression-Free Survival
Patients with <5 metastatic liver tumors had longer survival than those with ≥5 liver tumors (28 vs 17 months, respectively; P = 0.018; Figure 1). Patients with dominant lesions <5 cm had better survival than those with dominant lesions ≥5 cm (30 vs 17 months, respectively; P = 0.038). Serum CEA levels <200 ng/mL were also associated with better survival than levels ≥200 ng/mL (35 vs 16 months, respectively; P = 0.029). Overall survival in the group treated with RFA plus chemotherapy was significantly longer than that in the chemotherapy-alone group (29 vs 12 months, respectively; P = 0.002; Figure 2). No survival advantage was observed with respect to sex, age, medical comorbidity, colon versus rectal primary tumor location, or the use of targeted therapy, as shown in Table 2. In multivariate analysis, the independent factors associated with survival were limited to treatment type and the number of liver tumors: only RFA treatment and ≥5 metastatic liver tumors were significant predictors of mortality (odds ratio [OR]: 4.122, 95% confidence interval [CI]: 1.897-8.953, P = 0.001; and OR: 3.359, 95% CI: 1.485-7.598, P = 0.004, respectively), as shown in Table 3. The rate of loss to follow-up was 5.13% (2/39) in the RFA group and 3.22% (1/31) in the chemotherapy-alone group.
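The Cox-model estimates above (reported as ORs; strictly, hazard ratios) relate to the fitted coefficient β by HR = exp(β), with 95% CI given by exp(β ± 1.96·SE). A minimal sketch of this back-calculation; the SE below is recovered from the reported CI as (ln 8.953 − ln 1.897) / (2·1.96) and is an illustration, not a value from the study:

```python
import math

def hazard_ratio_ci(beta, se, z=1.96):
    """Hazard ratio and 95% CI from a Cox coefficient and its standard error."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative back-calculation for the treatment-type estimate
# (reported HR 4.122, 95% CI 1.897-8.953)
beta = math.log(4.122)
se = (math.log(8.953) - math.log(1.897)) / (2 * 1.96)  # roughly 0.396
hr, lo, hi = hazard_ratio_ci(beta, se)
```

The symmetry of the CI on the log scale is what makes this reconstruction possible.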
Our results further revealed that patients treated with RFA had longer progression-free intervals than those treated with chemotherapy alone (18 vs 9 months, respectively; P = 0.001), as shown in Figure 3. Although most patients died of progression of liver disease, RFA treatment significantly decreased the incidence of death caused by hepatic failure or biliary infection (51% in the RFA plus chemotherapy group vs 87% in the chemotherapy-alone group, P = 0.002), as shown in Table 1.
DISCUSSION
Previous studies showed that patients with "solitary resectable" liver tumors treated with RFA have OS equivalent to that of patients treated with surgical resection; RFA was used with curative intent in those studies. [6][7][8] However, the outcomes of RFA in patients with unresectable CRC liver metastases who receive only palliative chemotherapy remain unknown. 11,12 It is uncertain whether these patients, who are not eligible for surgical resection with curative intent, can benefit from RFA. In our study, patients who received RFA plus chemotherapy were compared with those who received chemotherapy alone. We therefore limited our study to patients with unresectable CRC liver metastases who responded poorly to chemotherapy (determined by growing or new liver masses identified by computed tomography or by elevated serum CEA levels), and RFA was added with palliative intent. All patients continued to receive chemotherapy, with RFA added as an adjuvant therapy between cycles of chemotherapy. The regimens in both groups were FOLFOXIRI or high-dose infusional 5-FU/leucovorin. Notably, our data showed that more patients in the chemotherapy-alone group received FOLFOXIRI (22/30, 73.3%), the more intensive regimen, than in the RFA plus chemotherapy group (19/31, 61.3%). This indirectly demonstrates that the shorter overall survival in the chemotherapy-alone group was not attributable to a weaker intensity of chemotherapy, although the percentage of patients who received FOLFOXIRI did not differ significantly between the 2 groups (P = 0.086). This result is shown in Table 1. Furthermore, we analyzed the causes of death in both groups. Most patients with CRC liver metastases died of hepatic tumor progression, including hepatic failure and severe bile tract infection.
Other causes of death included sepsis, multiple organ failure, and other metastases. However, the proportion of patients dying of hepatic progression was greater in the chemotherapy group than in the RFA group (87% vs 51%, respectively; P = 0.002). This may be explained by RFA providing better local (hepatic) control of metastatic disease. 16 Prognostic factors associated with increased survival were identified in our study. Consistent with previous studies, [17][18][19] we found that parameters associated with tumor burden, such as the CEA level, tumor size, and number of hepatic lesions, determined overall survival. Multivariate analysis also indicated that a smaller number of hepatic lesions was a significant predictor of longer survival.
Complications after RFA have been analyzed in several large case series, with major complication rates ranging from 0.6% to 8.9%. 20,21 In our study, the overall complication rate was 6.19%, in line with previous studies. In recent years, new devices capable of ablating tumors >5 cm have appeared; this would make more tumors in studies such as ours eligible for ablation.
We retrospectively evaluated the benefits of RFA in patients with CRC liver metastases refractory to second-line chemotherapy. This nonrandomized study therefore had several limitations, including the retrospective nature of data acquisition, heterogeneous patient populations, different chemotherapy regimens before RFA, and a small sample size. It could be argued that comparing the outcomes of patients who did and did not receive RFA is inconclusive, because the disease of patients who received RFA may have been more indolent than that of patients who could not receive RFA. This bias could also explain the relatively shorter overall and progression-free survival in the chemotherapy-alone group. In our study, however, this bias was at least partly addressed by the comparison of the characteristics of the 2 groups shown in Table 1. Although our study identified a palliative role for RFA in patients with CRC liver metastases, further prospective studies are required to verify the results reported here.
CONCLUSION
The results of our study provide evidence that RFA plus chemotherapy may delay the progression of hepatic tumors and improve the survival of colorectal cancer patients when surgical resection is not feasible and metastases are confined to the liver. There was a low frequency of RFA complications and no deaths immediately following treatment. For patients with poor responses to chemotherapy, RFA could be an adjunct treatment of choice.

FIGURE 3. Kaplan-Meier survival analysis for progression-free survival according to treatment method. The chemotherapy-only group (green lines) has a shorter interval to progressive disease than the radiofrequency ablation (RFA) plus chemotherapy group (blue lines) (median time to progression: 9 vs 18 months, respectively; P = 0.001).
The case of determining the species, gender, age and race of the skull with congenital multiple developmental anomalies
In forensic medicine, reconstruction of a victim profile is a widely used procedure for providing individual data in cases of complex personal identification. The most valuable data, such as gender, age, origin and height, are obtained from skeletal and dental analyses. Deformed skulls require special expert attention during the forensic examination of bone remains. Unusual skull shapes usually form in certain diseases (hydrocephalus, craniostenosis, rickets), with various kinds of injuries, or as a result of artificial (intentional) changes in the shape of the head. The detection of cranial deformity has great forensic importance in identifying a person, allowing the expert to further outline the range of diseases that an unknown person could have suffered during life. This article describes a rare forensic case of identification of a human skull with congenital multiple developmental anomalies. During the medical and forensic examination, signs of a rare disease characterized by a specific deformation of the skull were identified in the deceased. A comprehensive assessment of the data is very important when conducting forensic medical studies aimed at helping investigative authorities to identify human remains.
Introduction
Forensic identification of a person from bone remains can present significant difficulties for forensic experts [1,2]. Difficulties in identifying bones arise when bone tissue has been destroyed by exposure to high temperatures, chemical and other factors, as well as when several corpses are found in one burial [3,4]. The skull and long tubular bones remain the most informative for determining gender, age and individual anthropometric data [5,6]. At present, a number of problems remain in establishing a person's identity, despite improvements in biometric identification and DNA analysis methods [4,7]. This is primarily due to the processes of race mixing, a characteristic trend of modern humanity, which leads to changes in the main craniometric parameters of the skull [1,5,8]. On the territory of the Republic of Kazakhstan, the admixture of the population is directly related to migration processes. In this region, the formation of a contact zone (a zone of fusion of Caucasians and Mongoloids) is clearly traced [2], caused by many factors, including the deportations of peoples of the former USSR during the repressions, the evacuation of the population during the Second World War, and subsequent processes of globalization. As a result, the number of people of mixed ancestry living in this territory is steadily increasing, creating certain difficulties in the racial identification of bone remains. The numerical indicators used in existing methods for identifying a person from bone remains are somewhat outdated; in addition, in most cases they are adapted exclusively to the Caucasian population of central Russia. The use of these data for the multinational ethnic composition of the Republic of Kazakhstan is poorly justified, since they can be interpreted incorrectly owing to the processes of acceleration and urbanization of the local population.
At the same time, individual innate and acquired bone features are of particular importance [9,10]. However, the differential diagnosis of gender, age and race in congenital anomalies of the development of the skull bones is difficult in forensic medical practice, because developmental anomalies of the skull alter its metric, anatomical, morphological and radiological signs [5,11,12]. Meanwhile, situations requiring the identification of skeletonized human remains continue to arise frequently. In this regard, we present an interesting example of a medical and forensic examination of bone remains found in the Almaty region.
Case presentation
Not far from the farm "Zhylkibayev", located near the village of Shengeldy, Almaty region, skeletal fragments were found, presumably belonging to a missing person who had become a victim of murder. For identification, the material was transferred for medical and forensic examination to the Institute of Forensic Examinations in Almaty, RSME "Center for Forensic Examination of the Ministry of Justice of the Republic of Kazakhstan". The skull, four cervical, three thoracic and two lumbar vertebrae, five ribs, the left femur, and the left and right tibiae were presented for the examination. The bones were white with a yellowish tinge, dry, light, and completely devoid of soft tissues. A comparative analysis of the morphology of the skull bones revealed a pronounced deformation of the occipital bone, forming a hemispherical protrusion (Figure 1, Figure 2). The interparietal suture is deformed and deviates to the left in its apical and bregmatic parts (Figure 3). The occipital suture is represented by significantly overgrown serrations and is up to 35 mm wide (Figure 4). For further investigation, the cranial cavity was opened with an angular saw passing through the frontal and parietal bones. Measurement of the base of the skull with anthropological craniometric instruments and subsequent comparison of anatomical characteristics revealed a decrease in the size of the anterior and middle cranial fossae, an increase in the size of the posterior cranial fossa, a shortening of the clivus (Blumenbach's clivus) and a flattening of the base of the skull (Figure 5). The mastoid processes are reduced, the styloid processes are reduced, and one abnormal horizontal suture of the occipital bone departs from the occipitomastoid sutures (Figure 6). The circumference of the skull is 555 mm, the longitudinal diameter is 195 mm, and the transverse diameter is 148 mm.
The thickness of the frontal and parietal bones is within normal limits; the occipital bone is somewhat thinned.
When establishing the race of the investigated skull, of 28 craniometric indicators of the external structure of the skull [1], 21 were identified as confirming Mongoloid origin and 3 as probable Caucasoid signs. The remaining 4 craniometric indicators could not be assessed owing to the absence of part of the dental apparatus. Additionally, a comparative analysis of the cranioscopic parameters of the examined skull was carried out against the corresponding indicators of 10 male skulls of the Mongoloid race taken from the regional database (the regional database contains no female skulls of the Mongoloid race). It should be noted that of the 25 generally accepted cranioscopic indicators, 3 (forehead width, condylar width and bigonial width) were not studied, because the frontal bone or lower jaw was missing on some skulls.
For ease of comparison, for each of the 22 studied signs the smallest, average and largest indicators were used, designated respectively as uncertain (U), probably male (PM), reliably male (RM) and probably female (PF). The results of the comparative research are presented in Table 1.
It was found that of the 22 signs, 14 exceeded the average indicators, and 7 of these exceeded the largest values. Analysis of the results showed that the skull provided for the research has metric characteristics that differ from most indicators characteristic of skulls of the Mongoloid race. When the skull was examined to establish somatic gender, only 23 of the 25 parameters accepted in forensic medical practice could be assessed; 2 parameters could not be determined owing to the complete absence of the left ramus of the lower jaw. There were 6 reliably male signs, 12 probably male, 4 uncertain and 1 probably female.
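The tallying of sign categories described above can be sketched as a simple weighted count. The weighting (reliable signs counting double) is an illustrative assumption; the paper does not state a numeric scoring rule:

```python
from collections import Counter

def gender_from_signs(signs):
    """Crude majority call from per-sign scores.

    'RM'/'RF' = reliably male/female, 'PM'/'PF' = probably male/female,
    'U' = uncertain. Reliable signs are weighted double (an assumption
    made here for illustration only).
    """
    c = Counter(signs)
    male = 2 * c['RM'] + c['PM']
    female = 2 * c['RF'] + c['PF']
    if male > female:
        return 'male'
    if female > male:
        return 'female'
    return 'indeterminate'

# The 23 scored signs from the examined skull: 6 RM, 12 PM, 4 U, 1 PF
skull_signs = ['RM'] * 6 + ['PM'] * 12 + ['U'] * 4 + ['PF'] * 1
```

Applied to the case data, any reasonable weighting yields a male call, matching the expert conclusion.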
When determining the age, it was found that the serrations of the coronal suture were smoothed in the temporal and bregmatic parts, which corresponds to an age of about 30-40 years, while the interparietal suture is partially smoothed posteriorly, which corresponds to 20-30 years. The degree of occipital suture closure was not evaluated because of the developmental anomalies of the skull. The sphenofrontal, sphenoparietal and sphenotemporal sutures are smoothed but not obliterated throughout, which corresponds to an age of less than 40 years. On the inner surface of the skull, the coronal suture is obliterated and the remaining sutures are smoothed, which also corresponds to an age of less than 40 years. A comprehensive analysis of the data obtained showed that the degree of closure of the skull sutures corresponds to an age of 20 to 40 years; however, given the developmental anomalies of the skull, this result may be of only relative significance.
Discussion
The results obtained convincingly indicate that the anatomical and physiological features of the skull bones under study are characteristic of basilar impression, in which the base of the skull is flattened, the dimensions of the anterior and middle cranial fossae at the level of the sella turcica are reduced, and the clivus (Blumenbach's clivus) is shortened [13,14]. According to some authors, basilar impression is rarely isolated and often occurs in genetic diseases such as Hajdu-Cheney syndrome, Gorham syndrome and others [12,14,15]. Hajdu-Cheney syndrome is a rare autosomal dominant congenital connective tissue disorder characterized by severe and excessive bone resorption, leading to osteoporosis and a wide range of other possible symptoms [12]. Patients may have a peculiar phenotype characterized by a small lower jaw, a deep depression in the back of the head, osteoporosis, short stature, dislocations of bones, a short neck, thick eyebrows, thick hair, a high or low palate and low-set ears (Figure 7, Figure 8) [16].
Gorham-Stout disease is an extremely rare disease characterized by osteolysis due to anomalous proliferation of blood vessels. When the disease begins in childhood, skeletal deformities develop. Bone loss can occur in one bone or in several, involving the soft tissues in the process [13]. The course of the disease is variable and unpredictable. Involvement of the bones of the skull and spine is prognostically unfavorable; however, there are cases in which people with this nosological form have lived to 70 years [11,13]. In our case, the bones of the vault and base of the skull also had pronounced digital impressions (Figure 5), indicating pronounced intracranial hypertension during life.
The racial affinity of the examined skull was determined solely for the purpose of identifying the remains. However, one of the main problems of human identification is that the emergence of racial hybridity is not taken into account [7]. According to a number of authors, skeletons currently show features "typical" of two or more racial groups, and it is very problematic to attribute them to one specific racial group [3,17]. World migration has further strengthened categorical ideas about biological variation, and some authors emphasize the important influence of socio-geographical environmental factors on the shape of the human skeleton [18]. At the same time, some authors emphasize that when conducting a forensic medical examination, one should be fully aware of the many biological inaccuracies involved in identifying human remains [3,7]. There is also a point of view according to which it is necessary to form databases of population-specific craniometric indicators for each region separately, since the geographical movement of people occurs on a very large scale and leads to growing populations of mixed individuals [1,7,19,20].
Determining the gender of bones has the greatest practical importance in forensic medical examination because, in the identification process, it makes it possible to halve the number of wanted persons. However, in human populations, the degree of difference in size characteristics between men and women may be small [17,19]. The size ranges of the two genders overlap widely, so that only the smallest women and the largest men fall outside the overlap with the opposite gender; in addition, the racial and ethnic characteristics of the population must be taken into account [2,10,20]. Diagnosis of gender from the skull can be complicated by various factors, including environmental and occupational factors, as well as nutritional characteristics and pathological changes due to diseases [4,21,22]. The craniometric approach makes it possible to standardize personal identification; however, the craniometric method of gender determination of V.I. Pashkova used in the CIS countries requires certain additions and changes, since it was originally based on studies of skulls belonging exclusively to people of Russian nationality aged 22 years and older who lived in the north-west of Russia. In addition, this technique is not recommended for the study of deformed or fragmented skulls, remains exposed to high temperature, or the skulls of children [10]. According to many researchers, the significance of various signs is not the same, and therefore the combined application of craniometric and cranioscopic approaches is recommended [2,7,19].
These tasks cannot be solved without an integrated approach to the totality of all diagnostic and identification features of the human skeleton. Based on the conducted research, it was found that the skull most likely belonged to a man of the Mongoloid race with separate Caucasoid features. The age of the unknown individual ranged from 20 to 40 years, but this estimate is of only relative significance because of multiple congenital anomalies of the skull, whose formation was due to a rare genetic disease present during the life of the deceased, most likely Hajdu-Cheney or Gorham syndrome. The detection of such a developmental anomaly has great forensic importance, since the lifetime appearance of such people is very specific and facilitates rapid identification of the individual. Thus, it can be argued that forensic medical research, characterized by variability of results, cannot be used in isolation from auxiliary data indicating the presence during life of diseases affecting the structure of the skeleton. The expansion of the competence of the forensic medical expert is driven by the demands of the time and the expansion of the tasks of expertise and the methods of their solution. The technologies applied within the framework of the conducted medical and forensic examination made it possible to resolve the issues of interest to the investigator and to work through all possible investigative versions of the criminal events as objectively as possible.
Disclosures: There is no conflict of interest for all authors.
Transcriptome analysis reveals the defense mechanism of cotton against Verticillium dahliae in the presence of the biocontrol fungus Chaetomium globosum CEF-082
Background Verticillium wilt of cotton is a serious soil-borne disease that causes a substantial reduction in cotton yields. A previous study showed that the endophytic fungus Chaetomium globosum CEF-082 could control Verticillium wilt of cotton, and induce a defense response in cotton plants. However, the comprehensive molecular mechanism governing this response is not yet clear. Results To study the signalling mechanism induced by CEF-082, the transcriptome of cotton seedlings pretreated with CEF-082 was sequenced. The results revealed 5638 DEGs at 24 h post inoculation with CEF-082, and 2921 and 2153 DEGs at 12 and 48 h post inoculation with Verticillium dahliae, respectively. At 24 h post inoculation with CEF-082, KEGG enrichment analysis indicated that the DEGs were enriched mainly in the plant-pathogen interaction, MAPK signalling pathway-plant, flavonoid biosynthesis, and phenylpropanoid biosynthesis pathways. There were 1209 DEGs specifically induced only in cotton plants inoculated with V. dahliae in the presence of the biocontrol fungus CEF-082, and not when cotton plants were only inoculated with V. dahliae. GO analysis revealed that these DEGs were enriched mainly in the following terms: ROS metabolic process, H2O2 metabolic process, defense response, superoxide dismutase activity, and antioxidant activity. Moreover, many genes, such as ERF, CNGC, FLS2, MYB, GST and CML, that regulate crucial points in defense-related pathways were identified and may contribute to V. dahliae resistance in cotton. These results provide a basis for understanding the molecular mechanism by which the biocontrol fungus CEF-082 increases the resistance of cotton to Verticillium wilt. Conclusions The results of this study showed that CEF-082 could regulate multiple metabolic pathways in cotton. After treatment with V. dahliae, the defense response of cotton plants preinoculated with CEF-082 was strengthened.
studies have shown that various biological control agents can suppress Verticillium wilt in different host species [7,8]. Iturins mediate the defense response, and significantly activate PR1, LOX, and PR10 at 24 h after V. dahliae infection [9]. The nonvolatile substances produced by CEF-818 (Penicillium simplicissimum), CEF-325 (Fusarium solani), CEF-714 (Leptosphaeria sp.), and CEF-642 (Talaromyces flavus) inhibit V. dahliae growth [10]. Fusarium oxysporum 47 (Fo47) reduced the symptoms of Verticillium wilt in pepper; the expression of three defense genes, CABPR1, CACHI2 and CASC1, was upregulated in the roots [11]. Bacillus subtilis DZSY21 reduced the disease severity of southern corn leaf blight, and upregulated the expression level of PDF1.2 [12]. Preinoculation of cauliflower with Verticillium Vt305 reduced symptom development and the colonization of plant tissues by Verticillium longisporum [13]. Various fungal and bacterial strains showed biocontrol activity against Verticillium wilt of olive. These microorganisms protect plants from the deleterious effects of the various pathogens, cause induced systemic resistance (ISR), compete for nutrients and colonization space, or promote plant growth through the production of phytohormones and the delivery of nutrients [14].
It has been reported that a series of immune reactions are induced in cotton plants infected with V. dahliae. In recent years, transcriptomic studies of the defense responses of plants infected with V. dahliae have become increasingly common, and several signal transduction pathways and key genes have been identified, including those involved in plant hormone signal transduction, plant-pathogen interaction, and phenylpropanoid-related and ubiquitin-mediated signals in cotton; additionally, these studies have investigated members of key regulatory gene families, such as receptor-like protein kinases (RLKs), WRKY transcription factors and cytochrome P450s (CYPs) [3]. The expression levels of phenylalanine ammonia-lyase (PAL), 4-coumarate-CoA ligase (4CL), cinnamyl alcohol dehydrogenase (CAD), caffeoyl-CoA O-methyltransferase (CCoAOMT), and caffeoyl O-methyltransferase (COMT) in the phenylalanine metabolism pathway have been shown to be upregulated in sea-island cotton [2]; the expression levels of 401 transcription factors (TFs), mainly in the MYB, bHLH, AP2-EREBP, NAC, and WRKY families, have been shown to be up- or downregulated in response to V. dahliae in Arabidopsis thaliana [15]; and genes encoding cyclic nucleotide gated channel (CNGC), respiratory burst oxidase homologue (RBOH), flagellin-sensitive 2 (FLS2), jasmonate ZIM domain-containing protein (JAZ), transcription factor MYC2, regulatory protein NPR1 and transcription factor TGA have been shown to be induced by V. dahliae in sunflower [16]. Several studies have investigated transcript levels in plants in response to biocontrol agents [17,18].
In previous studies, we found that the endophytic fungus Chaetomium globosum CEF-082, which was isolated from upland cotton plants, suppressed the growth of V. dahliae and increased cotton resistance to Verticillium wilt [19]. However, the signalling mechanism induced by CEF-082 is unknown. Therefore, the purpose of this study was to reveal the molecular mechanism by which CEF-082 increases cotton resistance to Verticillium wilt via RNA-seq analysis.
Results
Control effect of CEF-082 on Verticillium wilt of cotton and H2O2 content
The disease index was 18.61 in the control group (water + V. dahliae) and 7.62 in the treatment group (CEF-082 + V. dahliae) at 14 d after V. dahliae inoculation (Fig. 1A). The results showed that CEF-082 enhanced the resistance of cotton to Verticillium wilt, and the biocontrol effect was 59.1% (Fig. 1C).
The H2O2 content in the treatment group was higher than that in the control group throughout most of the experiment, and lower than that in the control group at 5 dpi with V. dahliae. The H2O2 content in the treatment group peaked at 2 dpi (12.80 μmol/g), while that in the control group peaked at 1 dpi (10.38 μmol/g). The changes in the two groups were similar and were stable after 5 d (Fig. 1B).
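The 59.1% biocontrol effect reported above can be reproduced with the standard relative-reduction formula (a minimal sketch; the exact formula is not stated in the text, but relative disease-index reduction is the usual definition):

```python
def biocontrol_effect(di_control, di_treatment):
    """Relative reduction of the disease index, in percent."""
    return (di_control - di_treatment) / di_control * 100

# Disease-index values reported in the text (water + V. dahliae vs.
# CEF-082 + V. dahliae at 14 d post inoculation).
effect = biocontrol_effect(18.61, 7.62)
print(round(effect, 1))  # 59.1
```

The rounded result matches the 59.1% reported in Fig. 1C.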
Verification of RNA-Seq analysis by qRT-PCR
Twelve DEGs were randomly selected, and their expression levels in the control and treatment groups were compared by qRT-PCR. The RNA-seq data showed that the expression of the 12 genes was upregulated at 0 h, 12 h or 48 h. The qRT-PCR results showed that nine of the 12 genes were upregulated, consistent with their upregulation in the transcriptome; the remaining three genes (Gh_D12G2793, Gh_D08G2484 and Gh_D05G3615) were downregulated, inconsistent with their expression in the transcriptome (Fig. 2). In addition, the degree of upregulation of 5 genes in the qRT-PCR data was lower than that in the RNA-seq data. Overall, the qRT-PCR data were 75% consistent with the transcriptome data.
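The 75% figure is the fraction of genes whose direction of change (up vs. down) agrees between the two platforms. A minimal sketch of that check (the fold-change values below are invented for illustration; only the 9-of-12 agreement mirrors the text):

```python
def direction_concordance(rnaseq_lfc, qpcr_lfc):
    """Fraction of genes whose expression direction (sign of the log2
    fold change) agrees between RNA-seq and qRT-PCR measurements."""
    agree = sum(1 for r, q in zip(rnaseq_lfc, qpcr_lfc) if r * q > 0)
    return agree / len(rnaseq_lfc)

# Hypothetical log2 fold changes for 12 genes: 9 agree in sign, 3 do not.
rnaseq = [1.2, 2.0, 0.8, 1.5, 3.1, 0.6, 1.1, 2.4, 0.9, 1.3, 0.7, 1.8]
qpcr   = [0.9, 1.1, 0.4, 1.0, 2.2, 0.3, 0.8, 1.9, 0.5, -0.4, -0.2, -0.6]
print(direction_concordance(rnaseq, qpcr))  # 0.75
```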
Functional annotation and enrichment analysis of the DEGs
The minimum correlation between the three replicates was 95.5% (Additional file 1: Figure S1). Principal component analysis (PCA) of 18 arrays (Additional file 2: Figure S2) was also used to compare the samples and to explore the dynamic changes in the cotton transcriptome after treatment with CEF-082 and V. dahliae.
The average number of clean reads across the 18 samples was 62.08 M. The lowest Q20 value of the clean reads was 97.93%, and the lowest Q30 value was 90.06% (Additional file 9: Table S2). A total of 47,183 new transcripts were found, of which 7288 belonged to new protein-coding genes (Additional file 10: Table S3).
There were 3480 upregulated and 2158 downregulated DEGs at 0 h, 1716 upregulated and 1205 downregulated DEGs at 12 h, and 1524 upregulated and 629 downregulated DEGs at 48 h. The greatest number of DEGs was identified after inoculation with CEF-082 for 24 h. After inoculation with V. dahliae, the number of DEGs gradually decreased.
The GO enrichment analysis revealed that the 5638 genes were mainly enriched in 86 terms, including intrinsic component of membrane, integral component of membrane, membrane part, membrane, catalytic activity, response to biotic stimulus, cell wall, oxidoreductase activity, defense response, response to stimulus, response to stress, and response to fungus (Q-value < 0.001); the first 15 terms are listed in Table 2. Of the 16 genes in the response to fungus term, 15 were upregulated and 1 was downregulated. The GO classification showed that there were 18, 14 and 12 terms in the biological process, cellular component and molecular function categories, respectively, for the DEGs coinduced by CEF-082 and V. dahliae.

The 463 shared DEGs at 12 h and 48 h were significantly enriched in 6 KEGG pathways (Table 3). In the plant-pathogen interaction pathway, 29 DEGs regulated 8 crucial points, including CNGCs, calmodulin (CaM), FLS2, disease resistance protein RPS2 (RPS2), heat shock protein 90 kDa (HSP90), Pto-interacting protein 1 (Pti1), disease resistance protein RPM1 (RPM1), and EIX receptor 1/2 (EIX1/2). In the phenylpropanoid biosynthesis pathway, 23 DEGs regulated 9 crucial points. In the flavonoid biosynthesis pathway, 12 DEGs regulated 8 crucial points. The enriched GO terms included the terpenoid metabolic process, oxidoreductase activity, defense response, H2O2 metabolic process and ROS metabolic process terms.

A total of 1209 specific DEGs were identified at 12 h and 48 h, which were induced only in cotton plants inoculated with V. dahliae in the presence of CEF-082, but not when cotton plants were inoculated with V. dahliae only. The cluster thermogram shows the expression patterns of these genes at different stages (Additional file 3: Figure S3). KEGG classification showed that these DEGs mainly belonged to metabolism (672 DEGs) and were significantly enriched in 5 KEGG pathways, including flavonoid biosynthesis, indole alkaloid biosynthesis, MAPK signalling pathway-plant, plant-pathogen interaction, and phenylpropanoid biosynthesis (Table 4).
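The Q-values behind the GO/KEGG enrichment tables are multiple-testing-corrected versions of a hypergeometric P value. As a sketch of the underlying test (the gene counts in the example are hypothetical, not taken from the tables):

```python
from math import comb

def enrichment_p(k, n, K, N):
    """Hypergeometric upper-tail probability P(X >= k): the chance of
    seeing at least k pathway genes among n DEGs, when K of the N
    annotated genes belong to the pathway. This is the raw P value that
    a Q-value corrects for multiple testing."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

# Hypothetical numbers: 12 of 463 DEGs fall in a pathway with 80 members,
# out of 20,000 annotated genes.
print(enrichment_p(12, 463, 80, 20000))
```

Smaller P values indicate stronger over-representation of the pathway among the DEGs.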
GO classification showed that there were 14, 12 and 9 terms in the biological process, cellular component and molecular function categories, respectively. GO enrichment indicated that these DEGs were enriched in the ROS metabolic process term (14 DEGs; Additional file 4: Figure S4). At 12 h and 48 h, 96 shared DEGs were obtained that were induced only in cotton plants inoculated with V. dahliae in the presence of CEF-082, but not when cotton plants were inoculated with V. dahliae only (Additional file 5: Figure S5). KEGG analysis of the 96 DEGs indicated that they were mainly enriched in glutathione metabolism and flavonoid biosynthesis (Table 5). GO analysis showed that the DEGs were enriched in the terms superoxide dismutase activity; oxidoreductase activity, acting on superoxide radicals as acceptors; and antioxidant activity. Of the 96 DEGs, 9 encoded TFs and 20 encoded predicted PRGs (Additional file 11: Table S4).
A protein-protein interaction network (Additional file 6: Figure S6) was constructed from the 96 DEGs shared between 12 h and 48 h and other genes interacting with them in cotton (in the tables, pathways with a Q-value < 0.05 are shown). Six hub genes were obtained: Gh_A05G1020, Gh_D09G0858, BGI_novel_G004376, Gh_A08G0125, Gh_D07G1197 and Gh_A05G3508. Among them, Gh_D07G1197 was annotated in the flavonoid biosynthesis pathway.
Putative R genes and TFs involved in resistance to Verticillium wilt
On the basis of the transcriptome analysis, a total of 65 candidate genes that may be related to the resistance of cotton to Verticillium wilt were identified, including 5 CNLs (whose members contain an NB-ARC domain), 3 CNs (members of the U-box domain-containing protein kinase family), 5 NLs (whose members contain an NBS-LRR domain), 7 RLPs (whose members contain an eLRR-TM-S/TPK domain), 7 Ns (whose members contain an NBS domain only), 9 TNLs (members of the TIR-NBS-LRR class), 6 Ts (members of NAC domain-containing protein 17), 1 Mlo-like (a member of the Mlo-like resistance proteins) and 2 other types (which have resistance functions but do not fit the known classes). These genes mainly included a disease resistance protein, 2 probable calcium-binding proteins (CML45), 3 ethylene-responsive transcription factors (ERF), 2 cyclic nucleotide-gated ion channels 2 (CNGC2), 5 MYB TFs and 2 GSTs (Tables 6, 7 and 8). A clustering thermogram of the 65 genes (Fig. 4) showed that certain genes were upregulated at 0, 12 and 48 h; certain genes were downregulated at 0 h and upregulated at 12 and 48 h; and certain genes were downregulated at 0, 12 and 48 h.
Discussion
The number of DEGs identified at 12 h and 48 h was lower than that identified at 0 h. The number of DEGs may have decreased because the plants were infected with V. dahliae and had begun to respond defensively. The DEGs between the CEF-082 treatment and the CEF-082 + V. dahliae treatment were enriched mainly in 5 signalling pathways: plant-pathogen interaction, MAPK signalling pathway-plant, flavonoid biosynthesis, phenylpropanoid biosynthesis, and glutathione metabolism. The plant-pathogen interaction and flavonoid biosynthesis pathways were also induced in sunflower plants infected with V. dahliae [16], and the results were also consistent with those of Tan [20], who reported that most DEGs in tomato were associated with phenylpropanoid metabolism and plant-pathogen interaction pathways. However, the glutathione metabolism pathway has rarely been reported in the transcriptome of cotton plants treated with V. dahliae.
It is clear that plant responses to biotic or abiotic stress depend on interactions among several signalling pathways, including those mediated by JA, ET, salicylic acid (SA) or ABA [21,22]. Morán-Diez et al. [17] found that SA- and JA-related DEGs were downregulated in A. thaliana after 24 h of incubation in the presence of Trichoderma harzianum T34. A set of DEGs influenced by JA or ET was induced upon pathogen attack when A. thaliana was previously colonized by a photosynthetic Bradyrhizobium sp. strain, ORS278 [18]. DEGs related to ET, SA, JA, brassinosteroid (BR) and cytokinin were upregulated or downregulated upon V. dahliae infection in cotton [3]. In this study, we also found that DEGs related to ABA, auxin and gibberellin were significantly induced not only after treatment with CEF-082 but also after inoculation with V. dahliae. In addition, DEGs related to JA, ET, SA, BR and cytokinin were induced in cotton plants treated only with CEF-082. These 8 plant hormones were also induced after infection with V. dahliae in sunflower [16]. The responses of the A. thaliana auxin receptors TIR1, AFB1 and AFB3 and the auxin transporter AXR4 were impaired upon infection with V. dahliae [23]. Therefore, both CEF-082 and V. dahliae can induce changes in hormones. Previously, it was shown that after plants were infected with pathogens, the FLS2 pattern recognition receptor recognized pathogens, and the hypersensitive response (HR) was activated through the ROS, JA, WRKY and NO signalling pathways [24,25] and mediated by CNGC, RBOH, CaM/CML and FLS2 [26-28]. These results are consistent with the results of this study. In this study, 24 h after treatment with CEF-082, the DEGs of FLS2, RBOH, CDPK, CNGCs and GST in the plants were also upregulated or downregulated to varying degrees (Fig. 3). In addition, most of the genes encoding peroxidase (POD), superoxide dismutase (SOD), and catalase (CAT) were also upregulated. These genes are related to the accumulation of ROS.
Forty-eight hours after treatment with V. dahliae, the genes encoding CNGC, CaM/CML and FLS2 were upregulated. However, in this study, the NO signalling pathway was not induced.
Phenylpropane synthesis is related to cotton defense mechanisms [29], while flavonoids are known to buffer substantial stress-induced alterations in ROS homeostasis and to modulate the ROS-signalling cascade [30]. Plant CNGC subunits and CaM constitute a molecular switch that either opens or closes calcium channels [31]. Previous reports have shown that calcium-dependent CDPK4 and CDPK5 regulate ROS production by phosphorylating NADPH oxidase in potatoes [32]. ROS are important not only as defense signalling mechanisms [33] but also for regulating programmed cell death via the establishment of the HR [34]. MAPK family members can improve the resistance of cotton to Verticillium wilt [35]. In this study, 24 h after CEF-082 inoculation, certain signal transduction pathways might have been involved in the plant response to CEF-082 (Fig. 5). After inoculation with CEF-082, FLS2 recognized CEF-082, MAPK signal transduction was induced, and calcium channels opened. H2O2 was then produced, leading to an ROS burst. Plant hormones were also induced, including ET, SA, JA, ABA, BR, auxin, gibberellin and cytokinin. The signalling pathways of flavonoid and phenylpropane synthesis were also involved in this process. In addition, lignin synthesis was induced after treatment with CEF-082 (Fig. 6). Figure 6 follows the lignin biosynthesis pathway of Miedes et al. [36]. Cinnamate 4-hydroxylase (C4H) and p-coumarate 3-hydroxylase (C3H) were not induced in T0h-vs-C0h, T12h-vs-C12h, or T48h-vs-C48h but were induced in C12h-vs-C0h, which is similar to the results of Xu et al. [37], who indicated that C4H-1 and C4H-3 were upregulated after treatment with V. dahliae. Three days after inoculation with V. dahliae, lignin was detected, and the pith diameter of CEF-082 + V. dahliae-treated plants was slightly larger than that of water + V. dahliae-treated plants (Additional file 7: Figure S7).
The defense responses at T12h and T48h were similar to that at T0h, and only a few of the induced key points differed among the pathways, as shown in Figs. 5 and 6. Thus, it is speculated that CEF-082 reduced the occurrence of cotton Verticillium wilt because inoculation with CEF-082 can prime signalling pathways involved in defense against V. dahliae upon infection. When pathogens infect plants, they induce a series of defense responses. GST participates in plant defense and can remove ROS [38]. Plant GSTs can be subdivided into eight categories: phi, zeta, tau, theta, lambda, dehydroascorbate reductase (DHAR), elongation factor 1 gamma (EF1G) and tetrachlorohydroquinone dehalogenase (TCHQD) [39]. GSTF8 has been used as a marker of early stress and defense responses [40], and JA, methyl jasmonate, ABA and H2O2 can induce GST expression [41-43]. LrGSTU5 was obviously upregulated after treatment with Fusarium oxysporum [44], and the GST genes were also upregulated in G. barbadense treated with V. dahliae [45]. In this study, the GST genes were also significantly induced 24 h after treatment with CEF-082 (Fig. 3), and GST genes were upregulated in cotton treated with water + V. dahliae. These results are consistent with those of Han et al. and Zhang et al. [44,45]. Certain GST genes were significantly induced in the treatment group but not in the control group after treatment with V. dahliae. The GST gene Gh_A09G1509 was shown to increase resistance to Verticillium wilt in tobacco [46]. Hence, we suggest that CEF-082 can induce specific GST genes to protect cotton from V. dahliae. V. dahliae can induce a defense response after it infects cotton [3]. In this study, a susceptible cotton variety was inoculated with the biocontrol fungus CEF-082 and V. dahliae, which also induced a series of defense responses. Compared with plants inoculated with water + V. dahliae, the plants inoculated with CEF-082 + V. dahliae presented significantly upregulated or downregulated expression of resistance-related genes. Therefore, it is speculated that the defense response was strengthened after inoculation with the biocontrol fungus CEF-082. In addition, we obtained 1209 specific DEGs that were not induced in plants inoculated with water + V. dahliae but were induced only in plants inoculated with CEF-082 + V. dahliae. GO enrichment showed that these genes were involved in the ROS metabolic process. The disease resistance of cotton was enhanced after CEF-082 treatment, and thus we inferred that these specific DEGs might be genes related to plant disease resistance.
Conclusion
CEF-082 can induce defense responses in cotton, and pretreatment with CEF-082 at an appropriate concentration of 1 × 10^5 spores/mL can improve the resistance of cotton (Jimian 11) to Verticillium wilt. Transcriptome analysis revealed that genes expressed in cotton leaves involved in the ROS burst, Ca2+ signalling, lignin biosynthesis, and flavonoid and phenylpropane synthesis were significantly upregulated or downregulated. Defense responses could be induced in cotton plants treated with CEF-082, and these responses were stronger in cotton plants inoculated with V. dahliae in the presence of CEF-082. In addition, 1209 specific DEGs induced only in plants inoculated with V. dahliae in the presence of the biocontrol fungus CEF-082 were obtained.
Fungal strain culture
The cotton endophyte C. globosum CEF-082 was cultured on potato dextrose agar (PDA) plates for 20 d. Spores were obtained by adding sterile water to each plate, rubbing a sterile spatula over the colony and then filtering the suspension through sterile cheesecloth, after which the suspension was diluted to 1 × 10^5 spores/mL. V. dahliae VD1070-2 was cultured on PDA for 7 d, inoculated into liquid Czapek-Dox medium [47], and cultured in the dark at 25°C and 150 rpm for 7 d. The mycelia were filtered out and removed, and the filtrate was then diluted to a 1 × 10^7 spores/mL spore suspension.
Cotton inoculation treatment
Jimian 11, a highly Verticillium wilt-susceptible upland cotton variety, was provided by Professor Heqin Zhu from the State Key Laboratory of Cotton Biology, Institute of Cotton Research of the Chinese Academy of Agricultural Sciences. It is a cultivar selected from the hybrid cross [(Jihan 4 × Ke 4104) F2 × 74Yu102]. The seeds were sterilized with 70% alcohol for 1 min and then with 1.05% sodium hypochlorite for 10 min, after which the seeds were washed with sterile water 5 times. The cotton seeds were planted in vermiculite and transferred to plastic pots (25 cm × 15 cm) that contained 2000 mL of liquid culture solution after emergence. The cultivation solution was prepared according to the methods of Zhang et al. [48], with some modifications: 2 mM NaCl was used instead of 2.5 mM KCl, while the other 9 mineral nutrients were the same. A black foam board with 20 holes was placed on the plastic pot, and cotton plants were placed into the holes and supported by a sponge. Twenty plants were cultivated per pot per treatment, and each treatment was repeated three times. Twenty cotton plants in each treatment were removed from the plastic pots and inoculated with CEF-082 by soaking the cotton roots in 300 mL of a 1 × 10^5 spores/mL spore suspension for 40 min prior to the flattening of the first true leaf. For the control group, water was used instead of the CEF-082 spore suspension. The cotton plants were then returned to the pots. At 0 h, 6 h and 24 h later, 5 leaves were randomly collected at each time point for each biological replicate in each treatment, and the 24 h time point was taken as 0 h before inoculation with V. dahliae (denoted 24 h (0 h)). Twenty-four hours post inoculation with CEF-082, the same method was used to inoculate V. dahliae VD1070-2 (1 × 10^7 spores/mL) in the treatment group and the control group.
Leaf samples were then collected at 12 h, 1 d, 2 d, 3 d, 5 d and 7 d, and 5 leaves were also randomly collected at each time point for each biological replicate under each treatment. Three biological replicates were included.
Determination of hydrogen peroxide (H2O2) content
The H2O2 content was estimated according to the methods of Sharma et al. [49] with minor modifications. Approximately 0.1 g of cotton leaves was weighed and added to 1 mL of acetone for ice-bath homogenization. The samples were then centrifuged at 8000×g and 4°C for 10 min, and the supernatant was collected. Then, 25 μL of 20% titanium chloride in concentrated HCl and 200 μL of ammonia solution (17 M) were added. The precipitate was washed 3 times with acetone. Afterward, the washed precipitates were dissolved in 1.5 mL of H2SO4 (2 N), and the absorbance was read at 415 nm.

Disease assessment
The above-mentioned hydroponic seedlings were investigated at 14 d post inoculation (dpi) with VD1070-2. The disease severity was rated according to a disease index that was based on a five-scale categorization of Verticillium wilt disease of cotton seedlings [50].
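The disease-index calculation can be sketched as follows. The text cites a five-scale categorization [50] without giving the formula, so the common weighted form and a 0-4 grading are assumptions here, and the plant counts are purely illustrative:

```python
def disease_index(grade_counts, max_grade=4):
    """Disease index (0-100) from per-grade plant counts, using the
    common weighted formula DI = 100 * sum(g * n_g) / (max_grade * N).
    The exact scale of reference [50] is assumed, not quoted in the text."""
    total_plants = sum(grade_counts.values())
    weighted_sum = sum(grade * n for grade, n in grade_counts.items())
    return 100 * weighted_sum / (max_grade * total_plants)

# Hypothetical rating of 20 seedlings at 14 dpi.
counts = {0: 12, 1: 4, 2: 2, 3: 1, 4: 1}
print(disease_index(counts))  # 18.75
```

A pot of entirely healthy plants gives DI = 0, and a pot of entirely dead plants (all grade 4) gives DI = 100.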
RNA sequencing (RNA-seq)
A polysaccharide polyphenol RNA extraction kit (TianGen, Beijing) was used to extract RNA from cotton leaves. Electrophoresis was performed, and a One Drop (1000+) spectrophotometer was used to determine the concentration and quality of the RNA. Transcriptome sequencing was performed for the 24 h (0 h; T0h, C0h), 12 h (T12h, C12h) and 48 h (T48h, C48h) samples. T0h, T12h and T48h represent the 0, 12 and 48 h samples in the treatment group, respectively, and C0h, C12h and C48h represent the 0, 12 and 48 h samples in the control group, respectively. Three biological replicates were performed, giving 18 samples in total. The construction of the DNA library and sequencing were performed by the Beijing Genomics Institute (BGI). Data filtering was performed using SOAPnuke software (BGI, Beijing). Clean reads were obtained by removing reads containing adapters, reads with more than 5% N, and low-quality sequences. The clean reads were assembled and aligned to the reference G. hirsutum genome retrieved from the cotton genome website (https://www.cottongen.org/). Fragments per kilobase of transcript per million mapped reads (FPKM) values were calculated to normalize the mapped read counts for the effects of sequencing depth and gene length.
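The FPKM normalization mentioned above has a simple closed form; a minimal sketch (the example gene and its counts are hypothetical):

```python
def fpkm(fragments, gene_length_bp, total_mapped):
    """FPKM: fragments mapped to a gene, scaled per kilobase of
    transcript (1e3) and per million total mapped fragments (1e6)."""
    return fragments * 1e9 / (gene_length_bp * total_mapped)

# Hypothetical gene: 500 fragments on a 2-kb transcript in a library of
# 62.08 M mapped fragments (the average clean-read count reported above).
print(round(fpkm(500, 2000, 62.08e6), 3))  # 4.027
```

Because both library size and transcript length are divided out, FPKM values are comparable across genes and libraries of different depths.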
Screening and analysis of differentially expressed genes (DEGs)
The DEGseq R package (1.20.0) [51] was used to identify DEGs in cotton leaves treated or not treated with CEF-082 under the criteria of a corrected P value < 0.001 and an absolute log2 ratio ≥ 1. GO (Gene Ontology) terms and KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways were considered enriched in DEGs if their P values were < 0.001. Resistance genes among the DEGs were predicted by a BLAST search against the Plant Resistance Gene (PRG) Database (identity ≥ 40, E-value < 1E-5) [52]. TFs encoded by the DEGs were predicted (E-value < 1E-5) according to the Plant Transcription Factor Database [53].
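The DEG screening criteria above (corrected P < 0.001 and |log2 ratio| ≥ 1) amount to a two-condition filter. A minimal sketch; the fold changes and P values below are invented, and the gene IDs are borrowed from the hub genes mentioned earlier purely as labels:

```python
# Toy expression table (values are illustrative, not from the study).
genes = [
    {"id": "Gh_A05G1020",       "log2fc":  2.3, "padj": 1e-5},
    {"id": "Gh_D07G1197",       "log2fc": -1.4, "padj": 4e-4},
    {"id": "BGI_novel_G004376", "log2fc":  0.6, "padj": 1e-6},  # |log2fc| < 1
    {"id": "Gh_D09G0858",       "log2fc":  1.8, "padj": 0.05},  # padj too high
]

# Apply the paper's thresholds: corrected P < 0.001 and |log2 ratio| >= 1.
degs = [g["id"] for g in genes
        if g["padj"] < 0.001 and abs(g["log2fc"]) >= 1]
print(degs)  # ['Gh_A05G1020', 'Gh_D07G1197']
```

Both conditions must hold: a highly significant gene with a small fold change, or a large fold change with a weak P value, is excluded.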
Quantitative reverse-transcription-PCR (qRT-PCR) analysis
The plant-pathogen interaction pathway and R genes are important for plant resistance. Twelve DEGs involved in the plant-pathogen interaction pathway and predicted R genes were randomly selected for qRT-PCR to verify whether the trends in their expression were consistent with the transcriptome sequencing results. Data were collected from three replicate experiments, and the samples used for qRT-PCR were the same as those used for RNA-seq. RNA was extracted from sample leaves and reverse transcribed into cDNA. qRT-PCR was performed on a Bio-Rad CFX96 Real-Time System (Bio-Rad, USA), and each PCR mixture (20 μL) consisted of 10 μL of SuperReal PreMix Plus SYBR Green (Tiangen), 0.4 μL of each primer, 2 μL of cDNA and 7.2 μL of sterile water. Each sample involved at least three technical repeats. The PCR cycle consisted of an initial denaturation step at 95°C for 10 min, followed by 40 cycles of 95°C for 30 s, 60°C for 30 s and 72°C for 30 s. The cotton ubiquitin gene was used as the internal reference, and relative gene expression was calculated using the 2^-ΔCT method.

(Fig. 6 legend: the pathway diagram follows the lignin biosynthesis pathway of Miedes et al. [36], redrawn by the authors with gene numbers added from our RNA-seq data rather than copied from that paper. Enzymes colored red or black indicate the key points induced or not induced by CEF-082; red numbers give the numbers of upregulated genes, and green numbers give the numbers of downregulated genes. PAL, phenylalanine ammonia-lyase; C4H, cinnamate 4-hydroxylase; 4CL, 4-coumarate-CoA ligase; C3H, p-coumarate 3-hydroxylase; HCT, hydroxycinnamoyl transferase; CCR, cinnamoyl-CoA reductase; CAD, cinnamyl alcohol dehydrogenase; CCoAOMT, caffeoyl-CoA O-methyltransferase; F5H, ferulate-5-hydroxylase.)
Primers were obtained from the upland cotton gene fluorescence quantitative specific primer database (https://biodb.swu.edu.cn/qprimerdb/) (Additional file 8: Table S1).
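The 2^-ΔCT calculation used above can be sketched directly (the Ct values in the example are hypothetical):

```python
def relative_expression(ct_target, ct_reference):
    """Relative expression by the 2^-dCT method, normalizing the target
    gene's Ct against the internal reference gene (here, the cotton
    ubiquitin gene) measured in the same sample."""
    return 2 ** -(ct_target - ct_reference)

# Hypothetical mean Ct values over three technical repeats: the target
# crosses threshold 3 cycles after the reference.
print(relative_expression(24.0, 21.0))  # 0.125
```

Each extra cycle of delay relative to the reference halves the computed expression, reflecting the doubling of template per PCR cycle.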
Corporate Social Responsibility and Internationalization of Czech Transport Enterprises
: The paper aims at investigating the conditions for the development and application of corporate social responsibility best practices in selected Czech transport small and medium-sized enterprises. Furthermore, the study sheds light on the internationalization process taking place in seventy Czech transport companies by exploring the importance of the corporate social responsibility pillars in three regions in Bohemia. The paper also focuses on the applicability and transferability of such corporate social responsibility and internationalization measures, which can support Czech transport companies in becoming more competitive by adopting or improving such practices. The results show that firms that internationalize more than their counterparts tend to implement more corporate social responsibility activities. The study suggests that the selected firms from the transport sector act proactively by adopting best practices on corporate social responsibility in order to attract more investors and to foster their process of internationalization.
Introduction
Global business is highly aware of the fact that international trade has a big impact on society. The wide variety of stakeholders across businesses means that customers are becoming more and more demanding. Furthermore, there are challenges and risks connected with business that appear without any signal from the global market. Moreover, many political conflicts have affected international business, such as the US-China trade war. Therefore, many small and medium-sized enterprises (SMEs) have introduced into their business activities so-called corporate social responsibility (CSR) practices, which help enterprises to leverage their business activities by taking into account internal and external stakeholders such as workers, clients, creditors, business allies, trade unions, local governance, non-profit organizations (NGOs) and governments. A large body of research suggests that SMEs are losing the competition with multinational firms in the adoption of CSR policies and practices (cf. Brammer et al., 2012; Cassells & Lewis, 2011; Revell et al., 2010). Similarly, a report from the European Union explicitly underlines this gap between SMEs and bigger firms. Based on the above, SMEs are a significant factor for economic growth across the majority of European states that are engaged with sustainable growth (Klewitz & Hansen, 2014; Revell et al., 2010). Moreover, it is of crucial importance to understand how SMEs implement a converging pattern to business. Other studies on corporate social sustainability suggest that CSR leads to a better corporate image, growing sales, higher client credibility, and higher efficiency and quality (Mishra & Suar, 2010).
CSR in the frame of big multinational enterprises shapes the company's social accountability and social responsiveness, practices, and initiatives that could boost its bonding with local communities (Luo, 2006). Considering the economic prominence of SMEs and their increasing level of internationalization (for example, in neighboring Austria more than 80% of enterprises are SMEs, and their level of internationalization is very high), this study focuses on the process of internationalization across selected Czech SMEs, their extent of CSR program adoption, their organizational structure and their models of management. We start with the notion that local communities impose particular demands on appropriate business policies and attitudes. Apart from the business acumen needed to generate profit and higher margins, the paper argues that the satisfaction of a community's expectations is critically important for SMEs. CSR is more than a strategy; at the same time, it outlines a firm commitment that supposes SMEs comply with the moral, social, ecological, and economic requirements of local business partners in all markets where they are present. This positive image and broad stakeholder support can be a valuable extension of SMEs' resource bases that can be used to compete against larger firms, which are less resource-constrained, and can ultimately influence SME performance (Fiala & Hedija, 2019).
Evidently, the most influential aspect of SME endeavors is nested in the competence of top management to make strategic decisions (Sommer, Durst, & Haug, 2007, p. 256). Senior management plays a significant role in the process of internationalization, which has been examined in previous studies on sustainability and top management teams (Velinov et al., 2020). The European Union has found, in the previously cited document, that the most common issues in the process of SME internationalization are the absence of an envisaged strategy, a lack of know-how on internationalization patterns, information asymmetries when searching for corresponding allies, and wrong forecasts of the market prospects (Observatory of European SMEs, 2003, pp. 35 et seqq.). In the context of internationalization, the resource scarcity of SMEs may impact their ability to enter foreign markets and can also limit a smaller firm's ability to reach more advanced stages of internationalization (Westhead et al., 2001 and 2002). Another aspect is managers' experience with internationalization. Different studies have mentioned the relevance of managers' attitudes (CEDEFOP, 2002; Ajzen & Fishbein, 1980; Ajzen & Madden, 1986; Allport, 1935; Rosenberg & Hovland, 1960).
There are numerous entities that maintain a "critical eye" on CSR. The relationships that are critical for a firm to realize its mission of producing goods or services are often referred to as primary stakeholders and include clients, internal managers and workers, governmental bodies, suppliers, and creditors. Secondary stakeholders consist of social and political participants functioning as supporters of the mission by assuring their tacit approval of the SME's activities, thereby making them acceptable and giving the business credibility. Such non-priority stakeholders might be competitors, the media, local communities, and non-governmental organizations (NGOs) (Maon et al., 2009). Based on the literature review, the paper aims to investigate the link between CSR practices and the level of internationalization, and the influence of independent stakeholders on SMEs' foreign-market activities.
There are the following hypotheses in the paper:
Hypothesis 1: SMEs with CSR practices tend to have higher level of internationalization.
Hypothesis 2: Managers and employees play an important role in influencing CSR practices of SMEs in foreign markets.
Hypothesis 3: CSR practices of SMEs in foreign markets will have a positive impact on SMEs' performance.
Methodology and Data Collection
The study investigates more than 70 Czech SMEs from the transport and logistics sectors, classified according to the EU definition of SMEs. The data were collected from secondary information sources such as SME annual reports, Eurostat, and the Albertina, Bisnode and Thomson Reuters databases. Additionally, data on sustainability and FDI were collected directly from several of the SMEs, because specific information was required on their international business activities and corporate strategy. The paper aims at identifying the stakeholders and the specifics of small and medium business in Czechia, identifying the share of SMEs that have social responsibility practices with regard to the environment, employees and society, and systematizing the factors that impede the formation of corporate social responsibility in SMEs. Furthermore, the paper assesses the level of corporate responsibility of SMEs in Czechia, analyzes the influence of enterprise employees on the formation of corporate social responsibility, conducts a survey of SME employees to determine the functioning elements of business social responsibility, and reviews global practices in the implementation of corporate social responsibility. Moreover, the paper develops a model for adapting individual elements of foreign corporate social responsibility practice to Czech small business, proposes a mechanism allowing SMEs to increase the efficiency of their activities through the implementation of corporate social responsibility practices, and tests the results of the research in the international scientific space (Velinov et al., 2020). In order to establish the level of social responsibility in practice, empirical research was carried out in the form of a questionnaire survey. The NUTS 2 Northeast Cohesion Region was selected; it consists of the Pardubice, Hradec Kralove and Liberec regions.
Medium-sized and large enterprises were selected for the research, given that the smaller the company, the more difficult it is to implement CSR both organizationally and in terms of staffing. The selected sector was Transport. This section covers passenger and freight transport activities, regular or irregular, by rail, pipeline, road, water or air, and related activities such as terminals, parking and storage facilities. It also includes the rental of transport equipment with a driver or operator, as well as postal and courier activities.
Using the Magnus Web and Albertina enterprise databases (2017), it was found that the basic statistical population comprises 70 medium and large enterprises (with 50 or more employees) doing business in the transport sector. The questionnaire was intended for a top-management employee of each company, who is expected to have comprehensive knowledge about the enterprise. First and foremost, it was necessary to ensure representativeness. The representativeness of the sample can be determined statistically by formula (1) (Kozel, 2006):

n ≥ tα² · p · (1 − p) / d²    (1)

where n is the required minimum sample size, α is the chosen reliability level, tα is the coefficient of reliability for a given α, p is an estimate of the relative frequency of the examined characteristic in the population, and d is the permissible error of the research. If the required reliability is selected as α = 0.1, the coefficient for the 90% confidence interval is tα = 1.65; with a permissible error d = 10% and an estimated relative frequency p = 0.9, the minimum number of elements in the sample should reach at least 24 enterprises.
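The representativeness check can be reproduced directly. The sketch below implements the sample-size formula attributed to Kozel (2006) with the values stated in the text; the function name is illustrative, not from the paper.

```python
def min_sample_size(t_alpha: float, p: float, d: float) -> float:
    """Minimum sample size n >= t_alpha^2 * p * (1 - p) / d^2.

    t_alpha: coefficient of reliability for the chosen confidence level
    p: estimated relative frequency of the examined characteristic
    d: permissible error of the research
    """
    return (t_alpha ** 2) * p * (1.0 - p) / (d ** 2)

# Values used in the study: 90% confidence (t_alpha = 1.65), p = 0.9, d = 0.1
n = min_sample_size(1.65, 0.9, 0.1)  # about 24.5, i.e. the study's "at least 24 enterprises"
```

With a 40% return on 70 enterprises (28 questionnaires), the calculated minimum is met.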
At the beginning of 2018, data collection took place, and the return rate was 40%, i.e., 28 enterprises. No questionnaire was excluded from this file, which ensured the calculated representativeness. The evaluation of the questionnaire was carried out in IBM SPSS Statistics and MS Excel. The data are drawn from the original research of the co-author (Činčalová, 2018). Table 1 shows the basic statistics of this representative sample of enterprises in the transport sector, namely the number of employees in 2017 and turnover revenues for 2017.
Results
The questionnaire was completed by representatives of companies with an average of 173 employees and a turnover of almost CZK 362 million; the median is 103 employees and CZK 289 million. The smallest business in the research sample had 30 employees and a turnover of CZK 51.8 million, while the largest company employs 482 employees and has a turnover of CZK 1.33 billion.
Discussion
The questionnaire contained 9 semi-open and closed questions concerning the use of CSR, the degree of interconnection of social responsibility with corporate strategy, the activities carried out by enterprises under the 4 pillars, subsequent CSR measurement, CSR certification, and other areas.
The introductory question found that 21 of the companies examined (75% of those surveyed) use the CSR concept; 4 companies do not use it but are considering introducing it this year or next.
Another question examined the importance of different motives for introducing social responsibility. Table 2 shows the basic characteristics (mean, standard deviation and standard error, confidence interval for the mean, minimum and maximum value) for all sub-questions, which were rated on a Likert scale from 1 to 5 according to the importance of the theme (1 = most important, 5 = unimportant).
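The Table 2 characteristics can be computed for any Likert-rated sub-question. A minimal sketch follows; the ratings below are invented for illustration, since the study's raw data are not public, and the normal-approximation confidence interval is an assumption about how the interval was formed.

```python
import math
import statistics

def describe(ratings, z=1.96):
    """Mean, sample SD, SE, a normal-approximation 95% CI, and range for Likert ratings."""
    n = len(ratings)
    mean = statistics.fmean(ratings)
    sd = statistics.stdev(ratings)      # sample standard deviation
    se = sd / math.sqrt(n)              # standard error of the mean
    return {"mean": mean, "sd": sd, "se": se,
            "ci": (mean - z * se, mean + z * se),
            "min": min(ratings), "max": max(ratings)}

# Hypothetical ratings (1 = most important, 5 = unimportant) from 28 firms
sample = [1, 2, 2, 1, 3, 2, 4, 1, 2, 3, 2, 1, 5, 2,
          3, 2, 1, 2, 4, 2, 3, 1, 2, 2, 3, 1, 2, 2]
stats = describe(sample)
```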
Conclusions
The following questions concerned the use of CSR activities within the pillars, their importance and their measurement. All the undertakings examined reported at least one activity from each area and also measured these activities within the pillars, except for 6 enterprises that do not target the philanthropic area. Among the pillars, the social pillar is rated as the most important (see Table 3). It is clear from Table 3 that 18 undertakings see differences between the pillars; the other 10 companies indicated that all pillars are equally important and that it is not possible to say which matters more or less. There are a large number of transport companies, and it is difficult for them to increase their market share. They often enter into short-term contracts, partly because they offer similar services and compete mainly on price. Potential newcomers have easy access to distribution channels, and with more and more opportunities and no need for high initial capital, there are almost no barriers to entry.
Suppliers' bargaining power depends on fuel prices, policy, taxes, land prices and other factors.
Input prices largely determine how profitable the business is. The bargaining power of customers is related to the number of current competitors in the sector: there is a large selection of providers, customers can easily compare the prices of the services provided, and in some cases vertical integration with producers takes place. The availability of substitutes is high (e.g., rail or air transport), but it depends on the nature of the goods transported (for instance, whether bricks or perishable goods are carried).
Interaction of Sleep Duration and Sleep Quality on Hypertension Prevalence in Adult Chinese Males
Background Previous studies demonstrated conflicting results about the association of sleep duration and hypertension. Given the potential relationship between sleep quality and hypertension, this study aimed to investigate the interaction of self-reported sleep duration and sleep quality on hypertension prevalence in adult Chinese males. Methods We undertook a cross-sectional analysis of 4144 male subjects. Sleep duration was measured by self-reported average sleep time during the past month. Sleep quality was evaluated using the standard Pittsburgh Sleep Quality Index. Hypertension was defined as blood pressure level ≥140/90 mm Hg or current antihypertensive treatment. The association between hypertension prevalence, sleep duration, and sleep quality was analyzed using logistic regression after adjusting for basic cardiovascular characteristics. Results Sleep duration shorter than 8 hours was found to be associated with increased hypertension prevalence, with odds ratios and 95% confidence intervals (CIs) of 1.25 (95% CI, 1.03–1.52) for 7 hours, 1.41 (95% CI, 1.14–1.73) for 6 hours, and 2.38 (95% CI, 1.81–3.11) for <6 hours. Using very good sleep quality as the reference, good, poor, and very poor sleep quality were associated with hypertension, with odds ratios of 1.20 (95% CI, 1.01–1.42), 1.67 (95% CI, 1.32–2.11), and 2.32 (95% CI, 1.67–3.21), respectively. More importantly, further investigation of the association of different combinations of sleep duration and quality in relation to hypertension indicated an additive interaction. Conclusions There is an additive interaction of poor sleep quality and short sleep duration on hypertension prevalence. More comprehensive measurement of sleep should be performed in future studies.
INTRODUCTION
The association between sleep disorders and hypertension has long attracted the attention of cardiologists. Many prospective studies have suggested a robust relationship between short sleep duration and hypertension risk. [1][2][3][4][5] However, sleep has both qualitative and quantitative aspects, and two previous studies suggested that it may not be sufficient to evaluate sleep by measuring duration alone when investigating the potential relationship between sleep and hypertension. 6,7 In recent years, the potential associations between sleep quality and several cardiovascular risk factors have been explored in several cross-sectional studies, and the results suggested that sleep quality was associated with the prevalence of metabolic syndrome and obesity. [8][9][10][11][12] Poor sleep quality was also found to have an adverse effect on fasting blood glucose control. 13,14 Hypertension shares many common potential mechanisms with cardiometabolic disorders, 15 but the specific association of sleep quality and hypertension prevalence is still inconclusive. 4,16 In addition, considering the fact that people with short sleep duration often have a high prevalence of poor sleep quality, 17 it is necessary to preclude the potential interactively confounding effects of both sleep duration and sleep quality and confirm the specific and separate roles of sleep quality and duration in hypertension prevalence.
In this study, we investigated the potential association of self-reported sleep duration and quality in relation to hypertension prevalence in adult Chinese males using the data from a cross-sectional survey. Further, the interaction of sleep duration and quality on hypertension prevalence was explored.
Study design and population
This study was designed as a cross-sectional study and was conducted from September to December 2013 in Fangezhuang, Tangshan, Lvjiatuo, and Qianjiaying communities located in the northern China city of Tangshan, which is approximately 180 km southeast of the capital of China. Subjects aged 18 years or older in the 4 communities were invited to participate in this study. Critical exclusion criteria included those with a previous diagnosis of obstructive sleep apnea syndrome (OSAS) or restless legs syndrome (RLS), as well as those who reported snoring by themselves or roommates. The contents and purposes of this study were thoroughly explained to the participants prior to the study, and written consent was obtained. The study protocol was in accordance with the Declaration of Helsinki, and ethical approval was obtained from the Science and Technology Committee of Tangshan City.
Citizens in the Fangezhuang, Tangshan, Lvjiatuo, and Qianjiaying communities are mainly employees of the Kailuan Group, a large-scale comprehensive enterprise that mainly manages coal products and has a higher male to female ratio than the general Chinese population. Only 571 female citizens were enrolled in this study, a sample size too small for subsequent statistical analysis; we therefore only presented the relevant results from male participants in the current report.
Anthropometric measurements
Doctors and nurses were trained in the standard protocol of measurement before the survey. Height and weight were measured to the nearest 0.1 cm and 0.1 kg, respectively, with the subjects standing upright, barefoot, and in light clothes. Two separate measurements were performed for each subject, and the average was used for analysis. BMI was calculated as the ratio of weight (kg) to height (m) squared (kg/m 2 ). Blood pressure was measured in a sitting position with a calibrated standard mercury sphygmomanometer (Yuyue Medical Equipment & Supply Co., Ltd., Jiangsu, China), and an average of two readings was used in the present study. If the two readings differed by more than 5 mm Hg, a third reading was taken, and the average of the three readings was used. Hypertension was defined according to the Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure 18 as SBP ≥140 mm Hg and/or DBP ≥90 mm Hg on average of measurements or by current antihypertensive treatment according to hospital records.
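The blood-pressure averaging rule and the hypertension definition described above can be expressed as a short sketch; the function names are illustrative and not from the study.

```python
def average_bp(first, second, third=None):
    """Average two readings; if they differ by more than 5 mm Hg,
    a third reading is required and all three are averaged."""
    if abs(first - second) > 5:
        if third is None:
            raise ValueError("readings differ by >5 mm Hg; a third reading is required")
        return (first + second + third) / 3
    return (first + second) / 2

def is_hypertensive(sbp, dbp, on_treatment):
    """Definition used in the study (JNC 7): SBP >= 140 and/or DBP >= 90 mm Hg,
    or current antihypertensive treatment."""
    return sbp >= 140 or dbp >= 90 or on_treatment
```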
Blood test
Subjects were asked to fast overnight before venous blood sample collection. Plasma samples were prepared by centrifuging at 3000 rpm for 10 minutes within 4 hours of blood collection for determination of total cholesterol (TC) and fasting blood glucose (FBG) in the central laboratory of Kailuan Hospital on automatic biochemical analyzers (Hitachi 717; Hitachi, Tokyo, Japan).
Questionnaire
A structured questionnaire was administered face to face to each subject and recorded on paper to obtain demographic and behavior-associated information, including age, gender, smoking status, drinking status, educational level, physical activity, sleep duration, and sleep quality. Smoking and drinking status were classified using self-reported information as "never", "former", or "current". Subjects who consumed more than 175 grams of alcohol per week in the past half year were defined as current drinkers. Physical activity was evaluated from responses to questions about the type and frequency of physical activity during leisure time. Individuals were classified as "active" or "inactive" according to whether or not at least 30 minutes of aerobic exercise on at least 5 days per week was attained. Educational level was assessed by responses to questions regarding the final degree attained, and senior high school or higher was defined as well-educated. Sleep duration was evaluated from the responses to questions about average sleep duration in the past month, and participants were reminded that time spent awake in bed was not included. Sleep duration was categorized into "<6 hours", "6 hours", "7 hours", "8 hours", and ">8 hours". Sleep quality was evaluated using the standard Pittsburgh Sleep Quality Index (PSQI), which is a widely used measure of sleep quality, 19 and sleep quality was classified as "very good" (score <3 on PSQI), "good" (score 3 to <6 on PSQI), "poor" (score of 6 to <9 on PSQI), and "very poor" (score ≥9 on PSQI). The English version of the PSQI and the scoring system are provided in eAppendix 1.
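The PSQI cut-offs stated above amount to a simple classifier; a minimal sketch with the thresholds as given in the text:

```python
def sleep_quality(psqi_score):
    """Map a global PSQI score to the study's four quality categories."""
    if psqi_score < 3:
        return "very good"
    elif psqi_score < 6:
        return "good"
    elif psqi_score < 9:
        return "poor"
    return "very poor"
```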
In addition, considering the frequent comorbidity of sleep disorders with anxiety and depression, anxiety and depression status of participants was evaluated using the General Anxiety Disorder-7 (GAD-7) and Patient Health Questionnaire-9 (PHQ-9) scales, respectively. GAD-7 is a seven-question inventory for self-assessment and is one of the most common instruments for measuring severity of anxiety. 20 PHQ-9 is a widely used nine-question inventory for self-assessment of depression. 21

Statistical analysis

Continuous variables were presented as mean (standard error [SE]) and categorical variables as frequency (proportion). Continuous variables were compared using one-way ANOVA followed by Dunnett's post-hoc test, and categorical variables were compared using the χ 2 test. The association between sleep duration, sleep quality, and hypertension prevalence was investigated by logistic regression analysis, and we adjusted for plausible confounders, including age, BMI, smoking status, drinking status, physical activity, educational level, and anxiety and depression scores. Further, to investigate the interaction of sleep duration and sleep quality on hypertension prevalence, participants were divided into groups of different combinations of sleep duration and sleep quality. Odds ratios (ORs) and 95% confidence intervals (CIs) of each group were calculated using multiple logistic regression analysis, with the group of 8 hours' sleep duration and very good sleep quality as the reference. For all comparisons, the level of statistical significance was set at P < 0.05 (two-sided). SPSS 19.0 (IBM, New York, USA) was used for all statistical analyses.
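The odds ratios and 95% CIs reported from the logistic models follow from the fitted coefficients: OR = exp(β) and CI = exp(β ± 1.96·SE). A minimal sketch; the coefficient and standard error below are illustrative values chosen to approximate the "<6 hours" estimate from the abstract, not the authors' actual SPSS output.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """OR = exp(beta); 95% CI = exp(beta +/- z * se)."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# Illustrative: beta = 0.867, SE = 0.137 give an OR near 2.38 (CI ~1.82-3.11),
# comparable to the '<6 hours' estimate reported in the abstract
or_, (lo, hi) = odds_ratio_ci(0.867, 0.137)
```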
Prevalence of hypertension
The prevalence of hypertension in subjects with different combinations of sleep duration and sleep quality are presented in Figure. Generally, a U-shaped relationship could be observed between sleep duration and hypertension prevalence. With the exception of those with very good sleep quality, subjects with sleep duration of 8 hours had the lowest prevalence of hypertension, and participants with both less and more sleep time had increased hypertension prevalence. However, the trend was not statistically significant. Figure also shows that hypertension prevalence increased with worsening sleep quality in those with different sleep duration, but this trend was also not statistically significant.
Association of sleep duration or sleep quality in relation to hypertension prevalence
The association between sleep duration or quality and hypertension prevalence was analyzed using logistic regression, and the results are presented in
DISCUSSION
In this cross-sectional study, we investigated the separate and combined association of sleep duration and sleep quality with the prevalence of hypertension in adult Chinese males. Shorter-than-normal sleep duration was found to be associated with hypertension prevalence, but longer sleep duration was not. In addition, poor sleep quality was also associated with hypertension prevalence. The present study investigated the association of different combinations of sleep duration and sleep quality in relation to hypertension prevalence, and the results indicated an additive interaction between sleep quality and duration on hypertension prevalence. It has been argued that the relationship between sleep and hypertension may be gender-specific. Two prospective studies conducted in English and Korean populations 22,23 and several cross-sectional studies 22,[24][25][26] have demonstrated that short sleep duration was associated with hypertension incidence only in women. The underlying mechanisms that explain the sex-specific correlation of short sleep duration and hypertension are still unknown, although sex differences in hormone secretion, stress responses, inflammatory reaction, and changes in sympathetic nerve activity have been suggested. [27][28][29][30] Though the current literature supports a more robust correlation between short sleep duration and hypertension in women than in men, one cross-sectional study by Fang et al indicated the possibility of a complex association between sleep duration and hypertension. 31 The present cross-sectional study adds new evidence concerning the association between sleep duration and hypertension in men. However, we still hold a cautious attitude towards the gender-specific association mentioned above, because measuring only sleep duration, as in the previous studies, is not sufficient to assess global sleep status, making those results somewhat unreliable.
Two previous studies have suggested a U-shaped relationship between sleep duration and hypertension in adults and adolescents, which meant not only short but also long sleep duration was related to hypertension prevalence and incidence. 32,33 In the current study, we also observed a U-shaped trend between sleep duration and hypertension prevalence, but it failed to reach statistical significance. One of the findings of the current study was that sleep duration and quality were additively related to hypertension. Neglecting the potential role of sleep quality may explain the conflicting results, but we must note that the small sample size of subjects with longer sleep duration may make our results less persuasive.
Sleep has both qualitative and quantitative aspects. Two studies have demonstrated that short sleep duration alone failed to increase hypertension risk, but a combination of short sleep duration and other sleep disorders did increase risk, which indicated that evaluation of sleep only by measuring sleep duration was not sufficient. 6,7 This viewpoint was supported by the current study. Sleep quality was measured by the PSQI in this study, and the results show that sleep quality was also significantly correlated with hypertension risk. Bruno et al investigated the relationship between sleep quality assessed by the PSQI and resistant hypertension, and their results were consistent with ours. 34 Another study also indicated a relationship between sleep quality and blood pressure level, although sleep quality was assessed by overnight polysomnography in that study. 35 We must note that people with sleep insufficiency often have poor sleep quality. Therefore, to preclude the interactively confounding effect of sleep duration and quality in the current study, the separate and combined associations of the two were analyzed. Sleep disorders such as OSAS and RLS are also related to hypertension, 39,40 and it was necessary to preclude the potential effects of these conditions in the current study. The diagnosis of OSAS was based on the results of polysomnography, but the test was very difficult to perform for each subject due to the limited funds and time. Therefore, we adopted a comparatively easy way to preclude OSAS according to whether or not the subject snored during sleep, which was reported by the subjects themselves or by their roommates. It has been reported that snoring has a high sensitivity (87%) for detecting OSAS. 41 In addition, RLS was excluded on the basis of self-reported (roommate-reported) symptoms, considering that the clinical diagnosis of RLS was mainly based on self-reported symptoms. 42
In the current study, we paid attention to several potential confounders for the association between sleep and hypertension, such as educational level, anxiety, and depression, which were rarely controlled for in previous studies. Socio-economic status is a well-known risk factor for hypertension. 43 Recently, mental status, such as anxiety and depression, has also been suggested to be related to onset and control of hypertension. 44,45 Considering the frequent comorbidity of anxiety and depression in sleep disorders and the effect of educational level on sleep, 5 it is necessary to control for the confounding effect of these factors, as we have done in the present study.
Possible mechanisms accounting for the association between sleep and hypertension have been reported in a couple of previous studies, although they are far from being fully elucidated. Most of those studies support the view that sympathetic overactivity due to sleep deprivation is associated with elevated blood pressure level. [46][47][48] Additionally, as a contributor to psychological stress, sleep insufficiency could also induce sodium retention, proinflammatory responses, and endothelial dysfunction through the activation of the neuroendocrine system. 28,48,49 Due to the cross-sectional design of the present study, we cannot assess the causal relationship between the two sleep aspects and hypertension prevalence. However, the hypothesis that sleep deprivation is involved in the development and maintenance of hypertension has been better established than the opposite cause-effect link. Although it is possible that living with hypertension, which acts as a mental stressor, may disturb sleep homeostasis directly or through drug treatments (such as diuretics) that are often prescribed for hypertensive patients, little relevant evidence is available in the current literature. 50,51 In addition, Bruno et al analyzed the association of sleep quality and the most frequently prescribed antihypertensive agents and found no relationship between them. 34 Our study has several limitations. First of all, it has been reported that logistic regression used in cross-sectional studies may overestimate the prevalence ratio, although it is still the most frequently used method in studies with a design similar to ours. 52 Second, there are no standard cut-off values to judge good or poor sleep quality, and the cut-off values used in this study were based on previous reports and our experience.
Third, eating habit is one important meditating factor in the relationship between sleep disorders and increased prevalence of hypertension, but we did not investigate this because we did not have suitable questionnaires about eating habits.
Despite the above limitations, this cross-sectional study demonstrates for the first time that both short sleep duration and poor sleep quality are associated with hypertension prevalence in adult Chinese males. More importantly, the current study suggests that the association of sleep duration and quality with hypertension is an additive relationship.
Benchmarked effectiveness of family and school involvement in group exposure therapy for adolescent anxiety disorder
Although cognitive-behavioral therapy (CBT) is an effective treatment for adolescents with anxiety disorders, the majority remain impaired following treatment. We developed a group CBT program (RISK) with high degrees of exposure practice and family and school involvement delivered in a community-based setting and investigated its effectiveness. The treatment involved adolescents (N = 90), with a primary diagnosis of anxiety disorder (82%) or obsessive-compulsive disorder (18%), and their families, who received 38 hours of group treatment over 10 weeks. Diagnostic status and symptom severity were assessed at pre- and post-treatment and a 12-month follow-up, and benchmarked against previous effectiveness studies. Our results showed that, at post-treatment, the RISK treatment was as effective as benchmarks on measures of diagnostic status, parent-rated measures, adolescent-rated measures, and clinician-rated measures. At 12-month follow-up, all outcomes were superior to benchmarks, including the proportion of participants in remission (79.5%, 95% Highest Posterior Density Interval [74.7, 84.2]), indicating that the RISK treatment enhanced effectiveness over time. The combination of group format, a high degree of exposure practice, and school and family involvement is a promising format for real-world settings that may help sustain and increase treatment effectiveness. Trial registered at helseforskning.etikkom.no (reg. nr. 2017/1367).
Introduction
Anxiety disorders are common in the developmental stage of adolescence (12-18 years of age), with a prevalence rate of 4%-8% (Essau et al., 2018; Vizard et al., 2018). Anxiety disorders during adolescence inhibit the ability to seek autonomy and enter adulthood because they negatively affect social interaction, the development of independent living skills, and educational outcomes (Swan et al., 2018). Furthermore, these impairments can continue into adulthood if left untreated (Swan & Kendall, 2016). Given the prevalence and negative impact of anxiety disorders in adolescents, it is an important challenge to design interventions that provide short- and long-term effectiveness in routine-care clinical settings.
The best-established treatment for child and adolescent (2-19 years of age) anxiety disorders is cognitive-behavioral therapy (CBT), which has shown effect in specialized settings (i.e., efficacy) and in routinecare settings (i.e., effectiveness) in several meta-analyses (Whiteside et al., 2020;Wergeland et al., 2020;James et al., 2020). Regarding outcomes in routine-care settings, Wergeland et al. (2020) describe the outcomes of 29 studies on CBT for anxiety conducted in clinical routine-care or school healthcare settings. These outcomes were based on studies that primarily included children (Mean age = 9.9 years, SD = 1.7), with only 2 studies having a mean age above 12 years (Bodden et al., 2008, van Steensel & Bögels, 2015). The treatments were delivered individually or in groups, lasted 4-20 hours (M = 12.6, SD = 4.6) and included moderate to high degrees of family involvement. The results indicate that at post-treatment, half of the children and adolescents were not in remission (loss of all anxiety diagnoses), and at follow-up one-third were not in remission.
The importance of treatment outcome research focusing specifically on adolescents has been highlighted by a recent meta-analysis by Baker et al. (2021). This meta-analysis presented 15 studies on CBT for adolescent anxiety, with 4 studies involving adolescents receiving treatment in routine clinical care. The treatments were delivered individually or in groups, lasted 4-24 hours, and parents were included in 7 of the studies. The results indicated that at post-treatment, two-thirds of adolescents were not in remission. These discomforting results may be considered in light of the characteristics of adolescents in contrast to children, which include more severe symptoms, more difficulty attending school, and higher rates of social anxiety disorder (SAD) (Waite & Creswell, 2014). Notably, SAD is associated with poorer treatment response (Hudson et al., 2015) and predicts a greater risk of relapse after treatment. Based on the above-mentioned observations, it has been recommended that interventions should be designed specifically for adolescents to handle more severe symptoms, more difficulty attending school, and higher rates of SAD (Waite & Creswell, 2014). When asked, adolescents themselves report interest in interventions that are effective, do not interfere with participation and attendance in school, are intensive (i.e., longer sessions), and offer varied activities (Persson et al., 2017).
To address severe symptoms and improve the effectiveness of treatments for child and adolescent anxiety, several approaches have been investigated. The majority of these approaches have focused on increasing exposure practice, which is consistently associated with improved treatment effects (Whiteside et al., 2020). Additionally, a substantial amount of research has investigated the effect of modifying the type and amount of family involvement (Manassis et al., 2014; Sigurvinsdottir et al., 2020). The importance of involving parents is that they may reduce treatment dropout, increase treatment adherence, and enhance trust and communication between parents and adolescents, which are known protective factors against anxiety in adolescents (Ebbert et al., 2019; de Haan et al., 2013; Lee et al., 2019). Despite the potential benefits of involving parents, results on effectiveness are inconsistent, with a Cochrane review suggesting no added benefit (James et al., 2020). However, several studies suggest that parental involvement increases treatment effectiveness insofar as the treatment focuses on increasing the overall exposure practice (Breinholst et al., 2012; Manassis et al., 2014; Whiteside et al., 2020). A promising format for exposure-enhancing parental involvement is the multi-family group. Lau et al. (2010) employed such a format in an effectiveness study setting for children (age 6-11 years) and included in-session exposure practice in two-thirds of the treatment sessions, which is substantially more than the average of one in five sessions. As a result, Lau et al. (2010) demonstrated effectiveness with a remission rate of 65% at post-treatment.
To avoid interfering with adolescents' school participation, and address any difficulties attending school, CBT for adolescent anxiety could potentially benefit from involving school personnel (e.g., teachers, school nurses) in treatment. In addition to practical help, school personnel could, similar to parent involvement, increase engagement in exposure practice. The school environment is also important because adolescents spend a large amount of time in this setting and often report that school is where their disability is most profound (Beidas et al., 2012). Involving school personnel in CBT for adolescents with anxiety symptoms offers potential benefits for challenging fears directly in the school environment (Werner-Seidler et al., 2017), thereby further enhancing the generalizability of CBT-related learning.
Given the consequences of anxiety disorders in adolescents, there is a need for interventions that provide short- and long-term effectiveness in routine-care clinical settings. However, there are several limitations to the current literature. There is a paucity of research on effective treatments for adolescents in routine-care clinical settings (Baker et al., 2021). Particularly, it has been noted that there is a need for more knowledge on outcomes at follow-up and full remission (i.e., loss of all anxiety diagnoses) in adolescents receiving CBT for anxiety (Baker et al., 2021). Another limitation is that most effectiveness studies with parental involvement for adolescent anxiety disorders often contain little or no in-session exposure practice (Dekel et al., 2021; Haugland et al., 2020; Wergeland et al., 2014). Also, despite the potential benefits of involving school personnel as an adjunct to the treatment delivered in clinical settings, no studies of clinic-based treatment augmented by the involvement of school personnel exist. The lack of research on such exposure-enhancing interventions may be due to cost-effectiveness considerations. There has been an increasing interest in finding effective and affordable interventions (Ollendick et al., 2018). Although developing low-cost interventions is important, it is equally important to investigate more resource-demanding interventions, which may be potentially more effective. Additionally, investigating more costly interventions may aid in understanding what treatments should be delivered to whom and when. More expensive interventions may still be cost-effective and could play an important role in stepped-care approaches (Ollendick et al., 2018).
Thus, to extend the research on effective treatments for adolescents with anxiety disorders in routine-care settings, a treatment (named RISK, which refers to taking a chance) was developed to maximize total exposure practice through parental involvement and the involvement of school personnel. The treatment was developed to be delivered in a multi-family group format, to include active involvement of school personnel, and to allow the inclusion of adolescents with a broad range of anxiety disorders. Such a transdiagnostic treatment is of particular importance in small routine-care clinics, where it may not be feasible to conduct treatments targeting only one or two anxiety disorders.
This study examines the effectiveness of a multi-family group CBT (RISK) that includes three important enhancement approaches for adolescents: extensive and systematic family involvement, engagement of school personnel, and a high degree of self-conducted and therapist/family/peer-facilitated exposure practice. The study design was a single-arm open trial, and comparative effectiveness was assessed through benchmarking against a recent meta-analysis on the effectiveness of CBT for children and adolescents with anxiety disorders and symptoms. A meta-analysis that included children was preferred over one that only included adolescents because it allowed a more comprehensive comparison, including outcomes at follow-up and assessment of differences in adolescent-, parent-, and clinician-rated outcome measures. Our primary aim was to investigate whether the enhanced treatment would outperform the benchmark on measures of diagnostic status (i.e., loss of all anxiety disorders) at post-treatment and follow-up. Secondary aims were to compare clinician-, parent-, and adolescent-rated anxiety symptoms to the benchmark at post-treatment and follow-up, as well as to assess loss of the primary diagnosis and clinically significant change. Furthermore, disorder-specific outcomes were assessed.
Participants
Participants were 90 adolescents, aged 12-18 (M = 15.29, SD = 1.32), and their parents, recruited from two community clinics for child and adolescent mental health between 2017 and 2019. Participants were informed about the study during routine intake procedures or after clinical evaluation suggesting the presence of an anxiety disorder. Parents and adolescents were invited to participate in the study if the adolescents met the Diagnostic and Statistical Manual of Mental Disorders 4th edition (American Psychiatric Association, 1994) criteria for a primary anxiety disorder (e.g., separation anxiety disorder, social anxiety disorder [SAD], specific phobia, panic disorder with or without agoraphobia, agoraphobia, generalized anxiety disorder, or obsessive-compulsive disorder [OCD]) as assessed by the Anxiety Diagnostic Interview Schedule Child and Parent version (ADIS-C/P) (Silverman & Albano, 1996). The diagnostic criteria from DSM-IV were chosen because the ADIS-5 has not yet been translated into Norwegian. Exclusion criteria for the study were as follows: the presence of a developmental or psychotic disorder, current self-harm behavior or suicidal ideation, concurrent participation in psychological treatment, a psychopharmacological treatment that had not been stable for 6 months before study enrollment, receiving CBT within the past 12 months, or attending school less than 50% of the time over the previous month. In the exclusion criteria, a developmental disorder was defined as meeting criteria for a diagnosis of mental retardation or pervasive developmental disorder. The exclusion based on developmental disorder, psychotic disorder, current self-harm, or suicidal ideation was not part of the study design per se but was due to procedures at the clinic, which dictated that such disorders should be treated before anxiety disorders. The school attendance exclusion criterion was due to practical concerns about school personnel involvement in the treatment.
Only one participant was receiving concurrent psychopharmacological treatment (methylphenidate). Recruitment and attrition are described in Fig. 1. The ADIS-IV-C/P (Silverman & Albano, 1996) was employed to determine the adolescents' diagnostic status. The ADIS-IV-C/P is a semi-structured interview administered separately to the adolescent and parents and has excellent reliability (Silverman et al., 2001). Diagnoses and clinical severity ratings (CSR) were assigned as per the ADIS-IV-C/P manual. A CSR of four or higher (0-8 scale) indicates the presence of a diagnosis. Remission was defined as being free from all anxiety diagnoses. The diagnostic interviews were conducted and rated by participating clinicians. Efforts were made to ensure that assessments after treatment were not completed by clinicians who had delivered treatment within a group; despite these efforts, 20% of participants were assessed by a clinician from the group they participated in. All interviews were videotaped, and a random selection of 20% of the interviews at pre- and post-treatment and at the 12-month follow-up was re-rated by trained independent expert raters (one clinical psychologist, one child psychiatrist, and one clinical social worker) masked to the original assessors' ratings. The inter-rater reliability on CSR, using Cronbach's α, was 0.91, 0.94, and 0.97 for primary, secondary, and tertiary diagnoses, respectively.
Secondary outcome measures
The child and parent versions of the Spence Children's Anxiety Scale (SCAS-C/P) (Spence, 1998) were used to assess adolescents' anxiety symptoms. The SCAS includes 38 items rated on 4-point Likert scales (0-3), yielding a maximum score of 114. Spence (1998) reported a six-month test-retest reliability of 0.71 and significant correlations with other anxiety measures. In the current sample, the SCAS-C showed excellent reliability (Cronbach's α = 0.90), and the SCAS-P showed good reliability (Cronbach's α = 0.87).
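As a concrete illustration of the scoring and reliability figures above, the pure-Python sketch below sums the 38 SCAS items (each rated 0-3, hence the maximum of 114) and computes Cronbach's α from item responses. The function names and toy data are our own, not from the study's analysis code.

```python
import statistics

def scas_total(item_scores):
    """Sum 38 SCAS items, each rated 0-3; the maximum total is 38 * 3 = 114."""
    assert len(item_scores) == 38 and all(0 <= s <= 3 for s in item_scores)
    return sum(item_scores)

def cronbach_alpha(rows):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance of totals).

    `rows` is a list of respondents, each a list of k item scores.
    """
    k = len(rows[0])
    item_vars = [statistics.pvariance([r[i] for r in rows]) for i in range(k)]
    total_var = statistics.pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

With perfectly consistent items (every respondent scores all items identically), α equals 1; less consistent item sets yield lower values, such as the α = 0.90 and 0.87 reported for the SCAS-C and SCAS-P.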
The severity measure of the Clinical Global Impression scale (CGI-S) (Guy, 1976) was used to assess global impairment and functioning, as rated by the clinicians delivering the treatment. The CGI-S evaluates the severity of the patient's illness on a scale ranging from 1 (normal) to 7 (extremely ill). The CGI-S is significantly correlated with self-reported measures of anxiety, depression, everyday functioning, and quality of life (Zaider et al., 2003). In this study, the CGI-S showed excellent reliability (split-half coefficient = .92).
Procedure
Participants were recruited from two community-based clinics for child and adolescent mental health that are part of the general national health services in Norway. Potential participants were contacted if there was an indication of a primary anxiety disorder in the referral letter or after a clinical evaluation suggested the presence of a primary anxiety disorder. Eligibility was ascertained in three steps: (a) participants were contacted by phone by a study coordinator and screened for self-injurious behavior, suicidal ideation, and school attendance; (b) potentially eligible participants were informed about the research project and asked to participate, and those who agreed met with a clinician (RISK-therapist) for initial screening and received information about the RISK-treatment; and (c) participants met for an assessment with a participating clinician, where the ADIS-C/P and other study measures (SCAS-C/P, CGI-S) were completed. After treatment and at the 12-month follow-up, adolescents met a participating clinician to complete the ADIS-C/P, SCAS-C/P, and CGI-S. Assessments after treatment completion were planned in an effort to avoid adolescents being assessed by clinicians who had delivered therapy within their group.
Treatment completion was defined as having participated in more than 50% of the intervention. This low threshold for being categorized as a completer was based on the intensive nature of the treatment and what we considered its essential aspects: out of 38 hours of treatment, 24 hours were spent in four intensive exposure days, which were considered essential, so completers must have attended at least one of these days. Written informed consent was obtained for the entire sample, and the study was approved by the Regional Ethics Committee for research with human subjects (reg. nr. 2017/1367).
Treatment
The multi-family group CBT for anxiety disorders was based on general CBT principles and developed specifically for the study. The treatment was conducted in groups of five to eight families (mean group size: 7 families). In total, 14 groups received treatment during the study period. The treatment consisted of 12 sessions, lasting 38 hours over 10 weeks (including two 1.5-hour sessions with school personnel). Participating families were invited to attend 2-hour follow-up booster sessions at 3-, 6-, and 12-months post-treatment.
The RISK-treatment included standard CBT-based interventions for child and adolescent anxiety. Further, the treatment included a high degree of self-conducted and therapist/family/peer-facilitated exposure practice, parental participation, and involvement of the adolescents' school personnel (see Table 1 for a treatment description). A distinctive aspect of the RISK-treatment was sessions 5, 6, 9, and 10, in which four hours were dedicated to exposure practice by the adolescents in locations outside the clinic (e.g., at school, in the shopping center, on a bus). In these sessions, adolescents were paired with parents other than their own and a group clinician. Clinicians would manage smaller groups of one to three adolescents and their accompanying parents as they went outside the clinic. The mixing of families allowed the adolescents and parents to practice the techniques learned in earlier sessions without being affected by existing family dynamics. This process aimed to maximize the time the adolescents spent performing exposure practice and, to an equal degree, to help parents become more confident in their ability to assist in conducting exposure practice. Another distinctive feature of the RISK-treatment was the involvement of school personnel. The amount and type of school personnel involvement varied according to the needs of individual adolescents. For some adolescents, anxiety symptoms were not visible in the school setting, and school personnel mainly aided in the logistics of planning day-to-day schoolwork around the treatment, given that sessions took place during school hours. For other adolescents, anxiety symptoms were primarily experienced in the school setting; thus, school personnel played a much more active role in planning, facilitating, and conducting exposure practice with the adolescents.
Clinicians, clinics, and assessors
Participating clinicians (N = 20) were employed at one of the two community-based clinics. The clinics serve a population of 76,000 children and adolescents between the ages of 0 and 18 in rural and urban areas of southern Norway. Due to changing employment or taking leave from work, eight of the initial cohort of 12 study clinicians were replaced during the study period, resulting in a total of 20 clinicians (70% female) who served as RISK-therapists during the trial. Each group session included four clinicians. The clinicians had 11.8 years of experience, on average, in child and adolescent mental health care (SD = 7.9, range = 2-30). The clinicians came from different professional backgrounds: six clinical psychologists, six social workers, four nurses specialized in psychiatry, two child psychiatrists, one pediatrician, and one schoolteacher. They volunteered for the study and conducted treatments as part of their ordinary workload.
Training of clinicians was conducted through participation in workshops and supervision. Eleven of the clinicians had formal education in CBT but received the same amount of training and supervision as those with no formal education. In preparation for delivering the study intervention, clinicians took part in three training workshops, each lasting two days. Supervision was conducted by the program developer, who either took part in a treatment group or provided monthly supervision based on videotaped sessions. Fidelity to treatment was achieved through the training of clinicians, ongoing supervision, and the use of a treatment manual.
Six clinicians were trained in the administration of the ADIS-C/P interview: two clinical psychologists, two nurses specialized in psychiatry, one child psychiatrist, and one social worker. The training was achieved through a two-day workshop that included training in scoring and administration, delivered by a licensed ADIS-C/P rater. All but one of the clinicians selected to conduct the ADIS interview had extensive prior experience with its use.
Data analysis
To compare findings to benchmarks, a Bayesian analysis of informative hypotheses was performed (Gu et al., 2018). This approach was chosen because it provides information on which alternative hypotheses (i.e., inferiority, equivalence) are most probable if the hypothesis of superiority is not supported (Gu et al., 2018). The planned sample size was 102, based on the estimation method of Schönbrodt et al. (2017). The estimation assumed a power of 0.80 and a minimal effect of 50% remission at post-treatment, in expectation of a 20% treatment dropout.
The overall amount of missing information was 10%, with 56% of cases containing missing information. Missing data were primarily due to treatment dropouts and, to a lesser degree, incomplete item-level responses by treatment completers. There was no indication of any pattern of missingness in the data, and Little's test of missing completely at random (MCAR) indicated that the data did not differ from MCAR (p = .23). Missing data for all variables were accommodated using multiple imputation, with 50 imputed datasets. All analyses were performed using the intent-to-treat principle unless otherwise specified.
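The pooling step of multiple imputation can be illustrated with Rubin's rules: the pooled point estimate is the mean across the imputed datasets, and the total variance combines the within-imputation variance with an inflated between-imputation variance. This is a generic sketch (the study's actual imputation model is not reproduced here), and `rubins_rules` is a hypothetical helper name.

```python
import statistics

def rubins_rules(estimates, variances):
    """Pool m >= 2 per-imputation estimates and their sampling variances.

    Point estimate: mean of the per-dataset estimates.
    Total variance: within-variance + (1 + 1/m) * between-variance.
    """
    m = len(estimates)
    point = statistics.mean(estimates)
    within = statistics.mean(variances)
    between = statistics.variance(estimates)  # sample variance across imputations
    return point, within + (1 + 1 / m) * between
```

For example, pooling three imputed-dataset estimates 1.0, 1.2, and 0.8 (each with sampling variance 0.04) yields a point estimate of 1.0 and a total variance larger than 0.04, reflecting the extra uncertainty due to missingness.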
Bayesian sensitivity analyses were performed to investigate the effect of assumptions about nesting of variables (i.e., by group, site, clinician), Note. Sessions 5, 6, 9, and 10 follow the same format. In these sessions, the adolescents perform exposure practice with parents other than their own. This process is conducted to allow parents and adolescents to practice learned skills without getting disrupted by pre-existing interpersonal dynamics. In session 7, the adolescents are encouraged to invite guests who are important to them to the therapy. These guests receive psychoeducation similar to that received by parents and the adolescents in session 2.
normality, inclusion of outliers and differences between assessors (samegroup clinician vs. not same-group). These indicated that the analyses were consistent across different assumptions of nesting, normality, and inclusion of outliers and that there was no difference in outcome between assessors. Thus, all participants were included in the analysis, and the simpler model of no clustering effect was employed for analyses. As recommended for Bayesian procedures (Depaoli & van de Schoot, 2017), we also assessed how robust results were to different priors, and found that results were similar across different priors. For all outcomes, the posterior distribution simulation was performed using Metropolis-Hastings Monte Carlo (Hastings, 1970), applying three chains, 12,500 burn-ins, and 50,000 iterations. Every fifth iteration was used to avoid autocorrelation on measures with few observed instances. Convergence and stability of simulations were checked using the Gelman-Rubin statistic. Inferential statistics were the posterior probability, the Bayes factor for the alternative over the null (BF 10 ), and the highest posterior density interval (HPD). The posterior probability describes the probability of a certain hypothesis. The BF 10 describes the weight of evidence for one hypothesis over another and allows for a three-logic interpretation, indicating the following: (a) there is evidence for the alternative hypothesis, (b) there is evidence for the null, or (c) there is not much evidence for one over the other hypothesis (Dienes & McLatchie, 2018). A BF 10 above 3/1 or below 1/3 was considered evidence for one hypothesis over another. The HPD describes the interval where the true parameter has a 95% probability, with values closer to the center being more probable.
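The sampling scheme described above (multiple chains, burn-in, keeping every fifth draw, Gelman-Rubin convergence check) can be sketched for a toy target: a Bernoulli remission probability with a uniform prior, sampled by random-walk Metropolis-Hastings. All names and the reduced chain settings below are illustrative, not the study's actual code.

```python
import math
import random

def log_post(p, successes, n):
    """Log posterior of a Bernoulli remission probability under a uniform prior."""
    if not 0.0 < p < 1.0:
        return float("-inf")
    return successes * math.log(p) + (n - successes) * math.log(1.0 - p)

def mh_chain(successes, n, iters, burn_in, thin, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings: burn in, then keep every `thin`-th draw."""
    rng = random.Random(seed)
    p, kept = 0.5, []
    for i in range(burn_in + iters):
        prop = p + rng.gauss(0.0, step)
        delta = log_post(prop, successes, n) - log_post(p, successes, n)
        if rng.random() < math.exp(min(0.0, delta)):  # accept with prob min(1, ratio)
            p = prop
        if i >= burn_in and (i - burn_in) % thin == 0:
            kept.append(p)
    return kept

def gelman_rubin(chains):
    """Potential scale reduction factor; values near 1 indicate convergence."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    between = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)
    within = sum(
        sum((x - mu) ** 2 for x in c) / (n - 1) for c, mu in zip(chains, means)
    ) / m
    var_hat = (n - 1) / n * within + between / n
    return math.sqrt(var_hat / within)
```

Running, say, three seeded chains on 45 remissions out of 90 gives a pooled posterior mean near 0.5 and a Gelman-Rubin statistic close to 1; the study used the same logic at larger scale (three chains, 12,500 burn-ins, 50,000 iterations, thinning by 5).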
For dichotomous outcomes, Bayesian logistic regression was employed. Prior distributions for regression coefficients were diffuse normal with a mean of 0.5. For continuous outcomes, Bayesian linear regression was used with diffuse normal priors with a mean of 0 for coefficients.
Secondary analyses were conducted to examine the effect of the primary anxiety disorder type on treatment outcomes. Bayesian multinomial logistic regression was used for dichotomous outcomes, and Bayesian repeated-measures ANOVA for continuous outcomes. Both the direct and interaction effects were assessed.
Benchmarking and reliable change
Tests against benchmarks were performed using Bayesian equivalence tests (Klugkist et al., 2005) and Bayesian analysis of informative hypotheses (Gu et al., 2018). Three hypotheses were tested: (a) the observed value is bigger than the benchmark (H_Bigger), (b) the observed value is equal to the benchmark (H_Equal), and (c) the observed value is smaller than the benchmark (H_Smaller). Results were reported as the posterior probability of each hypothesis. Benchmarks were selected to assess the clinically comparable effectiveness of the intervention and normative equivalence.
Benchmarks for clinical equivalence were based on a meta-analysis of the effectiveness of CBT for child and adolescent anxiety disorders in routine-care settings. It is important to note that this benchmark included children (age < 12) and thus differs from the current study sample. However, few effectiveness studies in routine clinical care have been conducted with only adolescents (Baker et al., 2021), and studies that include adolescents generally have lower levels of remission from all anxiety disorders than observed in the benchmark meta-analysis. Thus, the benchmark meta-analysis was chosen because of its comprehensiveness and because it allowed a conservative estimate of the current study's relative effectiveness.
In the benchmark meta-analysis, the proportion of children and adolescents in remission from all anxiety disorders was estimated at post-treatment (k = 27, 50.7%, 95% CI: 45.3-56.2) and follow-up (mean length = 10.7 months, k = 22, 69.4%, 95% CI: 64.1-74.3). Benchmarks for other outcomes were based on studies included in the meta-analysis by Wergeland et al. (2020). In raw change scores, the benchmarks at post-treatment were 4.2-11.9 for the SCAS-P, 6.7-13.0 for the SCAS-C, 0.9-2.2 for the CGI-S, and 0.5-3.2 for the CSR of the primary diagnosis. At follow-up, the benchmarks were 10.8-16.1 for the SCAS-P, 6.7-16.6 for the SCAS-C, and 1.1-3.8 for the CSR of the primary diagnosis. No benchmark was available for the CGI at follow-up. Normative equivalence was defined as scores corresponding to T-scores of less than 60 on the SCAS-C/P and a CGI-S score 2 SD below the pre-treatment mean.
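The three benchmarking hypotheses can be made concrete for a remission proportion: with a Beta posterior under a uniform prior, the posterior mass below, within, and above an equivalence band around the benchmark gives P(H_Smaller), P(H_Equal), and P(H_Bigger). The margin and function names here are our own illustrative assumptions, not the study's actual equivalence region.

```python
import math

def beta_pdf(x, a, b):
    """Density of a Beta(a, b) distribution at x in (0, 1)."""
    log_c = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_c + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def hypothesis_probs(successes, n, benchmark, margin=0.02, grid=20000):
    """Posterior P(H_Smaller), P(H_Equal), P(H_Bigger) for a remission proportion.

    Uses a Beta(successes + 1, failures + 1) posterior (uniform prior) and
    midpoint-rule integration; `margin` defines the equivalence band.
    """
    a, b = successes + 1, n - successes + 1
    smaller = equal = bigger = 0.0
    for i in range(grid):
        x = (i + 0.5) / grid
        w = beta_pdf(x, a, b) / grid
        if x < benchmark - margin:
            smaller += w
        elif x > benchmark + margin:
            bigger += w
        else:
            equal += w
    total = smaller + equal + bigger
    return smaller / total, equal / total, bigger / total
```

For instance, an observed remission rate well above the 69.4% follow-up benchmark concentrates nearly all posterior mass on H_Bigger, mirroring the "posterior probability ~ 1.00" statements reported in the Results.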
The reliable change index (RCI) and clinically significant change were used to assess clinically significant change on the SCAS-C/P and CGI-S (Jacobson & Truax, 1991). Reliable change was defined as RCI > 1.96. No participants experienced reliable change in a negative direction; thus, reliable change is only described as present or not. When the RCI indicated reliable improvement and the score on the outcome measure was within the normative equivalence range, the adolescent was considered to have a clinically significant change.
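The Jacobson and Truax (1991) criteria can be written out directly: RCI = (pre − post) / SE_diff, with SE_diff = √2 · SD_pre · √(1 − reliability), and clinically significant change additionally requires the post score to fall in the normative range. The numeric inputs in the usage example are hypothetical, not values from the study.

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson & Truax (1991) RCI for a measure where lower scores are better.

    SE_diff = sqrt(2) * sd_pre * sqrt(1 - reliability); RCI > 1.96 indicates
    improvement beyond measurement error.
    """
    se_diff = math.sqrt(2) * sd_pre * math.sqrt(1 - reliability)
    return (pre - post) / se_diff

def clinically_significant(pre, post, sd_pre, reliability, normative_cutoff):
    """Reliable improvement (RCI > 1.96) AND a post score within the normative range."""
    rci = reliable_change_index(pre, post, sd_pre, reliability)
    return rci > 1.96 and post <= normative_cutoff
```

With a hypothetical pre-treatment SD of 15 and the SCAS-C reliability of 0.90 reported above, a drop from 50 to 30 yields RCI ≈ 2.98 (reliable), whereas a drop from 50 to 45 does not reach the 1.96 threshold.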
Sample characteristics
The participants were 90 adolescents (77% female) and their parents. Social anxiety disorder (SAD) was the most prevalent primary anxiety disorder (52.4%). Comorbidity was high, with 72.9% of the participants having one and 35.3% having two or more comorbid disorders. The total proportions of adolescents who met diagnostic criteria for anxiety diagnoses were as follows: SAD (67.7%), separation anxiety disorder (10%), generalized anxiety disorder (22.2%), panic disorder and/or agoraphobia (27.7%), specific phobia (11.1%), and obsessive-compulsive disorder (27.8%). There were no significant differences in the severity of outcome measures (CSR, CGI-S, SCAS-C/P) between sexes at pre-treatment (all comparisons between sexes, p > .05). There were no significant differences between treatment groups on outcomes after treatment and at follow-up (all comparisons of group as a predictor of outcome, p > .05), and nesting individuals within groups or clinics did not change the results of the analysis. At post-treatment, adolescents rated on a scale from 1 to 10 how sure they would be (1 = not sure, 10 = very sure) in recommending the RISK-treatment to a friend struggling with anxiety. This measure indicated that the treatment was acceptable to adolescents (M = 7.1, SD = 2.0, median = 8). All adolescents had at least one adult from school partake in psychoeducation. Among the school personnel, 76.5% were actively involved in the treatment. On parent-rated measures of school personnel's ability to follow through on treatment aims, the majority were rated as very good (24.9%) or good (45.3%); only 4.8% of parents rated school personnel as poor or very poor (see Table 2 for further description of participant characteristics).
Treatment non-completion
Ten participants (11.1%) were defined as treatment non-completers. Reasons for treatment discontinuation were as follows: (a) finding therapy too demanding (n = 7), (b) a personal disagreement involving another participant (n = 1), (c) finding the distance to treatment too far (n = 1), and (d) receiving an offer of individual therapy with a private practitioner (n = 1). See Fig. 1 for the participant flowchart. Post hoc comparisons of completers and non-completers showed no pre-treatment differences (BF10 < 1) in participants' age, sex, amount of previous therapy, number and severity of anxiety disorders, comorbid disorders, or symptom severity (SCAS-C/P, CGI-S).
Symptom measures
Decrease in adolescent-rated anxiety symptoms (SCAS-C) was equivalent to the benchmark at post-treatment (posterior probability: H_Equal = .62, H_Bigger = .37) and superior to the benchmark at follow-up (posterior probability: H_Equal = .01, H_Bigger = .99). Decrease in parent-rated anxiety symptoms (SCAS-P) was equivalent to the benchmark at post-treatment (posterior probability: H_Equal = .48, H_Smaller = .51) and at follow-up (posterior probability: H_Equal = .92, H_Bigger = .05). Decrease in severity of the primary anxiety disorder and clinician-rated symptom severity was superior to benchmarks at post-treatment (posterior probability: H_Bigger ~ 1.00). Benchmarks were not available for clinician-rated symptom severity at follow-up, but decrease in severity of the primary anxiety disorder continued to be superior to the benchmark at follow-up (posterior probability: H_Bigger ~ 1.00) (see Table 4 for further description).
Clinical significance
Only the adolescents in the clinical range at pre-treatment were included in the analyses of clinically significant change. The proportions of adolescents in the clinical range at pre-treatment were SCAS-C (71.8%), SCAS-P (76.1%), and CGI-S (100%). At post-treatment, the sample showed equivalence to the normative benchmark on the SCAS-C and CGI (BF10 > 150), indicating that there was >150 times more support for the hypothesis that the sample was equal to the normative benchmark than for the hypothesis that it was not. At post-treatment, there was a slight tendency toward normative equivalence on the SCAS-P (BF10 = 2.02). At the 12-month follow-up, the sample showed normative equivalence on the SCAS-C, SCAS-P, and CGI (BF10 > 150). Further details on reliable change and clinical significance can be found in Table 5.
Exploratory validity checks were performed to assess the impact of the amount of therapy given in addition to the RISK-treatment. Information was gathered from public health records on the number of therapy sessions attended before beginning the RISK-treatment and between post-treatment and the follow-up. The number of treatment sessions received before RISK was not associated with a change in the probability of remission at post-treatment (OR = 1.01, 95% HPD [0.98, 1.03]) or follow-up (OR = 0.99, 95% HPD [0.97, 1.02]). The study was conducted at a public health community clinic, and thus participants could not be denied treatment between post-treatment and follow-up. Additional therapy sessions between post-treatment and follow-up were offered if the participating adolescents expressed a need for further help. There was a substantial difference in the number of additional sessions between those who were in remission at the 12-month follow-up and those who were not (t(90) = 7.9, BF10 = 1.55e+11), and each additional therapy session predicted a lower probability of remission (OR = 0.90, 95% HPD [0.87, 0.93]).
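To make the per-session odds ratio concrete, the sketch below applies OR = 0.90 repeatedly to a baseline remission probability on the odds scale, which is how a logistic-regression coefficient translates into probabilities. The baseline value is illustrative, not a figure reported in the study.

```python
def remission_prob(base_prob, odds_ratio, n_sessions):
    """Apply a per-session odds ratio to a baseline remission probability.

    Odds are multiplied by `odds_ratio` once per additional session, then
    converted back to a probability.
    """
    odds = base_prob / (1 - base_prob) * odds_ratio ** n_sessions
    return odds / (1 + odds)
```

For example, starting from an illustrative baseline probability of 0.7, an OR of 0.90 per session lowers the implied remission probability to roughly 0.58 after five additional sessions and further still after ten, consistent with the direction of the reported association.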
Among those who achieved remission by the follow-up, 87.5% (n = 63) had received no additional therapy, 8.0% (n = 6) had received one to five additional therapy sessions, and 4.5% (n = 3) had received more than five additional therapy sessions. The additional treatment received by those in remission was primarily CBT and exposure-oriented booster sessions. The additional treatment received by those not in remission was highly varied and included the following: a trauma-informed supportive therapy approach without known trauma (n = 7), eclectic supportive therapy and collaboration with school (n = 5), systemic family therapy and collaboration with schools (n = 5), and CBT-oriented booster sessions (n = 1).

[Table 2 note: Description of sample characteristics. Attention-Deficit Hyperactivity Disorder (ADHD). For adolescents with two parents, parental education and occupational status are based on the highest level of the parents. (a) Number of previous billed sessions registered in public mental health services.]

[Table note: Intention to Treat (ITT). Benchmarks were not performed on the loss of primary disorder as these were not available. ITT (N = 90), complete case (N = 85). (a) The Highest Posterior Density (HPD) describes the interval with a 95% probability of the true parameter value. (b) Benchmarking describes the probability that the observed measure is equal to (H_Equal), larger than (H_Bigger), or smaller than (H_Smaller) the results described in the benchmark meta-analysis.]
Discussion
This study evaluated the effectiveness of an enhanced group CBT treatment (RISK) for adolescent anxiety disorders, including intensive therapist/family/peer-assisted exposure therapy with family member and school personnel involvement. At post-treatment and at the 12-month follow-up, 41.6% and 85.9%, respectively, of those who completed treatment were free of all anxiety diagnoses. This substantial increase in effectiveness from post-treatment to the 12-month follow-up was not due to receiving additional therapy: only 12.5% of those who achieved remission at the follow-up received any additional therapy, with the majority of these (8%) receiving five or fewer additional sessions. Benchmarking against a meta-analysis of CBT for child and adolescent anxiety disorders indicated equivalence on symptom measures (SCAS-C: posterior probability H_Equal = .62; SCAS-P: posterior probability H_Equal = .48) but inferiority on measures of remission at post-treatment (posterior probability H_Smaller = .98). However, at the 12-month follow-up, there was a 99.99% probability that the treatment was superior to the benchmark on remission measures. Similarly, parent- and adolescent-reported anxiety outcomes and the clinical global impression showed the same trend of increased effectiveness over time. In addition, the treatment attrition rate (11.1%) was lower than the benchmark (anxiety = 12.6%, OCD = 13.4%; Wergeland et al., 2020), which may be understood in light of the high degree of parental involvement (de Haan et al., 2013). In line with the low attrition rate, adolescents indicated at post-treatment that they would recommend RISK to a friend struggling with anxiety. Overall, the results indicate that the treatment was effective and acceptable for adolescents with a range of anxiety disorders and OCD.
In line with expectations and previous research, the current sample had a higher average age and higher rates of SAD as the primary diagnosis (52.4%) than the benchmark that included children (proportion with SAD in benchmark: 17%-39%). Thus, it is not unexpected that treatment did not outperform the benchmark at post-treatment, given that higher age and SAD are associated with poorer outcomes (Hudson et al., 2015; Manassis et al., 2002). However, outcomes on remission were better than those expected from studies that only included adolescents (Baker et al., 2021). Thus, it is promising that the current sample of adolescents achieved results comparable to other effectiveness studies that targeted a younger population with SAD (7-13 years of age; Martinsen et al., 2009; Villabo et al., 2018). Despite the comparability, age and diagnostic composition may explain why the treatment did not show an enhanced effect relative to the benchmark at post-treatment.
The enhanced treatment effect was visible at the 12-month follow-up. Results at this time demonstrated a substantial improvement across diagnoses relative to post-treatment. At the 12-month follow-up, the effectiveness was superior to the benchmark on diagnostic measures and adolescent-rated anxiety measures. Additionally, only one adolescent relapsed between post-treatment and the follow-up. The sustainability of the treatment effect is particularly promising given that many adolescents relapse in the long term after treatment completion, with age and SAD predicting a greater risk of relapse.

[Table note: Analyses performed on the intent-to-treat sample (N = 90). Effect size (ES) is Cohen's d. Spence Children's Anxiety Scale (SCAS): child (C) and parent (P) versions. Clinical Global Impression (CGI). Clinical severity rating of the primary anxiety disorder (CSR1), as assessed by the ADIS-IV-C/P. (a) The Highest Posterior Density (HPD) describes the interval with a 95% probability of the true parameter value. (b) Coefficients are reversed for readability; higher values indicate a decreasing magnitude of the intervention target.]

[Table note: Analyses performed on the intent-to-treat sample (N = 90). Spence Children's Anxiety Scale (SCAS): child (C) and parent (P) versions. Clinical Global Impression (CGI). Reliable change was computed following Jacobson and Truax (1991). Adolescents showed clinically significant change if they had reliable change and were within normative equivalence. (a) The Bayes Factor (BF10) describes the weight of evidence in favor of the hypothesis that results are equivalent to normative samples; higher values indicate that the sample is equivalent.]

Table 6. Remission by primary anxiety disorder at post-treatment and follow-ups.
To understand the observed sustainability of remission and the delayed increase in the treatment effect at the follow-up, it may be useful to consider how family and school support affected treatment adherence. Treatment adherence is an important predictor of treatment outcome for anxious adolescents and is promoted through adult support (Lee et al., 2019). However, when treatment ends, an important aspect of adult support to adherence, namely the clinician, is removed. The partial transfer of control to parents and school personnel performed in the treatment may have helped sustain the adult support system, thus maintaining treatment adherence. In addition to maintaining treatment adherence, parental involvement may also have improved trust and communication within families, which is a protective factor against anxiety in adolescents (Ebbert et al., 2019).
An important implication of the findings relates to the transdiagnostic group format. Such a format is advantageous in routine-care settings, where there may not be enough patient flow or resources to offer disorder-specific treatments for all types of anxiety disorders. Additionally, the group format allows multiple adolescents to gain access to therapists qualified in exposure therapy, and it allows for longer sessions. This is important since time constraints and limited therapist qualifications are primary reasons why exposure interventions are not performed in routine-care settings (Pittig et al., 2019). Notwithstanding the advantages of transdiagnostic group CBT, previous studies have found disorder-specific CBT to yield superior outcomes (Reynolds et al., 2012). This finding has led some to argue that disorder-specific CBT should be the preferred format, especially for SAD (Spence & Rapee, 2016) and OCD (Freeman et al., 2018). This study serves as a counterargument to such notions, showing effectiveness across a range of disorders, including SAD and OCD.
Some limitations of the current study should be noted. One such limitation is the study design, which did not include any control condition or randomization. Due to the lack of a control condition, it is not possible to conclude to what extent improvements can be attributed to the RISK-treatment. However, it is possible to conclude that improvements cannot be attributed to treatment other than RISK since assessments of previous therapy and additional therapy received during the follow-up were obtained from patient records. These assessments indicated that improvement in outcomes was not related to previous therapy or receiving extra therapy between post-treatment and the 12-month follow-up. Another limitation was the lack of formal assessment of clinician fidelity to treatment. This limitation also restricts the extent to which improvements can be attributed to the RISK-treatment. Although no formal assessment of clinician fidelity was performed, several measures were taken to ensure clinician fidelity. These measures included training before beginning intervention, using a detailed therapist manual, and constant supervision during the study period. A third limitation is that clinicians assessing diagnostic status post-treatment also participated as treatment providers. Therefore, the assessors may have been biased in their rating. However, 20% of diagnostic interviews were re-assessed by independent raters and excellent reliability was observed.
In addition to the above-mentioned limitations, another caveat of the RISK treatment is the number of hours and clinician resources required. On the one hand, the 38 hours of RISK seem much more costly than the 4-24 hours observed in other treatments for adolescents (Baker et al., 2021), and it may not be possible to conduct RISK in all settings. On the other hand, the intervention elements that require extra time and clinicians (i.e., longer sessions, intensive treatment, extensive parental and school involvement) were those aimed at enhancing treatment effects for adolescents specifically. The enhanced effect was achieved, and the additional time and resources allowed for transdiagnostic groups that are beneficial in routine-care settings. Additionally, the treatment format allowed 9 clinicians with no prior education and training in CBT to deliver effective treatments, which is important given the limited access to CBT clinicians. Given the costs of adolescent anxiety, RISK may be a viable treatment, particularly in cases with SAD or when previous treatment has not been beneficial.
Considering the above, future research is needed to investigate the cost-effectiveness of RISK, and how to implement such interventions in different settings with the aim of maintaining effectiveness while reducing the resources needed. In relation to this, it will be important to investigate the potential for RISK to be offered as a first-line treatment for SAD. Currently, work has begun on modifying the RISK treatment into a digital self-help platform, and on modifying RISK to be delivered by school personnel in a shorter format. In Norway, 31 schools have received training in this shorter format and are offering the intervention. At this time, RISK is still implemented as standard care in the community clinic where the study was conducted. These preliminary results suggest that RISK has the potential to be implemented across different settings. However, questions remain regarding the effectiveness of variations of RISK and important moderators, such as adolescents' motivation, that may vary under different circumstances. Thus, further research into variations of RISK is needed in the development of stepped care and the tailoring of interventions to target specific needs.
In conclusion, this trial provides support for the use of multi-family, multi-disorder group CBT for adolescent anxiety disorder that includes high exposure to feared situations and high levels of parental and school involvement. A particularly promising result was that only one of the participating adolescents who achieved remission at post-treatment relapsed during the follow-up period, and many participants who had not achieved remission at post-treatment achieved remission during the follow-up period. Furthermore, it provides proof-of-concept that this approach is feasible within routine-care clinics and effective across a range of included diagnoses. Further research should evaluate the described approach in a randomized controlled design to further investigate its potential in a stepped care approach.
Herpes zoster surveillance using electronic databases in the Valencian Community (Spain)
Background Epidemiologic data on Herpes Zoster (HZ) disease in Spain are scarce. The objective of this study was to assess the epidemiology of HZ in the Valencian Community (Spain), using outpatient and hospital electronic health databases. Methods Data from 2007 to 2010 were collected from computerized health databases covering a population of around 5 million inhabitants. Diagnoses were recorded by physicians using the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM). A sample of medical records selected under different criteria was reviewed by a general practitioner to assess the reliability of codification. Results The average annual incidence of HZ was 4.60 per 1000 persons-year (PY) for all ages (95% CI: 4.57-4.63); HZ was more frequent in women [5.32/1000 PY (95% CI: 5.28-5.37)] and was strongly age-related, with a peak incidence at 70-79 years. A total of 7.16/1000 cases of HZ required hospitalization. Conclusions The electronic health database used in the Valencian Community is a reliable electronic surveillance tool for HZ disease and will be useful to define trends in disease burden before and after HZ vaccine introduction.
Background
Herpes Zoster (HZ; shingles) results from the reactivation of Varicella-Zoster Virus (VZV) that has been latent in the spinal and cranial sensory ganglia after primary infection with varicella (chickenpox), usually during childhood [1]. A vesicular skin rash in the affected dermatome, commonly accompanied by acute pain, characterizes the acute phase of HZ disease.
Approximately 14% of all patients with shingles will develop complications [2]. The most common and most debilitating is postherpetic neuralgia (PHN), defined by most investigators as pain still present 90 days after the rash appears [1,2]. Both the acute pain associated with HZ and PHN have a negative impact on health-related quality of life, interfering significantly with physical, emotional, and social functioning [3][4][5][6], to a degree quantitatively similar to congestive heart failure, severe depression, acute myocardial infarction, or uncontrolled diabetes [7].
The risk of HZ and its complications increases with advancing age, being more manifest in persons over 50-60 years of age [1,2,8,9]. Among individuals who reach 85 years of age, approximately 50% will have experienced at least one episode of HZ [10]. The incidence of acute HZ in the European general population ranges between 2.0-4.8 per 1000 persons-year (PY) for all ages, and increases after age 50 years to around 7-8/1000 PY [11,12]. Over the coming decades, Europe's demographic patterns will change dramatically, and it is expected that our populations will become older than ever before [13]. This suggests that the incidence of HZ could rise with an older population, possibly as a result of immunosenescence [14].
Several studies support the hypothesis that exposure to childhood varicella reduces the risk of developing HZ by boosting specific immunity to VZV [15][16][17][18][19]. Based on this hypothesis, mathematical models suggest that a successful childhood vaccination program may decrease the circulation of wild-type VZV, and therefore the stimulation of cellular immunity that could prevent the occurrence of shingles in the adult population [20][21][22][23]. This could have significant public health consequences, if a universal mass VZV vaccination program is implemented in childhood. Although this issue is still controversial and there is no firm evidence that this increase will occur [24][25][26], monitoring of disease trends over time will help understand the impact of different factors upon the incidence of HZ. In the Valencian Community (Spain), varicella vaccination is recommended and funded at 12 years of age in those subjects who have not been in clear contact with the virus. This helps avoid severe cases in adults with no impact upon circulation of the virus. About 30% of toddlers also receive the vaccine after their pediatrician's recommendation [27]. With this low coverage figure, the virus is circulating within the population and no effect upon HZ is expected.
The efficacy of a live attenuated zoster vaccine, Zostavax®, in preventing HZ and PHN has been tested in adults ≥ 60 years of age, yielding efficacy rates of 51% and 67%, respectively [28], and of 70% for incident HZ in subjects aged 50-59 years old [29]. In a recent population-based study of 766,330 individuals aged ≥ 65 years, the effectiveness was seen to be 48% [30]. This vaccine is generally well tolerated, has been licensed in the European Union in 2006 for people aged ≥50 years [31], and is expected to be widely used in Spain over the next few years. Another adjuvanted vaccine is under development [32][33][34].
In Spain, HZ is presently not a notifiable disease, and epidemiological data are scarce. A recent prospective study in the Valencian Community (Spain) showed an HZ incidence of 4.1 per 1000 persons > 14 years of age during 2007 [35,36]. Reliable epidemiological data are needed before any universal recommendation of HZ vaccines can be made, in order to assess the impact of HZ vaccination. The purpose of this study was to explore the epidemiology of HZ in the Valencian Community during a four-year period (January 2007 to December 2010), and to estimate the reliability of the regional electronic medical database for epidemiological studies, with a view to creating surveillance tools for future and efficient assessments of HZ incidence in the post-HZ vaccine era.
Setting and study population
The Valencian Community, in the east of Spain, has a population of 5,117,190 inhabitants (2011) [37], and over 98% are covered by the national public health system (NHS) [38]. Primary care visits and hospitalizations are recorded in clinical databases. Using these, we sought cases of HZ of all ages and both sexes attended in the NHS from 1 January 2007 to 20 December 2010. From each HZ case we obtained all medical visits, prescriptions and demographic data.
Abucasis electronic healthcare database
The Abucasis electronic medical database was implemented in the Valencian Community for outpatient and primary care settings in 2006, and offers the possibility of linking patient care and public health [39]. From 2006 to 2010 (when the whole health system was computerized), the percentage of the population included in Abucasis increased from 73.1% in 2007 to 88.8% in 2008 and 95.7% in 2009 (Abucasis managers, personal communication). Abucasis contains an ambulatory information system called SIA, which registers any medical contact (visit), and the attending physician uses a dropdown menu with the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) to record diagnoses. Abucasis also links other databases: Care Provision Management (GAIA), which is the drug information system available to the different professionals involved in prescription and dispensation, addressed from the Department of Health; and the Vaccine Information System (RVN) [40]. Other databases used in this study, such as the Hospital Data Surveillance System (CMBD), are described elsewhere [41] and can be linked through a unique personal identification number (SIP), for the collection of demographic data.
Case definition
For the identification of incident cases of HZ, we searched SIA for any subject with a first appearance of an HZ-related ICD-9-CM code (all ICD-9-CM 053 codes: HZ or HZ-related complications), and the CMBD database for an HZ diagnosis in any position (1st to 9th). GAIA was searched for information on all subjects who were prescribed antiviral drugs (acyclovir, famciclovir or valacyclovir) at doses only licensed for use in HZ by the Spanish Medicines Agency (AEMPS).
Any outpatient medical contact or visit, hospital admission or electronic prescription related to HZ was considered a medical encounter. In order to avoid an overestimation of results, and in an attempt to identify recurrent HZ episodes, each medical encounter that was not preceded by another encounter in the previous six months, and was succeeded by another encounter in the following three months, was considered a recurrent HZ case.
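The six-month washout rule for separating HZ episodes can be sketched as follows. This is a simplified illustration with hypothetical encounter dates; the additional three-month criterion for recurrences is omitted for brevity.

```python
from datetime import date, timedelta

GAP = timedelta(days=180)  # six-month washout between episodes

def split_episodes(encounter_dates):
    """Group a patient's HZ-related encounter dates into episodes.

    A new episode starts whenever an encounter is not preceded by
    another encounter within the previous six months; the first
    episode is the incident case, later ones are candidate recurrences.
    """
    episodes = []
    for d in sorted(encounter_dates):
        if episodes and d - episodes[-1][-1] <= GAP:
            episodes[-1].append(d)   # within the washout window: same episode
        else:
            episodes.append([d])     # gap exceeded: new episode
    return episodes

enc = [date(2008, 1, 10), date(2008, 2, 1), date(2009, 5, 20)]
eps = split_episodes(enc)
print(len(eps))  # two episodes: one incident, one candidate recurrence
```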
To assess data quality, a total of 550 medical records were reviewed by a physician (NMT) in order to assess coding reliability and the accuracy of the filters used. These medical records were randomly selected according to the following criteria: definition of incident HZ case and recurrent HZ case (300 records reviewed), prescription of specific topical and oral HZ antiviral drugs without a specific HZ ICD-9-CM code (100 records reviewed), discordance of the narrative diagnostic description with the assigned ICD-9-CM HZ code (100 records reviewed), and hospitalizations with an HZ diagnosis not appearing in the Abucasis database (50 records reviewed). In order to determine recurrence, each medical contact in the Abucasis database from 1 June 2006 to 31 March 2011 was evaluated. Confirmation of an HZ diagnosis required an HZ ICD-9-CM code in addition to at least one of the following criteria:
1. Detailed physician description of characteristic HZ skin lesions
2. Prescription of antiviral drugs
3. Temporary disability under an HZ diagnosis
4. Referral for specialist assessment due to HZ disease, without subsequent exclusion of the HZ diagnosis
Our data from 2007 were graphed together with the results obtained in a regional prospective cohort study of patients aged > 14 years, carried out in 25 primary care settings in the Valencian Community during the same year [35,36].
All databases were merged using the database manager MySQL 5.1. The random selection for quality control of coding was performed using its "rand" function. Analyses were performed using MySQL and Epidat 3.1. We calculated annual incidence rates of HZ by dividing the number of cases by the persons registered in SIP in each year. The exact 95% confidence interval (CI) for the incidence rates was calculated on the basis of a normal distribution. The Risk Ratio was calculated to assess differences between males and females.
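The incidence calculation with a normal-approximation (Poisson) confidence interval can be sketched as below. The person-time denominator used here is an approximation chosen for illustration, not the exact SIP registry counts.

```python
import math

def incidence_per_1000(cases, person_years, z=1.96):
    """Incidence per 1000 person-years with a normal-approximation CI."""
    rate = cases / person_years
    se = math.sqrt(cases) / person_years   # Poisson standard error of the rate
    lo, hi = rate - z * se, rate + z * se
    return 1000 * rate, 1000 * lo, 1000 * hi

# Figures from the study: 85,586 incident cases over four years; the
# denominator below is an assumed average registered population of
# ~4.65 million per year, used only to illustrate the calculation.
rate, lo, hi = incidence_per_1000(85586, 4 * 4_650_000)
print(f"{rate:.2f} ({lo:.2f}-{hi:.2f}) per 1000 PY")
```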
Ethical considerations
The study protocol abided with the principles of the Declaration of Helsinki and was approved by the Public Health Ethics Committee of Valencia. Waiver of informed consent was accepted for medical history review.
HZ case confirmation
We identified a total of 85,586 persons in SIA and/or CMBD with a first diagnosis of HZ. Our criteria of recurrence were met by 3300 of them. After chart review, the positive predictive value (PPV) for HZ case definition was 92.7% (95% CI 89.1-95.4), and for recurrent HZ cases was 55.1% (95% CI 47.0-63.0). Due to the low PPV of the recurrence filter, only the incident HZ cases were included in our estimations.
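The positive predictive value here is simply the share of reviewed records confirmed on chart review. A sketch with a Wilson score interval follows; the 278/300 split is inferred from the reported 92.7%, and the paper's exact binomial interval differs slightly from the Wilson interval computed here.

```python
import math

def ppv_with_ci(true_positives, reviewed, z=1.96):
    """Positive predictive value with a Wilson score 95% interval."""
    p = true_positives / reviewed
    denom = 1 + z**2 / reviewed
    centre = (p + z**2 / (2 * reviewed)) / denom
    half = z * math.sqrt(p * (1 - p) / reviewed
                         + z**2 / (4 * reviewed**2)) / denom
    return p, centre - half, centre + half

# 278 of 300 reviewed records confirmed (hypothetical split reproducing
# the reported 92.7% PPV for the incident-case definition)
p, lo, hi = ppv_with_ci(278, 300)
print(f"PPV {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```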
The PPV for HZ diagnosed only by high-dose antiviral prescription was 26% (95% CI: 17.7-35.7), therefore, no case identified only by the prescription of antivirals was considered as an HZ case.
Epidemiological analysis
HZ incidence
Over the four-year study period, we identified 85,586 persons with incident cases of HZ requiring medical care, which corresponds to an incidence of 4.60/1000 PY (95% CI: 4.57-4.63) in all age groups. HZ incidence rates were strongly age-related, and more than half of the cases involved patients over 50 years of age (63.18%) (Table 1). The incidence in the non-pediatric age group (≥ 15 years) was 5.02/1000 PY (95% CI: 4.99-5.06), while in the target population for HZ vaccine (adults aged ≥ 50 years) the incidence was 8.60/1000 PY (95% CI: 8.53-8.67). This trend was maintained each year during the whole study period. An incidence peak was observed in those aged 70-79 years, with a maximum of 11.55/1000 PY (95% CI: 11.32-11.79) in women and 9.41/1000 PY (95% CI: 9.18-9.65) in men. A drop in incidence in the older age groups occurred in the primary care database but was not seen in the hospitalizations (Table 2).
The most frequently coded HZ diagnosis was ICD-9-CM 053.9 (HZ without complications) (Table 4). A total of 679 patients admitted to hospital with HZ as the first listed diagnosis were not coded as an HZ case in SIA. Chart review of 50 of these patients showed the reasons for not coding to be death, change of address, lack of follow-up, or hospitalization recorded under another ICD-9-CM code without an HZ ICD-9-CM code. Figure 2 depicts the incidence by age in this study and in the prospective study [35]. In both studies the annual incidence was higher in females. The prospective study showed an incidence of 4.5/1000 PY (95% CI: 3.5-5.4) in females and 2.7/1000 PY (95% CI: 1.9-3.5) in males, while the electronic database showed an incidence of 5.32 (5.26-5.37) per 1000 PY for women and 3.86 (3.82-3.90) per 1000 PY for men (Table 1).
Discussion
In this study we used medical electronic databases from a region of Spain in order to reliably estimate HZ incidence. Our results showed an HZ incidence of 4.60/1000 PY, with a strong correlation to age, and with a drop in incidence in patients ≥ 80 years of age, which was not observed in hospitalizations. Incidence rates were higher among women in most age groups, with overall figures of 3.86/1000 PY in males and 5.32/1000 PY in females.
Epidemiological studies on HZ in Spain are scarce and are needed for future estimations of disease burden [35,36,42,43]. Our study used health databases that cover almost the entire population living in the Valencian Community, including hospitalizations and ambulatory patients. To the best of our knowledge, there is only one study of similar characteristics in Spain, conducted in Navarra (a northern region of the country) before systematic varicella vaccination [44].
One of the limitations of the study is that the Abucasis database was not developed for medical billing purposes, which may imply less detailed information. Although ICD-9-CM codes are routinely used, general practitioners (GPs) may be unaware of their importance. To overcome this, codification is performed in a simple way: the system presents a list of possible ICD-9-CM codes after the diagnosis is written. In the case of zoster, which is a common diagnosis with no synonyms, ICD-9 coding is relatively easy. On the other hand, trained personnel codify the CMBD. To assess the reliability of HZ codification, a random review of medical records showed good matching between ICD-9-CM coding and a real episode of HZ. A total of 7.7% of the reviewed notes with a diagnosis of herpes zoster did not meet the full requirements for being considered a case. Most of these situations were due to a lack of description of the lesions. We considered that most of these cases could be HZ, but that the GP did not provide sufficient written details, possibly because of the great care burden in primary care, which prevents entering detailed information in medical notes. This may consequently have led to an underestimation of confirmed HZ cases.
For recurrent cases our temporal filters were not wholly suitable, and therefore the system did not allow for analysis of recurrent cases.
Another limitation is that emergency room visits and private medical visits were not included in any of the databases. Since the medication usually prescribed for HZ is expensive and is not subsidized in these situations, we assumed that only a minority of patients would not seek public medical care in order to receive subsidized drugs. As a population database study, it is possible that some individuals did not seek medical care. Another limitation to be taken into account is that we only included clinical diagnoses of HZ, since laboratory confirmation of HZ is rarely used in normal clinical practice.
Due to bureaucratic problems, SIA data for the year 2010 were only available up to 20 December. We assume that the remaining 11 days would have no significant impact upon our estimations.
On comparing results corresponding to 2007 from the Abucasis database and a prospective study carried out in a similar population (>14 years) [35,36], the incidence figures were found to be similar, being slightly higher when using the electronic database (5.0/1000 PY versus 4.1/1000 PY). These differences mainly occurred in two age groups (15-49 and 60-69 years), and as the authors point out [35], this may reflect their low precision. The study conducted in Navarra [42] showed a mean HZ incidence of 4.25/1000 PY during 2005-2006. The number of hospitalizations in our study was lower than in similar retrospective Spanish studies during the previous study period [41,42]. In one of these studies [41], differences were found among the Spanish autonomous communities, with lower incidence rates in the Valencian Community. To confirm the reliability of our data, we compared them with the national data for the same study period (data not shown), and found both figures to be the same, with a decrease in the incidence of HZ hospitalization over time, possibly due to changes in admission criteria, to greater awareness among physicians of the need for early antiviral treatment, or to changes in coding practices. However, similar hospitalization rates were found (2.69/100,000 PY) in another Spanish study using the CMBD database for the study period 1997-2004 [45].
As in other epidemiological studies, the incidence of HZ is strongly sex- and age-related, with higher incidences in women over 50 years. Our incidence was slightly higher than in other European studies using electronic databases. In France, the yearly HZ incidence rate for all ages averaged 3.82/1000 PY in the study period 2005-2008 [46], while in Italy a retrospective study showed an incidence of 4.31/1000 PY for the population aged 15 years or older during the three-year study period 2003-2005 [47]. Another study conducted in the Netherlands during 1994-1999 calculated an HZ incidence of 3.4/1000 PY [48]. A similar German study performed in an older population (≥ 50 years) reported higher figures (9.60/1000 PY during 2007-2008) [49]. We found similar rates in the United States: 4.4/1000 PY for all ages during 2006 [50]. Apart from the fact that the results of different descriptive epidemiological studies are highly dependent upon the methodology used, some of these studies included emergency room visits, and some countries use ICD-9-CM for billing purposes, which would result in variable incidence rates. Higher incidence rates were found in the placebo controls of clinical trials in the ≥ 60 years age group, specifically 13.00/1000 PY [50,51] or 11.12/1000 PY [28], which indicates that an active search for HZ cases considerably increases the detected incidence.
Our incidence peaked at 70-79 years of age and decreased thereafter, especially in the ≥95 years age group (data not shown). This drop in incidence was not seen in hospitalizations (Table 3). There are several explanations for this. Firstly, there could be a gradually reduced risk of HZ, explained by the hypothesis that exposure to VZV provides the host with progressive immunity to VZV reactivation [52,53]. Secondly, large proportions of subjects in this age group have disabilities or walking difficulties, and consequently are usually visited by family physicians at home. These visits are commonly not recorded in the Abucasis database, and medical prescriptions are handwritten. On the other hand, many of these patients live with relatives and usually move to other provinces outside our study area during part of the year, without unsubscribing from the Abucasis database; a potential HZ case therefore could have been registered elsewhere and not be counted as an incident case.
HZ incidence is age-related. This correlation could be explained by the progressive decline in VZV cell-mediated immunity related to aging. The incidence should increase in a progressively aging population, which would imply a greater burden and cost of disease. Determinants of the direct costs of an HZ episode are usually related to the prescription of antiviral drugs and repetitive primary care visits in the case of PHN. Direct outpatient costs in several European countries average between 72.05 € and 247 € [36,[54][55][56].

Figure 2
Description of the incidence of HZ per 1000 person-year by age (95% CI) in this study and in a prospective study in Valencia [35].
Conclusions
Our study confirms electronic databases as a reliable epidemiological tool for estimating the incidence of HZ disease. They provide an important source of information on the incidence of HZ, which can be useful to define trends in disease burden before and after HZ vaccine introduction.

Competing interests
JDD is acting as national coordinator and principal investigator for clinical studies and receiving funding from non-commercial funding bodies as well as commercial sponsors (Novartis Vaccines, GlaxoSmithKline, Baxter, Sanofi Pasteur MSD, MedImmune, and Pfizer Vaccines) conducted on behalf of CSISP-FISABIO. He served as a board member for GSK, and received payment for lectures from SP-MSD, Novartis and Baxter that included support for travel and accommodation for meetings. JPB is acting as national coordinator and principal investigator for clinical studies and receiving funding from non-commercial funding bodies as well as commercial sponsors (Novartis Vaccines, GlaxoSmithKline and Sanofi-Pasteur) conducted on behalf of CSISP-FISABIO. He served as a board member for GSK, and received payment for lectures that included support for travel and accommodation for meetings. The other authors declare no conflict of interest.
Authors' contributions NMT, JDD and JPB designed the study. SMU analyzed the data and performed the statistical analysis. NMT wrote the manuscript and reviewed clinical records. JDD coordinated the study. SAS, LPB, JDD and JPB provided valuable insight for revising the manuscript. All authors read and approved the final manuscript.
Diet Segregation between Cohabiting Builder and Inquiline Termite Species
How do termite inquilines manage to cohabit termitaria along with the termite builder species? With this in mind, we analysed one of the several strategies that inquilines could use to circumvent conflicts with their hosts, namely, the use of distinct diets. We inspected overlapping patterns in the diets of several cohabiting Neotropical termite species, as inferred from carbon and nitrogen isotopic signatures of termite individuals. Cohabitant communities from distinct termitaria presented overlapping diet spaces, indicating that they exploited similar diets at the regional scale. When such communities were split into their components, full diet segregation could be observed between builders and inquilines, at regional (environment-wide) and local (termitarium) scales. Additionally, diet segregation among the inquilines themselves was also observed in the vast majority of inspected termitaria. Inquiline species distribution among termitaria was not random. Environment-wide diet similarity, coupled with local diet segregation and deterministic inquiline distribution, could suggest interactions over feeding resources. However, inquilines and builders not sharing the same termitarium, and thus not subject to potential conflicts, still exhibited distinct diets. Moreover, the area of the builder's diet space and that of its inquilines did not correlate negatively. Accordingly, the diet areas of builders which hosted inquilines were on average as large as the areas of builders hosting no inquilines. Such results indicate the possibility that dietary partitioning by these cohabiting termites was not majorly driven by current interactive constraints. Rather, it seems to be a result of traits previously fixed in the evolutionary past of the cohabitants.
Introduction
An efficient strategy for organisms which depend on nesting is to inhabit the nest of another species, because this avoids building costs while keeping the benefits of such structures. It is not surprising, therefore, that many inquiline species are spread throughout virtually all groups of animals. An intriguing issue is how invaders deal with potential conflicts with the builder, especially if invaders and builders cohabit, as frequently occurring with termite inquilines and their termite hosts. Here we provide evidence that inquilinism in certain termite nests seems to be eased by the use of conflict-avoiding strategies on the part of inquilines.
Examples of nest invaders include, but are not restricted to, nest-usurping woodpeckers, cuckoos and cowbirds [1,2], joint nesting salamanders [3], inquiline bumblebees [4], and social parasitic butterflies [5]. In termite nests, intruders range from vertebrates such as birds [6] and bats [7] to a wide variety of arthropods [8,9]. Most commonly, these assemblages are composed of a termite species that builds and maintains the nest, plus entire invertebrate food webs [10,11] whose members are generally referred to as termitophiles. A particular subset of these is formed by termites that inhabit termite nests and may contribute to either nest maintenance or nest decay [12], the so-called inquilines [13][14][15][16][17].
Inquiline termites form a particular group of invaders because, as their hosts, inquilines are detritivores. Risks imposed by inquiline termites are therefore rather distinct from those imposed, for instance, by predatory cohabitants such as larvae of elaterid beetles in termitaria [18], or the larvae of Microdon flies [19] and Lycaenidae butterflies [5] in ant nests. The absence of predation risks by no means implies the absence of trouble to the builder: negative interactions are still bound to arise if inquilines, e.g., feed on stored products or on the lining of the nest walls to a degree that requires constant replenishment or repair by the builders. At the very least, contests could be triggered when inquilines use a space originally built for the builder's nestmates.
Dealing with such conflicts so that they represent bearable costs to the builder is key to the stability of cohabitation over ecological and evolutionary time. Therefore, a plausible hypothesis is that inquiline selection favours the adoption of strategies to minimise costs to the builder, which can be achieved by inflicting low loss or offsetting losses with an associated benefit. A wide range of strategies fulfil such aims, among which segregation of feeding resources is an obvious example of conflict avoidance. A possibility that cannot be excluded is that inquilinism is based on noninteractive processes: opportunistic inquilines occupy abandoned parts of termitaria and remain there unnoticed by the builders. In this case, the relationship could be evolutionarily stable because the use of such spaces would not be deleterious to the builder but would enhance the inquilines' fitness through reduction of their own building costs.
In the present study we analysed the coexistence of termite builders and inquiline species in the same termitarium, in the field, with a focus on one of the mechanisms that could explain this interaction: the diets of the species involved. To this end we evaluated diet coincidence between two builder termite species and 12 associated inquiline species, inspecting the stable isotopic signatures of individuals from 14 termite nests in a savannoid ecosystem (cerrado) in South-eastern Brazil. Our rationale was that the diet of inquilines should differ from that of builders, and that the difference can be inferred from distinct ¹³C/¹²C and ¹⁵N/¹⁴N ratios in the termites. As a null hypothesis, we considered that if invasion of these termitaria occurred merely by chance, without any evidence of past or present interaction, we would not find any consistent diet pattern for builder and inquiline species. In short, we argue here that one of the reasons for the coexistence of these builders and inquilines is that diet segregation minimises negative interactions and favours cohabitation in the same termitarium.
Ethics Statement
All necessary permits were obtained for the described study, which complied with all relevant regulations of Brazil. This includes collecting and transportation permits from IBAMA (The Brazilian Institute for the Environment and Renewable Natural Resources), permission from EMBRAPA (The Brazilian Enterprise for Agricultural Research) to conduct the study on their site, as well as tacit approval from the Brazilian Federal Government implied by granting the authors the post of Scientific Researcher.
Definition of Terms
The term "termitarium" is used here to denote the physical epigeic structure built by termites (for taxonomic status see [20,21]). We use "mound" and "nest" as synonyms of termitarium. "Colony" denotes the assemblage of individuals of a given species living and cooperating within the nest. "Coexistence" and "cohabitation" are used as synonyms and refer to the simultaneous occurrence of colonies of different termite species within a given termitarium, without implication of reciprocal positive or negative influences.
Diets exploited by termites were inferred from concentrations of stable carbon and nitrogen isotopes in termite bodies, obtained by measuring ¹³C/¹²C and ¹⁵N/¹⁴N ratios. Termites from the same colony may forage on distinct lignocellulose sources with distinct degrees of decomposition. Therefore, the diet of a termite colony is characterized here by a set of ¹³C/¹²C and ¹⁵N/¹⁴N pairs obtained from several individuals from the same colony, this set circumscribing a bidimensional space in a Cartesian plot whose axes represent the concentrations of these isotopes in termite bodies.
Study Site
The study was carried out in the Brazilian cerrado, an environment physiognomically but not floristically similar to a savannah, near the town of Sete Lagoas (19°27′S, 44°14′W, altitude 800-900 m above sea level), Minas Gerais State, Southeastern Brazil. In Köppen's classification, the study area has an Aw climate (equatorial with dry winter) [22]. The total precipitation in 2008 was 1607 mm and the mean monthly temperature ranged from 12.7°C to 28.9°C [23]. Fire often occurs naturally in the cerrado, and the termites [24] and other organisms [25] that live there tolerate fire or depend on it to survive. Epigeous termitaria are a common feature of this environment and inquilines frequently inhabit these termite mounds [26].
Sampling
We sampled, from 24 to 28 July 2008 (7:30-16:00 h), 14 termitaria whose builder colonies were still alive and (apparently) healthy. These termitaria showed no sign of damage, were epigeic, and were easily removed from the soil without breaking their hypogeic portions. The termite builder species studied, Velocitermes heteropterus and Constrictotermes cyphergaster (both Termitidae: Nasutitermitinae), do not normally build termitaria with a significant hypogeic portion. It is worth noting that C. cyphergaster, which typically builds arboreal nests, can also build epigeous ones [27]. The termitaria were removed from the field, put into plastic bags, labelled, and taken to the laboratory. The vegetation and landscape were similar around all the termitaria sampled.
Once in the lab, the entire termitaria were carefully inspected to extract individuals using soft entomological forceps. Individuals from the same species grouped together were considered as belonging to the same cohabitant colony. Duplicate samples were taken from these cohabitant colonies, one for taxonomic identification and the other for isotopic analyses.
Specimens used for identification were preserved in 80% alcohol, labelled, and subsequently identified to species (or morpho-species) level according to Mathews [12] and literature referred to by Constantino [28]. Identifications were confirmed by comparison with the termite collection of the Entomological Museum of the Federal University of Viçosa (MEUV), where voucher specimens were deposited.
The builder species of each termitarium was determined by matching the termitarium physical traits with previous published accounts [12,29] regarding size, geometric form, composition (soil or carton), wall texture, and wall hardness. In addition, builders tend to be far more abundant inside their termitarium than any inquiline.
Inquilines were identified as species whose colonies presented individuals of distinct instars, indicating that reproductive pairs were active and the colony was integrated in the environment. Some inquiline colonies were not populous enough to supply a minimum biomass of workers for isotopic analyses so their diet patterns were not mapped (these are denoted by 'o' for others in Table 1).
Stable Isotope Analysis
We used stable isotope concentrations to infer diet because the isotopic composition of an animal's body reflects the food consumed and assimilated during its lifetime [30,31]. Within a given environment, comparatively higher δ¹⁵N values indicate a termite diet biased towards more humified organic matter, whereas lower values point to a less decomposed, even xylophagous, diet. Bourguignon et al. [32] presented a practical example of such a classification.
Termite workers of each species in the termitaria were sorted, when possible, into 10 subsamples, each with a sufficient number of individuals to obtain a dry biomass of 1.5 mg for full-body isotopic analysis. Colonies meeting this criterion are denoted by 'b' (for builders) and 'i' (for inquilines) in Table 1. We used only workers for stable isotope analysis, not only because these are the most abundant individuals in a termite colony but also because they forage and feed other castes in the colony [33]. This procedure also eliminated any possible intercaste effects on isotopic values [34].
Each subsample was placed in a vial with distilled water and was immediately frozen until the analyses could be performed. Water was removed by freeze-drying for approximately 48 h to dehydrate the termites, prevent decomposition, and maintain the original ¹³C/¹²C and ¹⁵N/¹⁴N ratios. The subsamples were then ground with a mortar and pestle and sieved through a 100-mesh sieve. Carbon and nitrogen isotope ratios were measured for each subsample independently, using an isotope ratio mass spectrometer (IRMS, ANCA-GSL 20-20, SerCon, UK) in the Laboratory of Stable Isotopes, Soils Department, Federal University of Viçosa (UFV). The analytical precision was estimated to be ±0.1‰ for carbon and ±0.2‰ for nitrogen. The natural abundances of ¹³C and ¹⁵N are expressed as per mil (‰) deviations from an international standard (belemnite of the Pee Dee Formation in South Carolina, USA (PDB) for carbon, and atmospheric nitrogen (air) for nitrogen). The ratios of the heavy (¹³C or ¹⁵N) to the light (¹²C or ¹⁴N) isotope, typically corresponding to rare and abundant isotopes respectively, are hereafter referred to as "isotopic ratios" [35] and are denoted δ¹³C and δ¹⁵N.
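The δ notation described above can be reproduced numerically: δ = (R_sample/R_standard − 1) × 1000. The sketch below uses the conventional PDB and air standard ratios; the sample ratios are illustrative, not data from this study.

```python
# Convert raw heavy/light isotope ratios to delta notation (per mil, ‰):
# delta = (R_sample / R_standard - 1) * 1000
R_PDB_C = 0.0112372   # conventional 13C/12C ratio of the PDB standard
R_AIR_N = 0.0036765   # conventional 15N/14N ratio of atmospheric N2

def delta(r_sample: float, r_standard: float) -> float:
    """Per-mil deviation of a sample ratio from a standard ratio."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Hypothetical measured ratios for one termite subsample
d13C = delta(0.0109, R_PDB_C)   # negative: depleted in the heavy isotope
d15N = delta(0.0037, R_AIR_N)   # positive: enriched in the heavy isotope
print(round(d13C, 1), round(d15N, 1))
```

A sample with exactly the standard ratio gives δ = 0‰ by construction; detritivore termites typically sit at negative δ¹³C and small positive δ¹⁵N, as in this toy example.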
Data Analysis
Diet limits were statistically defined as Bayesian standard ellipses plotted around the pairs of δ¹³C and δ¹⁵N points representing each termite diet space, such ellipses being to bivariate data what the standard deviation is to univariate data. Because these ellipses define the statistical limits for the dimensions of each diet, overlapping ellipses indicate statistically indistinguishable diet spaces. Ellipses and associated metrics were calculated using the SIBER routines [36] from the siar package [37], under the R statistical computing environment [38].
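The paper's Bayesian ellipses were fitted with siber/siar in R; a frequentist analogue can be sketched directly, since the standard ellipse area (SEA) follows from the eigenvalues of the 2×2 covariance matrix of the δ¹³C-δ¹⁵N pairs, SEA = π√(λ₁λ₂). The isotope values below are simulated, not from this study.

```python
import numpy as np

def standard_ellipse_area(d13c, d15n):
    """Area of the standard ellipse for bivariate isotope data.

    SEA = pi * sqrt(lambda1 * lambda2), where lambda_i are the
    eigenvalues of the covariance matrix of the (d13C, d15N) pairs;
    equivalently pi * sqrt(det(cov)).
    """
    cov = np.cov(np.vstack([d13c, d15n]))
    eigvals = np.linalg.eigvalsh(cov)
    return np.pi * np.sqrt(eigvals[0] * eigvals[1])

# Simulated subsamples from one colony (hypothetical values, ‰)
rng = np.random.default_rng(0)
d13c = rng.normal(-26.0, 0.5, size=10)
d15n = rng.normal(4.0, 1.0, size=10)
print(round(standard_ellipse_area(d13c, d15n), 2))
```

Overlap between two such ellipses would then indicate statistically indistinguishable diet spaces, as used in the analyses that follow.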
Ellipses were estimated according to three distinct and complementary views of the dataset. Initially, a single ellipse was estimated for the whole community of cohabitants within a given termitarium, thereby allowing comparisons among whole termitaria across the sampled region. Overlapping ellipses would indicate similarity between diets among termitaria in spite of their spatial distribution over the sampled region. Then, the data for the full set of inquilines of a given builder species were pooled (across all termitaria) into a single ellipse, allowing comparison with the single ellipse of the respective builder species, also pooled across all termitaria. This allowed us to infer general patterns for the diet spaces of inquilines versus builders. Finally, individual ellipses were plotted for each cohabitant within each termitarium, thereby allowing diet comparisons between builders and their respective inquilines within a given termitarium.
To infer on interactive processes regulating diet segregation, we checked for correlation between the dimensions of the diet spaces of cohabitants within each termitarium. If inquilines dynamically expand their diets at the expense of their hosts' diets (or vice versa), the dimensions of their respective diet spaces across all termitaria should correlate negatively. Accordingly, the diet spaces of builders living alone should be larger than those of builders cohabiting with inquilines. Analyses were carried out using Generalized Linear Modelling under normal errors, followed by residual analyses to confirm model suitability and the choice of error distribution. Initially, a subset of the data containing only termitaria having both builders and inquilines was subjected to a model in which the area of the builder's ellipse (y-var) was correlated with the area of the ellipse formed by its inquilines taken together (x-var). The identity of the builder entered the model as a covariate, both as a single term and as part of the first-order interaction. Another independent model compared the average area of the builder's ellipse (y-var) between termitaria with and without inquilines (x-var). This was only possible for termitaria built by C. cyphergaster, because only for those were both instances of the x-variable available. Models were simplified by deleting non-significant terms (P > 0.05) from the initial model according to their complexity, starting with the most complex term, following recommendations by Crawley [39].
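The correlation test just described can be sketched as an ordinary least squares fit with an F-test on the slope. This is a simplified stand-in for the full GLM with the builder-identity covariate, and the ellipse areas below are made-up numbers for illustration only.

```python
import numpy as np
from scipy import stats

def slope_f_test(x, y):
    """F-test for a simple linear regression slope (normal errors).

    Compares the model y ~ x against the null model y ~ 1.
    Returns (F, p) with 1 and n-2 degrees of freedom.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    rss_full = np.sum((y - (slope * x + intercept)) ** 2)
    rss_null = np.sum((y - y.mean()) ** 2)
    F = (rss_null - rss_full) / (rss_full / (n - 2))
    p = stats.f.sf(F, 1, n - 2)
    return F, p

# Hypothetical ellipse areas (‰²): builders (y) vs their inquilines (x)
inquiline_area = [1.2, 0.8, 2.1, 1.5, 0.9, 1.8, 1.1]
builder_area = [3.0, 2.7, 3.2, 2.9, 3.1, 2.8, 3.0]
F, p = slope_f_test(inquiline_area, builder_area)
print(p > 0.05)  # True: no significant correlation in this toy data
```

For a single predictor this F-test is equivalent to the two-sided t-test on the regression slope, so it matches the kind of output reported in the Results.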
Species Distribution among Termitaria
A survey carried out in the study area revealed that termitaria were 4.4 ± 1.7 m (mean ± SD) apart from their four nearest neighbouring termitaria. This survey included, but was not restricted to, the termitaria studied here. Some 20 species of termite inquilines were found in the termitaria, of which 12 species presented enough individuals to be analysed isotopically. A total of 13 species occurred only once (Table 1) and seven occurred in two or more termitaria. Termitaria sheltered between zero and six inquiline species. Termitaria of V. heteropterus sheltered between one and six inquiline species at once, whereas termitaria of C. cyphergaster housed at most one inquiline species.
Heterotermes longiceps was the sole inquiline species found in termitaria of both builder species, but it was neither frequent nor abundant: only two very small colonies were recorded, the largest of which comprised approximately 40 individuals. The remaining 19 inquiline species were not shared between builder species, suggesting species-specific differences in the ability to coexist with other species. Supporting this trend, Inquilinitermes microcerus was found only in termitaria of C. cyphergaster and never in those of V. heteropterus. This is in line with previous reports that I. microcerus is an obligatory inquiline of C. cyphergaster [12].
The 14 termitaria studied housed 29 inquiline colonies along with the builder colony ('i'+'o' in Table 1), of which 18 colonies presented enough individuals to be analysed isotopically ('i' in Table 1). Termitaria housing multiple colonies showed no evidence of more than a single colony of a given cohabitant species.
Diet Segregation
Termitaria overlapped each other in the overall diet space of their communities of cohabitants (Fig. 1), indicating that, on average, communities exploited rather similar diets despite being confined to distinct termitaria. A single C. cyphergaster nest did not overlap the others (Fig. 1, leftmost ellipse, corresponding to nest c7). This nest was devoid of inquilines, and its detachment was not strong enough to disrupt the non-overlapping trend presented by the other nests, as shown in Fig. 2, "builder alone". The diets of builder and inquiline species never overlapped (Figs. 2, 3 and 4). This diet segregation tended to be driven mainly by the δ¹⁵N dimension, with inquilines occupying a higher trophic position than builders (Fig. 2), albeit still within the detritivore level (horizontal dotted lines in all figures denote changes in trophic position, as it is generally agreed that trophically distinct organisms differ by about 3‰ in δ¹⁵N [40]).
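The 3‰-per-trophic-level convention used to read the δ¹⁵N axis reduces to a simple calculation; the colony means below are hypothetical values chosen for illustration.

```python
def trophic_steps(d15n_a: float, d15n_b: float, step: float = 3.0) -> float:
    """Number of trophic steps separating two mean d15N values,
    assuming the conventional ~3 per-mil enrichment per trophic level."""
    return abs(d15n_a - d15n_b) / step

# Hypothetical colony means (per mil): a builder vs one of its inquilines
builder_d15n, inquiline_d15n = 2.5, 9.1
steps = trophic_steps(builder_d15n, inquiline_d15n)
print(steps > 1.0)  # more than one full trophic position apart
```

By this rule, two detritivore colonies whose mean δ¹⁵N values differ by more than about 3‰ occupy distinct trophic positions, which is how the dotted lines in the figures are interpreted.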
There was also a general trend for diet segregation among inquiline species within termitaria, with only a single case of overlap out of 14 nests (Fig. 3, v3). Diet segregation among inquilines was also most obvious in the δ¹⁵N dimension (see Table 1 for termite species identities).
Diet Shrinkage
There was no correlation between the area of the builder's diet space and that of its inquilines (F[3,7] = 1.3153, P = 0.3432). Accordingly, the diet areas of C. cyphergaster builders hosting inquilines were on average as large as the areas of builders hosting no inquilines (F[1,5] = 0.021, P = 0.8905). This seems to lend support to the notion that inquilines and builders do not interfere with each other in terms of diet.
Discussion
Differentiation in resource use has been considered one of the main mechanisms facilitating species coexistence (for a comprehensive historical account, see [41]), with examples including communities of plants [42,43], fish [44], and insects [45,46]. In communities of termites, interactions with respect to food resources have been identified as an important regulating factor; examples include species assemblages from the African savannah [47,48] and the South American tropical rainforest [32,34].
While dietary shifts seem to affect the coexistence of termite species in environments delimited by permeable borders, patterns of interaction with respect to diet are virtually unknown for termite species assemblages circumscribed by discrete physical barriers (but see [49] for competing insular termite populations), especially those cohabiting the same termitarium.
Such spatially confined populations represent suitable scenarios for studying dietary shifts as determinants of species coexistence. Because barriers restrict the spatial adjustments that could preclude species interactions, the importance of dietary adjustments may in turn be amplified. In fact, for the termite builder-inquiline assemblages studied here, feeding resource segregation appears to be typical, if not determinant, of cohabitation in the same termitarium. Diet spaces of inquilines never overlapped their hosts' spaces at both regional (i.e. the sampled environment) and local (termitarium) scales (Figs. 2, 3 and 4). Mechanisms behind this diet segregation could include (i) local differences in the suitability and availability of resources, including predation constraints [50,51], so that each set of cohabitants in a termitarium has access to a particular diet; and (ii) local-scale interspecific trade-offs [52] leading to diet partitioning along a trophic continuum within the termitarium.
The fact that inquiline-bearing termitaria presented strong overlap in the overall diets of their cohabiting communities (Fig. 1) suggests that cohabitants had access to similar resources, which is reinforced by the close proximity of all nests (on average 4.4 m apart). Additionally, the consistent patterns of non-overlapping diets between builders and inquilines, and among inquilines, across all termitaria (Figs. 2, 3 and 4) seem to weaken the hypothesis of local differences in resources in favour of the trade-off hypothesis, even though these are not necessarily mutually exclusive. Segregation in resource use on its own does not imply species interactivity, since species can be assembled by chance events [53]. However, as well as the consistent differences observed in their actual diets (Figs. 3 and 4), the cohabiting termites studied here did not seem to be assembled at random (Table 1). Rather, inquiline species of V. heteropterus did not seem able to live in termitaria of C. cyphergaster, and vice versa. This is reinforced by the presence of the obligatory inquiline I. microcerus [12,29,54], which was found only in C. cyphergaster nests. In fact, occupation of C. cyphergaster nests by I. microcerus is not believed to occur at random but to depend on host/nest features [16]. Thus, inquiline occupation in these termitaria is likely to be related to the intrinsic characteristics of the species involved rather than being a simple chance event.
Diet segregation under such a deterministic scenario could result from feeding resource competition, but the absence of overlap between the overall diet spaces of inquilines and builders challenges this idea, because even when not sharing the same termitarium, and hence not subject to potential conflicts, the diets of inquiline species never overlapped those of host species (Fig. 2). Indeed, the mere fact that inquilines exploited distinct diets makes it risky to advocate some link between the observed segregation and contemporary competitive interactions. It seems, therefore, that dietary partitioning by cohabitants was not majorly driven by interactive constraints, a hypothesis also supported by the absence of correlation between the diet space areas of builder and inquiline species within termitaria (F[3,7] = 1.3153, P = 0.3432), where interactions would be highly likely. This is further supported by the fact that the average diet areas of C. cyphergaster builders did not expand significantly in the absence of inquilines (F[1,5] = 0.021, P = 0.8905). In other words, interspecific trade-offs as a force driving termite inquilinism in this system would be more likely to have occurred, if at all, in the evolutionary past rather than in the contemporary ecological time frame (the 'ghost of competition past' [55]). An alternative and perhaps more conservative view is that current inquiline species are descended from specialist lineages and never conflicted with the dietary requirements of their hosts.
Despite not being able to distinguish between the hypotheses of past interspecific trade-offs versus pre-adaptations favouring specialization, our data reinforce both hypotheses over a hypothesis focusing on current competition. Although still within the detritivore trophic level, inquiline and builder species never shared the same trophic position (Figs. 3 and 4) and were sometimes as much as four full positions apart (taking each trophic step as 3‰ in δ¹⁵N, as in de Visser et al. [10]). Since inquiline species are obviously not predators but detritivores, such disparate trophic positions may indicate that they in fact feed on materials far more decayed than those used by the builder species. These could include stored organic material, the host's faeces and dead bodies, and the lining of the termitarium walls, which is also composed of faeces. Although still open to investigation, this assumption is in line, at least regarding I. microcerus, with previous reports by Noirot [56] and Mathews [12] and recent evidence by Bourguignon et al. [32].
Diet differences were also observed among most inquiline species cohabiting the same termitarium, those that actually differed being arranged in stepwise trophic positions. It is possible that a trophic chain was established, with one inquiline species feeding on the by-products of its host, another feeding on the excreta and remains of this inquiline, and so on. Alternatively, inquiline species could selectively feed on distinct parts of the nest, and thus would have distinct δ¹³C and δ¹⁵N inputs. Termites are indeed able to feed selectively in the field [51] and can select soil particles from distinct layers to build specific mound structures, which in turn exhibit distinct C and N contents [57], most likely with characteristic δ¹³C and δ¹⁵N values. This would explain not only the consistent differences observed between builder and inquiline species in the δ¹⁵N dimensions of their diets, but also the fact that the diets of inquilines, albeit still distinct, differed sometimes in a single dimension and sometimes in both. In other words, under this scenario, inquiline species and their host would differ less markedly in δ¹³C than in δ¹⁵N (Figs. 1, 3 and 4) because, by feeding on specific parts of the nest, inquiline species have access to a subset of the carbon resources collated by their host, along with selected soil particles that are trophically distinct because they are cemented with the host's faeces. The possibility that a given inquiline species could also differ from the builder and other inquiline species by foraging for distinct food outside the nest [12] remains to be considered. All in all, this would only reinforce the diet segregation patterns observed here.
In summary, we found evidence that, at least for the system at hand, cohabitation of termite species in the same termitarium was related to diet segregation that did not seem to be majorly constrained by interspecific interactions for food. Rather, inquilines exploited diets not used by their host, thereby circumventing conflicts over use of feeding resources.
Sal-type ABC-F proteins: intrinsic and common mediators of pleuromutilin resistance by target protection in staphylococci
Abstract The first member of the pleuromutilin (PLM) class suitable for systemic antibacterial chemotherapy in humans recently entered clinical use, underscoring the need to better understand mechanisms of PLM resistance in disease-causing bacterial genera. Of the proteins reported to mediate PLM resistance in staphylococci, the least-well studied to date is Sal(A), a putative ABC-F NTPase that—by analogy to other proteins of this type—may act to protect the ribosome from PLMs. Here, we establish the importance of Sal proteins as a common source of PLM resistance across multiple species of staphylococci. Sal(A) is revealed as but one member of a larger group of Sal-type ABC-F proteins that vary considerably in their ability to mediate resistance to PLMs and other antibiotics. We find that specific sal genes are intrinsic to particular staphylococcal species, and show that this gene family is likely ancestral to the genus Staphylococcus. Finally, we solve the cryo-EM structure of a representative Sal-type protein (Sal(B)) in complex with the staphylococcal 70S ribosome, revealing that Sal-type proteins bind into the E site to mediate target protection, likely by displacing PLMs and other antibiotics via an allosteric mechanism.
INTRODUCTION
Pleuromutilin (PLM) antibiotics inhibit bacterial protein synthesis by binding to the large ribosomal subunit at the peptidyltransferase centre (PTC) and preventing the requisite positioning of the A- and P-site tRNAs for peptide bond formation (1). The PLM class has a long history (>40 years) of use in veterinary medicine for the prevention and treatment of bacterial infection. Since 2007, PLMs have also been in human use in the form of retapamulin, which is approved for topical application to treat superficial infections caused by Staphylococcus aureus and other Gram-positive pathogens (2). In 2019, lefamulin became the first systemic PLM to be approved in humans, administered either intravenously or orally for the treatment of community-acquired bacterial pneumonia (3).
Growing use of this antibiotic class in human medicine underscores the need for a more complete understanding of the nature of PLM resistance. At present, resistance to PLMs appears to be relatively uncommon amongst the major target genera against which retapamulin and lefamulin are deployed, such as the staphylococci; prevalence studies in S. aureus and non-aureus staphylococci report rates for resistance of 0.1-2.6% (4)(5)(6). Nevertheless, several PLM resistance determinants have been identified in this genus. For example, the cfr gene confers resistance through methylation of 23S ribosomal RNA, a modification that serves to protect the ribosome from PLM and several other antibiotic classes whose binding sites lie in close proximity (phenicols, lincosamides, group A streptogramins, and oxazolidinones) (7). PLM resistance can also result from mutational change in the ribosome, involving amino acid substitutions in ribosomal proteins L3 and L4, or nucleotide substitutions in 23S rRNA (8,9). However, the above resistance mechanisms are not widespread in staphylococci, and do not therefore represent common causes of clinically-significant PLM resistance at present (6,10).
Of greater importance for PLM resistance in staphylococci, especially in view of their greater collective prevalence, are several antibiotic resistance (ARE) ABC-F proteins, exemplified by the Vga- and Lsa groups (11). In contrast to the more canonical members of the ATP-Binding Cassette (ABC) superfamily that participate in drug resistance, ABC-F proteins such as these do not transport antibiotic across membranes. Instead, the Vga- and Lsa-type proteins mediate resistance to PLMs and other protein synthesis inhibitors by target protection; they bind to the ribosome to drive antibiotic release (12,13), although the precise mechanism by which the latter occurs remains to be clarified. Of these two groups, the vga-type genes currently appear to represent the major PLM-resistance determinants in S. aureus and some non-aureus staphylococci (6,14), and are typically associated with mobile genetic elements such as plasmids that facilitate their spread (15). Of the lsa-type genes, lsa(E) was the first to be characterised in staphylococci, and several other variants of this determinant have subsequently been identified in human and veterinary S. aureus isolates, again typically in association with plasmids (16). A further ARE ABC-F member known to mediate PLM resistance in staphylococci is a relatively poorly-characterized protein known as Sal(A) (17), which belongs to the subfamily of ABC-Fs designated ARE6 (18). The sal(A) gene was first identified as a cause of resistance to lincosamides and group A streptogramins in Staphylococcus sciuri (17), and only later shown to be involved in PLM resistance (19). By analogy with the Vga- and Lsa-type proteins, Sal proteins may physically associate with the ribosome to protect it from antibiotics, though this has to date not been demonstrated.
Here, we establish the importance of sal-type genes as a common source of intrinsic PLM resistance across multiple species of staphylococci. Sal(A) is revealed as but one member of a larger group of Sal-type ABC-F proteins that is likely ancestral to the genus, and which shows considerable variation in the ability to mediate resistance to antibiotics. We solve the cryo-EM structure of a Sal-type protein in complex with the staphylococcal ribosome, confirming that Sal-type proteins do indeed mediate target protection, likely by displacing PLMs and other antibiotics from the ribosome via an allosteric mechanism.
Bacteria, culture conditions and susceptibility testing
The collection of non-aureus staphylococci used in this study (n = 363) comprised 214 human isolates recovered from hospitals in the UK, Canada and Italy between 2012 and 2016, and 149 veterinary isolates obtained from the Royal Veterinary College (London, UK). Bacteria were routinely cultured at 37°C for 18-24 h using cation-adjusted Mueller Hinton agar (MHA) or broth (MHB) (Sigma-Aldrich). To detect PLM resistance, bacteria (10⁴ CFU) were spotted onto MHA containing retapamulin (AdooQ BioScience) at 2 µg/ml (20). Strains that grew on these plates were subjected to susceptibility determinations with retapamulin and other PLMs (tiamulin [Sigma-Aldrich] and lefamulin [DC Chemicals]) by broth microdilution according to CLSI methodology (21). PCR amplification and DNA sequencing of the 16S rDNA (22) or the rpoB gene (23) were employed for species identification of resistant isolates.
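Reading a broth microdilution series reduces to finding the lowest antibiotic concentration that prevents visible growth. A minimal sketch, with a hypothetical two-fold dilution series (not data from this study):

```python
def mic(results):
    """MIC: lowest antibiotic concentration with no visible growth.

    `results` maps concentration (ug/ml) -> growth observed (bool).
    Returns None if growth occurs at every concentration tested.
    """
    inhibitory = [conc for conc, grew in results.items() if not grew]
    return min(inhibitory) if inhibitory else None

# Hypothetical two-fold dilution series for one PLM against one isolate
series = {0.25: True, 0.5: True, 1.0: True, 2.0: False, 4.0: False, 8.0: False}
print(mic(series))  # 2.0
```

In practice a CLSI read also requires the series to be monotonic (no skipped wells); the sketch assumes a clean series for clarity.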
Determining the genetic basis for PLM resistance
Retapamulin-resistant isolates were screened for the presence of known staphylococcal PLM-resistance determinants by PCR using GoTaq PCR mastermix (Promega) and oligonucleotide primers (Eurofins Genomics) designed to generate amplicons from vga-, lsa-, and sal-type genes (Supplementary Table S1). DNA sequencing of the resulting amplicons was performed (i) to confirm that they correspond to the resistance gene in question and (ii) to detect sequence variants.
Where appropriate, strains were subjected to whole genome sequencing (WGS). Genomic DNA was isolated using the PurElute™ Bacterial Genomic Purification Kit (Edge BioSystems) essentially according to the manufacturer's instructions, though bacteria were first resuspended in spheroblast buffer containing lysostaphin (100 µg/ml) and incubated at 37°C for 45 min. WGS was performed on the Illumina platform at the Next Generation Sequencing Facility (St. James's Hospital, University of Leeds) or at MicrobesNG (www.microbesng.uk), and DNA sequence data were assembled using CLC Genomic Workbench (CLC Bio) and annotated using RAST (www.rast.theseed.org).
Confirmation and characterization of PLM resistance genes
Putative PLM resistance genes identified in this study were introduced into a PLM-susceptible S. aureus host to assess their ability to confer resistance. DNA fragments corresponding to these genes were either generated by PCR amplification with Phusion® High-Fidelity DNA Polymerase (NEB) using oligonucleotide primers described in Supplementary Table S1, or were obtained by synthesis (Genewiz). PCR amplicons and synthesized DNA fragments were digested with KpnI and EcoRI (NEB) to enable directional ligation into the similarly-digested expression plasmid pRMC2 (24) for transformation of Escherichia coli XL10-Gold (Agilent Technologies). DNA sequence-verified constructs were then introduced into S. aureus RN4220 (25) by electroporation (12). Transformants were grown in cation-adjusted MHB at 37°C with vigorous aeration to an OD625 of 0.6, and expression induced with anhydrotetracycline hydrochloride (ATc) (Sigma-Aldrich) at a final concentration of 100 ng/ml for 3 h. Susceptibility testing of these induced cultures was carried out as above, using MHB supplemented with ATc (100 ng/ml).
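Reading an MIC from a CLSI-style two-fold broth-microdilution series, as used for the susceptibility testing above, reduces to finding the lowest concentration at which no visible growth occurs. A minimal sketch with made-up growth calls:

```python
# Illustrative MIC reading from a two-fold dilution series; growth calls
# below are invented, not experimental data.

def read_mic(series):
    """series: iterable of (concentration_ug_ml, grew) well observations.
    Returns the MIC (lowest concentration with no growth), or None if the
    organism grew at every tested concentration (MIC above tested range)."""
    for conc, grew in sorted(series):
        if not grew:
            return conc
    return None

# A hypothetical dilution series for one isolate.
wells = [(0.25, True), (0.5, True), (1.0, True), (2.0, True),
         (4.0, False), (8.0, False), (16.0, False)]
print(read_mic(wells))  # 4.0
```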
Sequence alignment, phylogenetic analysis and gene neighbourhood analysis
Staphylococcal sequences in the ARE6 (Sal) subfamily were extracted from an existing database of ABC-F proteins (18). Additional Sal and cysteine desulfurase (the gene immediately downstream of sal(A)) sequences were identified in complete staphylococcal genomes deposited in NCBI using, respectively, HMMER (26) (following the strategy described in (18)) and BLASTP with an E-value cut-off of 1e-100 (27). Other ARE ABC-F proteins were retrieved from the CARD database (28). Sequences were aligned using MAFFT version v6.861b with default settings (29). Maximum-likelihood phylogenetic analysis was carried out with RAxML version 8.2.12 (30) on the CIPRES Science Gateway (31) with the LG model of substitution and 100 bootstrap replicates. Alignment positions with >50% gaps were removed, as was the ambiguously aligned C-terminal domain, prior to phylogenetic analysis. For gene neighbourhood analysis, FlaGs (Flanking Genes) (32) was run with default settings, with six flanking genes either side of the query gene encoding either Sal or cysteine desulfurase.
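The alignment-trimming step described above (removing columns with >50% gap characters before tree building) can be sketched as follows; the toy alignment is invented, and a real run would operate on the MAFFT output.

```python
# Illustrative sketch of gap-column filtering prior to phylogenetics;
# not the authors' pipeline code.

def drop_gappy_columns(seqs, max_gap_frac=0.5):
    """Remove alignment columns whose gap fraction exceeds max_gap_frac.
    seqs: list of equal-length aligned sequences with '-' as the gap char."""
    ncol = len(seqs[0])
    keep = [j for j in range(ncol)
            if sum(s[j] == "-" for s in seqs) / len(seqs) <= max_gap_frac]
    return ["".join(s[j] for j in keep) for s in seqs]

# Toy alignment: column 3 (0-based 2) is 75% gaps and gets dropped.
aln = ["MK-AL", "MK--L", "MK--L", "MKRAL"]
print(drop_gappy_columns(aln))  # ['MKAL', 'MK-L', 'MK-L', 'MKAL']
```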
Generation and purification of Sal(B)•ribosome complexes
A DNA fragment encoding the EQ2 mutant of the Sal(B) protein fused with a C-terminal FLAG3 tag was obtained by synthesis (Genewiz), and introduced into S. aureus SH1000 (33,34) on plasmid pRMC2, essentially as described above. A 400 ml culture of this strain was grown at 37°C in LB media supplemented with 20 μg/ml chloramphenicol to an OD600 of ∼0.5, before inducing expression of the protein with 100 ng/ml ATc for 60 min. Bacteria were harvested by centrifugation and the resulting cell pellet resuspended
Cryo-EM structure determination of the Sal(B)•ribosome complex
A Quantifoil grid (R1.2/1.3, 400 Cu mesh, with a 2 nm carbon overlay) was glow discharged (Quorum GloQube; 10 mA, 30 s) and then transferred to the humidity- and temperature-controlled chamber of a Vitrobot Mark IV (Thermo Fisher Scientific; 100% humidity, 4°C). An aliquot (3 μl) of the Sal(B)•ribosome elution fraction was applied to the grid, excess sample immediately removed by blotting, and vitrification performed by plunging the grid into liquid nitrogen-cooled liquid ethane.
Data were collected on a Thermo Fisher Scientific Titan Krios electron microscope (Astbury Biostructure Laboratory, University of Leeds) at 300 kV. The sample was exposed to an electron dose of 60 e⁻/Å² across 8.0 s. 847 micrograph movies were recorded by a Gatan BioQuantum-K2 detector in counting mode, split into frames which each received a dose of 1.20 e⁻/Å². A nominal magnification of 130 000× was applied, resulting in a final object sampling of 1.07 Å/pixel. A defocus range of −0.8 to −2.6 μm was used.
The cryo-EM image processing pipeline is summarised in Supplementary Figure S2. Drift-corrected and dose-corrected averages of each movie were created using MOTIONCOR2 (36), and the contrast transfer functions estimated using Gctf (37). All subsequent image processing steps were carried out using RELION 3.1 (38). 99 615 particles were picked using Laplacian-of-Gaussian autopicking, and extracted with 4× binning. Reference-free 2D classification was used to prune this dataset by removing particles contributing to lowly populated classes lacking high-resolution features. The remaining 67 391 particles were re-extracted without binning and subjected to 3D classification to remove further junk particles, leaving 67 139 particles that were aligned and refined in 3D using a 60 Å low-pass filtered 3D class as a starting model. Rounds of Bayesian polishing and CTF refinement were performed until the resolution of the map stopped improving. 3D classification without particle alignment was performed to remove further poorly-aligned particles, leaving 64 101 particles. Focussed 3D classification was then performed using a mask around the E and P sites of the ribosome to yield classes containing E- and P-site density. 59 889 particles were assigned to these classes, which were aligned and refined in 3D, yielding a reconstruction with a global resolution of 2.9 Å after solvent masking (Supplementary Figure S3A, C). Multibody refinement was performed using soft extended masks to define the 50S, 30S body and 30S head as rigid bodies, yielding reconstructions for the 50S, 30S body and 30S head at estimated resolutions of 2.8, 3.0 and 3.0 Å, respectively (Supplementary Figure S3B, D-F). Final resolutions were estimated using the gold-standard Fourier shell correlation (FSC = 0.143) criterion.
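The gold-standard criterion above declares the resolution as the spatial-frequency shell at which the correlation between independently refined half-maps falls below 0.143. A toy numpy sketch of computing an FSC curve, using small synthetic volumes rather than the deposited maps (`fsc_curve` is our own illustrative helper, not RELION code):

```python
# Illustrative FSC computation between two synthetic half-maps.
import numpy as np

def fsc_curve(half1, half2, n_shells=16):
    """Fourier shell correlation between two cubic volumes, one value
    per radial frequency shell (DC shell first, Nyquist shell last)."""
    f1, f2 = np.fft.fftn(half1), np.fft.fftn(half2)
    n = half1.shape[0]
    freq = np.fft.fftfreq(n)
    gx, gy, gz = np.meshgrid(freq, freq, freq, indexing="ij")
    r = np.sqrt(gx**2 + gy**2 + gz**2)
    shell = np.minimum((r / r.max() * n_shells).astype(int), n_shells - 1)
    curve = []
    for s in range(n_shells):
        m = shell == s
        num = np.real(np.sum(f1[m] * np.conj(f2[m])))
        den = np.sqrt(np.sum(np.abs(f1[m]) ** 2) * np.sum(np.abs(f2[m]) ** 2))
        curve.append(num / den if den > 0 else 0.0)
    return curve

# Two half-maps sharing the same signal plus independent low-level noise.
rng = np.random.default_rng(0)
signal = rng.normal(size=(32, 32, 32))
h1 = signal + 0.1 * rng.normal(size=signal.shape)
h2 = signal + 0.1 * rng.normal(size=signal.shape)
curve = fsc_curve(h1, h2)
# With near-identical half-maps the FSC stays above 0.143 in every shell,
# i.e. the "resolution" extends to the Nyquist limit of the toy volume.
print(min(curve) > 0.143)
```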
The sharpened reconstructions were low-pass filtered according to local resolution, estimated using RELION's own implementation. These maps were used to make figures containing maps coloured by local resolution and for model building and refinement. Specifically, the consensus map was used to build models for the 50S subunit rRNA and ribosomal proteins, Sal(B) and the P-site tRNA, and the 30S body and 30S head multibody maps used to build models for the 30S subunit rRNA and ribosomal proteins.
Atomic model building of the Sal(B)•ribosome complex
The cryo-EM structure of the S. aureus ribosome (PDB 6S0X) (39) was used as a starting model for the ribosomal proteins and rRNAs, the E. coli P-site initiator fMet-tRNAi (PDB 5MDZ) (40) as a starting model for the distorted P-site tRNA, and a homology model was generated for EQ2-Sal(B) using the SWISS-MODEL server (41). These were rigid-body fitted into the cryo-EM reconstructions using UCSF Chimera (42), and the P-site tRNA was mutated to S. aureus fMet-tRNAi. A short mRNA was built de novo at the P-site codon. COOT (43) was used to manually adjust the models to improve map fit and fix rotamer and Ramachandran outliers, before iterative rounds of model refinement and manual model editing were carried out using PHENIX real-space refine (44) and COOT, respectively. Note that the model of the whole ribosome was kept intact, and the 50S, 30S body and 30S head regions were each refined into the appropriate consensus or multibody reconstruction whilst keeping the rest of the model fixed. Regions where the protein or rRNA backbone could not be traced were deleted. The model was validated using MolProbity (45) within PHENIX. The resolution of the model was estimated as 3.0 Å, according to the model-vs-map FSC = 0.5 criterion (Supplementary Figure S3G).
Atomic model analysis and figure making
Figures of atomic models and cryo-EM maps were made using UCSF ChimeraX. Virtual amino acid mutation was carried out using the 'swapaa' function in ChimeraX, which picks the best rotamer based on clash score, hydrogen bonding and prevalence according to the Dunbrack library (46,47). Cryo-EM consensus and multibody refinement maps used for model building are available in the EMDB (EMD-13191), along with half-maps and masks. The atomic model is available in the PDB (7P48).
Sal(B) mutagenesis
DNA corresponding to sal genes containing mutations of interest was obtained from Genewiz. Cloning of these DNA fragments in S. aureus RN4220 using plasmid pRMC2, and susceptibility testing of the resulting constructs, was performed as described above.
sal-type determinants as a common source of PLM resistance in non-aureus staphylococci
The emphasis in studies of PLM resistance in staphylococci has to date been on S. aureus; the starting point for the present study was therefore to explore the nature of PLM resistance in other members of this genus, which are collectively an important cause of infection in humans and animals. Of a collection of 363 non-aureus staphylococci, 53 (∼15%) were found to be capable of growing on agar containing the PLM retapamulin at a concentration corresponding to the proposed epidemiological cut-off (ECOFF) value for resistance in staphylococci (2 μg/ml) (20). The majority of these resistant isolates originated from veterinary sources (n = 41), whilst the remainder were isolated from humans. Susceptibility testing established that the minimum inhibitory concentration (MIC) of retapamulin for these isolates ranged from 2 to 32 μg/ml, with the majority (∼70%) associated with an MIC of 8 μg/ml (Figure 1). To assess whether these isolates also exhibited reduced susceptibility to other members of the PLM class, we performed susceptibility testing with tiamulin and lefamulin (Figure 1). On the basis of suggested breakpoint/ECOFF values for tiamulin (2 μg/ml) (48) and lefamulin (0.25 μg/ml) (6), ∼96% of the retapamulin-resistant isolates exhibited cross-resistance to both of these agents.
The genetic basis for resistance in these isolates was investigated by PCR amplification using oligonucleotide primers designed to amplify known PLM resistance determinants. Six isolates yielded a PCR product with primers directed to vga(A), and subsequent DNA sequencing of these amplicons revealed that they corresponded to vga(A)LC or closely-related variants thereof (data not shown). For the remaining 47 isolates, a PCR amplicon was generated with primers targeted to sal(A). Sanger sequencing of these amplicons confirmed that they corresponded to the sal(A) gene and closely-related variants (>98% identity in the encoded protein) in 28 of the 47 PCR-positive isolates, all of which were subsequently determined to be S. sciuri. Of the remaining 19 isolates, three were Staphylococcus lentus strains that all harboured a near-identical sal-type gene exhibiting considerable polymorphism relative to sal(A); across the length of the ∼325 bp amplicon generated, the sequence showed only ∼68% predicted amino acid identity to Sal(A). To obtain the full sequence of this sal gene, a representative isolate (S. lentus B3) was subjected to WGS (sequence deposited under NCBI Accession number JAHWBZ000000000). The complete sal determinant encodes a protein (MBW0770001) that exhibits 68% identity to Sal(A). Based on the precedent of 80% amino acid identity representing the dividing line between a known resistance determinant and a novel one (49), we designated this resistance protein Sal(B) (Supplementary Figure S1). The remaining 16 isolates that yielded a PCR product with the sal(A) primers, all of which were determined to be Staphylococcus fleurettii, carried a gene encoding a Sal protein distinct from both Sal(A) and Sal(B). The full sequence was obtained by WGS of a representative isolate (S. fleurettii A6; GenBank Accession JAAQPD000000000).
The encoded protein (MBW0764195) exhibits 71% and 68% identity to Sal(A) and Sal(B), respectively, and was designated Sal(C) (Supplementary Figure S1).
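The 80% identity convention applied above can be expressed as a simple decision rule. The percentages below mirror those quoted in the text, and `classify` is our own illustrative helper, not software used in the study.

```python
# Illustrative sketch of the naming rule: a protein sharing <80% amino acid
# identity with all previously named determinants is treated as novel (49).

def classify(identities_to_named, threshold=80.0):
    """identities_to_named: {existing_name: percent identity}.
    Returns the name of a matching determinant, or 'novel' if no existing
    determinant reaches the identity threshold."""
    best = max(identities_to_named, key=identities_to_named.get)
    return best if identities_to_named[best] >= threshold else "novel"

print(classify({"Sal(A)": 98.0}))                  # 'Sal(A)'  (a variant)
print(classify({"Sal(A)": 68.0}))                  # 'novel'   (-> Sal(B))
print(classify({"Sal(A)": 71.0, "Sal(B)": 68.0}))  # 'novel'   (-> Sal(C))
```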
To confirm that these novel sal-type genes are capable of conferring the PLM resistance phenotype detected in the strains that harbour them, regulated expression constructs carrying these determinants were introduced into the PLM-susceptible cloning host, S. aureus RN4220. Susceptibility testing of the resulting strains established that sal(B) and sal(C) conferred substantial reductions in susceptibility to PLMs that were comparable to or greater than those observed for an equivalent construct expressing sal(A) (Table 1). In addition to PLM resistance, sal(A) is reported to confer resistance to lincosamides and group A streptogramins, but does not impact susceptibility to macrolides or group B streptogramins; this same resistance profile was also observed for sal(B) and sal(C) (Table 1).
In silico detection of further novel sal-type determinants
The finding that several distinct sal determinants confer PLM resistance among the strains examined here led us to investigate whether further, novel sal-type PLM resistance genes might also exist within this genus. BLAST searching of the deposited genome sequence data for non-aureus staphylococci identified a range of additional homologues, all of which have amino acid sequence identities to Sal(A) of <50%. Five diverse representatives were selected from these homologues for further analysis: WP 082039181.1 from Staphylococcus gallinarum (45% identity to Sal(A)), WP 107546009.1 from Staphylococcus xylosus (41% identity to Sal(A)), WP 096809342.1 from Staphylococcus nepalensis (43% identity to Sal(A)), and homologues from Staphylococcus equorum and Staphylococcus saprophyticus. The selected genes were obtained by synthesis and introduced into S. aureus RN4220 for susceptibility testing as described above.
(Figure 1. Pleuromutilin susceptibility profiles of the 53 staphylococcal isolates identified in this study that exhibit reduced susceptibility to retapamulin. Retapamulin is shown in black, tiamulin in dark grey, and lefamulin in light grey.)
The sal-type gene from S. gallinarum conferred a comparable reduction in PLM susceptibility to that associated with sal(A), and was given the designation sal(D). Intriguingly, Sal(D) was less effective in reducing susceptibility to lincosamides compared with Sal(A)-(C); this protein mediated only a 4-fold decrease in lincomycin susceptibility (4-8-fold lower than that seen for the other sal genes), and had no effect on clindamycin susceptibility ( Table 1). The gene from S. nepalensis also conferred a reduction in PLM susceptibility, but to a lesser degree than generally seen for the other sal genes tested, and had no apparent effect on susceptibility to lincosamides or group A streptogramins. This determinant was given the designation sal(E) ( Table 1). None of the other three sal-type genes examined caused a change in susceptibility to the antibiotics tested. Collectively, we have therefore distinguished five Sal-type ABC-F proteins showing considerable sequence diversity (sequence alignment in Supplementary Figure S1) that all mediate PLM resistance, but which vary in the level of protection they offer against PLMs, and in their ability to mediate resistance to other antibiotic classes. Furthermore, it appears that some sal-type genes do not mediate antibiotic resistance.
Phylogenetic analysis and genetic environment of sal-type determinants in staphylococci
According to a recently established classification scheme for ABC-F proteins, Sal(A) resides within a subfamily designated ARE6 (18). Phylogenetic analysis shows that this subfamily comprises a distinct group with a bipartite structure (Figure 2), and confirms that all sal-type determinants identified in this study, including those with only low sequence identity to Sal(A), are true members of the subfamily. Reflecting the observation above that Sal(D)-(E) do not exhibit the classical Sal(A) antibiotic resistance profile, these two proteins cluster in a clade distinct from Sal(A)-(C) (Figure 2). As Sal proteins are not universally encoded in staphylococcal genomes, we examined whether this might be indicative of mobility by comparing the genomic regions in which they are encoded. Genomic context is well conserved around sal genes, with only minor gene neighbourhood differences between sal(A)-(C)-type and sal(D)-(E)-type genes (Figure 3), as has been observed previously for sal(A) (16). To rule out the possibility that a larger region of the genome containing sal is being horizontally transferred (e.g. on a transposon), we retrieved protein homologues encoded by the downstream gene (cysteine desulfurase) and ran neighbourhood analysis on these sequences. Cysteine desulfurase is a near-universal protein encoded within the genomes of staphylococci, the gene for which resides in the same well-conserved gene cluster with or without sal as the upstream gene (Figure 3) (17). This implies that sal genes are not mobilising, supporting the suggestion made previously regarding sal(A) that these genes are intrinsic to the species in which they are found (17). Furthermore, the finding that Sal phylogeny (Figure 2) is congruent with species phylogeny (Figure 3) supports the idea that sal genes are not routinely spread among staphylococci by horizontal gene transfer.
Rather, it implies that sal genes are ancestral to the genus, and that the discontinuous distribution of sal-type genes across the staphylococci is due to gene loss. In fact, this gene loss appears to have happened, and still to be happening, independently in multiple lineages; in some strains that carry sal, the gene has become pseudogenised (Figure 3).
Structural and functional insights into Sal-type proteins
To begin to explore the molecular detail of Sal-type antibiotic resistance and determine whether Sal-type proteins mediate resistance in a manner analogous to other ARE ABC-F proteins (i.e. by ribosomal protection), we first examined whether we could detect an interaction between a representative Sal-type protein (Sal(B)) and the staphylococcal ribosome. It has been shown for other ABC-F proteins that, when defective in NTPase activity, they are unable to dissociate from the ribosome once bound (12,50-52). On that basis, we engineered an NTPase-deficient (EQ2) mutant of Sal(B), which was expressed in S. aureus; affinity purification of this FLAG-tagged Sal(B) from cell lysates successfully pulled down 70S ribosomes, as determined by negative-stain EM (data not shown).
The structure of the resulting Sal(B)•ribosome complex was subsequently solved by cryo-EM to 2.9 Å, and reveals a globular density bound to the S. aureus ribosome with a protrusion of density extending towards the P-site tRNA (Figure 4). We ascribed this additional density to Sal(B). The local resolution for this Sal(B) density ranged from 2.6 to 4.6 Å, and for the P-site tRNA from 2.8 to 3.4 Å (Supplementary Figure S4). This allowed an unambiguous atomic model to be calculated for the entire region, with the exception of residues 80-109 of Sal(B), which interact with the L1 stalk of the ribosome. Map and model details and validation statistics are given in Table 2.
Sal(B) comprises an N-terminal nucleotide-binding domain (NBD1: Figure 5, in blue) and a second, C-terminal NBD (NBD2: Figure 5, in red), which together bind to the E site of the ribosome between the L1 stalk and P-site tRNA, in a similar way to other ARE ABC-F proteins (12,50,53). The two NBDs are joined by an interdomain linker (in purple), formed from two alpha helices joined by an interhelix loop; this region of ABC-F proteins is also known as the P-site tRNA interaction motif (PtIM) (51,52) or, in the specific case of the ARE ABC-F proteins, the antibiotic resistance domain (ARD) (50). The interdomain linker of Sal(B) extends from the NBDs towards the PTC, the catalytic heart of the 50S ribosomal subunit and the site targeted by lincosamide, group A streptogramin and PLM (LSAP) antibiotics (Figure 4). While the domain structure of Sal(B) is similar to that of other ABC-F proteins, there are some localised structural differences, most notably in the interhelix loop (Supplementary Figure S5). Sal(B) has a C-terminal extension that contacts uS7 and uS11 as it wraps around the 30S subunit towards the mRNA exit channel, with residues Asp533, Asn536 and Lys537 closest to the duplex between the mRNA and 16S rRNA in this channel. However, these residues appear to be >7 Å away from the duplex, making an interaction unlikely and suggesting that the C-terminal extension of Sal(B) is not involved in mRNA recognition (note that this distance is approximate, as the density is too weak to model the side chains of Sal(B) or the mRNA:16S rRNA duplex with high confidence (Supplementary Figure S6A, B)). This extension is positioned similarly to the interhelix loop of the C-terminal extension of VmlR (50) (Supplementary Figure S6C).
Two ATP molecules are sandwiched between NBD1 and NBD2 of Sal(B) (green in Figure 5), one proximal to the interdomain linker and the ribosome, and one distal. In each site, the ATP is bound between the Walker A P-loop motif (residues 34-41 of NBD1 at the proximal site; 346-353 of NBD2 at the distal site) and the Walker B β-strand motif (residues 131-135 of NBD1 at the distal site). A number of other interactions also occur. For example, the adenine ring of the proximal ATP molecule is sandwiched between Ile12 of NBD1 and Gln430 of NBD2, and a magnesium ion coordinates the β- and γ-phosphates of ATP with the sidechains of Ser42 and Gln61 from NBD1. Similarly, the adenine ring of the distal ATP is sandwiched between Thr130 of NBD1 and Tyr324 of NBD2, and a magnesium ion coordinates its β- and γ-phosphates with the sidechains of Ser354 and Gln384 from NBD2. The density is well resolved for both ATP molecules, their coordinated magnesium ions and the surrounding protein residues, as well as for the loop joining the two helices of the interdomain linker (Figure 5).
When Sal(B) binds the ribosome, it distorts the acceptor stem of the P-site tRNA away from the PTC, moving the 3′-CCA end by 22 Å compared with its position in an elongation-competent complex (PDB 6O9J) (54) to allow the interdomain linker loop of the protein to interact with the PTC (Supplementary Figure S7A). This distortion is near-identical to that caused by the binding of most other ARE ABC-F proteins whose structures have been determined in complex with the ribosome (12,50) (Supplementary Figure S7B). By contrast, the ARE ABC-F MsrE causes a stronger distortion throughout all regions of the P-site tRNA, with a movement of 28 Å at the 3′-CCA end (53) (Supplementary Figure S7C). As for other ARE ABC-F proteins (12) bound to the ribosome, the structure observed appears to be an initiation complex; the atomic model of S. aureus fMet-tRNAi fits well into the P-site cryo-EM density, and the model of an AUG mRNA start codon fits into density at the P-site decoding centre (Supplementary Figure S8). The interdomain linker of Sal(B) interacts directly with two 23S rRNA loops at the PTC. First, the backbone of the rRNA loop containing residues A2477, A2478 and C2479 (2450-2452, E. coli numbering) interacts with the backbone of Sal(B) residues Arg261 and Ser262, and the ring of Pro263. The closest contacts are made by the ring of Pro263 and the carbonyl oxygen of the backbone of Arg261, which are situated 3.2 and 3.5 Å from the sugar backbone of A2478 (2451), respectively. Second, the base of U2612 (2585) stacks with the aromatic ring of Tyr264 of Sal(B). The aromatic rings of these two residues are situated about 3.4-3.8 Å apart, facilitating a π-stacking interaction. The sugar oxygen of U2612 (2585) is 3.8 Å from the hydroxyl group of Tyr264, which may also allow for weak hydrogen bonding (Supplementary Figure S9).
Importantly, no region of the Sal(B) interdomain linker reaches sufficiently close to the drug-binding site to mediate direct displacement of a bound PLM molecule. For example, the distance between Pro263, the closest residue to the antibiotic binding site, and tiamulin (superimposed from PDB 1XBP) is ∼8 Å; too great a distance to allow for any direct interaction, let alone steric displacement (Figure 6A).
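Distances such as the ∼8 Å Pro263-to-tiamulin separation quoted above are minimum pairwise distances between two sets of atomic coordinates. A numpy sketch with invented coordinates (not taken from PDB 1XBP or our model):

```python
# Illustrative residue-to-ligand distance measurement; coordinates are toy
# values, not real atomic positions.
import numpy as np

def min_distance(coords_a, coords_b):
    """Minimum pairwise Euclidean distance (Å) between two coordinate sets,
    each given as an (n, 3) list/array of atom positions."""
    a, b = np.asarray(coords_a, float), np.asarray(coords_b, float)
    return float(np.min(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)))

pro263_atoms = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]]      # toy residue atoms
tiamulin_atoms = [[8.0, 3.0, 0.0], [9.0, 3.5, 1.0]]    # toy ligand atoms
print(round(min_distance(pro263_atoms, tiamulin_atoms), 1))  # 7.2
```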
Consequently, it seems likely that Sal proteins drive dissociation of PLMs from the ribosome through an allosteric mechanism. There are three regions of the 23S rRNA affected by Sal(B) binding: residues A2477-C2479 (2450-2452), which interact with Sal(B) residues Arg261-Pro263 as discussed (Figure 6B and Supplementary Figure S9C); residues A2530-G2532 (2503-2505), which may interact indirectly with Sal(B) through 23S rRNA residues A2477-C2479 (2450-2452) (Figure 6E); and residue U2612 (2585), which stacks with Tyr264, as discussed (Figure 6H). Differences in these regions between the apo S. aureus ribosome and the Sal(B)•ribosome complex were examined and compared with the published structure of tiamulin bound to the Deinococcus radiodurans ribosome (1) to explore how these changes might affect the binding of PTC-targeting antibiotics. First, there is a small shift in the backbone of residues A2477-C2479 (2450-2452) away from the tricyclic core of tiamulin on Sal(B) binding (Figure 6C, D and Supplementary Figure S10A-C), which presumably occurs due to the proximity of Sal(B) residues Arg261-Ser262 and the ring of Pro263. This subtly shifts this region of the 23S rRNA away from the tiamulin molecule, likely weakening binding between the two (Figure 6D). Second, on Sal(B) binding, there is a modest shift in 23S rRNA residues A2530-G2532 (2503-2505), which also form part of the tiamulin binding site (Figure 6F, G and Supplementary Figure S10D-F). Finally, on Sal(B) binding, U2612 (2585) is brought close to Sal(B) Tyr264 such that the aromatic rings of the two residues can form a π-stack. The density for U2612 (2585) is very weak in the apo ribosome map, suggesting that this residue is conformationally flexible when no Sal protein is bound (Figure 6I and Supplementary Figure S10G-I). On tiamulin binding, this residue moves towards the C-14 glycolic acid chain of tiamulin (Figure 6J).
Such an interaction with the drug may not be possible when U2612 (2585) is stacked with Tyr264, potentially leading to weaker drug binding when Sal(B) is bound.
Structural and mutational analysis of resistance profiles exhibited by Sal variants
The existence of Sal sequence variants that differ considerably in their antibiotic resistance profiles offered us a useful starting point to interrogate structure-function relationships in this protein family. Thus, we mapped sequences corresponding to the five ARE Sal variants (Sal(A)-(E)) and the three non-ARE Sal variants (the Sal proteins from S. xylosus, S. equorum and S. saprophyticus) onto the Sal(B) structure. One position that differs between the groups is residue 262, which is a negatively-charged aspartate in the non-ARE Sal proteins (Supplementary Figure S11E-G) but not in the ARE variants (Supplementary Figure S11B-D). However, it should be noted that the sidechain at position 262 is not sufficiently close to interact with the 23S rRNA in the Sal(B)•ribosome complex, regardless of the residue present. Indeed, even the backbone of residue 262 is further away than the backbone of Arg261 and the ring of Pro263, making it unlikely that this residue plays a major role in antibiotic resistance (Supplementary Figure S9). Nevertheless, it is possible that a change at residue 262 might alter the overall conformation of the interdomain linker loop, in turn affecting the interaction of Sal with the 23S rRNA.
The identity of residue 264 also differs across the variants. It is an aromatic tyrosine in Sal(A), Sal(B) and Sal(C) (Supplementary Figure S11A, B), allowing the formation of a π-stack with 23S rRNA residue U2612 (2585) (Supplementary Figure S12). Therefore, it is difficult to see from this structural snapshot how changes at Tyr264 would affect lincosamide resistance.
To further probe the role of Sal residue 264 in mediating antibiotic resistance, mutagenesis was undertaken. We reasoned that if Tyr264 plays a key role in the resistance associated with Sal variants A-C, then introducing this residue in place of asparagine in the non-ARE Sal variant from S. saprophyticus should result in a gain of function (i.e. the ability to mediate PLM resistance). Reciprocal, loss-of-function mutagenesis experiments were also performed at this same site in Sal(B), replacing Tyr264 with either leucine, isoleucine, serine or asparagine, with the expectation of bringing the resistance profiles in each case more into line with those of Sal(D) (leucine), Sal(E)/the Sal protein from S. xylosus (isoleucine), the Sal protein from S. equorum (serine) and the Sal protein from S. saprophyticus (asparagine), respectively. The effect of these mutations on resistance profile is shown in Table 3.
Introducing Tyr264 into the non-ARE Sal variant from S. saprophyticus did indeed result in a gain in function, yielding a 4-fold reduction in tiamulin susceptibility (Table 3). The fact that this substitution transformed a protein that does not mediate any level of phenotypic antibiotic resistance into one that does suggests that Tyr264 plays a role in the resistance mediated by Sal(B), and by extension Sal(A) and Sal(C), at least in the case of tiamulin. However, resistance to other LSAP drugs was unaffected, indicating that the interaction mediated by Tyr264 is only one factor in resistance. The loss-of-function experiments had mixed effects on the antibiotic profile of Sal(B) (Table 3). In the case of retapamulin, all mutants showed a reduced ability to mediate resistance compared with wild-type Sal(B), though resistance was not abolished. A similar effect was observed for the lincosamides lincomycin and clindamycin; for several of the mutants some reduction in resistance was observed, though not for Sal(B) Y264N. Surprisingly, substitution of Tyr264 in Sal(B) had no effect on tiamulin resistance, underscoring the idea that other residues within this region are important for PLM resistance.
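Susceptibility changes throughout are reported as fold-changes in MIC, i.e. ratios across the two-fold dilution series. A trivial sketch, using illustrative MICs rather than the values in Table 3:

```python
# Illustrative fold-change calculation for comparing MICs; the MIC values
# below are made up, not data from this study.

def fold_change(mic_test, mic_reference):
    """Ratio between two MICs (e.g. variant vs parent, or induced vs host).
    A value of 4.0 corresponds to two doubling-dilution steps."""
    return mic_test / mic_reference

# e.g. a hypothetical Tyr264-bearing variant vs its parent against tiamulin.
print(fold_change(1.0, 0.25))  # 4.0 -> a 4-fold reduction in susceptibility
```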
DISCUSSION
Collectively, our results provide considerable insight into the nature of Sal-type ABC-F proteins and their role in PLM resistance in the staphylococci. From a mechanistic perspective, we have established that they do indeed function as target protection proteins to mediate resistance to PLMs and other antibiotic classes; like other ARE ABC-F proteins, they bind into the E site of the 70S ribosome to effect dissociation of bound drug molecules (12,50,53). The molecular detail of how ARE ABC-F proteins in general achieve this appears to vary among family members and even by antibiotic class, but is the result of the interdomain linker either mediating direct steric displacement of the antibiotic or triggering allosteric change in the antibiotic binding site that prompts drug release (12,55). In the case of Sal(B), and by implication other Sal proteins, the interdomain linker does not overlap with the PLM binding site, indicating that resistance is mediated through an allosteric mechanism. A similar conclusion has recently been reached for the mechanism of the other two ARE ABC-F families that mediate PLM resistance in the staphylococci, the Vga- and Lsa-type proteins (12). Whilst we have shown here that the nature of the residue at position 264 of Sal proteins has an important role in PLM resistance, it is nonetheless clear from our data that there must be other residues within the interdomain linker that also contribute to the resistance mechanism.
In addition to the canonical Sal protein, Sal(A), we have now distinguished four other Sal proteins (Sal(B)-(E)) mediating PLM resistance in staphylococci that differ by ∼30% to >55% in amino acid sequence from Sal(A) and from each other, and which vary in their ability to protect the ribosome from antibiotics; members of the Sal(A)-(C) clade exhibit the typical resistance profile associated with Sal(A) (resistance to PLMs, group A streptogramins and lincosamides), whilst the phylogenetically-distinct Sal(D)-(E) group shows lower or no resistance to group A streptogramins and lincosamides. Despite the fact that multiple representatives of the Sal proteins mediate antibiotic resistance, several lines of evidence underscore the idea that this is not their original, evolved function. These include the observation made here that several such proteins do not mediate resistance to antibiotics, indicating that resistance is not a universal feature of Sal-type proteins. Furthermore, our analysis of phylogenetic and genomic context strongly implies that sal is ancestral to the genus Staphylococcus, thereby arguing instead for a housekeeping role for the encoded protein.
The uneven distribution of sal across the genus is apparently the result of lineage-specific loss; in some lineages, this seems to be a work in progress, with sal in the process of becoming pseudogenised. The simplest explanation for this loss is a modest fitness cost associated with maintenance of Sal that serves to drive its counter-selection over time. As a ribosome-binding ABC-F protein that presumably samples the ribosomal PTC to perform its native cellular role, this fitness cost could conceivably result through competition with other translation factors and/or a reduction in overall translational efficiency. It is not apparent at present why the evolutionary pressures favouring retention or loss of sal appear to differ across staphylococcal species, or whether decades of PLM and/or streptogramin use in veterinary (and more recently, human) medicine has latterly made any contribution to selecting for maintenance of this gene in particular lineages. It is however clear that, since sal will routinely be present in a particular staphylococcal lineage unless and until it becomes lost, Sal-mediated antibiotic resistance is an intrinsic, rather than acquired, mechanism of resistance, and the presence (or otherwise) of sal-type resistance would generally be expected to be uniform across a species.
Our results therefore imply that multiple staphylococcal species are intrinsically resistant to PLMs (and in a proportion of cases, group A streptogramins/ lincosamides) as a consequence of harbouring sal-type genes. This includes species that are known to cause disease in humans, including S. sciuri (56,57) and S. lentus (58,59). Fortunately, the most medically-significant pathogen of the genus, and a major clinical target for PLM therapy in humans, Staphylococcus aureus, is a species that has lost sal. Whilst we identified a single case in GenBank of a sal-type gene annotated within a S. aureus genome (strain C0603; Figures 2 and 3), this appears to represent a misidentification of a strain of S. sciuri (all five ABC-F proteins found in this strain have top Blastp hits to proteins from S. sciuri; data not shown). However, the well-documented ability of S. aureus to recruit antibiotic resistance determinants from non-aureus staphylococci (e.g. mecA (60), cfr (61), and fexA (62)) means that this species could recapture sal in the future, an event that will be under significant selection by an antibiotic class that is now in both veterinary and human use.
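The check described above (inspecting the top Blastp hit of each ABC-F protein in a genome) amounts to reducing a BLAST tabular report to the best-scoring subject per query. A hedged sketch, with made-up rows standing in for real BLAST output (query id, subject id, percent identity, bit score):

```python
# Reduce BLAST-style tabular rows to the top-scoring subject per query,
# the kind of check used above to flag strain C0603 as S. sciuri.
# All rows are invented illustrative values, not actual BLAST results.
rows = [
    ("abcf_1", "S_sciuri_protein", 98.2, 512.0),
    ("abcf_1", "S_aureus_protein", 71.4, 301.0),
    ("abcf_2", "S_sciuri_protein", 99.0, 640.0),
]

best = {}  # query id -> (subject id, best bit score)
for qseqid, sseqid, _pident, bitscore in rows:
    if qseqid not in best or bitscore > best[qseqid][1]:
        best[qseqid] = (sseqid, bitscore)

for query, (subject, _score) in sorted(best.items()):
    print(f"{query} -> {subject}")
```

In practice the input would be real tabular `blastp` output; if every query's top hit lands in the same non-aureus species, the strain assignment is suspect.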
DATA AVAILABILITY
The cryo-EM map of the Sal(B)•ribosome complex and the associated molecular model have been deposited in the Electron Microscopy Data Bank and Protein Data Bank with the accession codes EMD-13191 and PDB-7P48, respectively.
|
2022-02-10T06:17:08.792Z
|
2022-02-07T00:00:00.000
|
{
"year": 2022,
"sha1": "fbd7636cc50c321e4ff00753b65460825c07d4fe",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/nar/article-pdf/50/4/2128/42618271/gkac058.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c84337c7821401b8b8978673d1c0910fb6d10789",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
197402549
|
pes2o/s2orc
|
v3-fos-license
|
A case of asymptomatic complete tracheal rings in an adult: case report
A case of asymptomatic complete tracheal rings in an adult: case report
Tomoko Hayasaka 1*, Takafumi Kobayashi 2, Yoshida Ako 3, Yasuhiro Endo 2 and Yuko Saito 2

To the Editor,
Complete tracheal rings are a rare congenital defect that can cause tracheal stenosis. There are only a few reports of symptomatic adult cases, with many associated with difficult intubation [1,2]. We report a case of a patient with complete tracheal cartilage rings without symptoms or a history of difficult intubation.
The patient was a 70-year-old man with right arteriosclerosis obliterans (ASO). He had also undergone a right iliac artery stenting procedure for ASO at age 67 under general anesthesia without difficult intubation. Left external iliac artery stenting and femoral endarterectomy were scheduled. After anesthesia was induced, direct laryngoscopy was conducted using a Macintosh laryngoscope. We attempted tracheal intubation but were unable even with an endotracheal tube of 7.0 mm (Parker®, Parker Medical, Highlands Ranch, CO, USA), due to subglottic resistance. A laryngeal mask airway (i-gel® #4, Intersurgical Ltd., Liverpool, NY, USA) was then utilized to provide airway management, and the surgery was performed as scheduled. Intraoperative bronchoscopy showed complete tracheal rings with the absence of the membranous trachea immediately below the cricoid cartilage to the tracheal bifurcation. Normal membranous tissue was confirmed below the bifurcation (Fig. 1).
On CT imaging (Fig. 2), the trachea was observed as a specific O shape, and the inner tracheal diameter at the site of the complete rings was greater than the outer diameter of the 7.0-mm endotracheal tube. The coronal section of multiplanar reconstruction CT showed the trachea as an upside-down bottleneck shape and revealed complete tracheal rings narrowing the trachea 21 mm caudad to the vocal cords. Although the transverse diameter at the site of transition to the tracheal rings was relatively smaller than at the cricoid cartilage level, there was no significant stenosis to cause respiratory symptoms.
Complete tracheal rings are usually detected in the neonatal period or infancy as congenital tracheal stenosis, with symptoms of stridor, cyanosis, retractive breathing, or suffocation. Cases diagnosed in adulthood are extremely rare. There have been only 13 reported cases of complete tracheal rings with tracheal stenosis discovered in adults; of these, seven were found only when intubation failed [2]. Boiselle et al. suggested that thoracic CT images can be used to diagnose tracheal rings as concentric narrowing of the trachea with an O-shaped lumen [3]; in contrast, the normal trachea appears C-shaped.
In our case, although a tracheal tube could not be passed below the vocal cords, intubation had been successful at the previous general anesthesia. The tip of the tracheal tube was probably impinged at the caudad end of the normal trachea, which was the transition zone to the tracheal rings. To date, there have been no reports of cases of complete tracheal rings without symptoms of tracheal stenosis or a history of difficult intubation. This case suggests that complete tracheal rings may be hidden even in normal adults for whom there has been no trouble with intubation. Multiplanar reconstruction CT helps to assess the risk of difficult intubation.
|
2019-07-14T06:44:32.860Z
|
2019-07-12T00:00:00.000
|
{
"year": 2019,
"sha1": "0a67093a850e97f2f228c945d1801050b11234ec",
"oa_license": "CCBY",
"oa_url": "https://jaclinicalreports.springeropen.com/track/pdf/10.1186/s40981-019-0265-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a67093a850e97f2f228c945d1801050b11234ec",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
6986219
|
pes2o/s2orc
|
v3-fos-license
|
Impaired interleukin 4 signaling in T helper type 1 cells.
Cluster of differentiation (CD)4+ T helper cells (Th)1s fail to produce interleukin (IL)-4. Even if restimulated in the presence of IL-4, a condition that induces IL-4-producing capacity in naive CD4+ T cells, Th1s fail to become IL-4 producers. We report that Th1 cells have a major impairment in IL-4 signaling. When compared to both Th2s and naive T cells, they display a striking diminution in phosphorylation of Stat6. They also show reduced phosphorylation of Janus kinase (JAK)-3 and insulin receptor substrate (IRS)-2 when compared to Th2s. Stat6 and JAK-3 are present in equivalent amounts in Th1s and Th2s, but IRS-2 protein levels are much lower in Th1s than in Th2s. Altered sensitivity to IL-4, the major inducer of the Th2 phenotype, may explain the stability of the Th1 state.
Naive CD4+ T cells develop into either of two major subsets of Ths that produce distinct sets of cytokines. Th1s generally produce IFN-γ, IL-2, and TNF-α, whereas Th2s produce IL-4, IL-5, IL-10, and IL-13 (1). The polarization of responses to Th1 or Th2 dominance often determines resistance or susceptibility of hosts to infections and the degree of tissue damage in many autoimmune diseases (2)(3)(4).
Mice deficient in IL-4Rα chain (24) or in Stat6 (25)(26)(27), the major regulator of IL-4-mediated gene activation (28), fail to generate Th2s in vitro and show a marked diminution in the development of IL-4-producing CD4+ T cells in response to infection with Nippostrongylus brasiliensis (24,26). Furthermore, Stat6 activation has been mainly implicated in the regulation of IL-4-mediated gene activation.
Thus, Stat6-deficient mice fail to upregulate class II MHC molecules, CD23, and Thy1 in mouse B cells, and fail to support class switching to IgE. The major regulators of IL-4-mediated growth are the phosphotyrosine binding domain substrates IRS-1/IRS-2 and Shc, whose phosphorylation is determined by a domain in the IL-4 receptor distinct from that responsible for Stat6 activation (19,(29)(30)(31)(32)(33)(34). However, thymocytes and B cells from Stat6-deficient mice also display diminished uptake of [3H]thymidine in response to IL-4-dependent stimuli (25)(26)(27). This may indicate a possible role for Stat6 in growth regulation; alternatively, a principal role of Stat6 may be the induction of factors that are critical in transmitting growth-inducing signals. Indeed, the IL-4Rα chain is itself upregulated by IL-4 (9,35).
Th1s and Th2s differ from one another in several important respects. For example, Th1s not only fail to produce IL-4 when stimulated, but cannot support the transcription of transfected reporter genes under the control of the IL-4 promoter (36)(37)(38)(39). It has recently been demonstrated that they fail to express c-maf, a transcription factor that binds to the IL-4 promoter (40). Transfection of Th1 lines with c-maf allows them to transcribe reporter genes under the control of the IL-4 promoter. This transcription is further enhanced by the cotransfection of the cDNA for NIP-45, a protein that interacts with nuclear factor of activated T cells (41). Furthermore, Th1s fail to express GATA-3, which is expressed in naive T cells and in Th2s (42). GATA-3 transgenic mice primed under Th1 conditions (in the absence of IL-4) can develop into cells capable of inducing IL-4 messenger RNA (mRNA), indicating an important role for GATA-3 in the differentiation of naive cells into Th2s.
What is particularly striking is that Th1s not only fail to produce IL-4 upon challenge, but they fail to develop into IL-4-producing cells even if they are restimulated with antigen in the presence of IL-4 (43)(44)(45). These are conditions that cause the development of naive T cells into IL-4 producers. Thus, Th1s have lost the ability to induce IL-4-producing capacity. In addition, it has been reported that long-term lines of Th1s are incapable of growth in response to IL-4 despite the fact that Th1 and Th2 lines display essentially equal capacity to bind IL-4 with high affinity (46). It has been reported that these lines differ in their sensitivity to IL-1 and it has been suggested that this explains their differential sensitivity to IL-4 as a growth stimulus (47).
We wished to examine the possibility that the inability of Th1s to respond to IL-4 with the induction of IL-4-producing activity might reflect an insensitivity of these cells to IL-4. Here we show that while Th1s express amounts of IL-4R comparable to Th2s, they lose their capacity to phosphorylate Stat6 as they differentiate and that such loss is correlated with the inability of these cells to upregulate CD30 in response to IL-4 and to develop into IL-4 producers. Furthermore, we demonstrate that the level of expression of IRS-2, a principal regulator of IL-4-mediated growth, is enhanced upon development of naive cells into Th2s but not Th1s; this enhancement does not occur in cells from Stat6-deficient mice. In differentiated Th1s, IL-4 does not enhance expression of IRS-2. These results indicate that a defect in the activation of Stat6 in Th1s can explain many of the phenotypic properties of these cells.
Materials and Methods
Animals and Cell Cultures. B10.A mice were obtained from Jackson Laboratory (Bar Harbor, ME). Mice transgenic for TCR-α and -β chains encoding a receptor specific for pigeon cytochrome C peptide 88-104 in association with the IEk class II MHC molecule were maintained in our animal quarters. Lymph node cells were depleted of CD8+ cells, B220+ cells, and IAk+ cells by negative selection using magnetic beads. The purified CD4+ cells were then centrifuged on a discontinuous 50, 60, and 70% Percoll gradient. Cells with a density of >70% were collected and used for priming (7). Primary stimulation of TCR transgenic cells was carried out by culturing 10^6 naive CD4+ T cells in the presence of 10^7 irradiated T cell-depleted spleen cells from B10.A, 1 μM pigeon cytochrome C peptide (prepared by National Institute of Allergy and Infectious Diseases Biologic Resources Branch) and IL-2 (10 U/ml) for 7 d. For differentiation of Th1s, monoclonal anti-IL-4 antibody (11B11; 10 μg/ml) and IL-12 (10 ng/ml) were also added to the culture; for the differentiation of Th2s, IL-4 (0.5 ng/ml) and anti-IL-12 antibody (R&D Systems, Inc., Minneapolis, MN) were added. In some instances, the priming culture was repeated one or two times. T and B cell-depleted APCs were prepared from splenocytes by depleting Thy1.2 and B220 positive cells using the magnetic bead method. Stat6 knockout mice and littermate controls were provided by Dr. James Ihle (St. Jude Children's Research Hospital, Memphis, TN). These (129 × C57BL/6) mice were backcrossed to normal C57BL/6 mice.
Immunoprecipitation and Western Blot Analysis. Th1s or Th2s were washed with HBSS twice, recultured with 10 U/ml rIL-2 overnight, and then deprived of serum and cytokines for 2 h. Cells (5 × 10^6/ml) were stimulated with 5 ng/ml of IL-4 in complete RPMI at room temperature for 12 min. The reaction was stopped with cold PBS containing 100 μM Na3VO4. Cells were then lysed with 0.5 ml lysis buffer (50 mM Hepes, 0.5% Nonidet P-40, 5 mM EDTA, 50 mM NaCl, 10 mM Na pyrophosphate, and 50 mM NaF) freshly supplemented with 1 mM Na3VO4, 1 mM PMSF, and 10 μg/ml aprotinin, leupeptin, and pepstatin. Lysates were incubated with 5-10 μl antibody for 2 h on ice. Immune complexes were precipitated with protein G agarose beads (Pierce Chemical Co., Rockford, IL), eluted with SDS-PAGE loading buffer, separated in a 7.5% acrylamide gel, and transferred onto Immobilon-P membranes (Millipore, Bedford, MA), which were probed with specific antibodies (17).

Reverse Transcriptase PCR. TCR transgenic Th2s were washed extensively at the end of the priming culture and recultured in IL-2 supplemented complete medium for 14 d until these cells had become small and round shaped. These well-"rested" cells (10^6) were then either unstimulated or stimulated with 2 × 10^5 irradiated T and B cell-depleted APCs, IL-2, and cytochrome C peptide with or without IL-4 (0.5 ng/ml) for 6 or 22 h. Total RNA was prepared by the guanidinium method and reverse transcribed into cDNA. PCRs were performed using IRS-2 primers (forward: TGGTGAGGCAGGTACCCGTCT; reverse: TCTGCACGGATGACCTTAGCA; reference 48) and actin primers (forward: GATGACGATATCGCTGCGCTG; reverse: GTACGACCAGAGGCATACAGG). Amplification was performed on a GeneAmp PCR System 9600 (Perkin-Elmer Corp., Foster City, CA). The cycling conditions were 94°C for 15 s, 59°C for 15 s, and 72°C for 30 s for 35 cycles, followed by extension at 72°C for 5 min. PCR products were analyzed by electrophoresis in a 2% agarose gel.
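As a quick arithmetic illustration for oligos like the IRS-2 primers above, GC content and a rough Wallace-rule melting temperature (2 °C per A/T plus 4 °C per G/C, a crude estimate intended for short oligos, and not the annealing temperature actually used here) can be computed directly from the sequence:

```python
# GC fraction and rough Wallace-rule Tm (2*(A+T) + 4*(G+C), in degrees C)
# for a DNA primer. The Wallace rule is a crude estimate for short oligos.
def gc_fraction(primer: str) -> float:
    primer = primer.upper()
    return (primer.count("G") + primer.count("C")) / len(primer)

def wallace_tm(primer: str) -> int:
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

fwd = "TGGTGAGGCAGGTACCCGTCT"  # IRS-2 forward primer from the Methods
print(round(gc_fraction(fwd), 3), wallace_tm(fwd))  # 0.619 68
```

For real primer design one would use a nearest-neighbor thermodynamic model rather than the Wallace rule; the point here is only that the Methods' cycling parameters are tied to simple properties of the primer sequences.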
Flow Cytometry Analysis and ELISA. For IL-4R staining, cells were incubated with 10% goat serum for 5 min to block nonspecific binding. M1 anti-IL-4R monoclonal antibody or a rat isotype control (Genzyme, Cambridge, MA) were added to the cells and incubated for 20 min in FACS® buffer (Becton Dickinson, Mountain View, CA; PBS/3% fetal calf serum/0.1% sodium azide). Cells then were washed with FACS® buffer. FITC-labeled goat Fab′ fragment against rat IgG (Southern Biotechnology Associates, Birmingham, AL) were added to stained cells for 20 min. For CD30 staining, cells were first blocked with 10 μl of 2.4G2 rat anti-mouse FcγRII/III ascitic fluid for 5 min before staining with PE-labeled monoclonal antibody against mouse CD30 (PharMingen, San Diego, CA) for 20 min in FACS® buffer. The stained cells were washed twice with FACS® buffer and analyzed in a FACScan®. IL-4 and IFN-α production were measured using commercial ELISA detection kits (Endogen, Woburn, MA).
Results
Th1 Cells Display a Marked Diminution in IL-4-induced Stat6 and JAK-3 Phosphorylation. We prepared dense CD4+ T cells from mice transgenic for TCR-α and -β chains specific for a cytochrome C peptide in association with IEk (7). These cells were primed in vitro by culturing for two rounds of 7 d each with T cell-depleted spleen cells from B10.A mice together with 1 μM cytochrome C peptide and IL-2. To generate Th1s, we added monoclonal anti-IL-4 antibody and IL-12; to generate Th2s, we added IL-4 and anti-IL-12 antibody. Extracts prepared from Th2s that had been stimulated with IL-4 for 12 min were immunoprecipitated with anti-Stat6. When analyzed by SDS-PAGE and Western blotting with antiphosphotyrosine, we observed striking tyrosine phosphorylation of Stat6. By contrast, immunoprecipitates of extracts from IL-4-stimulated Th1s showed no detectable Stat6 phosphorylation. Western blots with anti-Stat6 antibody revealed comparable amounts of Stat6 in both Th1s and Th2s (Fig. 1 A).
Naive CD4+ T cells respond to IL-4 with Stat6 phosphorylation comparable to that observed for Th2s (Fig. 1 B). Unstimulated cells expressed no detectable phosphorylation. The degree of such IL-4-induced phosphorylation remains stable in the course of priming CD4+ T cells with cytochrome C peptide, APCs, IL-2, IL-4, and anti-IL-12 (i.e., to the Th2 phenotype). When cells are primed with anti-IL-4 and IL-12 (i.e., to the Th1 phenotype), IL-4-induced Stat6 phosphorylation remains detectable during the initial round of priming (i.e., at 3 d). Stat6 phosphorylation is sometimes completely lost by the end of the first round of priming, but in most experiments (as illustrated in Fig. 1 B), IL-4-induced Stat6 phosphorylation is not lost in Th1s until two rounds of priming.
IL-4-induced phosphorylation of JAK-3 was also diminished in "two-round" primed Th1s (Fig. 1 A). The relative degree of phosphorylation of JAK-3 in Th1s was reduced by ~3.7-fold compared to Th2s. The amount of JAK-3 protein in Th1s was no different than the amount in comparably primed Th2s. The diminution in IL-4-induced JAK-3 phosphorylation in Th1s did not reflect a generalized inactivation of this kinase or of the γc chain in Th1s since IL-2 caused striking phosphorylation of JAK-3 in these cells (Fig. 1 A). This is particularly important since IL-2 signaling, like IL-4 signaling, uses both JAK-1 and JAK-3.
Th1s and Th2s Have Comparable Numbers of IL-4Rs. The defect that Th1s show in IL-4-induced signaling could not be explained by the failure of these cells to express IL-4Rs. Two-round-primed Th1s and Th2s from transgenic donors displayed comparable numbers of IL-4Rs as demonstrated by flow cytometric analysis with the anti-mouse IL-4R antibody M1; the specificity and high affinity binding of IL-4 was established by the capacity of IL-4 (1.1 × 10^-10 M) to inhibit binding by both cell types (Fig. 2 A). Indeed, in four independent experiments, the relative degree of expression of IL-4Rs by Th1s was 1.01 ± 0.14 that of Th2s (Fig. 2 B). In contrast to naive T cells, IL-4 does not further upregulate IL-4R expression in recently primed Th1s or Th2s (data not shown).
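A summary figure like "1.01 ± 0.14 over four independent experiments" is just the mean and scatter of per-experiment ratios (Th1 IL-4R level divided by the Th2 level). A minimal sketch; the four ratios below are invented values chosen only to reproduce that summary, not the actual measurements:

```python
# Mean +/- sample standard deviation of per-experiment expression ratios.
# The four ratios below are invented illustrative values, not the real data.
from statistics import mean, stdev

ratios = [0.81, 1.13, 1.09, 1.01]  # hypothetical Th1/Th2 IL-4R ratios
print(f"{mean(ratios):.2f} +/- {stdev(ratios):.2f}")  # 1.01 +/- 0.14
```

Note that `statistics.stdev` is the sample (n-1) standard deviation; whether the paper's ±0.14 is a standard deviation or a standard error is not stated in the text.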
Th1s Have Diminished Expression of IRS-2.
To get a fuller picture of the signaling abnormality of IL-4Rs in Th1s, we examined the phosphorylation of a molecule that has been implicated in IL-4-mediated growth, IRS-2. Fig. 3 demonstrates that IL-4-induced phosphorylation of IRS-2 is strikingly diminished in Th1s compared to Th2s. However, in contrast to Stat6 and JAK-3, which are present in similar amounts in Th1s and Th2s, the level of expression of IRS-2 protein, as detected by immunoblotting of anti-IRS-2 immunoprecipitates, is markedly diminished in Th1s compared to Th2s. Furthermore, although Stat6 is present in relatively similar amounts in unprimed CD4+ T cells, in Th1s and Th2s, IRS-2 is present in very limited amounts in unprimed cells (Fig. 4 A). The sustained induction of IRS-2 expression in Th2s appears to be Stat6 dependent. Thus, when CD4+ T cells from Stat6-deficient mice are primed with soluble anti-CD3, anti-CD28, APCs, IL-2, IL-4, and anti-IL-12 for one round, they fail to express detectable IRS-2, whereas littermate heterozygotes show induction of IRS-2 under Th2- but not Th1-priming conditions (Fig. 4 B).
An analysis of the induction of IRS-2 protein using CD4+ T cells from TCR transgenic donors in the course of priming with cytochrome C peptide reveals that cells primed in the Th1 and Th2 "directions" both initially induce IRS-2 (i.e., at 3 d), but IRS-2 is only sustained in the cells primed in the presence of IL-4, so that by 5 d of priming, cells primed under Th1 conditions have less IRS-2 than do cells primed under Th2 conditions (Fig. 5). Thereafter, IRS-2 levels in Th1s are lower than in Th2s, particularly by 2 wk of priming. The levels of phosphorylation of IRS-2 in Th1s are also strikingly diminished compared to Th2s from 5 d of priming and after (Fig. 5).

(Fig. 5 legend: Lysates were prepared from purified TCR transgenic lymph node CD4+ cells (Unprimed) or TCR transgenic lymph node CD4+ cells that were stimulated with T and B cell-depleted APCs, cytochrome C, and IL-2 under either Th1 or Th2 conditions for 3, 5, or 7 d, or for two rounds of 7 d each. IRS-2 phosphorylation and content were analyzed by immunoprecipitation with anti-IRS-2 antibody and Western blot analysis using antiphosphotyrosine or anti-IRS-2 antibody. Lysates were prepared from equal numbers of cells.)
Th1s not only fail to express substantial amounts of IRS-2, they fail to show stable induction of IRS-2 when restimulated for 7 d in the presence of cytochrome C peptide, APC, IL-2, IL-4, and anti-IL-12 (Th2-inducing conditions; Fig. 6 A). Thus, in fully primed Th1s, IL-4 does not appear to be able to induce stable expression of IRS-2, in keeping with the "desensitization" of the IL-4R in these cells. Interestingly, Th2s restimulated for 7 d in Th1-inducing conditions continue to express IRS-2, although possibly at somewhat lower levels than Th2s that have been restimulated in Th2 conditions. Indeed, IRS-2 expression in Th2s is not entirely stable. If Th2s are rested by culture in IL-2 for 14 d, their expression of IRS-2 falls markedly. Upon stimulation with cytochrome C peptide and APC without IL-4, they show a transient induction of IRS-2 mRNA (Fig. 6 B). By contrast, if IL-4 is included in the culture, IRS-2 mRNA expression is more sustained, indicating the importance of IL-4 in regulating IRS-2 expression in Th2s.
Failure of Th1s to Respond to IL-4 Correlates with the Lack of IL-4-dependent Upregulation of CD30. To examine further whether the diminished Stat6 phosphorylation induced by IL-4 in Th1s correlated with a failure to upregulate IL-4-dependent molecules, we examined the capacity of IL-4 to enhance expression of CD30 (49) in Th1s and Th2s. Naive C57BL/6 CD4+ T cells were primed with soluble anti-CD3 and anti-CD28 with irradiated APCs under Th1 or Th2 conditions for two rounds of 7 d each. They were then stimulated for 3 d with anti-CD3, anti-CD28, and irradiated APCs with IL-4 or anti-IL-4. A subset of Th2 cells from C57BL/6 donors stimulated in the presence of IL-4 show induction of CD30 at 3 d (Fig. 7), whereas stimulation of these cells in the presence of anti-IL-4 failed to induce CD30. Th1s from C57BL/6 mice failed to induce CD30 when challenged in the presence of IL-4 or of anti-IL-4.
Discussion
Our results show that as naive CD4+ T cells differentiate to the Th1 phenotype, a major impairment of signaling through the IL-4R occurs. This is most apparent in the degree of induced phosphorylation of Stat6, which is the principal regulator of IL-4-mediated gene activation. The level of activation of JAK-3 is also impaired, whereas differences in degree of expression of the IL-4R are minimal. This result could explain the stability of the Th1 phenotype. The molecular basis of this impaired sensitivity of the IL-4R in Th1 cells is not known. The decreased phosphorylation of JAK-3 and the presence of normal amounts of Stat6 and JAK-3 protein suggest that it may reflect an overall effect on the receptor signaling activity rather than a specific effect on Stat6. The capacity of IL-2 to induce JAK-3 phosphorylation in Th1s indicates that the defect is not specific for JAK-3 and the γc chain. In light of the recent discovery of a new family of cytokine-induced JAK/STAT inhibitors (50)(51)(52)(53), it is conceivable that during the development of Th1s one or more such inhibitors are induced and prevent JAK-3/Stat6-dependent IL-4-mediated signaling in these cells. However, we have not observed any difference in levels of expression of mRNA for SOCS 2 (suppressor of cytokine signaling 2) in Th1s and Th2s; we could not detect expression of SOCS 1 or 3 in Th1s or Th2s, although we could detect these mRNAs in bone marrow cells treated with IFN-α and IL-3 for 1 h (Huang, H., unpublished data).
It is also striking that IL-4 regulates the level of expression of IRS-2 in recently differentiated Ths. Naive T cells display very low levels of IRS-2 protein and mRNA. Initial stimulation with antigen and APC causes a transient induction of IRS-2, which is only sustained in the presence of IL-4. Thus, Th1s that have been stimulated through two rounds of priming display little or no IRS-2, whereas Th2s that have been primed for two rounds show substantial amounts of IRS-2 and vigorous phosphorylation in response to IL-4 stimulation.
The diminished sensitivity of Th1s to IL-4-mediated signaling may explain the failure of upregulation of IRS-2 in these cells. The regulation of IRS-2 expression by IL-4 and particularly the failure of Stat6 knockout cells to show such a response might also explain the diminished IL-4-stimulated growth of thymocytes from Stat6-deficient mice (25)(26)(27). These cells should fail to respond to IL-4 with enhanced expression of IRS-2 and IL-4Rs, since both are controlled by the IL-4/Stat6 signaling pathway; low levels of both IL-4R and IRS-2 might then limit IL-4-induced growth.
Kubo et al. (54) have recently reported that a Th1 T cell clone and a hybridoma with a Th1 phenotype failed to phosphorylate Stat6 in response to IL-4. Thus, the lack of IL-4-mediated signaling by Th1 cells is likely to be a very stable property of these cells. Kubo et al. (53) argue that Th1s fail to secrete IL-4 because of a silencer element located 3′ of exon 4 of the IL-4 gene that contains a Stat6-binding site. They further argue that Stat6 binding to this site inactivates the silencer element and thus allows IL-4 to be made in response to phorbol ester and ionomycin stimulation. Thus, the failure of Th1s to produce IL-4 is explained by the lack of activated Stat6 in these cells, whereas the capacity of Th2s to secrete IL-4 depends upon the activation of Stat6 and the consequent blocking of silencer function. Although this concept has attractive features, our previous results (55) have shown that once naive T cells have been differentiated into Th2s, IL-4 is no longer required for the production of IL-4 or other Th2 cytokines by these cells. Thus, Th2s stimulated with antigen and APC produce IL-4 mRNA in the presence of neutralizing concentrations of anti-IL-4 or anti-IL-4R antibody. Furthermore, IL-4 does not enhance IL-4 production by these cells even when limiting concentrations of antigen are used for stimulation. Finally, Th2s prepared from IL-4 knockout mice produce normal amounts of IL-13 mRNA in response to stimulation with anti-CD3; IL-4 does not enhance production of IL-13 by these differentiated Th2s.
Our results, indicating a failure of recently differentiated Th1s to phosphorylate Stat6, and those of Kubo et al. (53) describing absent Stat6 phosphorylation in a Th1 clone and a Th1 hybridoma, contrast with results that have been described showing that extracts from IL-4-stimulated Th1s can form complexes with a gamma-activated sequence (GAS) element as detected by electrophoretic mobility shift analysis (EMSA). Szabo et al. (56) and Lederer et al. (57) observed an IL-4-inducible EMSA in cultures that had been differentiated in the Th1 direction for one round of stimulation; cells from such cultures retained the capacity to develop IL-4-producing activity if cultured for an additional 4-7 d with antigen and IL-4. Indeed, it was subsequently demonstrated that cells that had been primed in the Th1 direction for several rounds lost the capacity to develop into IL-4-producing cells (43), consistent with our observation that IL-4-induced phosphorylation of Stat6 is often not extinguished until cells have undergone two rounds of Th1 priming. Pernis et al. (58) reported that in a Th1 clone (AE.7), IL-4 induced a factor (then designated STF-IL-4) that formed a complex with the GAS element from FcγR1. The significance of this finding remains to be evaluated further. This could be explained by differences among individual long-term Th1-like clones and recently differentiated Th1s. Alternatively, the IL-4R may retain some signaling properties in Th1s that nonetheless do not allow full activation of Stat6. Indeed, EMSA often reveals that IL-4 induces multiple bands containing the GAS element, only one of which is supershifted by anti-Stat6 antibody.
Understanding the molecular basis of the diminished sensitivity of IL-4Rs in well-differentiated Th1s might provide interesting targets for the development of agents that could reverse the polarity of these cells. Such agents could be important in the development of strategies for treatment of tissue-damaging autoimmunity or to reverse inappropriate responses to certain pathogenic agents.
|
2014-10-01T00:00:00.000Z
|
1998-04-20T00:00:00.000
|
{
"year": 1998,
"sha1": "4b2803290ac42501fdd87bfed5ea10f019a2ab5b",
"oa_license": "CCBYNCSA",
"oa_url": "http://jem.rupress.org/content/187/8/1305.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "ebbb38d8447f142e2c835183df54c4975240a3e1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
250033889
|
pes2o/s2orc
|
v3-fos-license
|
Osteocalcin reduces fat accumulation and inflammatory reaction by inhibiting ROS-JNK signal pathway in chicken embryonic hepatocytes
Osteocalcin (OCN) has a function in preventing fatty liver hemorrhagic syndrome (FLHS) in poultry. The aim of this study was to investigate the effects of OCN on fat emulsion-stimulated chicken embryonic hepatocytes and related signaling pathways. Primary chicken embryonic hepatocytes were isolated from incubated 15-day (E15) pathogen-free eggs and cultured in Dulbecco's modified Eagle medium (DMEM). After the hepatocyte density reached 80%, the cells were divided into 5 groups: control group (CONT), fat emulsion group (FE, 10% FE, v/v), and FE with ucOCN at 1 ng/mL (FE-LOCN), 3 ng/mL (FE-MOCN), and 9 ng/mL (FE-HOCN). In addition, 2 mM N-Acetyl-L-cysteine (NAC), a reactive oxygen species (ROS) scavenger, and 5 μM SP600125, a c-Jun N-terminal kinase (JNK) inhibitor, were added separately into the DMEM with 10% FE to test the effects of FE on the function of the ROS-JNK signal pathway. The number of hepatocytes, cell ultra-microstructure, viability, and apoptosis were detected after 48 h of treatment, and protein expression and enzyme concentrations were detected after 72 h of treatment. The results showed that, compared to the control group, FE increased the triglyceride (TG) concentration and lipid droplets (LDs) in chicken embryonic hepatocytes (P < 0.05) and induced hepatocytic edema with obvious mitochondrial swelling, membrane damage, and cristae rupture. FE also decreased the ATP concentration, increased ROS concentrations and mitochondrial DNA (mtDNA) copy number, promoted inflammatory interleukin-1 (IL-1), IL-6, and tumor necrosis factor-alpha (TNF-α) concentrations and the hepatocytic apoptosis rate, and raised phospho-c-Jun N-terminal kinase (p-JNK) protein expression. Compared to the FE group, ucOCN significantly increased hepatocyte viability, reduced hepatocytic TG concentrations and LD numbers, and alleviated hepatocytic edema and mitochondrial swelling.
Furthermore, ucOCN significantly decreased ROS concentrations, increased ATP concentrations, reduced IL-1, IL-6, and TNF-α concentrations and the hepatocytic apoptosis rate, and inhibited p-JNK protein expression (P < 0.05). NAC functioned similarly to ucOCN: it reduced the ROS concentration and inhibited TNF-α protein expression and the p-JNK/JNK ratio. Similarly, SP600125 reduced p-JNK/JNK protein expression and IL-1, IL-6, TNF-α, and TG concentrations without effects on ROS concentration or hepatocytic apoptosis. These results suggest that ucOCN alleviates FE-induced mitochondrial damage, cellular edema, and apoptosis of hepatocytes, and reveal that ucOCN reduces fat accumulation and inflammatory reaction in chicken embryonic hepatocytes mostly via inhibiting the ROS-JNK signal pathway.
INTRODUCTION
Fatty liver hemorrhagic syndrome (FLHS) is one of the main metabolic diseases in hens, characterized by increased lipid accumulation, fragile liver, rupture bleeding, and sudden death (Rozenboim et al., 2016). Approximately 40% of hen deaths are attributed to FLHS, and the figure reaches up to 74% in caged laying hens (Shini et al., 2019). Multiple factors, such as nutrition, environment, hormones, metabolism, and genetics, are involved in the occurrence of FLHS (Choi et al., 2012; Song et al., 2017; Li et al., 2020b), and nutrition is the most important factor for hens, especially commercial hens fed over-nourishment diets to maintain high production (Choi et al., 2012). FLHS in hens has a nosogenesis and pathophysiological mechanism similar to mammalian nonalcoholic fatty liver disease (NAFLD) (Sánchez-Polo et al., 2015; Hamid et al., 2019), which also includes a spectrum of disorders ranging from simple fatty liver to steatohepatitis, with increasing fibrosis leading to cirrhosis (Leoni et al., 2018). Simple fatty liver induced by over-nourishment is the first critical step of FLHS. An experimental hen FLHS model can be successfully induced by high-fat and high-energy, low-protein diets (Zhu et al., 2020; Qiu et al., 2021; Tan et al., 2021). Moreover, research showed that 97% of FLHS hens were obese, with a large amount of subcutaneous and intracoelomic fat accumulation (Trott et al., 2014). The expression of hepatic de novo lipogenesis genes is increased and that of fatty acid β-oxidation genes is decreased in high-energy, low-protein diet-induced FLHS in laying hens (Miao et al., 2021), resulting in great accumulations of free fatty acids, triglyceride (TG), and lipid droplets (LDs) in the liver of FLHS hens (Miao et al., 2021). Therefore, preventing simple fatty liver in laying hens is the key step to prevent FLHS.
Osteocalcin (OCN) is a 49-amino-acid noncollagenous bone matrix protein produced by osteoblasts that regulates energy metabolism in its active undercarboxylated form (ucOCN) (Lin et al., 2018). Clinical data have shown that both total OCN and ucOCN are inversely associated with liver steatosis, inflammation, ballooning, and fibrosis grades in NAFLD patients (Yilmaz et al., 2011; Du et al., 2015; Xia et al., 2021). ucOCN can prevent NAFLD development in mice by enhancing hepatocytic insulin sensitivity and promoting the proliferation and function of pancreatic β-cells (Zhang et al., 2020), inhibiting hepatic lipid synthesis, promoting lipolysis, and preventing inflammation and fibrosis (Gupte et al., 2014). Similarly, our previous study in chickens reported that ucOCN alleviates the FLHS process by reducing hepatic hemorrhage and fibrosis and inhibiting insulin resistance, inflammation, and oxidative stress in high-fat diet (HFD)-fed aged laying hens (Wu et al., 2021b). However, the effect of ucOCN on simple fatty liver has not been fully elucidated.
Reactive oxygen species (ROS), including superoxide anion radicals (O2·−) and hydrogen peroxide (H2O2), are continuously produced intracellularly as byproducts of energy metabolism in the hepatocytes of NAFLD (Dabravolski et al., 2021). Hepatic lipid overload induces the overproduction of oxidants by affecting the mitochondria, peroxisomes, and endoplasmic reticulum. Non-electron-transport-chain sources of ROS, especially the β-oxidation of fatty acids, appear to be the major source of ROS in hepatic metabolic disorders. The dysregulation of liver lipid metabolism in NAFLD mice generates higher levels of ROS (Ma et al., 2016). Palmitic acid induces simple steatosis in rat primary hepatocytes associated with excessive ROS production (Moravcová et al., 2015). At high concentrations, ROS causes oxidative modifications to cellular macromolecules (DNA, lipids, proteins, etc.) (Jakubczyk et al., 2020) and leads to activation of the c-Jun N-terminal kinase (JNK), consequently inducing liver damage (Schwabe and Brenner, 2006; Li et al., 2020a).
The c-Jun N-terminal kinase is a mitogen-activated protein kinase (MAPK) family member that mediates cellular responses to a variety of intra- and extracellular stimulations (Czaja, 2010). The JNK isoforms are encoded by three genes, of which JNK1 and JNK2 are expressed in all cells, including hepatocytes (Weston and Davis, 2002). Investigations have indicated that overactivation of JNK is crucial to the NAFLD process (Weston and Davis, 2002; Schattenberg et al., 2006; Yan et al., 2017). JNK mediates NAFLD development through involvement in obesity, insulin resistance, lipid accumulation, and liver fibrosis (Czaja, 2010). Inhibition of JNK attenuates insulin resistance in NAFLD rats (Yan et al., 2017). JNK1-null mice have significantly lower levels of steatohepatitis after being fed the methionine- and choline-deficient (MCD) diet (Schattenberg et al., 2006). In addition, the therapeutic effects of ucOCN in NAFLD mice may be mediated by activating the nuclear factor like-2 (Nrf2) pathway to alleviate oxidative stress and by inhibiting the JNK pathway in hepatocytes (Du et al., 2016). Melatonin improves NAFLD by reducing inflammation in HFD-induced obese mice through modulating the MAPK-JNK/P38 signaling pathway (Sun et al., 2016). The ROS-JNK signal pathway thus participates in the NAFLD process. We speculated that ROS-JNK is involved in chicken embryonic hepatocyte steatosis and that ucOCN prevents this steatosis by regulating the ROS-JNK signal pathway. Therefore, the effect of ucOCN on chicken embryonic hepatocyte steatosis induced by 10% fat emulsion (FE) was further investigated.
Primary Chicken Embryonic Hepatocytes Isolation and Culture
Primary hepatocytes were prepared as described before (Arias, 2012). Briefly, livers were dissected from ten 15-day-old (E15) specific-pathogen-free chicken embryos (Shandong Haotai Experimental Animal Breeding Co., Shandong, China). The liver samples were cut into small pieces (1 mm × 1 mm) and digested with 0.25% trypsin-EDTA (Gibco, New York, USA) at 37°C for 30 minutes. The hepatocytes were collected by low-speed centrifugation (1,000 r/min for 5 min) and further purified by Percoll gradient centrifugation (60% v/v, Biosharp, Anhui, China). Chicken embryonic hepatocytes at 2 × 10⁵ cells/mL were cultured in Dulbecco's modified Eagle medium (DMEM, Gibco, New York, USA) in a humidified atmosphere of 95% air and 5% CO2 at 37°C.
Experimental Design
The hepatocytes were grown in 6-well cell culture plates. After the hepatocyte density reached 80%, the cells were harvested and treated by adding FE with or without ucOCN (Mybiosource, San Diego, CA) to the DMEM to study the effect of ucOCN on chicken embryonic hepatocytes. There were 5 groups: control group (CONT), FE group (FE, 10% FE, v/v), and FE with ucOCN at 1 ng/mL (FE-LOCN), 3 ng/mL (FE-MOCN), and 9 ng/mL (FE-HOCN). SP600125 (a JNK inhibitor, Beyotime, Shanghai, China) and NAC (a ROS scavenger, Beyotime, Shanghai, China) were used to investigate the effects of FE on the function of the ROS-JNK signal pathway in the hepatocytes. Based on our pilot study, in which 0.5, 1, and 2 mM NAC and 2.5, 5, and 10 μM SP600125 were tested, 2 mM NAC and 5 μM SP600125 were added together with 10% FE to the cell culture fluid. The number of hepatocytes, cell ultra-microstructure, viability, and apoptosis were detected after 48 h of treatment, and protein expression and enzyme concentrations were detected after 72 h of treatment.
Cell Identification and Viability Assay
Cells were washed with phosphate-buffered saline (PBS) 3 times, fixed with 4% paraformaldehyde for 30 min, then stained with Periodic acid-Schiff (PAS, Beyotime, Shanghai, China) for 15 min. Cells with red granules were identified as hepatocytes under a light microscope (Leica DM500, Leica Microsystems, Wetzlar, Germany). Hepatocyte viability was detected by the CCK-8 method (Biosharp, Hefei, China). Hepatocytes were cultured in 96-well plates for 4 h with 10 μL of CCK-8 per well. Afterward, optical density (OD) values were measured using a microplate reader (ThermoFisher, Waltham, USA) at a wavelength of 450 nm.
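The paper reports OD450 readings but does not state the viability formula. A common calculation, sketched below with hypothetical OD values, normalizes background-corrected absorbance of treated wells to that of untreated control wells:

```python
# Sketch of a common CCK-8 viability calculation (formula assumed, not
# stated in the paper): viability (%) = (OD_sample - OD_blank) /
# (OD_control - OD_blank) * 100, using the 450 nm readings.

def viability_percent(od_samples, od_controls, od_blank):
    """Mean relative viability of treated wells vs. untreated controls."""
    mean_sample = sum(od_samples) / len(od_samples)
    mean_control = sum(od_controls) / len(od_controls)
    return (mean_sample - od_blank) / (mean_control - od_blank) * 100

# Hypothetical OD450 readings for illustration only.
control_wells = [0.82, 0.79, 0.85]
fe_wells = [0.95, 0.98, 0.92]   # wells from an FE-treated plate
blank = 0.10                    # medium-only background well

print(round(viability_percent(fe_wells, control_wells, blank), 1))
```

A value above 100% would correspond to the paper's observation that FE increased viability relative to the control group.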
Cell Ultra-Microstructure Analyses
The ultra-microstructure of hepatocytes was observed and analyzed at Wuhan Servicebio Technology Co., Ltd. (Wuhan, China). The hepatocyte pellet was collected by centrifugation, resuspended in TEM fixative (Servicebio, Wuhan, China) for 120 min, rinsed with 0.1 M phosphate buffer (PB, pH 7.4) for 15 min, then postfixed with 1% osmic acid in 0.1 M PB for 120 min. After centrifugation, the fixed hepatocyte pellet was dehydrated with serially diluted ethanol, infiltrated with acetone and EMBed 812 (1:1, SPI, West Chester, PA), embedded in EMBed 812, and then sectioned at 60 to 80 nm with an ultramicrotome (Leica Microsystems, Wetzlar, Germany). The sections were stained with 2% uranyl acetate saturated alcohol solution and 2.6% lead citrate. The specimens were observed under a transmission electron microscope (Hitachi, Japan).
Concentrations of Oxidative Damage Factors of Hepatocytes
Oil red O staining kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) were used for light microscopic examination and quantitative analysis according to the manufacturer's instructions. The concentrations of TG, malondialdehyde (MDA), glutathione peroxidase (GSH-Px), superoxide dismutase (SOD), and adenosine triphosphate (ATP) in the hepatocytes were measured with the corresponding assay kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) following the manufacturer's protocols.
Total DNA of hepatocytes was extracted following the genomic DNA preparation protocol (Beyotime, Shanghai, China). The primers for ND1, a mitochondrion-specific gene (F: 5'-GAGCCAATCCGACCATCTAC, R: 5'-GGGACTCAAATAGTCAGGGC), and for 18S rRNA of the nuclear genome (F: 5'-GTCTAAGTACACACGGGCGG, R: 5'-CCTTGGATGTGGTAGCCGTT) were designed using Primer 5.0 and synthesized by Invitrogen Biotechnology (Shanghai, China). The copy numbers of ND1 and 18S rRNA were analyzed by real-time PCR (BIO-RAD, California, USA), and the relative mitochondrial DNA (mtDNA) copy number was calculated from the ratio of ND1 to 18S rRNA (Abu-Amero et al., 2014).
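The paper states only that relative mtDNA copy number was derived from the ND1/18S ratio. One widely used way to compute such a ratio from qPCR threshold-cycle (Ct) values is the 2^-ΔΔCt method, sketched here with hypothetical Ct values (the exact formula used in the paper is not given):

```python
# Sketch of relative mtDNA copy number from qPCR Ct values, assuming the
# widely used 2^-ΔΔCt approach; the Ct values below are illustrative.

def relative_mtdna(ct_nd1, ct_18s, ct_nd1_ctrl, ct_18s_ctrl):
    """mtDNA copy number of a sample relative to the control group."""
    d_ct_sample = ct_nd1 - ct_18s            # mitochondrial vs. nuclear target
    d_ct_control = ct_nd1_ctrl - ct_18s_ctrl
    return 2.0 ** -(d_ct_sample - d_ct_control)   # 2^-ΔΔCt

# Hypothetical Ct values: the treated sample amplifies ND1 one cycle
# earlier than the control at equal 18S, i.e. roughly twice the mtDNA
# per nuclear genome.
print(relative_mtdna(ct_nd1=17.0, ct_18s=12.0,
                     ct_nd1_ctrl=18.0, ct_18s_ctrl=12.0))
```

This normalization against a single-copy-equivalent nuclear target is what makes the ND1/18S ratio a per-cell measure rather than an absolute count.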
Cell Apoptosis by Flow Cytometry
Cell apoptosis was detected using the Annexin V-fluorescein isothiocyanate/propidium iodide (Annexin V-FITC/PI, BestBio, Shanghai, China) method (Zhang and Liang, 2014). Briefly, the hepatocytes were washed with cold PBS 3 times, digested with 0.25% trypsin-EDTA, centrifuged, then resuspended in 300 μL of Binding Buffer in a 1.5 mL EP tube. Annexin V-FITC (5 μL) was added to each tube and gently mixed, and the tube was kept in the dark at room temperature (20−25°C) for 15 min. After adding 5 μL of PI, the stained hepatocytes were detected through the FITC and PI channels of the flow cytometer (ACEA NovoCyte, Hangzhou, China) within 1 h. Flow cytometry data were analyzed with the NovoExpress software (Beijing Aiqinghai Co., Beijing, China). ROS activity was detected using the fluorescent probe 2′,7′-dichlorodihydrofluorescein diacetate (DCFH-DA, BestBio, Shanghai, China). DCFH-DA was diluted with serum-free medium to 10 μM. The hepatocytes were collected, counted, and suspended in the DCFH-DA solution at 10⁷ cells/mL, incubated for 20 min at 37°C, then washed with PBS. DCFH entering the cells is oxidized by ROS to fluorescent DCF, and the DCF fluorescence intensity, which correlates positively with ROS activity, was detected through the FITC channel of the flow cytometer (ACEA NovoCyte, Hangzhou, China). Flow cytometry data were analyzed with the NovoExpress software (Beijing Aiqinghai Co., Beijing, China).
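The apoptosis rate reported in the Results is derived from the Annexin V/PI quadrant counts; the paper delegates this step to the NovoExpress software. As a sketch of the underlying arithmetic (with hypothetical event counts), the rate is usually the Annexin V-positive fraction of all gated events:

```python
# Sketch of an apoptosis-rate calculation from Annexin V-FITC/PI quadrant
# counts. The convention assumed here (early = Annexin V+/PI-, late =
# Annexin V+/PI+) is standard but not spelled out in the paper; event
# counts are hypothetical.

def apoptosis_rate(q_early, q_late, total_events):
    """Percent apoptotic cells among all gated events."""
    return (q_early + q_late) / total_events * 100

# Hypothetical run: 800 early- and 400 late-apoptotic events out of
# 10,000 gated cells.
print(apoptosis_rate(q_early=800, q_late=400, total_events=10000))
```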
Enzyme-Linked Immunosorbent Assay (ELISA)
The levels of interleukin-1 (IL-1), IL-6, and tumor necrosis factor-alpha (TNF-α) were measured using the corresponding chicken-specific ELISA kits (Xiamen Huijia Biotechnology Co., Ltd, Fujian, China). Absorbance values were read with a microplate reader (ThermoFisher, Waltham, USA).
Statistical Analyses
The data were analyzed using SPSS 22.0 (IBM Co., New York, USA). Data normality was checked. One-way ANOVA was used to analyze differences among the groups, and post hoc multiple comparisons were performed with the LSD test. Values are expressed as mean ± SEM, and a P-value < 0.05 was considered statistically significant.
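The analysis pipeline above (one-way ANOVA followed by Fisher's LSD) can be sketched from the standard formulas; the paper itself used SPSS 22.0, and the two groups of values below are hypothetical:

```python
# Minimal reimplementation of one-way ANOVA plus Fisher's LSD pairwise t
# statistic (the paper used SPSS; data here are hypothetical TG values).
import math

def one_way_anova(groups):
    """Return (F statistic, df_between, df_within, MS_within)."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n_total - k
    ms_w = ss_within / df_w
    return (ss_between / df_b) / ms_w, df_b, df_w, ms_w

def lsd_t(group_a, group_b, ms_within):
    """t statistic for Fisher's LSD comparison of two group means,
    using the pooled within-group mean square from the ANOVA."""
    na, nb = len(group_a), len(group_b)
    diff = sum(group_a) / na - sum(group_b) / nb
    return diff / math.sqrt(ms_within * (1 / na + 1 / nb))

cont = [1.0, 1.1, 0.9, 1.0]   # hypothetical control-group values
fe = [2.0, 2.2, 1.9, 2.1]     # hypothetical FE-group values
f_stat, df_b, df_w, ms_w = one_way_anova([cont, fe])
print(round(f_stat, 2), round(lsd_t(fe, cont, ms_w), 2))
```

The key design point of LSD is that every pairwise comparison borrows the pooled error variance (MS_within) from the overall ANOVA rather than recomputing it per pair.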
Effects of ucOCN on FE-Induced Chicken Embryonic Hepatocytes Viability and Fat Accumulation
Cells with reddish granules in the cytoplasm were identified as hepatocytes and were isolated and purified for the following analyses. CCK-8 analysis showed that hepatocytes in the FE group had higher viability (P < 0.05; Figure 1A1) than the CONT cells, and each ucOCN concentration further improved cell viability compared to the FE group (P < 0.05). Oil red O staining and cellular TG content analysis showed that, compared to the control cells, FE significantly increased fat accumulation in chicken embryonic hepatocytes (P < 0.05; Figure 1A2) and the TG concentration (P < 0.05; Figure 1A3). ucOCN, regardless of concentration, decreased the FE-induced fat and TG accumulation, with values trending lowest in the FE-LOCN group.
Transmission electron microscopic observation indicated that, compared with CONT (Figure 1B1, BⅠ), hepatocytes of the FE group (Figure 1B2, BⅡ) showed more damage: cellular edema, higher intracellular electron density, smoothed cell membranes with disappearance of pseudopodia, marked mitochondrial swelling with partial membrane lysis and cracked or reduced cristae, increased dilated or degranulated rough endoplasmic reticulum, and Golgi hypertrophy. Oil red O staining also showed that FE caused hepatocyte ultrastructure damage, resulting in more LDs in the hepatocyte cytoplasm than in control cells (P < 0.05; Figure 1B6); most of these fused to form large LDs, and the % total LD area/cytosol area increased significantly (P < 0.05; Figure 1B8). ucOCN reduced the LD quantity to some extent without obvious fusion, and 3 ng/mL ucOCN significantly reduced the % total LD area/cytosol area (P < 0.05).
Effects of FE and ucOCN on Mitochondrial Function and Oxidative Stress in Chicken Embryonic Hepatocytes
Compared with the CONT cells, FE-treated hepatocytes had an increased ROS concentration (P < 0.05; Figure 2A) and a decreased ATP concentration (P < 0.05; Figure 2B). ucOCN administration alleviated these effects; in particular, FE-LOCN restrained the ROS concentration (P < 0.05) and enhanced the ATP concentration (P < 0.05) compared to the FE group. Compared to the control group, FE (P < 0.05; Figure 2C) and 3 or 9 ng/mL ucOCN (P < 0.05) elevated the mtDNA level in hepatocytes. There was no treatment effect on hepatocyte GSH-Px, SOD, and MDA concentrations or Nrf2 protein levels (Figure 2D−G).
Effects of ucOCN on Inflammation and Apoptosis in Chicken Embryonic Hepatocytes
Compared with the control group, all inflammatory factors, including IL-1 (P < 0.05; Figure 3A1), IL-6 (P < 0.05; Figure 3A2), and TNF-α (P < 0.05; Figure 3A3), were significantly increased in the FE group cells. ucOCN at both 1 and 3 ng/mL inhibited the FE effects on all measured inflammatory factors (P < 0.05, respectively), while ucOCN at 9 ng/mL affected only the FE effect on TNF-α (P < 0.05). Flow cytometry analysis showed that FE increased intracellular complexity, reflected by a higher SSC-H level (Figure 3B2), and the apoptosis rate (P < 0.05; Figure 3B6), while 1 ng/mL ucOCN relieved these effects (P < 0.05; Figure 3B2, B6). In addition, compared with the FE group, 1 and 9 ng/mL ucOCN significantly suppressed the expression of p-JNK (P < 0.05; Figure 3C1-2).
ucOCN Reduces the Fat Accumulation and Inflammatory Reaction by Inhibiting ROS-JNK Signal Pathway
The performance of NAC (a ROS scavenger) at different concentrations was assessed by flow cytometry. The results showed that 2 mM NAC reduced the ROS concentration to control levels (Figure 4A). Further detection showed that, compared with the FE hepatocytes, 2 mM NAC significantly decreased TNF-α (P < 0.05; Figure 4B1-2) and p-JNK protein levels (P < 0.05; Figure 4B1, B4) and the p-JNK/JNK ratio (P = 0.05; Figure 4B5), similar to the effects of ucOCN.
Although 2.5, 5, and 10 μM SP600125 (a JNK inhibitor) did not influence hepatocyte viability (Figure 5A1), 5 μM SP600125 significantly inhibited the FE-induced increases of lipid and TG concentrations in hepatocytes (P < 0.05; Figure 5A2-3). Based on these results, 5 μM SP600125 was used for the following analyses. Western blotting analysis showed that, compared with the FE group, both 1 ng/mL ucOCN and 5 μM SP600125 significantly reduced p-JNK levels and the p-JNK/JNK ratio (Figure 5B1-4). FE increased the ROS concentration (P < 0.05; Figure 5C1-5) and apoptosis rate (P < 0.05; Figure 5D1-5) in the hepatocytes, which was significantly inhibited by ucOCN but not SP600125. Similar to the function of ucOCN, SP600125 significantly suppressed the FE-induced increases of IL-1, IL-6, and TNF-α concentrations (P < 0.05; Figure 5E1-3) in hepatocytes.
DISCUSSION
Primary chicken hepatocytes can be isolated from freshly perfused chicken liver, but the mixed cells from chicken liver also contain hepatic nonparenchymal cells, including macrophages, stellate cells, and biliary endothelial cells (Mackei et al., 2020). It is easier to obtain purified hepatocytes from chicken embryonic livers (Li et al., 2018; Wu et al., 2021a). The purified hepatocytes can be identified by PAS staining, which specifically marks carbohydrates and stored glycogen (Hui et al., 2017; Tao et al., 2021). In the current study, FE resulted in a high level of TG and a mass of LDs in hepatocytes, which suggests that FE successfully induced hepatocytic steatosis. In addition, FE significantly increased hepatocyte viability, which may indicate that FE provides an additional nutritional supply that induces hepatocyte proliferation (Urso and Zhou, 2021). ucOCN further promoted hepatocyte viability, which means ucOCN is advantageous for hepatocyte proliferation. In mice, ucOCN alleviates fat accumulation in NAFLD livers and decreases the TG concentration in hepatocytes (Xia et al., 2021; Zhang et al., 2021). Similarly, in this study ucOCN decreased the TG concentration and LD number in hepatocytes, suggesting that ucOCN inhibits hepatocytic steatosis. Moreover, Zhang et al. (2020) reported that ucOCN attenuated mouse hepatocytic steatosis in a dose-dependent (0, 3, and 30 ng/mL) manner. However, in our study, 1 ng/mL was the optimal concentration of ucOCN for improving hepatocyte viability and reducing fat accumulation, which may suggest that chicken embryonic hepatocytes are more sensitive to ucOCN than those of mammals.

(Figure 2 caption, continued: E, The changes of superoxide dismutase concentrations. F, The changes of malondialdehyde concentrations. G1-2, The change of Nrf2 relative protein expression. The data represent mean ± SEM (n = 6 per group). Differences were determined by one-way ANOVA followed by LSD test. *P < 0.05 compared with the control group, #P < 0.05 compared with the FE group. Abbreviations: OCN, osteocalcin; FE, fat emulsion; ROS, reactive oxygen species; ATP, adenosine triphosphate; mtDNA, mitochondrial DNA; GSH-Px, glutathione peroxidase; SOD, superoxide dismutase; MDA, malondialdehyde; Nrf2, nuclear factor like-2.)
Liver biopsies of NASH patients and NAFLD mice show defective mitochondrial function in hepatocytes (Pirola et al., 2013; Einer et al., 2018). Accumulated fatty acids induce harmful ROS with lipotoxicity, which occurs as a process of altered mitochondrial oxidative metabolism in rat hepatocytes (Egnatchik et al., 2014). Mitochondrial dysfunction leads to low ATP and high mtDNA (Chistiakov et al., 2018; Singh et al., 2019; Wang et al., 2019). For example, ATP was depleted in the liver of fructose-induced NAFLD mice (Softic et al., 2016), and mtDNA is released into the cytosol in palmitic acid-treated primary mouse hepatocytes. In the current study, the transmission electron microscopic images showed that FE caused marked mitochondrial swelling with membrane lysis and cracked or reduced cristae, without effects on mitochondria number, indicating that FE damages mitochondrial structure. FE increased the ROS concentration, decreased the ATP concentration, and raised the copy number of mtDNA, implying that FE-induced excessive fat accumulation produces lipotoxicity with damaged mitochondrial function. The loss of mitochondrial integrity may cause dysfunction of the cell membrane sodium-potassium pump, which further leads to excessive accumulation of sodium ions and water within the cells, inducing hepatocyte edema (Hasan et al., 2018; Garner et al., 2021). Yilmaz et al. (2011) reported that patients with biopsy-proven NAFLD had a significant reduction in serum ucOCN concentrations compared with healthy people, which was significantly associated with the extent of hepatocyte ballooning. The current study showed that 1 ng/mL ucOCN significantly reduced the ROS concentration and mtDNA copy number, increased the ATP concentration, and alleviated mitochondrial swelling and hepatocyte edema, which may indicate that ucOCN improves mitochondrial dysfunction.
Mitochondrial dysfunction is closely related to oxidative stress (Egnatchik et al., 2014). However, FE and ucOCN had no significant effects on MDA concentration, Nrf2 protein level, or antioxidant enzyme (SOD and GSH-Px) activity in the current study. It is possible that mitochondrial damage can lead to hepatocyte edema without significant effects on oxidative stress. Therefore, ucOCN may play a key role in maintaining mitochondrial structural and functional homeostasis.
In an in vitro study, FE and ucOCN had opposing functions in regulating p-JNK protein expression in rat hepatocytes (Egnatchik et al., 2014). FE enhances while ucOCN alleviates p-JNK protein expression, which may indicate that FE activates the JNK signaling pathway and ucOCN inhibits it.

Figure 3. Effects of ucOCN and FE on inflammatory reaction, apoptosis and p-JNK in chicken embryonic hepatocytes. A1-3, The change of IL-1, IL-6 and TNF-α concentrations. B1-6, Apoptotic ratio analyzed by flow cytometry. C1-2, The change of p-JNK relative protein expression. The data represent mean ± SEM (n = 6 per group). Differences were determined by one-way ANOVA followed by LSD test. *P < 0.05 compared with control group, #P < 0.05 compared with FE group. Abbreviations: OCN, osteocalcin; FE, fat emulsion; IL-1, interleukin-1; IL-6, interleukin-6; TNF-α, tumor necrosis factor-alpha; p-JNK, phosphorylated c-Jun N-terminal kinase.

Excessive fatty acids
produce harmful ROS with lipotoxicity in rat hepatocytes (Egnatchik et al., 2014). FE and ucOCN also showed opposing functions in ROS production: FE increased while ucOCN reduced the ROS concentration in chicken embryonic hepatocytes, suggesting that ucOCN can suppress ROS damage in steatotic hepatocytes. The current study showed that NAC, a ROS scavenger, functioned similarly to ucOCN in that it not only cleared ROS but also decreased p-JNK protein expression, further supporting previous findings that ROS directly stimulates the JNK pathway (Suzuki et al., 2015). SP600125, a JNK inhibitor, had no effect on ROS production in FE-induced hepatocytes, which may suggest that JNK regulates ROS through negative feedback. Therefore, ucOCN can regulate hepatocytic functions via inhibiting the ROS-JNK signal pathway.
Inhibition of JNK reduces steatosis and steatohepatitis of the liver in HFD-fed rats (Yan et al., 2017) and decreases fat accumulation in human HepG2 hepatoma cells via ROS/JNK/AP-1 signaling (Xie et al., 2021). Similar to the outcomes of these studies, both SP600125 and ucOCN alleviated the FE-induced high TG concentration in the current study, indicating that ucOCN suppresses fat accumulation through ROS-JNK signaling in chicken embryonic hepatocytes.
Our previous study showed that the HFD promoted FLHS development, characterized by increased liver hemorrhage score and fibrosis in aged laying hens (Wu et al., 2021b). A current parallel study further showed that the HFD enhanced liver TNF-α concentrations, indicating that HFD-induced FLHS involves severe inflammatory reactions (unpublished data). Meanwhile, these studies showed that ucOCN reduced the gene and protein expression of TNF-α in the liver, suggesting that ucOCN inhibits the occurrence of steatohepatitis in hens. These results are consistent with mammalian NAFLD. In obese mice, ucOCN reduces the expression of inflammation- and inflammasome-related genes.

Figure 4. Effects of NAC (a ROS scavenger) on TNF-α and p-JNK/JNK in chicken embryonic hepatocytes. A, The effects on ROS after NAC (0.5, 1, and 2 mM) treatment. B1-5, The change of TNF-α, JNK, and p-JNK relative protein expression. The data represent mean ± SEM (n = 6 per group). Differences were determined by one-way ANOVA followed by LSD test. *P < 0.05 compared with the control group, #P < 0.05 compared with the FE group. Abbreviations: OCN, osteocalcin; FE, fat emulsion; NAC, N-Acetyl-L-cysteine, a ROS scavenger; TNF-α, tumor necrosis factor-alpha; JNK, c-Jun N-terminal kinase.
Several studies have shown that activation of the ROS-JNK signaling pathway promotes apoptosis (Li et al., 2020c; Wang et al., 2020) and that JNK is an activator of apoptosis. FE increased the hepatocyte apoptosis rate, which can be attributed to the activation of ROS-JNK (Chang et al., 2006; Wang et al., 2014). In the current study, the low dose (1 ng/mL) but not the high dose (9 ng/mL) of ucOCN inhibited apoptosis in hepatocytes, indicating that 1 ng/mL ucOCN is the suitable concentration for regulating hepatocytes. This result is also consistent with 1 ng/mL ucOCN improving hepatocyte viability and reducing fat accumulation. ucOCN, but not SP600125, significantly relieved the effect of FE on hepatocytes, suggesting that ucOCN suppresses hepatocyte apoptosis via the ROS-JNK signal pathway as well as other regulatory pathways such as lipotoxicity (Kusminski et al., 2009).

Figure 5. ucOCN reduces the inflammatory reaction and fat accumulation by inhibiting ROS-JNK signal pathway. A1-3, The cell viability and fat accumulation detected by oil red O and TG. B1-4, The change of JNK and p-JNK relative protein expression. C1-5, The ROS concentrations detected by flow cytometry. D1-5, Apoptotic ratio analyzed by flow cytometry. E1-3, The change of IL-1, IL-6, and TNF-α concentrations. The data represent mean ± SEM (n = 6 per group). Differences were determined by one-way ANOVA followed by LSD test. *P < 0.05 compared with the control group, #P < 0.05 compared with the FE group. Abbreviations: OCN, osteocalcin; FE, fat emulsion; SP600125, JNK inhibitor; TG, triglyceride; JNK, c-Jun N-terminal kinase; ROS, reactive oxygen species; IL-1, interleukin-1; IL-6, interleukin-6; TNF-α, tumor necrosis factor-alpha.

The studies examining the
underlying mechanisms of ucOCN effects on preventing hepatocytic damage are ongoing (Figure 6).
CONCLUSION
The results of this study suggest that fat emulsion promotes lipid droplet accumulation, mitochondrial damage, cellular edema, inflammatory reaction, and apoptosis in primary chicken embryonic hepatocytes. Osteocalcin alleviates hepatocyte damage, reducing mitochondrial damage, ROS concentration, fat accumulation, inflammatory cytokine production, and apoptosis via inhibiting the ROS-JNK signaling pathway.
|
2022-06-26T15:13:54.416Z
|
2022-06-01T00:00:00.000
|
{
"year": 2022,
"sha1": "08ecb7c936902955db3a92de6a91ce7123985e87",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.psj.2022.102026",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8eeda3d12225e9ec746e4aa7dca1aaa3c636e8a1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
234743664
|
pes2o/s2orc
|
v3-fos-license
|
Targeting Fibrosis: The Bridge That Connects Pancreatitis and Pancreatic Cancer
Pancreatic fibrosis is caused by the excessive deposition of extracellular matrix (ECM) and collagen fibers during repeated cycles of necrosis and repair of damaged pancreatic tissue. Pancreatic fibrosis is frequently present in chronic pancreatitis (CP) and pancreatic cancer (PC) and is, clinically, a pathological feature of both diseases. However, many new studies have found that pancreatic fibrosis is involved in the transformation from pancreatitis to pancreatic cancer. Thus, the role of fibrosis in the crosstalk between pancreatitis and pancreatic cancer is critical yet still elusive, and it deserves more attention. Here, we review the development of pancreatic fibrosis in inflammation and cancer, and we discuss therapeutic strategies for alleviating pancreatic fibrosis. We further propose that the cellular stress response might be a key driver that links fibrosis to cancer initiation and progression. Therefore, targeting stress proteins, such as nuclear protein 1 (NUPR1), could be an interesting strategy for pancreatic fibrosis and PC treatment.
Introduction
Pancreatitis is triggered by a variety of factors, including the activation of pancreatic enzymes, resulting in pancreatic tissue self-digestion accompanied by edema, bleeding, and inflammatory necrosis [1][2][3]. Patients with pancreatitis suffer fever, nausea, vomiting, abdominal distension, abdominal pain, and other symptoms [4]. Pancreatitis can be divided into two types: acute pancreatitis (AP) and CP. AP is an acute pancreatic disease marked by severe pain and occurs most often in middle-aged adults [5][6][7]. In contrast, most CP patients develop mild disease, and the symptoms are usually not noticeable [8].
In the course of CP, patients often suffer pain and exocrine and endocrine insufficiency [9]. Most patients with AP recover completely after receiving the right treatment. Adjusting one's diet and stopping smoking and alcohol ingestion can completely restore pancreatic homeostasis [10]. However, without the correct treatment, AP may deteriorate into CP, and the pathological changes caused by CP are often irreversible [11].
PC is a kind of gastrointestinal tumor with high malignancy, which is difficult to diagnose and treat [12]. Pancreatic ductal adenocarcinoma (PDAC) represents 90% of PC, with a poor prognosis (the 5-year survival rate of PDAC is less than 10%) [13]. Most PDAC patients have metastases after diagnosis, which cannot be treated by surgery [14]. In addition, the etiology of PDAC is complex, and both genetic and environmental factors are involved in the disease progression. Specific mutations in genes, such as tumor protein P53 (TP53), V-Ki-ras2 Kirsten rat sarcoma viral oncogene homolog (KRAS), cyclin-dependent kinase inhibitor 2A (CDKN2A), or SMAD family member 4 (SMAD4) increase the risk of developing PDAC [15]. Other significant risk factors for PDAC development that have been associated with this disease are smoking, diabetes, alcoholism, and obesity [16].
Pancreatitis and PC are two diseases of the pancreas of differing severity, with similar symptoms. It is frequently necessary to exclude PC in the diagnosis of pancreatitis. From an imaging perspective, pancreatitis and PC are easily confused on magnetic resonance imaging (MRI) or computerized tomography (CT), as only specific imaging features allow one to discriminate between these diseases in the differential diagnosis [17]. Therefore, screening for PC should combine biochemical tests for tumor markers, such as carbohydrate antigen 199 (CA199), cancer antigen 125 (CA125), carcinoembryonic antigen (CEA), and carbohydrate antigen 50 (CA50), with pathological diagnosis [18]. Most PC patients have a history of CP, indicating a strong correlation between PC and CP [19]. Although PC has a strong genetic component, pancreatitis and PC share a variety of pathogenic factors, such as long-term smoking, alcohol abuse, and high-protein, high-fat diets [20,21]. In recent years, different works have demonstrated that the processes of wound healing and tumor fibrosis have strong similarities. Interestingly, pancreatic fibrosis is also one of the main pathological features of CP, suggesting a strong relationship between PC and CP [22]. However, whether CP increases the risk of PC and promotes the occurrence and development of PC through tissue fibrosis remains an open question. Thus, in this work, we discuss the role of pancreatic fibrosis in pancreatitis and PC development, as well as recent findings targeting this process (Figure 1).
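As a toy illustration of combining the biochemical markers named above into a screening decision, the sketch below flags a case for further pathological workup when several markers exceed a cut-off. The cut-off values, the two-marker rule, and the function names are hypothetical, for illustration only; they are not clinical thresholds.

```python
# Illustrative multi-marker screening rule. The marker names (CA199,
# CA125, CEA, CA50) come from the text above; the cut-off values here
# are HYPOTHETICAL placeholders, not validated clinical limits.
CUTOFFS = {"CA199": 37.0, "CA125": 35.0, "CEA": 5.0, "CA50": 25.0}

def elevated_markers(panel: dict) -> list:
    """Return the markers in `panel` that exceed their illustrative cut-off."""
    return [m for m, v in panel.items() if m in CUTOFFS and v > CUTOFFS[m]]

def flag_for_workup(panel: dict, min_elevated: int = 2) -> bool:
    """Toy decision rule: flag a case for pathological workup when at
    least `min_elevated` markers are above threshold."""
    return len(elevated_markers(panel)) >= min_elevated

patient = {"CA199": 120.0, "CA125": 20.0, "CEA": 7.5, "CA50": 10.0}
print(elevated_markers(patient))  # ['CA199', 'CEA']
print(flag_for_workup(patient))   # True
```

In practice such a rule would only triage cases for imaging and biopsy; the text stresses that biochemical markers must be combined with pathological diagnosis.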
The Role of Fibrosis in Pancreatitis
CP is a pancreatic fibrotic syndrome associated with genetic, environmental, and/or other risk factors [23]. Clinically, CP patients have recurrent abdominal pain, nausea, dyspepsia symptoms, and different complications including fat-soluble vitamin deficiency, exocrine dysfunction, metabolic bone disease, and diabetes [24]. The pathological features of CP include pancreatic fibrosis, acinar injury, pancreatic calcification, and exocrine and endocrine dysfunction [25]. CP is a persistent pathological response to substantial injury or stress. Irreversible fibrosis is one of the most typical pathological features of CP and deeply affects the physiological function of the pancreas [26]. Thus far, there is no clinical treatment that can reverse the inflammatory damage associated with CP. Therefore, CP treatment is focused on relieving the symptoms and screening and treating disease-related complications [27].
Pancreatic Fibrosis Promotes Inflammation
Numerous studies have shown that pancreatic fibrosis is not only a feature of pancreatitis but also plays an active role in CP development [28][29][30]. For instance, alcohol consumption triggers pathological changes in the pancreas, leading to pancreatic fibrosis and causing alcoholic CP [31]. Interestingly, increased trypsinogen content in the pancreas is a pivotal event in the initiation of alcoholic CP, although the mechanism remains elusive [32,33]. Serine protease inhibitor Kazal type 1 (SPINK1) acts as a trypsin inhibitor, and its mutation dramatically increases the risk of alcoholic CP [34]. Recent evidence shows that SPINK1-associated pancreatitis and alcohol-induced CP are characterized by progressive parenchymal fibrosis [35,36]. In addition, a variety of immune cell types, such as monocytes and macrophages, have been detected in the dense fibrotic areas of pancreatic cancer. Indeed, as part of the innate immune response, these immune cells can be recruited by inflammatory signals [37][38][39][40]. Moreover, previous studies have shown that monocytes recruited into damaged tissue subsequently differentiate into macrophages, stimulate the synthesis of collagen and fibronectin (FN), and participate in the process of pancreatic fibrosis [41]. Furthermore, macrophages interact with neighboring cells in a cytokine-dependent manner to accelerate pancreatic fibrosis during pancreatitis [42,43]. Recent studies have shown that oral administration of camostat mesilate (CM), a drug for CP treatment, reduces pancreatic fibrosis and subsequent inflammation by inhibiting the activity of monocytes [44]. In sum, macrophages, the major infiltrating immune cells in the tumor microenvironment (TME), play an important role in pancreatic fibrosis and inflammation.
Pancreatic Stellate Cells (PSCs) Are Key Mediators of Fibrosis in Pancreatitis
Based on the close relationship between pancreatitis and fibrosis, some studies have shown that pro-inflammatory cytokines induce PSC activation [45]. Activated PSCs secrete further inflammatory factors, such as monocyte chemotactic protein 1 (MCP-1), which regulates fibrosis through its cognate CC chemokine receptor 2 (CCR2) [46]. PSC activation is therefore considered the core event in the development of pancreatic fibrosis, suggesting that targeting PSCs is a potential strategy for CP therapy. Hydrogen peroxide-inducible clone-5 (Hic-5), a member of the paxillin family that acts as a molecular scaffold, is associated with poor prognosis in PC patients [47]. In caerulein-induced CP, Hic-5 expression was strongly up-regulated in activated PSCs within the fibrotic tissue [48]. Accordingly, decreasing Hic-5 expression significantly attenuated pancreatic fibrosis and PSC activation in experimental CP mice, making Hic-5 an important therapeutic target to reduce pancreatic fibrosis and delay CP [48]. Vitamin deficiency is common in patients with pancreatitis and PC and may result from pancreatic insufficiency [49,50]. Interestingly, dietary interventions, such as long-term consumption of vitamin-rich vegetables and fruits, also slow the progression of fibrosis-driven CP [27]. Vitamins C and E are classical antioxidants, and both have shown potent anti-fibrotic and anti-inflammatory action by preventing oxidative damage in several organs, including the pancreas [51][52][53][54]. In some studies, vitamins A, D, and K also exerted a protective role in the inflammatory response, probably through their antioxidant properties, underscoring the importance of oxidative stress in inflammation [55][56][57]. Fibrosis is usually considered irreversible in CP, but some studies have shown that it can be reversed at an early stage [20,26,58,59].
Regardless, fibrosis prevention is an effective strategy to reduce CP, whether by drug treatment or dietary adjustment [60].
The Role of Fibrosis in PC
Currently, there is increasing evidence that PC behaves as a chronic inflammatory disease, with fibrosis being one of its main pathological characteristics [61]. It is well known that there is a close relationship between pancreatic fibrosis and PC [62]. Many pro-fibrotic cells and cancer-associated fibroblasts (CAFs) are abundantly present in PDAC [63]. Most studies have shown that the level of pancreatic fibrosis is closely related to patient survival after chemotherapy [64,65]. Thus, a quantitative and reproducible method for evaluating fibrosis might be more accurate than either pathologic regression grade or the Response Evaluation Criteria in Solid Tumors (RECIST) score [66].
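One simple example of such a quantitative, reproducible fibrosis readout is the fraction of tissue labelled fibrotic in a segmentation mask (akin to a collagen proportionate area). The sketch below assumes a binary mask given as nested lists; the mask itself and the function name are made up for illustration, and a real pipeline would derive the mask from stained-slide segmentation.

```python
# Toy "quantitative and reproducible" fibrosis readout: the fraction of
# tissue pixels labelled fibrotic in a segmentation mask. The mask below
# is a made-up example; in practice it would come from a stained-slide
# segmentation pipeline.
def fibrosis_fraction(mask):
    """mask: 2-D list of 0 (non-fibrotic tissue) / 1 (fibrotic) labels.
    Returns the fibrotic fraction in [0, 1]."""
    total = sum(len(row) for row in mask)
    fibrotic = sum(sum(row) for row in mask)
    return fibrotic / total if total else 0.0

mask = [
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 0],
]
print(round(fibrosis_fraction(mask), 3))  # 0.333 (4 fibrotic of 12 pixels)
```

Because the score is a plain ratio over a fixed mask, two observers running the same segmentation reproduce the same number, which is the property the text contrasts with subjective regression grading.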
Pancreatic Fibrosis Promotes PC Progression
Pancreatic fibrosis is a defining hallmark of PDAC occurrence and prognosis. Interestingly, a set of genes was recently reported to be involved in the development of pancreatic fibrosis and PC. For example, C-X-C motif chemokine receptor 2 (CXCR2), functionally expressed in leukocytes such as neutrophils, natural killer (NK) cells, monocytes, macrophages, and T cells, regulates the migration of neutrophils to inflammatory sites by binding Interleukin-8 (IL-8) [67,68]. CXCR2 knockout mice showed higher levels of pancreatic fibrosis and increased malignancy of PDAC in vivo, indicating that CXCR2 plays an important role in the transition from pancreatic fibrosis to PC [69]. Besides chemokine receptors, some metabolic enzymes are also involved in regulating pancreatic fibrosis. Long-chain acyl coenzyme A synthase 3 (ACSL3) is a lipid-metabolizing enzyme that is up-regulated in PC and associated with increased fibrosis. Interestingly, Sebastiano and colleagues demonstrated that ACSL3 knockout prevents pancreatic fibrosis and delays PDAC development in mice [70]. A disintegrin and metalloproteinase domain-containing protein 10 (ADAM10) also correlates with the occurrence and development of PC [71]. Both genetic and pharmacological targeting of ADAM10 reduced radiotherapy-induced pancreatic fibrosis and tissue tension, decreased the migration and invasion of tumor cells, increased tumor sensitivity to radiation, and ultimately prolonged the survival of mice [72]. Furthermore, a recent study confirmed that pancreatic fibrosis reduces the cytotoxicity of immune cells against pancreatic tumor cells, thus promoting PDAC progression [73]. Altogether, pancreatic fibrosis is not only a marker of the formation, development, and prognosis of PC but also an active participant in the disease.
CAFs Contributes to Drug Resistance
Currently, treatment of PC is a major challenge; patients face a poor prognosis, and drug resistance is a central problem in PC therapy [74]. In PC, cancer cells make up only a small proportion of the tumor tissue; an extensive proliferative matrix surrounding the cancer cells represents up to 90% of the tumor mass [75]. This proliferative matrix includes the ECM, CAFs, endothelial cells, and infiltrating immune cells [76]. This abnormally rich matrix acts as a tight blockade that prevents chemotherapeutic drugs from penetrating the tumor and exerting their anti-cancer effect, one of the important factors that endows cancer cells with chemotherapeutic resistance. Among these components, CAFs are the most critical players in TME regulation.
Therapeutic Targeting of the Crosstalk between CAFs and PC
FN, assembled by CAFs, is an ECM integrin-binding protein. FN promotes fiber formation, provides a track for the migration of cancer cells, and mediates their directional migration [77,78]. Moreover, many signaling molecules produced by CAFs directly act on nearby cancer cells, stimulating proliferation, invasion, and chemoresistance, which promotes PC malignancy [79]. Depleting the matrix in PDAC blocks some of these signaling pathways and thus significantly improves the effect of chemotherapy [80]. In vivo studies have shown that overexpression of the Hedgehog receptor Smoothened (SMO) in CAFs is an important mechanism of Hedgehog (Hh) signal transduction in pancreatic stromal cells, and the Hh signaling pathway interacts closely with the tumor matrix [81]. N-myc downstream-regulated gene 1 (NDRG1) is considered a potential anticancer gene, and its expression correlates with tumor differentiation [82]. Recent studies showed that targeting NDRG1 blocks the crosstalk between PC cells and the matrix [83]. The TGF-β/Smad4 signaling axis plays an important role in regulating the TME and mediating tumor-stroma crosstalk [84]. The Met/HGF pathway not only involves the interaction between cancer cells and activated PSCs but also takes part in the crosstalk between tumor and matrix [85]. The NT-S100A8/TGF-β1 complex is also involved in the crosstalk between PDAC and stromal cells in some specific PDAC cell lines [86]. In addition, many studies have confirmed that microRNAs (MIRs) related to epigenetic regulation are key factors in the formation of the TME, because MIRs are involved in the transformation of normal fibroblasts (NFs) into CAFs, and MIRs released from CAFs affect various features of cancer cells such as migration, invasion, metastasis, and drug resistance [87,88].
Pasireotide (Som230) is a novel multireceptor-targeted somatostatin analog that acts through somatostatin receptor subtype 1 (sst1) to inhibit the secretory activity of CAFs, thus disrupting the interaction between CAFs and PDAC [89]. Insulin/IGF-1R signaling is also involved in the crosstalk between cancer cells and the matrix, and compounds targeting Insulin/IGF signaling for the treatment of PDAC have entered phase II clinical trials [90].
Targeting CAFs in Combination with Chemotherapy, a Field to Explore
Besides the heterogeneity of PDAC itself, the complex matrix crosstalk of tumor cells in the TME also endows cancer cells with resistance to anticancer drugs, which weakens current therapies targeting certain oncogenes [91]. Therefore, depleting the dense matrix or disrupting its crosstalk with tumor tissue overcomes the resistance of cancer cells to chemotherapeutic agents and enhances the anti-cancer effect.
For instance, IL-1β/IRAK4 is a feedforward signal of the tumor matrix with a very high expression level during cancer development, and disrupting the tumor-stroma IL-1β/IRAK4 feedforward circuitry improves chemotherapy response in PDAC [92]. Reducing perlecan in the matrix decreases the contact and communication between the matrix and cancer cells; depleting stromal perlecan in combination with chemotherapy drugs such as gemcitabine (GEM) or Abraxane prolongs the survival of PDAC mice [93]. Erdafitinib, a selective pan-fibroblast growth factor receptor (FGFR) inhibitor approved by the FDA, reduces the drug resistance of PC cells by targeting fibroblast growth factor receptors to prevent the crosstalk between CAFs and cancer cells [94]. In clinical studies, Vismodegib, an orally bioavailable small molecule, has been found to inhibit the Shh pathway and reduce the production of stromal cells, thereby enhancing the anti-cancer effect of GEM on PC [95]. The combination of minimally toxic anti-cancer drugs and anti-matrix drugs is gradually becoming a promising new cancer treatment [96].
CAFs Activation Suppresses Tumor Immune Response
Recent studies showed that the matrix in the tumor stroma also participates in the immune response [97]. For example, high expression of Caveolin-1 (CAV1), a membrane-associated scaffold protein, enhances the secretion of Interleukin-6 (IL-6) and IL-8 in CAFs and promotes PC invasion, while down-regulation of CAV1 slows the proliferation of PC cells [98]. Netrin-G1 (NETG1), a lipid-anchored protein, promotes CAFs to secrete glutamine, glutamate, and cytokines through the p38/Fra-1 and Akt/4E-BP1 pathways, thus supporting the survival of PDAC cells under low-nutrient conditions and reducing the antitumoral effect of NK cells on PDAC cells [99]. Hypoxia-induced fibrosis can inhibit the infiltration of T cells into the tumor, and continuous activation of hypoxia-inducible factor 1 alpha (HIF-1α) negatively regulates the signal transduction of T cell receptors [100]. In human pancreatic fibrosis, macrophages are closely linked to PSCs, which may promote the activation of CAFs during chronic pancreatitis [43]. Tumor-associated macrophages promote cancer fibrosis by regulating the ECM [101]. Monocytes can be recruited by CAFs via the IL-8/CXCR2 pathway and differentiate into macrophages that support tumor growth [102]. Therefore, CAFs not only directly contact PC cells and secrete metabolites but also participate in the immune regulation of the TME. Targeting the interaction between CAFs, the immune system, and cancer cells can therefore enhance anti-tumor activity. Recently, it has been reported that knockout of the adhesion molecule cadherin 11 (CDH11), which is mainly produced by CAFs, inhibits pancreatic tumor growth, increases the response to GEM, reverses the immunosuppression mediated by CAFs, and ultimately significantly prolongs the survival of mice [63].
CAFs Heterogeneity Is a Challenge in Cancer Therapy
Several subtypes of CAFs have been identified, including myofibroblastic CAFs (myCAFs), inflammatory CAFs (iCAFs), and antigen-presenting CAFs (apCAFs) [103]. MyCAFs, with high expression of actin alpha 2 (ACTA2), were first identified in PC. For a long time, myCAFs were considered to be the only CAF population, because α-SMA is widely expressed in CAFs [104,105]. MyCAFs play a major role in regulating the deposition and remodeling of the ECM, highlighting their importance in promoting pancreatic fibrosis and pancreatitis [106]. ICAFs are a subtype of CAFs with high expression of Ly6C. ICAFs are driven by tumor secretory factors, such as Interleukin-1 alpha (IL-1α) and Interleukin-1 beta (IL-1β), and gather at the edge of the tumor [107]. IL-1α signaling also drives autocrine signaling in iCAFs, which helps to maintain the inflammatory phenotype. ICAFs produce a variety of cytokines and chemokines (CCL2, CCL7, IL-6, and CXCL12) and may have a stronger tumor-promoting effect than myCAFs [108,109]. ICAFs stimulate the proliferation and angiogenesis of PC cells and promote PC development [110]. Furthermore, iCAFs inhibit the immune response by recruiting regulatory T cells (Tregs) and myeloid-derived suppressor cells (MDSCs) [111]. ApCAFs are CAFs with an antigen-presenting function, and the expression of major histocompatibility complex class II (MHC II) is their major feature [110]. ApCAFs present antigens to T cells and affect T cell immunity [110]. ApCAFs can induce CD4+ Treg differentiation through antigen-dependent T cell antigen receptor (TCR) ligation, reduce the anti-tumor immune response by changing the ratio of CD8+ T cells to Tregs, and prevent PC cells from being monitored by immune cells [110,112,113]. According to the latest research, all three of these CAF subtypes can transform into one another, which emphasizes the dynamic nature of the TME [114,115].
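To make the subtype definitions above concrete, the sketch below assigns a CAF to myCAF, iCAF, or apCAF by its highest-scoring marker set. The marker-subtype pairings (ACTA2 for myCAFs, Ly6C/IL-6 for iCAFs, MHC II for apCAFs) follow the text, but the expression values and the simple argmax rule are illustrative, not an established gating scheme.

```python
# Toy marker-based assignment of a CAF to one of the three subtypes
# described in the text. Marker-subtype pairings follow the review;
# the expression values and argmax rule are illustrative only.
SUBTYPE_MARKERS = {
    "myCAF": ["ACTA2"],
    "iCAF": ["Ly6C", "IL6"],
    "apCAF": ["MHC-II"],
}

def classify_caf(expr: dict) -> str:
    """Assign the subtype whose marker set has the highest mean
    (normalized) expression in `expr`; absent markers count as 0."""
    def score(subtype: str) -> float:
        markers = SUBTYPE_MARKERS[subtype]
        return sum(expr.get(m, 0.0) for m in markers) / len(markers)
    return max(SUBTYPE_MARKERS, key=score)

cell = {"ACTA2": 0.2, "Ly6C": 0.9, "IL6": 0.8, "MHC-II": 0.1}
print(classify_caf(cell))  # iCAF
```

Because the review notes that the three subtypes can interconvert, any such assignment should be read as a snapshot of a dynamic state rather than a fixed identity.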
The majority of studies suggest that CAFs promote the development of PDAC. Recently, however, some unexpected results showed that myCAF depletion may also promote PDAC development and metastasis [116]. This possibly depends on whether the CAFs are invasive (carcinogenic) or tumor-suppressive, because the two may play different or even opposite roles in PDAC development [117]. Considering these studies, it is necessary to distinguish the subtypes of CAFs, identify distinct markers, and explore the reasons for the high heterogeneity of CAFs.
Cellular Stress Response Led to Pancreatic Fibrosis
When healthy cells suffer constant damage, such as genotoxicity or protein or lipid damage, the cellular stress response confers an adaptation that prevents cell death and promotes the transformation of healthy cells into tumor cells. Previous studies have clearly shown that stress proteins play an important role in maintaining the homeostatic microenvironment in both healthy and tumor cells [118][119][120]. Moreover, oxidative stress is required for driving metabolic reprogramming and the re-establishment of antioxidant systems in cancer cells [121]. Therefore, it is necessary to address how to target pancreatic fibrosis by reducing reactive oxygen species (ROS) or targeting important stress proteins.
ROS Scavengers for Treating Pancreatic Fibrosis
A large number of studies support oxidative stress as a triggering factor for pancreatic fibrosis. Oxidative stress directly promotes the activation of quiescent PSCs and the formation of an extensive amount of ECM, subsequently promoting excessive fibrosis [122,123]. Meanwhile, oxidative stress also aggravates the damage to pancreatic cells in pancreatitis [124]. During the inflammatory phase, CAFs are recruited and activated under oxidative stress, which induces changes in their morphology and function; this activated phenotype is prevented by several antioxidants [125]. For example, ROS induced by H2O2 promote the activation of PSCs, while resveratrol prevents PSC activation by reducing ROS production [126]. Moreover, melatonin at pharmacological concentrations has shown a concentration-dependent decrease in cell viability in rat [127] and human [128] PSCs by modifying the redox state of the cells. Dimethyl fumarate (DMF) promotes the activation of nuclear factor erythroid 2-related factor 2 (NRF2) and the expression of downstream antioxidant genes, eliminating intracellular ROS, inhibiting the activation of PSCs, and reducing pancreatic fibrosis [129]. ROS-induced inflammation causes pancreatic cell death through the NRF2/NF-κB and SAPK/JNK pathways. The antioxidant N-acetyl cysteine (NAC) rescues cell viability by decreasing oxidative stress and inflammation in primary pancreatic cells [130]. Diethyldithiocarbamate, a superoxide dismutase (SOD) inhibitor, induces pancreatic fibrosis by increasing ROS in treated rats [131]. Vitamin E reduces oxidative stress and collagen deposition during CP, thereby reducing pancreatic fibrosis in cerulein-treated mice [124]. Theobromine and scoparone reduce the oxidative stress of pancreatic cells, inhibiting the activation of PSCs and attenuating pancreatic fibrosis through the TGF-β/Smad signaling pathway [132].
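The concentration-dependent loss of viability described for melatonin above can be captured by a standard Hill (sigmoidal dose-response) model. The sketch below uses a made-up IC50 and Hill coefficient purely to illustrate the shape of such a curve; the review gives no numeric values.

```python
# Illustrative Hill model for a concentration-dependent decrease in PSC
# viability, as described for melatonin in the text. The IC50 and Hill
# coefficient are MADE-UP values for demonstration only.
def viability(conc_uM: float, ic50_uM: float = 100.0, hill: float = 1.5) -> float:
    """Fraction of viable cells remaining at drug concentration `conc_uM`."""
    if conc_uM <= 0:
        return 1.0  # no drug, full viability
    return 1.0 / (1.0 + (conc_uM / ic50_uM) ** hill)

for c in (10.0, 100.0, 1000.0):
    print(c, round(viability(c), 3))
# prints: 10.0 0.969 / 100.0 0.5 / 1000.0 0.031
```

Fitting such a curve to measured viabilities is how a "concentration-dependent decrease" is usually quantified and compared between rat and human PSCs.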
In mice, long-term administration of antioxidants prevents PSCs activation (by high glucose-diet) and subsequent fibrosis cascade. Coenzyme Q10 (CoQ10) reduces oxidative stress response, blocks ROS-induced PI3K/Akt/mTOR signaling pathway, decreases pancreatic fibrosis, and prevents the activation of PSCs [133]. Therefore, CoQ10 may be a drug candidate to treat pancreatic fibrosis [134].
Clinical research data show that antioxidant supplementation reduces the level of oxidative stress in patients with idiopathic CP and alcoholic pancreatitis, thereby attenuating the process of pancreatic fibrosis [135]. Growing evidence supports an essential role of oxidative stress in the progression of pancreatic fibrosis. Compared with other human organs, such as the liver or the kidney, the pancreas is more sensitive to long-term oxidative stress, which triggers inflammation [123]. It has been suggested that ROS-induced oxidative stress causes persistent damage to biomacromolecules, such as DNA, RNA, and proteins, in pancreatic cells, promoting metabolic reprogramming and antioxidant system remodeling. Therefore, the use of antioxidants can reduce the oxidative stress response, inhibit pancreatic fibrosis, and reduce cellular transformation, compromising tumor development.
Targeting Stress-Inducible Protein NUPR1 for Treating Pancreatic Fibrosis and PC
NUPR1 is a stress-induced protein, which is over-activated in the damaged pancreas cells in AP and CP and plays an important role in PC development [136][137][138]. In addition, NUPR1 plays a crucial role in the fibrosis of multiple organs and tissues. For example, NUPR1 activated in the fibroblasts and the renal tubular epithelial cells promotes renal interstitial fibrosis [139]. Similarly, type I collagen and FN promote glioma progression by the activation of NUPR1 [140]. A recent study also found that knockout of NUPR1 decreases cardiac fibrosis and partially restores cardiac function [141]. In a spontaneous mouse model of CP, the oral protease inhibitor CM inhibits CP and pancreatic fibrosis by reducing the expression of NUPR1 [142]. Therefore, we proposed that NUPR1 plays an indispensable role in the progression of fibrosis, and inactivation of NUPR1 is a promising strategy for preventing fibrosis.
In this line, our recent studies have shown that ZZW-115, a potent inhibitor of NUPR1, is able to kill cancer cells from different tumor types, including PC. ZZW-115 is extremely effective in every subtype of PC and also enhances the sensitivity of cancer cells to chemotherapeutic drugs [143,144]. Importantly, ZZW-115 does not increase the sensitivity of untransformed fibroblasts to chemotherapeutic drugs [145]. Interestingly, NUPR1, as a transcription factor, is activated not only by oxidative stress but also in response to other cellular stresses, such as endoplasmic reticulum (ER) stress and metabolic stress [146][147][148]. Our recent research shows that ZZW-115 treatment triggers ROS production, highlighting the role of NUPR1 in the oxidative stress response [143]. In conclusion, NUPR1 inhibitors have a variety of promising effects in the treatment of PC, including attenuating fibrosis to slow PC progression, killing PC cells directly through multiple death pathways, and countering the drug resistance of cancer cells [149].
Conclusions
CP is characterized by persistent and permanent damage to pancreatic tissue [23]. The endocrine and exocrine compartments of the damaged pancreas are gradually lost and replaced by atrophy or fibrosis [23]. CP development leads to organ dysfunction and increases the risk of PC. Pancreatic fibrosis is also a typical feature of PC, where it promotes the recruitment and activation of CAFs [150]. Pancreatic fibrosis can therefore serve as a diagnostic marker of PC: besides traditional imaging methods, evaluating the level of pancreatic fibrosis could improve the diagnosis. Moreover, detecting inflammatory and oxidative stress indicators will in the future contribute not only to a better understanding of PC development but also to the diagnosis, prevention, and prognosis of the disease [151]. Furthermore, the development and application of a new generation of histology needles makes it possible to analyze the TME of PC via endoscopic ultrasound [152][153][154], collecting tumor tissue and allowing analysis of the tumor matrix.
In addition, pancreatic fibrosis leads to hypoxia in the pancreatic tumor, which causes a stronger oxidative stress response, promoting tumor aggressiveness, increasing drug resistance in cancer cells, and thereby causing higher patient mortality [155]. Interestingly, NUPR1, a stress protein activated in pancreatitis, promotes fibrosis, inflammation, and cancer initiation and development, indicating that NUPR1 is essential for the TME. Collectively, the cellular stress response drives fibrosis and plays a vital role in the transformation from CP to PC. Therefore, preventing fibrosis or targeting stress proteins, such as NUPR1, could be a promising therapeutic strategy for PC and CP therapy.
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest:
The authors declare no conflict of interest.
|
2021-05-18T05:17:40.719Z
|
2021-05-01T00:00:00.000
|
{
"year": 2021,
"sha1": "b2fb5a18f590bded50f5f5d9cf198c4dfa1dfd53",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/9/4970/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b2fb5a18f590bded50f5f5d9cf198c4dfa1dfd53",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
43879523
|
pes2o/s2orc
|
v3-fos-license
|
Different Interaction Modes for Protein-disulfide Isomerase (PDI) as an Efficient Regulator and a Specific Substrate of Endoplasmic Reticulum Oxidoreductin-1α (Ero1α)*
Background: Ero1α and PDI constitute the pivotal oxidative protein folding pathway in the mammalian ER. Results: Both catalytic domains of PDI and PDI homologues rapidly regulate Ero1α activity, while Ero1α asymmetrically oxidizes PDI. Conclusion: The modes by which PDI acts as an efficient regulator and as a specific substrate of Ero1α are different. Significance: This study reveals how the Ero1α-PDI interplay homeostatically ensures oxidative protein folding. Protein-disulfide isomerase (PDI) and the sulfhydryl oxidase endoplasmic reticulum oxidoreductin-1α (Ero1α) constitute the pivotal pathway for oxidative protein folding in the mammalian endoplasmic reticulum (ER). Ero1α oxidizes PDI to introduce disulfides into substrates, and PDI can feedback-regulate Ero1α activity. Here, we show that the regulatory disulfide of Ero1α responds very sensitively to redox fluctuations in the ER, relying on the availability of redox-active PDI. The regulation of Ero1α is rapidly facilitated by either the a or the a′ catalytic domain of PDI, independent of the substrate binding domain. On the other hand, activated Ero1α specifically binds to PDI via hydrophobic interactions and preferentially catalyzes the oxidation of domain a′. This asymmetry allows PDI to function simultaneously as an oxidoreductase and an isomerase. In addition, several PDI family members are also characterized as potent regulators of Ero1α. These novel modes, with PDI as a competent regulator and a specific substrate of Ero1α, govern efficient and faithful oxidative protein folding and maintain ER redox homeostasis.
Disulfide bonds play important roles in the structure and function of many secretory and membrane proteins. The correct formation of disulfides during the folding of nascent peptides into native proteins, namely oxidative protein folding, takes place mainly in the endoplasmic reticulum (ER) in eukaryotic cells (1). Protein-disulfide isomerase (PDI) and the sulfhydryl oxidase ER oxidoreductin-1 (Ero1) constitute the pivotal pathway for oxidative protein folding from yeast to mammals. PDI contains four thioredoxin (Trx) domains arranged as a-b-b′-a′, with two -CGHC- active sites located in domains a and a′, respectively. PDI can directly catalyze disulfide formation in reduced substrates, as well as the isomerization reaction that converts aberrant disulfides into correct ones (2). Ero1 flavoproteins catalyze the re-oxidation of reduced PDI for continuous transfer of disulfides to substrate proteins. The -CXXXXC- outer active site, located in an intrinsically flexible loop of Ero1, transfers electrons from the active site of PDI to the buried -CXXC- inner active site, and the electrons are then used to reduce oxygen to hydrogen peroxide via the flavin adenine dinucleotide cofactor (3,4).
There are two isoforms of Ero1 in mammalian cells: Ero1α is widely expressed (5), and Ero1β is abundantly expressed in select secretory tissues such as the pancreas (6). Both Ero1α and Ero1β activities are regulated by regulatory disulfides formed between catalytic and non-catalytic cysteines to avoid futile oxidation cycles with excess hydrogen peroxide production (7)(8)(9). For Ero1α in particular, the formation of two regulatory disulfides, Cys94-Cys131 and Cys99-Cys104, in the inactive resting state blocks disulfide transfer from the inner active site (Cys394-Cys397) to PDI via the outer active site (Cys94-Cys99). These two regulatory disulfides need to be reduced to liberate the outer active site for activation of Ero1α. A conserved long-range disulfide, Cys85-Cys391, was suggested to also participate in the activity regulation of Ero1α (7,8), although this was challenged later (10).
In cells, the poise of active and inactive Ero1α at steady state was demonstrated to depend on the level of PDI (7,11). Thus, PDI appears to be not only a substrate but also a physiological regulator of its oxidase Ero1α. However, the dynamics of the transition between active and inactive Ero1α during fluctuations of the ER redox environment, and the role of PDI in these processes, remain largely unknown. Regarding the interplay between Ero1α and PDI, it has been elucidated that catalytically active Ero1α preferentially oxidizes the C-terminal active site in domain a′ of PDI, rather than the N-terminal active site in domain a (8,12), although the reduction potentials of the two active sites are very similar (13). We and others have also provided evidence that the primary substrate binding domain b′ of PDI plays a critical role in binding with Ero1α for functional disulfide relay (10,12,14). On the other hand, the molecular mechanism of the reduction/oxidation of the regulatory disulfides of Ero1α by PDI is little understood. There are at least twenty PDI family members (PDIs) in the mammalian ER (2), but Ero1α, as well as its hyperactive isoform Ero1β, poorly catalyzes the oxidation of other PDIs (9,15). Meanwhile, other PDIs at steady state fail to modulate the redox states of Ero1α (15). Altogether, revealing the molecular mechanism underlying the interplay between Ero1α and PDIs is central to understanding how efficient oxidative folding and redox balance in the ER are maintained in mammalian cells.
In this study, we report that (i) the Cys85-Cys391 disulfide in Ero1α is stable and remains intact during the physiological activation of the enzyme; (ii) the Cys94-Cys131 regulatory disulfide responds very sensitively to redox fluctuations in the ER, and its reduction/oxidation can be facilitated not only by PDI but also by some other PDIs; (iii) either catalytic domain of PDI is able to facilitate the regulation of Ero1α, and the substrate binding domain b′ of PDI is not essential for activation/inactivation of Ero1α; (iv) the functional oxidation of PDI catalyzed by Ero1α is asymmetric, making the a′ domain act primarily as an oxidase and the a domain as an isomerase. The above findings shed light on the mechanism underlying the interplay between Ero1α and PDI proteins, which ensures the efficiency and fidelity of oxidative protein folding and maintains thiol-disulfide redox homeostasis in the ER.
EXPERIMENTAL PROCEDURES

Protein Preparation-Recombinant Ero1p (18) and Ero1α (12) proteins were expressed and purified as described. PDI and Pdi1p proteins were purified as described for PDI (19). For reduced protein preparation, PDI proteins (100 µM) or Ero1α (10 µM) with 100 mM DTT, and Pdi1p proteins (100 µM) with 10 mM GSH, were incubated in buffer A (50 mM Tris-HCl, 150 mM NaCl, 2 mM EDTA, pH 7.6) for 1 h at 25°C. Excess reductants were then removed using a HiTrap desalting column (GE Healthcare) pre-equilibrated with buffer A, and the reduced proteins were kept on ice and used only on the same day. For oxidized protein preparation, PDI proteins at 100 µM or Ero1α at 50 µM were incubated with 50 mM potassium ferricyanide in buffer A for 1 h at 25°C, and then chromatographed on a Superdex-200 10/300 GL column (GE Healthcare) pre-equilibrated with buffer A. The monomeric protein fraction was collected, concentrated, and stored at −80°C in aliquots.
Cell Culture, Transfection, and Antibodies-HeLa cells were cultured in DMEM (Invitrogen) containing 5% fetal bovine serum, 100 units/ml penicillin, and 100 µg/ml streptomycin (Invitrogen) at 5% CO2. Plasmids were transfected using Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. After 48 h, the transfected cells were harvested or, as needed, treated with 50 µM DTT and 50 µM 16F16 (Sigma-Aldrich) dissolved in DMSO for 8 h before harvest.
RNA Interference-The pSUPER-retro-puro vector (Oligoengine) expressing the shRNA targeting the PDI sequence 5′-GAGTGTGTCTGACTATGAC-3′ (20) was constructed according to the manufacturer's instructions. The pSUPER-shEro1α plasmid was used as described (17). pSUPER plasmids were transiently transfected into HeLa cells on day 1. Puromycin was added to the culture medium to a final concentration of 2 µg/ml on day 2 to eliminate untransfected cells. pSUPER plasmids were then co-transfected with pcDNA3.1-Ero1α C99/104A on day 3. Cells were harvested on day 5.
Assay for the in Vivo Redox States of Ero1α, Immunoprecipitation, and Western Blotting-For Ero1α activation, the harvested cells were re-suspended in DMEM containing 150 µM DTT at 25°C. Aliquots were taken and immediately blocked with 20 mM N-ethylmaleimide (NEM, Sigma-Aldrich) at different times to trap disulfide bonds. For Ero1α inactivation, after incubation with 10 mM DTT in DMEM at 25°C for 10 min, cells were quickly washed twice with ice-cold phosphate-buffered saline to remove excess DTT and re-suspended in DMEM at 25°C. Aliquots were then taken and immediately blocked with 20 mM NEM at different times. Cells were lysed in radioimmunoprecipitation assay (RIPA) buffer (Beyotime) containing 1 mM phenylmethanesulfonyl fluoride and 20 mM NEM. Post-nuclear supernatants were resolved by nonreducing SDS-PAGE to analyze the redox states of Ero1α. Immunoprecipitation was carried out by incubating cell lysates with αHA for 2 h at 4°C, followed by addition of protein A+G agarose (Beyotime) and rotation for another 2 h at 4°C. The beads were then washed three times with RIPA buffer, and the re-suspended samples were analyzed by nonreducing SDS-PAGE. Proteins were transferred to polyvinylidene difluoride membranes (Millipore) using a semi-dry transfer apparatus; the membranes were blocked in 5% milk, probed with antibodies, developed with enhanced chemiluminescence (Thermo Scientific), and visualized using a Chemi-Scope mini imaging system (Clinx Science).
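The Ox1/Ox2 ratios quantified from such nonreducing gels reduce to simple arithmetic on band intensities. A minimal sketch of that step (the intensity values and the helper name `ox_fractions` are illustrative, not from the paper):

```python
def ox_fractions(ox1_intensity, ox2_intensity, red_intensity=0.0):
    """Convert densitometry readings of the Ox1, Ox2, and (optionally)
    fully reduced bands of Ero1alpha into fractional redox states."""
    total = ox1_intensity + ox2_intensity + red_intensity
    if total <= 0:
        raise ValueError("no signal quantified")
    return {
        "Ox1": ox1_intensity / total,
        "Ox2": ox2_intensity / total,
        "Red": red_intensity / total,
    }

# Example: a lane where the Ox1 band is twice as intense as Ox2,
# i.e. two-thirds of the Ero1alpha pool is in the active Ox1 form
fractions = ox_fractions(2000.0, 1000.0)
print(fractions["Ox1"])
```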
Redox State Determination of Activated and Inactivated Ero1α in Vitro-For the activation of Ero1α, oxidized Ero1α was either incubated with a 10-fold excess of reduced PDI proteins or treated with a 10-fold excess of PDI proteins in the presence of GSH/GSSG at different ratios. For the inactivation of Ero1α, reduced Ero1α was mixed with a 10-fold excess of oxidized PDI proteins. To study the self-oxidation of Ero1α, reduced Ero1α proteins were incubated with equimolar GST-Ero1α proteins. All experiments were carried out in buffer A at 25°C, and aliquots were taken at different times and immediately quenched with 20 mM NEM. The samples were then analyzed by nonreducing SDS-PAGE followed by immunoblotting with αEro1α. Band intensity was quantified using ImageJ software (National Institutes of Health).
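The GSH/GSSG mixtures used in these titrations set the ambient glutathione redox potential through the Nernst equation. A hedged sketch of that relationship, assuming the textbook standard potential of about −240 mV for the 2GSH/GSSG couple at pH 7.0 (the function name and the example buffer compositions are ours):

```python
import math

R = 8.314        # gas constant, J mol^-1 K^-1
F = 96485.0      # Faraday constant, C mol^-1
E0_GSH = -0.240  # V, standard potential of the 2GSH/GSSG couple at pH 7.0

def glutathione_potential(gsh_molar, gssg_molar, temp_k=298.15):
    """Nernst potential (mV) of a GSH/GSSG redox buffer:
    E = E0' - (RT/2F) * ln([GSH]^2 / [GSSG]); two electrons per GSSG."""
    e = E0_GSH - (R * temp_k / (2 * F)) * math.log(gsh_molar ** 2 / gssg_molar)
    return e * 1000.0

# A buffer richer in GSSG is more oxidizing (less negative potential):
reducing = glutathione_potential(9e-3, 0.5e-3)  # 9 mM GSH, 0.5 mM GSSG
oxidizing = glutathione_potential(2e-3, 4e-3)   # 2 mM GSH, 4 mM GSSG
print(round(reducing), round(oxidizing))
```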
Oxygen Consumption Assay-Oxygen consumption was measured at 25°C using an Oxygraph Clark-type oxygen electrode (Hansatech Instruments) as described (9). Briefly, reactions were initiated by adding Ero1 proteins to a final concentration of 2 µM into buffer B (100 mM Tris-HAc, 50 mM NaCl, 2 mM EDTA, pH 8.0) containing 20 µM PDI proteins and various concentrations of GSH and GSSG.
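The consumption rate from such electrode traces is conventionally taken as the initial slope of [O2] versus time. A small illustrative sketch of that fit using ordinary least squares (the synthetic trace is made up):

```python
def linear_slope(times, values):
    """Ordinary least-squares slope of values vs. times."""
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den

# Synthetic trace: dissolved O2 (uM) dropping 1.5 uM/s over the first 20 s
times = list(range(0, 21, 2))          # s
o2 = [250.0 - 1.5 * t for t in times]  # uM
rate = -linear_slope(times, o2)        # uM O2 consumed per second
print(rate)  # 1.5
```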
Gel-based PDI Oxidation Assay-Oxidation of 20 µM reduced PDI proteins by 3 µM Ero1 proteins was carried out at 25°C in buffer B. At different time points, aliquots were taken and immediately mixed with an equal volume of 2× SDS-PAGE loading buffer containing 5 mM mPEG-5k (Sigma-Aldrich), followed by incubation at 33°C for 20 min. The samples were subjected to SDS-PAGE after quenching excess mPEG-5k with 25 mM DTT.
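This assay works because each free cysteine alkylated by mPEG-5k adds roughly 5 kDa of apparent mass, so more oxidized PDI runs faster. A toy calculation under that assumption (the base mass and cysteine counts are illustrative only, not exact values from the paper):

```python
MPEG_KDA = 5.0  # approximate mass added per alkylated free cysteine

def apparent_mass(base_kda, free_cysteines):
    """Expected apparent mass on SDS-PAGE after mPEG-5k alkylation,
    assuming ~5 kDa added per modified free thiol."""
    return base_kda + MPEG_KDA * free_cysteines

# For a hypothetical ~57 kDa PDI: fully reduced (4 active-site thiols
# free) vs. fully oxidized by Ero1 (0 active-site thiols free)
reduced_pdi = apparent_mass(57.0, 4)
oxidized_pdi = apparent_mass(57.0, 0)
print(reduced_pdi - oxidized_pdi)  # a 20 kDa gap separates the two states
```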
PDI Isomerase Activity Assay-Scrambled bovine pancreatic RNase A (Sigma-Aldrich) was prepared as described (21). Equimolar Ero1 and PDI proteins were pre-incubated at 25°C for 30 min in buffer B, and the isomerase activities of PDI proteins at 3 µM were then assayed by adding scrambled RNase A and cCMP (Sigma-Aldrich) to final concentrations of 8 µM and 4.5 mM, respectively. The absorbance increase at 296 nm due to the hydrolysis of cCMP by refolded RNase A was monitored at 25°C, and the concentration of reactivated RNase A was calculated as detailed elsewhere (22). The linear slope of RNase A reactivation was taken as the isomerase activity of PDI.
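The activity readout is thus the linear slope of reactivated RNase A versus time. A sketch of that final step with synthetic progress curves (the conversion from A296 to [RNase A] in ref. 22 is not reproduced here; all numbers are invented):

```python
def reactivation_slope(times_min, rnase_nM):
    """Least-squares slope (nM reactivated RNase A per min), taken as
    the isomerase activity as in the assay described above."""
    n = len(times_min)
    mt = sum(times_min) / n
    mv = sum(rnase_nM) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times_min, rnase_nM))
    den = sum((t - mt) ** 2 for t in times_min)
    return num / den

# Synthetic progress curves: PDI alone vs. PDI pre-oxidized by Ero1alpha
t = [0, 2, 4, 6, 8, 10]                 # min
pdi_alone = [0, 12, 24, 36, 48, 60]     # 6 nM/min
pdi_oxidized = [0, 5, 10, 15, 20, 25]   # 2.5 nM/min
relative = reactivation_slope(t, pdi_oxidized) / reactivation_slope(t, pdi_alone)
print(relative)  # oxidation retains ~42% of the isomerase activity here
```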
BPTI Oxidative Folding Assay-BPTI (Sigma-Aldrich, 2.5 mg) was denatured and reduced by incubation with 20 mM DTT and 6 M guanidine hydrochloride in 0.1 M Tris-HCl (pH 8.0) at 50°C for 5 h. Excess reductants and denaturants were removed using a HiTrap desalting column pre-equilibrated with 0.01 M HCl. The refolding assays were carried out at 25°C by adding denatured and reduced BPTI to a final concentration of 30 µM into buffer B containing pre-incubated Ero1 and PDI proteins at a final concentration of 3 µM each. Aliquots were taken at different time points and quenched by adding 0.1 volume of 5 M HCl. The samples were then loaded onto a Vydac C18 analytical HPLC column (250 × 4.6 mm, GRACE) and eluted at a flow rate of 1 ml/min using a linear gradient of acetonitrile from 15% to 50% at a rate of 0.7%/min in 0.05% trifluoroacetic acid. The absorbance at 229 nm was monitored. The percentage of each folding intermediate during refolding was quantified using Chromeleon software (Thermo Scientific).
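Two small calculations are implicit in this protocol: the 15-50% acetonitrile gradient at 0.7%/min lasts 50 min, and each species' percentage is its integrated peak area over the summed areas. A sketch with invented peak areas:

```python
def gradient_minutes(start_pct, end_pct, rate_pct_per_min):
    """Duration of a linear solvent gradient."""
    return (end_pct - start_pct) / rate_pct_per_min

def species_percentages(peak_areas):
    """Percentage of each BPTI folding species from integrated peak areas."""
    total = sum(peak_areas.values())
    return {name: 100.0 * area / total for name, area in peak_areas.items()}

print(round(gradient_minutes(15, 50, 0.7), 1))  # 50.0 min

# Hypothetical 60-min time point dominated by native BPTI
areas = {"reduced": 0.0, "1-disulfide": 3.0, "2-disulfide": 2.0, "native": 45.0}
print(round(species_percentages(areas)["native"]))  # 90
```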
We next examined the influence of PDI on the redox states of Ero1α in HeLa cells. The Ero1α C99/104A mutant was used because it retains both long-range disulfides, Cys85-Cys391 and Cys94-Cys131, for redox state examination but lacks an intact outer active site (Cys94-Cys99) for catalyzing substrate oxidation, so that interference from re-oxidation of the Ero1α regulatory disulfides by oxidized substrates can be excluded. As shown in Fig. 1E, Ero1α C99/104A migrated in both Ox1 and Ox2 forms, implying that in cells it exists in both activated and inactivated forms, similar to Ero1α WT. Co-expression of PDI at steady state led to a moderate but significant increase of the Ox1 form of C99/104A as well as a pronounced increase of the disulfide-linked Ero1α-PDI heterodimer, but no fully reduced form was observed, indicating that in cells an increase in PDI level promotes the reduction of Cys94-Cys131 but not of Cys85-Cys391. Moreover, mutation of the Cys85-Cys391 disulfide dramatically impaired the formation of the Ero1α-PDI heterodimer, underlining the importance of this bond for maintaining the Ero1α-PDI functional complex. In summary, during the activation of Ero1α the Cys94-Cys131 regulatory disulfide is reduced, resulting in the mobility shift from Ox2 to Ox1, whereas the Cys85-Cys391 disulfide remains intact, which is important for the catalytic activity of Ero1α.
Dynamic Regulation of Ero1α Activity in Cells by PDI-To gain further insights into the dynamic regulation of Ero1α during fluctuations of the ER redox environment in cells, the activation/inactivation processes of Ero1α were studied by monitoring the interconversion between the Ox1 and Ox2 forms of the aforementioned Ero1α C99/104A mutant. First, the activation kinetics of Ero1α was examined by stressing the cells with a low concentration of DTT to mimic the burst of free thiols during protein synthesis. Within 1 min of the DTT challenge, a large portion of Ero1α quickly shifted to the Ox1 state, but after 10 min a small fraction of Ox2 still remained. When PDI was overexpressed, all Ero1α shifted from Ox2 to Ox1 within 1 min (Fig. 2A). When cells were pre-treated with the small molecule 16F16, which specifically inhibits the thiol-disulfide oxidoreductase activity of PDI (24), the reduction of Ero1α upon DTT addition was almost completely inhibited, strongly suggesting that PDI plays a critical role in mediating the thiol-driven activation of Ero1α (Fig. 2B).
Next, we studied the inactivation process of Ero1α C99/104A by using a DTT pulse-chase assay. In DTT-flooded cells, Ero1α was in the Ox1 state with the Cys94-Cys131 regulatory disulfide reduced. At the end of the pulse, Ox2 species quickly re-emerged and increased during the chase, but not all Ox1 was re-oxidized to Ox2 after 10 min. Knockdown of PDI significantly delayed the re-emergence of Ero1α Ox2 following the DTT pulse (Fig. 2C). Moreover, when cells were pre-treated with 16F16, the delay of Ero1α inactivation became more conspicuous (Fig. 2D), emphasizing the role of redox-active PDI in Ero1α inactivation. As the recovery of ER redox homeostasis (25) and PDI redox balance (26) from a reductive challenge both largely depend on Ero1α oxidase activity, we next examined whether modulating the Ero1α level can affect the inactivation of Ero1α itself. Silencing endogenous Ero1α dramatically inhibited the recovery of Ox2 in Ero1α C99/104A (Fig. 2E). Conversely, co-expression of the hyperactive Ero1α C104/131A markedly accelerated the transition from Ox1 to Ox2 in Ero1α C99/104A and eventually reset the redox poise at 30 min (Fig. 2F). Taken together, in cells Ero1α is very sensitive to fluctuations of the thiol-disulfide redox state in the ER, and the regulatory disulfides of Ero1α can be modulated by redox-active PDI.

FIGURE 1. Identification of the regulatory disulfide in Ero1α. A, schematic representation of the disulfide pattern in the active (Ox1) and inactive (Ox2) states of Ero1α (7,14,23). The cysteine residues are shown as white, gray (outer active site), and black (inner active site) circles with numbering, and disulfides are indicated as lines. Regulatory disulfides (Cys94-Cys131 and Cys99-Cys104) formed in the Ox2 state are indicated by curves (7,8). Note that the long-range disulfide Cys85-Cys391 is explicitly identified as a structural but not a regulatory disulfide in this work. The flexible loop region is represented by an open bar. B, Ero1α WT (lanes 1-9) at 1 µM was incubated with or without 10 µM PDI in 10 mM glutathione redox buffer composed of various concentrations of GSH and GSSG as indicated at 25°C for 5 min, then analyzed by nonreducing SDS-9% PAGE and Western blotting (WB) with αEro1α. The distinct gel mobilities of three Ero1α Cys-to-Ala mutants (lanes 10-12) and fully reduced Ero1α treated with 2-mercaptoethanol (2-ME) (lane 13) indicate the reduction of different disulfides. Reduced Ero1α (Red) and two oxidized Ero1α species (Ox1, without the Cys94-Cys131 disulfide; Ox2, with both Cys85-Cys391 and Cys94-Cys131 intact) are indicated. The asterisk indicates a species with only Cys85-Cys391 absent (23). C, oxygen consumption catalyzed by 2 µM Ero1α WT was monitored in the presence of 20 µM PDI and various concentrations of GSH/GSSG, numbered as in B. D, 2 µM Ero1α WT and mutants were incubated in the absence (−) or presence of 20 µM PDI and 10 mM GSH for the indicated time, and analyzed as in B. E, Myc-tagged Ero1α WT and mutants were co-expressed with (+) or without (−) HA-tagged PDI in HeLa cells. After NEM blocking, cells were lysed and analyzed by nonreducing (NR) or reducing (R) SDS-8% PAGE, followed by αmyc or αHA WB as indicated. Molecular size markers are shown on the left margin, and the redox states of Ero1α are indicated as in B. Double asterisks indicate the Ero1α-PDI complex. The ratios of Ox1/Ox2 of Ero1α C99/104A from lanes 4 and 5 are shown in the right panel (mean ± S.D., n = 3).
The Two Catalytic Domains of PDI Can Activate Ero1α Independently-PDI contains two -CGHC- active sites located in domains a and a′, respectively. To understand the contribution of the two catalytic domains to the activation of Ero1α, five PDI mutants were prepared (Fig. 3A). We tested the in vitro ability of these PDI proteins, in reduced form, to reduce the Cys94-Cys131 regulatory disulfide in the Ero1α C99/104/166A mutant. The mutation of the unpaired Cys166 to Ala on the C99/104A background was made to avoid aberrant homodimer formation during the preparation of homogeneous oxidized Ero1α monomer (14). Addition of a 10-fold excess of reduced PDI WT to oxidized Ero1α resulted in rapid appearance of the Ox1 form within 15 s, but some Ox2 still remained at 5 min (Fig. 3B), as the reduction potential of the Cys94-Cys131 regulatory disulfide is much lower than that of PDI, as mentioned above. PDI C1 and PDI C2, in which both cysteines were replaced by serines in the active site of domain a or a′, respectively, reduced the Cys94-Cys131 regulatory disulfide as efficiently as PDI WT, indicating that either active site of PDI is sufficient to activate Ero1α. PDI C1/2, with both active sites mutated, served as a negative control and had little effect on the reduction of Ero1α even when excess GSH was supplied (Fig. 3, B and C). The ability of the catalytic domains of PDI to activate Ero1α was further assessed with the isolated PDI a and a′c domains. Similar to PDI WT, both isolated domains reduced ~40% of the Cys94-Cys131 regulatory disulfide in 5 min, although the Ox1 form appeared somewhat more slowly during the initial stage (Fig. 3, B and C). The above results clearly demonstrate that PDI can directly modulate the fast transition of Ero1α from the Ox2 to the Ox1 state, and that the two catalytic domains of PDI can independently perform the task of activating Ero1α.
Autonomous and PDI-mediated Inactivation of Ero1α-Next, the inactivation process of Ero1α was dissected to address whether re-oxidation of the regulatory disulfides in Ero1α can occur autonomously. Purified Ero1α protein was first treated with an excess of DTT for full reduction of the regulatory disulfides. After DTT was washed out, most Ero1α WT immediately converted to the Ox2 form, with only a small portion remaining in the fully reduced and/or Ox1 form, and complete conversion to the Ox2 form was achieved at 30 min (Fig. 4A, upper panel). The re-oxidation of Ero1α mutants lacking the outer active site (Ero1α C99/104/166A) or the inner active site (Ero1α C394A) was greatly impaired (Fig. 4A, middle panel) or completely inhibited (Fig. 4A, lower panel), respectively. Thus, the regulatory disulfides in Ero1α can be autonomously re-oxidized aerobically, and the oxidizing power of oxygen is likely captured by the inner active site (Cys394-Cys397) and transferred via the outer active site (Cys94-Cys99) to re-oxidize the regulatory disulfides. We then examined whether the autonomous re-oxidation of Ero1α occurs via intramolecular or intermolecular disulfide transfer. To test the latter possibility, reduced Ero1α C394A was incubated with GST-fused Ero1α WT or GST-fused hyperactive Ero1α C104/131A, because these could be clearly separated by their distinct molecular weights on SDS-PAGE. As shown in Fig. 4B, little oxidized Ero1α C394A was observed up to 60 min after addition of equimolar active GST-Ero1α proteins, suggesting that intermolecular disulfide exchange between two Ero1α molecules is unfavorable, and that the autonomous re-oxidation of Ero1α occurs predominantly via intramolecular disulfide transfer.
The Ero1α C99/104/166A mutant, with its slow autonomous re-oxidation, was used to further study the contribution of PDI to the inactivation dynamics of Ero1α. Consistent with the data in cells showing that rapid inactivation of Ero1α depends on active PDI (Fig. 2), oxidized PDI WT markedly accelerated the reappearance of Ox2, with ~70% of Ero1α oxidized to the Ox2 state at 15 s and complete oxidation at 5 min (Fig. 4, C and D). PDI C1 and PDI C2 promoted the re-oxidation of Ero1α as efficiently as PDI WT, suggesting that the two active sites also function independently in the inactivation of Ero1α. The catalytically inactive mutant PDI C1/2 showed no effect, as expected. Again, the isolated PDI a and PDI a′c domains accelerated the re-oxidation of Ero1α, albeit PDI a′c was less efficient (Fig. 4, C and D). Taken together, the inactivation of Ero1α, as monitored by re-oxidation of the Cys94-Cys131 regulatory disulfide, can be directly promoted by either active site of PDI.
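As a back-of-the-envelope illustration (ours, not an analysis from the paper), the ~70% re-oxidation at 15 s corresponds to an apparent pseudo-first-order rate constant of roughly 0.08 s^-1, assuming simple exponential conversion:

```python
import math

def apparent_rate(fraction_converted, time_s):
    """k for fraction(t) = 1 - exp(-k*t), assuming pseudo-first-order
    conversion of Ox1 (reduced regulatory disulfide) to Ox2."""
    return -math.log(1.0 - fraction_converted) / time_s

k = apparent_rate(0.70, 15.0)  # ~70% oxidized at 15 s with oxidized PDI WT
print(round(k, 3))             # ~0.08 s^-1

half_life = math.log(2) / k    # ~8.6 s to re-oxidize half the Ox1 pool
print(round(half_life, 1))
```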
Asymmetric Oxidation of PDI by Ero1α Ensures Efficient Oxidative Folding-The above results showed that the two catalytic domains of PDI contribute equally to the activation/inactivation of Ero1α, whereas we and others previously observed that Ero1α drives substrate oxidation preferentially through the a′ domain of PDI (8,12). We therefore explored the biological significance of the asymmetric oxidation of PDI by Ero1α catalytic activity. The gel-based methoxy polyethylene glycol 5000 maleimide (mPEG-5k) modification assay can monitor the redox state change of PDI (7,27): PDI oxidized by Ero1α has fewer free cysteines available for modification by mPEG-5k and migrates faster. As shown in Fig. 5A, after reacting with Ero1α for 30 min, PDI WT was not completely oxidized, with significant fully/semi-reduced species remaining; PDI C1 was more oxidized, and PDI C2 was barely oxidized. As reduced PDI is required to rearrange non-native disulfides (2), we speculated that the asymmetric and incomplete oxidation of PDI by Ero1α may be important for PDI to catalyze disulfide isomerization in addition to disulfide formation. To test this hypothesis, reactivation of scrambled RNase A, which contains promiscuous disulfides, in the Ero1α/PDI system was used as an isomerase activity assay. As shown in Fig. 5C, after oxidation by Ero1α the isomerase activities of PDI WT and PDI C1 were compromised but not completely depleted. Of note, the isomerase activity of PDI C2 was little affected, supporting the idea that the a domain of PDI is resistant to Ero1α oxidation and can function as an efficient isomerase. To further examine this rationale, the yeast Ero1p/Pdi1p system was studied, because the oxidation pattern of the two active sites in Pdi1p catalyzed by Ero1p differs from that in the mammalian system (28).
Under our experimental conditions, Pdi1p was very efficiently oxidized by Ero1p, with little fully/semi-reduced Pdi1p left at 30 min, and the oxidation rate of Pdi1p C2 by Ero1p was slightly faster than that of Pdi1p C1 (Fig. 5B), which is indeed quite different from the Ero1α/PDI system (Fig. 5A). Consequently, the isomerase activities of Pdi1p WT and its active site mutants were fully suppressed upon Ero1p oxidation (Fig. 5D).
To gain further insights into the merits of the asymmetric oxidation of PDI by Ero1α, we monitored the de novo oxidative folding of another classical folding substrate, bovine pancreatic trypsin inhibitor (BPTI). Native BPTI contains three disulfide bonds, and its successful folding depends on the rearrangement of kinetically trapped disulfide intermediates (29). Quantitative dissection of the BPTI refolding steps showed that in the first 10 min Ero1α/PDI catalyzed the oxidation of reduced BPTI to various intermediates, which were further oxidized to native BPTI at 60 min with a yield of over 90% (Fig. 5E). In the Ero1p/Pdi1p system, although the oxidation of reduced BPTI in the first 10 min was slightly faster than in the Ero1α/PDI system, the one-disulfide and two-disulfide species remained at later steps and only ~50% of BPTI was refolded to the native state after 60 min (Fig. 5F). Thus, the human Ero1α/PDI system is more efficient than the yeast Ero1p/Pdi1p system in proofreading non-native disulfides. In conclusion, the asymmetry in Ero1α-catalyzed oxidation of PDI is functionally significant: it makes the a′ domain act primarily as an oxidase and the a domain as an isomerase, so as to ensure efficient oxidative folding of client proteins.
Role of Hydrophobic Interaction in the Ero1α-PDI Interplay-We further explored the mechanisms underlying the intriguing phenomenon that both catalytic domains of PDI can react with the regulatory disulfides of Ero1α without discrimination, but only the a′ domain can mediate efficient electron transfer to the outer active site of Ero1α. We and others previously found that the principal substrate binding domain b′ of PDI is critical for the Ero1α/PDI disulfide relay (12,14); we therefore studied whether the substrate binding ability of PDI is also required for the activation/inactivation of Ero1α. Here we took advantage of the mutation of two residues, Phe-275 and Ile-289, in the b′ domain of PDI, which dramatically eliminates binding to peptides or Ero1α (10,30). Surprisingly, the PDI binding mutant F275W/I289A in its reduced form was able to reduce the Cys94-Cys131 regulatory disulfide in Ero1α as efficiently as PDI WT (Fig. 6A), and the oxidized binding mutant also facilitated the re-oxidation of Ero1α (Fig. 6B). Thus, the peptide binding ability of the PDI b′ domain is not required for the reduction/oxidation of the regulatory disulfides in Ero1α.
To explore which form of Ero1α binds PDI, HA-tagged PDI WT was used to co-immunoprecipitate Ero1α in HeLa cells. Only the Ox1 but not the Ox2 species of Ero1α was detected with PDI under nonreducing conditions (Fig. 6C), indicating that PDI specifically recognizes the active form of Ero1α through non-covalent interaction. The surfactant Triton X-100 and the hydrophobic probe 1-anilinonaphthalene-8-sulfonate (ANS) both markedly inhibited oxygen consumption by active Ero1α during the oxidation of PDI (Fig. 6, D and E), emphasizing the functional role of hydrophobic binding in the Ero1α-PDI disulfide relay. Interestingly, Triton X-100 and ANS only slightly affected Ero1p/Pdi1p activity (Fig. 6, D and E), implying that the hydrophobic interaction between Ero1α and PDI is a critical requirement in mammals. Gel filtration chromatography of mixed PDI and Ero1α showed a new peak fraction with a molecular weight larger than that of the separate proteins (Fig. 6F), confirming the presence of a stable complex between PDI and Ero1α. The majority of this complex was formed via non-covalent interaction, and the minority was linked by intermolecular disulfide bridges (Fig. 6F). In line with the activity assays, negligible complex of Ero1p and Pdi1p was detected by gel filtration (Fig. 6G). Collectively, all the above data support our model that modulation of the regulatory disulfides in Ero1α by both PDI active sites is independent of the b′ domain, while hydrophobic interaction is necessary for the catalytic oxidation of the PDI a′ domain by active Ero1α. In contrast, there is no stable hydrophobic binding between yeast Ero1p and Pdi1p, and both active sites of Pdi1p can freely react with the regulatory disulfides (31) as well as the catalytic disulfide ((28) and Fig. 5B) in Ero1p, owing to less conformational restriction.

FIGURE 5. Asymmetric oxidation of PDI by Ero1α endows PDI with both oxidoreductase and isomerase activities. A and B, loss of reactivity toward mPEG-5k upon disulfide formation in PDI (A) or Pdi1p (B) proteins catalyzed by Ero1α or Ero1p, respectively, was monitored by SDS-10% PAGE and visualized by Coomassie staining. a(red), a(ox), a′(red), and a′(ox) indicate the PDI proteins with the a domain reduced, a domain oxidized, a′ domain reduced, and a′ domain oxidized, respectively. The doublet bands of each PDI species result from one or both cysteines in domain b′ being alkylated by mPEG-5k. Asterisks in B indicate the Pdi1p species with a structural disulfide (Cys90-Cys97) in domain a reduced and modified by mPEG-5k. C and D, isomerase activities of PDI (C) and Pdi1p (D) proteins for the reactivation of scrambled RNase A were determined after incubation with or without Ero1 oxidases as indicated (mean ± S.D., n = 3). E and F, oxidative folding of denatured and reduced BPTI was catalyzed by Ero1α/PDI (E) or Ero1p/Pdi1p (F) and analyzed by reverse-phase HPLC. The percentages of reduced BPTI, the one-disulfide species, the two-disulfide species, and native BPTI at 0, 1, 5, 10, 30, and 60 min were quantified. Values are the mean of two independent experiments with very similar profiles.
Interplay between Ero1α and Other PDIs-It is known that Ero1α and its hyperactive homologue Ero1β poorly catalyze the oxidation of other PDI family members (9,15), but transient mixed disulfides between Ero1α and several PDIs have been trapped in cells (32,33). We speculated that these intermediates might be formed during the reduction/oxidation of the regulatory disulfides in Ero1α by PDIs. Therefore, PDIp, P5, ERp46, ERp57, and ERp72 were overexpressed in HeLa cells to test whether they could function as regulators of Ero1α. Overexpression of these PDIs to a similar level had only moderate effects on the Ox1/Ox2 ratio of Ero1α at steady state compared with PDI (Fig. 7A, first lane in upper panel). However, when the cells were treated with a low concentration of DTT, these oxidoreductases, except ERp72, rapidly promoted the reduction of Ero1α (Fig. 7A), underscoring their roles in facilitating Ero1α activation against reductive challenge in the ER. Thus, the inefficient functional oxidation of other PDIs by Ero1α may not be attributed to poor Ero1α activation. As the substrate binding domain b′ of PDI plays a critical role in the enzymatic disulfide relay between PDI and Ero1α, we reasoned that the lack of the unique b′ domain of PDI in other PDIs could be a reason for their inefficient oxidation by Ero1α. To test this possibility, chimeric PDI-PDIs proteins were constructed by fusing one catalytic domain from each of the five PDIs to the C terminus of the rigid bb′ base of PDI. The rates of Ero1α-catalyzed oxygen consumption were dramatically increased in the presence of the chimeras PDI-PDIp, PDI-ERp57, and PDI-ERp72 (Fig. 7B). PDI-ERp46 only modestly increased the reaction rate, and PDI-P5 was not effective (Fig. 7C).
As the catalytic domains from PDIp, ERp57, and ERp72 used to generate the chimeras are a′-type, while those from ERp46 and P5 are a-type, these results explain why other PDIs are poor substrates of Ero1α, and they strengthen the molecular mechanism we proposed, namely that both the b′ and a′ domains are necessary for the functional disulfide relay between Ero1α and PDI (12). As expected, all the PDIs and PDI-PDIs chimeras efficiently promoted the transition of Ero1α from the Ox2 to the Ox1 state during Ero1α-catalyzed reactions (Fig. 7D), indicating that all PDIs tested are capable of activating Ero1α in vitro.
Next, the abilities of PDIs to inactivate Ero1α were studied using the reconstituted system. PDIp, P5, ERp46, and ERp57 re-oxidized reduced Ero1α as efficiently as PDI, and ERp72 showed somewhat weaker effects in terms of the disappearance of fully reduced Ero1α (Fig. 7E). We then asked whether the -CGHC- active site-containing motif, which is conserved among these PDIs, is sufficient to reduce/oxidize the regulatory disulfides in Ero1α. A synthesized octapeptide (PWCGHCKA) derived from PDI, in its reduced form, indeed promoted the activation of Ero1α in a dose-dependent manner, and the oxidized octapeptide facilitated the inactivation of Ero1α in a similar way (Fig. 7F), suggesting that the -CGHC- active site is the minimal element for regulation of Ero1α. The high dose of octapeptide required implies that the intact catalytic Trx domain is optimized for efficient reduction/oxidation of Ero1α. Altogether, the above data strongly suggest that PDIs whose catalytic domains contain -CGHC- active sites, if not all of them, are potent regulators of Ero1α. However, active Ero1α can only efficiently catalyze disulfide production via PDI, owing to its specific recognition of the PDI b′-a′ domains.
DISCUSSION
Ero1α activity is tightly regulated in the mammalian ER by a feedback mechanism. In the resting state of cells, formation of the two regulatory disulfides (Cys94-Cys131 and Cys99-Cys104) in Ero1α restricts the availability of the Cys94-Cys99 outer active site (7,23). Inactive Ero1α must be promptly activated once robust protein oxidative folding capacity is required, and activated Ero1α must be adequately inactivated to prevent over-oxidation within the ER. Here, we demonstrate that both the activation and the inactivation of Ero1α by PDI in vitro occur very fast (<15 s). Similarly, when cells suffer or recover from a reductive challenge, the reduction/oxidation of the regulatory disulfides in Ero1α happens rapidly, within 1 min. Therefore, the interconversion between inactivated and activated Ero1α in response to the ER redox environment is prompt. Interestingly, we also find that after a reductive challenge, substantial oxidase activity of Ero1α is required to re-establish the thiol-disulfide balance in the ER on a relatively longer time scale. Previous studies have shown that after a reductive challenge the resetting of the steady-state ratio of GSSG to total glutathione is very fast, on a time scale of seconds (25), but the re-oxidation of disulfides in protein substrates takes a longer time to complete (34,35). Thus, we propose that when cells encounter a reductive challenge, Ero1α is quickly activated by reduced PDI to catalyze the oxidation of GSH in cooperation with PDI, which results in a fast reset of the GSH/GSSG balance within 1 min, after which Ero1α is partly inactivated. The remaining active Ero1α drives the oxidation of protein substrates until the thiol-disulfide status in the ER is rebalanced. After that, Ero1α is inactivated by oxidized PDI to prevent futile consumption of GSH and excessive peroxide production.
Previously, it was reported that several other PDIs (including ERp46, ERp57, ERp72, and P5) can hardly modulate the redox state of Ero1α at steady state, although their redox equilibrium constants with glutathione are close to that of PDI (15). By monitoring the redox state of Ero1α in real time, we clearly show that the activation of Ero1α upon reductive challenge is significantly promoted when PDI, PDIp, P5, ERp46, or ERp57 is individually overexpressed in cells, and that these PDIs in oxidized form can accelerate the inactivation of Ero1α in vitro (Fig. 7). One exception, for unknown reasons, is ERp72, which seems less efficient than the other PDIs in regulating Ero1α. Our new finding that many PDIs are potent regulators of Ero1α leads us to propose that the redox state of Ero1α is controlled by the redox balance of the PDI ensemble in the ER (Fig. 8). Since PDIs have substrate specificity in different tissue or cell types, the oxidase activity of Ero1α can thus be precisely regulated in cells with distinct proteomes. Recently, studies have reported that the redox regulation of the different unfolded protein response (UPR) sensors is mediated by specific PDIs (36). Therefore, a redox-based feedback loop for controlling the strength of UPR signaling likely exists in the ER: fine-tuning of Ero1α activity by PDIs maintains ER redox homeostasis, which in turn balances the redox states of the PDIs, promotes the adaptive UPR, and attenuates the fatal UPR to avoid cell death.
In this report, we find that the reduction/oxidation of the regulatory disulfides in Ero1α relies on the catalytic activity of PDI but is independent of the peptide binding activity of PDI, and that either active site of PDI can facilitate the regulation of Ero1α independently. This novel mode of PDI-mediated regulation of Ero1α is supported by several lines of evidence: 1) a single catalytic domain of PDI, a or a′, functions well to promote the reduction/oxidation of the regulatory disulfides of Ero1α; 2) an octapeptide containing the -CGHC- active site of PDI alone is capable of regulating Ero1α; 3) the activation and inactivation of Ero1α in cells are dramatically abolished by the PDI inhibitor 16F16, which modifies the active sites of PDI; 4) the substrate-binding-deficient mutant PDI F275W/I289A regulates Ero1α as efficiently as PDI WT. This mode also explains well why other PDIs containing the -CGHC- active site are potent regulators of Ero1α, even though they lack the unique b′ domain of PDI. During preparation of this paper we noticed a very recent study claiming that the substrate binding ability of PDI is crucial for the inactivation of Ero1α, based on the observation that a binding mutant, PDI I272A/D346A/D348A (residues numbered without the 17-residue signal sequence), was not able to inactivate Ero1α under anaerobic conditions (37). This triple mutant was originally screened out for stabilization of the b′x fragment of PDI in a capped conformation (38), but in full-length PDI the existence of the neighboring a′ domain prevents the x-linker from adopting a fully capped conformation (39). Thus, the inefficiency of the I272A/D346A/D348A mutant in inactivating Ero1α in their experiments may not be properly attributed to the loss of peptide binding activity.
Once activated, the reduction of the regulatory disulfides probably increases the conformational flexibility of the regulatory loop between Cys-94 and Cys-131 (14), so that Ero1α can specifically bind to PDI via hydrophobic interactions (Fig. 6), like an unfolded nascent peptide captured by the substrate binding site of PDI. By recognizing the b′-a′ fragment of PDI, Ero1α specifically catalyzes the oxidation of the active site in the a′ domain of PDI.

FIGURE 8. Model for efficient oxidative protein folding in the mammalian ER through the interplay between Ero1α and PDIs. At steady state, Ero1α in the ER lumen is predominantly in the inactive state, and PDI oxidoreductases are present in a balanced reduced (blue) and oxidized (red) distribution (only the catalytic domain is shown for simplicity). Once the loading of reduced nascent substrates explodes, the redox homeostasis in the ER is disturbed, and reduced substrates as well as increased GSH generate more reduced PDIs. These reduced PDIs quickly reduce the regulatory disulfides (Cys94-Cys131 and Cys99-Cys104) of Ero1α and liberate the outer active site located in the loop region (green arrow, right). The activated Ero1α specifically recognizes the b′-a′ domains of PDI and preferentially oxidizes the active site in the a′ domain, which further introduces disulfides into reduced substrates and GSH (red arrows). The asymmetric oxidation of PDI by Ero1α keeps the a domain in the reduced state, which catalyzes efficient disulfide isomerization for the production of correctly folded substrates with complicated disulfides (blue arrow). Once the thiol-disulfide equilibrium in the ER is re-established, the regulatory disulfides of Ero1α are easily re-formed, either by self-oxidation or facilitated by oxidized PDIs, to decrease the flow of oxidizing power into the ER and avoid futile oxidation cycles (green arrow, left).
Other PDIs are poor substrates of Ero1α and cannot increase the ratio of activated to inactivated Ero1α at steady state, because they lack the peptide binding domain b′ of PDI and thus cannot bind and stabilize the active form of Ero1α. Intriguingly, if the a′-type domain of PDIp, ERp57, or ERp72 is fused to the bb′ base of PDI, the chimera becomes a competent substrate for Ero1α oxidase activity. The two different interaction modes between Ero1α and PDIs described above ensure that the activity of Ero1α can be elegantly regulated by sensing the redox states of the PDI ensemble, while Ero1α specifically produces disulfides through the a′ domain of PDI, so that the ER is protected from over-oxidation caused by promiscuous oxidation of the PDI a domain and other PDIs.
The biological implication of the strong preference of Ero1α oxidase activity for the a′ domain has been revealed in this study. In Ero1α-driven oxidative folding, the PDI a′ domain acts primarily as an oxidase to transfer disulfides into folding substrates, while the PDI a domain acts as an efficient isomerase to proofread incorrect disulfides. Under physiological conditions, the semi-oxidized state of PDI is important for the efficiency and fidelity of oxidative protein folding (Fig. 8). Distinct from the mammalian system, there is little stable hydrophobic binding between yeast Ero1p and Pdi1p, which places less conformational restriction on active Ero1p transferring disulfides into either catalytic domain of Pdi1p. The strong oxidation of Pdi1p by Ero1p favors fast disulfide generation but compromises the isomerase activity required for catalyzing native disulfide formation (Fig. 5). Our results are in line with the observation that Pdi1p oxidase activity is critical to yeast growth and viability, whereas less than 6% of its isomerase activity is needed (40). Indeed, Pdi1p exists predominantly in the oxidized state in yeast cells (27), whereas the majority of the active sites of PDI in human cells are in the reduced state (41). It is known that only ~1% of proteins (78 of 6,623 total proteins) in Saccharomyces cerevisiae are predicted to contain disulfides, much smaller than the ~16% (3,297 of 20,258 total proteins) in humans (www.uniprot.org). Therefore, the lower demand for isomerase activity in yeast than in mammals appears reasonable, and the Ero1α/PDI system in mammals has evolved to adapt to the folding of a larger and more complicated disulfide proteome.
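The proteome fractions quoted above can be checked directly from the cited counts; a quick sanity check using only the numbers given in the text:

```python
# Verify the disulfide-proteome fractions quoted in the text (UniProt counts)
yeast = 78 / 6623      # predicted disulfide-containing proteins in S. cerevisiae
human = 3297 / 20258   # the corresponding count for the human proteome
print(f"yeast: {yeast:.1%}, human: {human:.1%}, ratio: {human / yeast:.0f}x")
```

The result is consistent with the ~1% and ~16% figures cited, corresponding to a roughly 14-fold higher relative demand for disulfide formation in the human proteome.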
Tetraalkylammonium salts (TAS) in solar energy applications – A review on in vitro and in vivo toxicity
Tetraalkylammonium salts (TAS) are organic salts widely employed as precursors, additives, or electrolytes in solar cell applications, such as perovskite or dye-sensitized solar cells. Notably, perovskite solar cells (PSCs) have garnered acclaim for their exceptional efficiency. However, PSCs have been associated with environmental and health concerns due to their lead (Pb) content, the use of hazardous solvents, and the incorporation of TAS in their fabrication processes, which contributes significantly to environmental and human health toxicity. In response, there is a growing trend toward safer and biobased materials in PSC fabrication to address these concerns. The potential health hazards associated with TAS nevertheless necessitate a thorough evaluation, considering the widespread use of these substances: overexploitation of TAS could increase their disposal into ecosystems, thus posing a major health risk and causing severe pollution. Therefore, this review presents a comprehensive discussion of the in vitro and in vivo toxicity assays of TAS as a potential material in solar energy applications, including cytotoxicity, genotoxicity, in vivo dermal toxicity, and systemic toxicity. In addition, this review emphasizes the toxicity of TAS compounds, particularly the linear tetraalkyl chain structures, and summarizes essential findings from past studies as a point of reference for the development of non-toxic and environmentally friendly TAS derivatives in future work. The effects of the TAS alkyl chain length, polar head and hydrophobicity, cation and anion, and other properties are also covered.
Introduction
Organic salts, also known as engineered salts owing to their flexible behaviour, are simple cation-anion mixtures formed by altering the ionic species and functional groups in their organic chemical structure; they may be composed of organic cations (carbon-, nitrogen-, phosphorus- and/or sulphur-based) combined with inorganic or organic polyatomic anions [1]. Tetraalkylammonium salts (TAS) are among the most frequently used quaternary ammonium salts in industry and academia due to their unique physical and chemical properties, exceptional stability, tuneable properties, and surface activity [2,3]. The quaternary ammonium salt structure exhibits a tripartite classification into ionic liquids (ILs), liquid crystals (LCs), or plastic crystals (PCs), based on nuanced considerations of both chemical structure and thermal behaviour [4]. It has been found that cationic species have a considerable impact on the toxicity of TAS [5]. In contrast, anionic species only influence the physical characteristics of these substances, such as the melting point or viscosity [6]. In this review, our focus is on the functionality of TAS in solar cell applications and their toxicity toward the environment. To date, TAS is widely used in the solar cell industry [7] as an electrolyte [7,8], a synthesis reactant [9] and catalyst for chemical reactions [10], a corrosion inhibitor [11], and a medium for the electrodeposition of metals [12,13]. TAS has also been used in other fields such as anticancer [14] and antimicrobial [14,15] agents as well as dental restoration materials [16]. In addition, recent studies have demonstrated the potential use of TAS as precursors, additives, solvents, interfacial layers, and protective layers to enhance the power conversion efficiency (PCE) of perovskite solar cells (PSCs) [17][18][19][20].
Apart from lead (Pb) toxicity in PSCs, TAS raise health hazard concerns as well. In view of the widespread utilization of TAS, there is a strong need to evaluate their toxicity in the interest of preserving public health and preventing unfavourable impacts on the environment. Investigating the toxicity of TAS is also critical so that the production and disposal of TAS can be carried out sustainably and safely. In the last 15 years, a collection of studies and review papers on the toxicity of TAS has been published to highlight the possible hazards of moderately-to-highly toxic TAS in industrial applications and to identify efficient approaches to modify their chemical structures to minimize their toxicity. Thus, this review aims to examine the in vitro and in vivo toxicity assays of TAS, including cytotoxicity, genotoxicity, in vivo dermal, and systemic toxicity, in view of their potential use in solar energy applications. It is essential to highlight that this review centres on quaternary ammonium ions, with a specific emphasis on a detailed examination of tetraalkylammonium ions [N R₄ ⁺][X − ].
Structure of TAS
Generally, TAS contains a positively charged nitrogen atom, with the chemical formula [N R₄ ⁺][X − ], in which R represents a long hydrocarbon chain, either an alkyl or an aryl functional group, and X an organic or inorganic anion [21]. The chemical structure is depicted at the centre of the illustration in Fig. 1 [21]. The alkyl or aryl functional groups are bonded directly to the central nitrogen atom, forming a positively charged nitrogen cation. In comparison to the ammonium ion [NH 4 +] and the primary, secondary, or tertiary ammonium cations, quaternary ammonium cations are permanently charged regardless of the pH of the solution [4]. In 1890, Menschutkin developed these compounds via nucleophilic substitution of tertiary amines with an alkyl halide, known as the "Menschutkin reaction", which is still considered the best technique for the preparation of quaternary ammonium salts [22]. The basic chemical structure of TAS is composed of two parts, namely a hydrophobic alkyl group and a positively charged hydrophilic core that maintains its cationic character regardless of the pH level [23]. The physical and chemical properties of TAS are influenced by both of these components [24] as well as their substituents, especially the alkyl chain [25]. Although common TAS are soluble in water [26], the aqueous solubility of TAS decreases with increasing hydrophobicity or molecular length of the alkyl chain [27]. Like alcohols and water, TAS are extensively soluble in polar and protic solvents due to their ionic charges [13]. As with aqueous solubility, the solubility of TAS in polar and protic solvents drastically decreases as the chain length increases, while TAS with R longer than C 14 are almost insoluble in water [28]. Long-chain TAS also exhibit significantly higher solubility in non-polar solvents. Although TAS appear as solids, the length and structure of the attached R-residues can substantially influence their thermal properties [22,29].

Fig. 1. Tetraalkylammonium salts (TAS) applications in solar energy.

N.M. Mustafa et al.
TAS applications in solar energy
TAS have been widely used as precursors, additives, solvents, interfacial layers, and protective layers to enhance the power conversion efficiency (PCE), previously in dye-sensitized solar cells (DSSCs) and most recently in PSCs [16][17][18][19]. The findings indicate that these roles of TAS have delivered tangible outcomes, including satisfactory chemical and thermal stability, decent solvent properties, and unique solubility, specifically in solar cell development [29], as illustrated in Fig. 1. Generally, the remarkable enhancement of PSC performance, particularly in short-circuit current density (J sc), open-circuit voltage (V oc), PCE, stability, and low hysteresis, is reflected by the degree of crystallinity and structural morphology of the perovskite films, which can be tuned by the TAS structure [30].
i) TAS as precursor
Precursors are essentially applied to design and synthesize the desired crystallinity and morphological structure. Comprehensive investigations have been carried out to evaluate numerous combinations and permutations, such as mixing various precursors and introducing different ones [31]. In a recent study, Bouich et al. [32] demonstrated the viable application of TAS ions to strengthen the stability of lead halide perovskites [30]. Moreover, Li et al. [34] introduced the tetrabutylammonium (TBA) cation ([N 4444 +]) into mixed-cation lead halide perovskite. TBA cations were installed at grain boundaries as sacrificial cations to enhance the stability of the respective solar cells. TBA cations effectively enhance solar cell stability by promoting the growth of perovskite grains perpendicular to the substrate under heat, in contrast to moisture [32]. Due to the inherent instability of perovskite materials in ambient conditions and the environmentally undesirable presence of toxic Pb within perovskite layers, which challenges the principles of green energy technology, Banerjee et al. [18] reported the synthesis of TMASnI 3 using tetramethylammonium iodide (TMAI/[N 1111 +][I −]), a lead-free organic-inorganic halide perovskite (OIHP) layer, in which the organic TMA cation/[N 1111 +] was employed instead of the standard methylammonium ion [NH 3,1 +]. Based on the results, the photovoltaic response of a basic device structure recorded a PCE of ~1.92%. Hence, the study highlighted the successful synthesis of a lead-free, more environmentally friendly, moisture-resistant PSC and an excellent perovskite material. Meanwhile, Pandey et al. [5] addressed the challenges of lead toxicity in perovskite top cells, proposing a lead-free solution with a maximum conversion efficiency of 30.7 % using MAI/[NH 3,1 +][I −] in methylammonium tin mixed halide (MASnX 3) in a tandem solar cell configuration with silicon.
ii) TAS as solvent
Given the remarkable strides in cost-effectiveness and performance witnessed in solar cell structures, numerous solvents are being developed and gradually integrated to enhance the performance of PSCs. Chao et al. [35] proposed the use of methylammonium acetate (MAAc) as a solvent.

iii) TAS as additive

TAS additives have found extensive application in perovskite precursor solutions with the objective of enhancing the quality of the resulting perovskite film by passivating defects and regulating crystallinity. While the role of additives in defect passivation has been explored thoroughly, a comprehensive understanding of their influence on the crystallization process of perovskites is still lacking. Yan et al. [37] successfully demonstrated the development of salt-doped films via the integration of 6,6-phenyl-C 61 butyric acid methyl ester (PCBM) with three TAS derivatives containing different counterions, including tetrabutylammonium hexafluorophosphate (TBAPF 6). Doping these salts into the PCBM film increased the fill factor and J sc values, which may be attributed to the fluorine-rich salt counterions that improved device performance. Another crucial property of TAS is their general IL character, which significantly influences the crystallization of the perovskite film. Shahiduzzaman et al. [19,38] explored the impact of 3 wt.% TAS of varying viscosity on the development and performance of perovskite crystals by employing the precursor solvent N,N-dimethylformamide (DMF). TAS gave improved light absorption with smoother films than DMF alone, while tetrabutylammonium chloride (TBACl/[N 4444 +][Cl −]) was considered the best-performing IL. Increased TAS viscosity hampered the dispersion and assembly of tiny clusters into nanoparticles, increasing non-homogeneity during film formation [39]. Moreover, Carrillo et al.
[40] demonstrated the potential use of alkylammonium cations (MAI/[NH 3,1 +][I −]) to improve moisture tolerance and mitigate surface damage by placing a perovskite screen in a methylammonium iodide solution in PSCs.
Very recently, Mohammed et al. [41] delved into hole-transport-material-free perovskite solar cells, where adding malonic acid as an additive to methylammonium lead iodide (MAPbI 3) significantly enhanced stability and efficiency, resulting in a remarkable power conversion efficiency of 14.14 %.
iv) TAS as interfacial and protective layer
Although perovskite materials exhibit fewer defects compared with other semiconductors, the presence of flaws at the grain boundary or interface can severely affect the effectiveness and stability of the device due to trap-assisted non-radiative recombination [42]. According to Zheng et al. [43], tetraalkylammonium halides may passivate charged defects in OIHP with both tetraalkylammonium and halide ions. They employed two different molecular choline zwitterions, also recognized as tetraalkylammonium halides, as the interfacial layer, namely choline chloride, [N 1,1,1,2OH +][Cl −], and choline iodide, [N 1,1,1,2OH +][I −], which have no significant alkyl chain. In contrast to the approach using PCBM passivation, [N 1,1,1,2OH +][Cl −] and [N 1,1,1,2OH +][I −] passivation significantly reduces the trap density. In addition, it extends the carrier lifetime and increases the V oc of OIHP devices with various bandgaps, leading to an increase in PCE from 10 % to 35 %. Moreover, the approach improved the stability of the OIHP device, with practically zero efficiency loss following 800 h of storage. Hence, the findings reiterated the relevance of all-around passivation of charged ionic defects to enhance the lifespan and efficiency of OIHP devices [44].
v) TAS as electrolyte
TAS are also employed as electrolytes owing to their low toxicity, good surface activity, and high resistance to corrosion and oxidation. Beyond the established use of electrolytes in solid supercapacitors and ion batteries owing to anisotropic conduction processes [46], research on TAS-based ion gels, which offer a broader electrochemical window, has gained increasing attention [47]. In addition to the frequently used inorganic salts, including NaI, LiI, and KI, TAS has been used as a co-electrolyte in DSSCs. For instance, the combination of TBAI/[N 4444 +][I −] and KI increased the PCE of an artificial DSSC. However, the limited solubility of inorganic salts at ambient temperature presents a fundamental drawback to their use [8,48,49]. As a substitute, researchers have focused on employing organic iodides as the salt in DSSCs to address this constraint. Despite these initial results, Jumaah et al. reported a PCE of 1.0 %, and the compound demonstrated solid-solid transitions characteristic of ionic liquid crystal behaviour [8]. The subsequent summary in Fig. 2 presents a chart depicting the efficiency trends in TAS-based solar technology.
Versatility of TAS in several applications
Apart from their prevalent application in the solar energy sector, a notable role of TAS is their potent, broad-spectrum biocidal effect, extensively utilized in diverse sectors such as water treatment [50], textiles [51], oil production [52], coatings [53], sterilization of algae [54], safeguarding agricultural products from mould [55], insect and corrosion prevention in wood and building materials [56], sterilization of surgical and medical equipment [57], treatment of poultry eggs and meat, and cleaning and sterilizing household and food products [58,59]. Furthermore, ionic conductivity is another outstanding property of TAS that makes them excellent electrolytic substances [60]. Their amphiphilic properties allow them to adsorb at the air-water interface such that the hydrophilic part is in the water and the hydrophobic part is in the air. This arrangement reduces the surface or interfacial tension, making them a unique class of surfactants. TAS possess a wide range of biological activity, which explains their continued application as bioactive agents. The TAS structure features a lengthy alkyl chain segment that functions as a hydrophobic (nonpolar) tail and can infiltrate the nonpolar cell membrane, altering its permeability and leading to cell (bacterial) death. Notably, this vital mechanism exhibits minimal impact on bacterial resistance and susceptibility. In fact, TAS is usually identified as a bioactive substance, as has been demonstrated for water-soluble alkyl chains ranging in length from C 8 to C 16 [22]. The distinctive structure of TAS gives them various physical and chemical capabilities, including emulsification, dispersion, solubilization, and sterilization [52]. TAS have also found widespread use as antibacterial agents [16,[61][62][63]. Overall, the unique physicochemical properties of TAS make them a preferable component in various applications and industrial production [64].
Potential toxicity impact of TAS
The toxicity of TAS has drawn the attention of researchers, especially in the fields of renewable energy and green chemistry. Interestingly, the non-volatile properties of TAS, which make them potential renewable alternatives to traditional volatile organic solvents, are the primary justification for their presumed status as non-toxic substances [65]. Unfortunately, this assumption is misleading and has triggered disputes over the toxicity level of TAS. While TAS have been shown to facilitate air pollution mitigation, the unregulated discharge of TAS into aquatic ecosystems could cause severe water contamination due to their potential toxicity and poor biodegradability [66]. Globally, the concentration of TAS in wastewater has been detected in the range of 1-60 μg/L and is predicted to be 10 times higher in influent wastewater [67]. Hence, investigation of the toxicity of TAS through various approaches, such as cytotoxicity testing, has progressed rapidly, yielding in-depth insights and new understanding even as the toxicity of newly developed TAS declines.
New-generation TAS have been identified as part of a grand strategy to design and synthesize biorenewable, degradable ILs. The expansion of the TAS-based field is supported by the publication of numerous excellent articles covering different topics concerning ILs. Thus, the present review explores recent developments in the toxicology of TAS upon exposure to humans, microorganisms, and animals.

Table 1
In vitro cytotoxicity of TAS. Representative entry: HEK-293 human embryonic kidney cells, IC 50 = 62.88 mM [74].

While TAS remain associated with "green" properties, a growing opposing viewpoint among researchers has emerged following more comprehensive evaluation of the life cycle of TAS. A significant number of research studies have pointed out the toxic impact of ILs on different specimens, including effects on cells (death rate, morphology, apoptosis, and viability), toxicity in animals, and the rate of seed germination. The overall toxicity impact of TAS on the environment and human health can be assessed by obtaining toxicity data from different species and cell lines as well as describing their toxicity mechanisms [68]. This would establish the foundation for the development of new TAS that are fully non-toxic, environmentally friendly, and meet all the stipulated requirements. Most past studies assessed the physicochemical properties of TAS, including decomposition temperature, ionic conductivity, viscosity, solubility in water, density, melting point, and surface tension. As a result, TAS has become a preferred material for various industrial purposes, especially as electrolytes in solar cell applications [8,69]. However, the lack of information on the environmental toxicity of TAS limits its optimal use. Cytotoxicity refers to the toxic property of a substance toward living cells. Risk assessment via the screening of mammalian cell cultures is a ground-breaking method in environmental research. Basically, the cytotoxicity of TAS can be tested in two ways, namely by in vitro and in vivo methods. The former is based on precise cellular mechanisms to detect specific chemicals, while the latter assesses the influence of toxic integration on the whole organism and presents direct effects on environmental specimens. Accordingly, cytotoxicity assays using cell lines have proven to be excellent
indicators of the level of chemical toxicity. Moreover, cytotoxicity assays are comparatively simple and cheap, and provide rapid results compared with the more costly and time-consuming conventional animal testing methods. Hence, exploring the toxicity effects of TAS in terms of their hydrophilic and hydrophobic properties, which have yet to be studied in this context, would provide valuable insights into toxicity information and hazard assessment of TAS. Several in vitro and in vivo studies on the toxicity of TAS in various biological models, particularly cell lines, aquatic models, microorganisms, and mammals, have provided extensive data sets and depict certain relationships between TAS structure and biological activity. Thus, the following sections present a comprehensive overview of the available TAS and the in vitro and in vivo cytotoxicity analyses in living organisms. Because a substance's cytotoxic potential is often utilized as an early toxicity signal, the following discussion summarizes the available cytotoxicity tests with several notable examples of TAS-related tests.
In vitro cytotoxicity
The study of in vitro cytotoxicity plays a pivotal role in unravelling the safety and biocompatibility of chemical compounds, particularly for potential applications in various fields, including solar applications and materials science. To assess the resulting cell death effectively, cost-effective, dependable, and easily reproducible short-term assays for cytotoxicity and cell viability are needed. The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay is the most common test to evaluate the effect of TAS cytotoxicity on cell viability [70]. Generally, cell viability is measured by verifying the activity of mitochondrial enzymes in the cells [71]. NAD(P)H-dependent cellular oxidoreductase enzymes reduce MTT to purple formazan, the intensity of which can be measured through light absorbance at a given wavelength. The procedure is preferred given its simplicity, reliability, and high reproducibility, and it is frequently used to examine both cytotoxicity and cell viability. The in vitro cytotoxicity of TAS determined using MTT assays is also summarized in Table 1.
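IC 50 values such as those summarized in Table 1 are derived from MTT dose-response data. As a minimal, hypothetical sketch (the concentrations and viability fractions below are illustrative, not taken from any cited study), the 50% viability crossing can be estimated by log-linear interpolation:

```python
import numpy as np

def ic50_from_viability(concs, viability):
    """Estimate IC50 by log-linear interpolation of the 50% viability crossing.

    concs: increasing concentrations (IC50 is returned in the same units)
    viability: fraction of untreated control (1.0 = no effect), decreasing
    """
    logc = np.log10(concs)
    for i in range(len(viability) - 1):
        v0, v1 = viability[i], viability[i + 1]
        if v0 >= 0.5 >= v1:  # 50% crossing bracketed between points i and i+1
            frac = (v0 - 0.5) / (v0 - v1)  # linear in log-concentration space
            return 10 ** (logc[i] + frac * (logc[i + 1] - logc[i]))
    raise ValueError("50% viability is not bracketed by the data")

# hypothetical MTT readout normalized to the untreated control
concs = np.array([1.0, 3.2, 10.0, 32.0, 100.0])  # mM
viab = np.array([0.98, 0.90, 0.70, 0.35, 0.10])
print(round(ic50_from_viability(concs, viab), 1))  # prints 19.4 (mM)
```

In practice, a four-parameter logistic (Hill) fit is preferred when enough dose points are available; interpolation is merely the simplest reproducible estimate.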
Recently, the synthesis of TAS from natural sources, such as choline, has attracted the interest of scientific and industrial players due to their long-term sustainability and potentially minimal toxicity. These features unveil new possibilities in other applications, highlighting the importance of understanding the toxicity of these substances. As such, Wang et al. [74] reported that such compounds can disturb the structure of proteins on the cell membrane, leading to cell dysfunction and, in severe cases, causing disruption of the cell membrane and cell wall. Ahmadi et al. [76] also assessed the cytotoxicity of choline-based TAS against human HEK-293 cells and revealed a quadratic relationship between the number of carbon atoms and the cytotoxicity level. The hydrophobicity and size of the compounds can be correlated with the number of carbon atoms. The results suggest that compounds with both large and small numbers of carbon atoms exhibited the least cytotoxicity, while those of intermediate hydrophobicity recorded the highest cytotoxicity.
Meanwhile, Karatas et al. [77] synthesized three tetraalkylammonium chlorides ((i), (ii), and (iii)), given the limited data on the synthesis and toxicity evaluation of such TAS derivatives. The cytotoxic characteristics of the three substances were evaluated on colorectal (Caco-2) and human liver (HepG2) cancer cell lines as well as non-cancer mouse fibroblasts (L-929). Comparatively, the IC 50 value of compound (i) indicates higher toxicity against HepG2 and L-929 cells than those of compounds (ii) and (iii). Conversely, compounds (ii) and (iii) exhibited greater toxicity against Caco-2 cells than (i). Apart from cancer cell lines, epithelial cells have often been selected for cytotoxicity studies, since they form the layer most immediately exposed to harmful elements. Xie et al. [78] studied the toxicity of TAS on human immortalised epidermal cells (HaCaT) and human normal liver cells (LO2), which represent human epithelial cells. All six novel TAS compounds were observed to be non-toxic against LO2 and HaCaT; their cytotoxicity nevertheless increased in step with chain length from hexyl to dodecyl. The inclusion of long-chain structures and two flexible hydroxyethyl groups in TAS facilitates their efficient penetration through the cell membrane, enabling them to interact effectively with enzymes and passivate them. Additionally, the introduction of the POT fragment in TAS proves effective in reducing cytotoxicity.
As mentioned earlier, the uncontrolled utilization of TAS has led to frequent contamination of natural water bodies and aquatic environments. Although they may persist in the marine ecosystem, their possible toxic effects on the aquatic biosphere remain largely undetermined. Thus, Christen et al. [79] investigated the cytotoxicity of the biocidal disinfectant TAS barquat and benzalkonium chloride (BAC) in a human hepatoma cell line (Huh7) and Danio rerio liver cells (ZFL). The half-maximal effective concentration (EC50) of BAC was 14.23 μg/mL in Huh7 cells and 82 μg/mL in ZFL cells, higher than the EC50 of barquat at 3.4 μg/mL in Huh7 cells and 1.19 μg/mL in ZFL cells, indicating that barquat was the more cytotoxic of the two. The difference in cytotoxicity of the two substances was attributed to their different modes of action. Evidently, long-alkyl-chain TAS can penetrate the cell membrane and interfere with the physical and biochemical properties of the cells, while the charged nitrogen at the cell membrane surface disrupts the voltage distribution.
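The EC50 values quoted throughout this section are typically obtained by fitting a sigmoidal dose-response (Hill) curve to cell viability data. A minimal sketch with made-up viability measurements, not values from Christen et al., follows.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, bottom, ec50, h):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (c / ec50) ** h)

# Hypothetical viability data (% of untreated control) at increasing
# surfactant concentrations in µg/mL -- illustrative, not from the study.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
viability = np.array([99.0, 97.0, 88.0, 62.0, 28.0, 8.0, 2.0])

# Fit top/bottom plateaus, EC50 and Hill slope (all constrained positive)
popt, _ = curve_fit(hill, conc, viability, p0=[100.0, 1.0, 5.0, 1.0],
                    bounds=(0, np.inf))
top, bottom, ec50, h = popt
print(f"EC50 ~ {ec50:.1f} µg/mL (Hill slope {h:.2f})")
```

The fitted EC50 is the concentration at which viability falls halfway between the two plateaus; comparing such fits across cell lines is how the Huh7 vs. ZFL differences above are quantified.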
Additionally, Duque-Benitez et al. [80] prepared three TAS series, namely (i) halomethylated TAS (series I), (ii) non-halogenated TAS (series II), and (iii) halomethylated choline analogues (series III), in which a halogen replaces one of the hydrogens, and studied the impact of chain length. The cytotoxicity of the three TAS series was then evaluated in vitro against human promonocytic U-937 cells. All three series demonstrated high cytotoxicity in U-937 cells, with LC50 values varying between 21 and 45 μg/mL. The iodinated TAS also recorded high cytotoxicity, with LC50 values in the range of 9–46 μg/mL. Surprisingly, a comparative study of N-iodomethyl and N-chloromethyl TAS indicated opposite effects of the chlorine and iodine atoms on intracellular amastigotes and axenic forms, despite similar toxicity trends with respect to the tether chain length. Ultimately, the effect of chain extension on the cytotoxicity level was more pronounced when iodine rather than chlorine atoms appeared in the cationic ammonium head. On the other hand, Inacio et al. [81] employed the C2BBe1 columnar epithelial cell line to examine the cytotoxicity of decyltrimethylammonium bromide (C10TAB/[N1,1,1,10+][Br−]) compounds. While the microbicidal action of TAS has been known for some time, the precise processes underlying it are not well understood beyond the fact that, at higher doses, the cationic head groups neutralize the membrane charge and degrade bacterial membranes. Accordingly, the therapeutic properties of TAS were assessed by comparing their toxic effects and bactericidal activity on mammalian polarized epithelial cells.
The varying toxicity trends in prokaryotic and eukaryotic cells imply distinct mechanisms of TAS-induced toxicity. For instance, the toxicity ranking in the C2BBe1 cell line was C12PB ≈ C12BZK > C12TAB/[N1,1,1,12+][Br−], consistent with prior findings in other mammalian polarized epithelial cells. Furthermore, differences in membrane potential between eukaryotic and prokaryotic plasma membranes mean that cationic surfactant adsorption occurs more readily on bacterial membranes. On the contrary, insertion into, translocation across, and rupture of the membrane are more difficult in polarized epithelial cells due to the presence of cholesterol, which is absent from bacterial membranes. For a similar TAS-containing aqueous phase, the surfactant concentration at the reaction site(s) in mammalian cells would be lower, given the larger total membrane surface and overall size of mammalian cells compared to bacteria, possibly contributing to their lower toxicity.
Furthermore, Basilico et al. [83] investigated the in vitro cytotoxicity of a newly developed lipophilic TAS class bearing an eight-carbon lipophilic electron-rich polyconjugate at the nitrogen atom. The lipophilicity and electronic density of the compounds were further modified through the insertion of C12–C18 saturated alkyl chains and methyl-benzyl substituents. The cytotoxicity of the synthesized TAS was assessed using a human microvascular endothelial cell line (HMEC-1), with the MTT assay employed to determine cell proliferation. The results revealed that all compounds were safe when applied to the human cell line, with selectivity indices ranging from 5.99 to 22.09. Surprisingly, the most potent compounds were also the least harmful, indicating that they specifically target parasitized red blood cells. It was assumed that these compounds hinder the transport of choline, since their structural criteria for anti-plasmodial action (lipophilicity and a polar head around nitrogen) are comparable to those of previously identified TAS derivatives.
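The selectivity index cited here is simply the ratio of the cytotoxic concentration for the host cell line to the effective concentration against the target organism. A small sketch with placeholder numbers (not values from Basilico et al.) makes the dimensionless ratio explicit.

```python
# Selectivity index (SI): the cytotoxic concentration for the mammalian
# host cell line (CC50) divided by the effective concentration against the
# parasite (IC50). SI is dimensionless; values well above 1 indicate that
# a compound targets the parasite rather than the host cell.
# Both numbers below are illustrative placeholders, not data from [83].
cc50_hmec1_um = 120.0    # hypothetical CC50 on HMEC-1 cells, µM
ic50_parasite_um = 5.4   # hypothetical anti-plasmodial IC50, µM

selectivity_index = cc50_hmec1_um / ic50_parasite_um
print(f"SI = {selectivity_index:.2f}")
```

A high SI thus captures exactly the observation above: the most potent anti-plasmodial compounds were also the least harmful to HMEC-1 cells.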
Generally, gemini TAS surfactants contain two cationic head groups linked by a spacer and two hydrophobic alkyl chains, which confer enhanced interfacial properties compared to single-chain TAS, including more diverse aggregate morphologies, smaller critical micelle concentrations (CMCs), stronger solubilizing ability, and improved surface activity. Previously, a series of gemini TAS surfactants with different methylene spacers and chain lengths was synthesized and subjected to a cytotoxicity test using C6 cells and the human embryonic kidney cell line HEK293. Interestingly, when exposed to the lowest concentrations of these surfactants, the activity of the treated cells was significantly greater than that of untreated control cells. This hormesis effect reflects stimulation by drugs at low concentration and inhibition at high concentration. The IC50 values of the gemini TAS surfactants with the same spacer chain length followed the trend 12-s-12 > 14-s-14 > 16-s-16, which reflects the hydrophobicity of the monomeric species in the surfactants. Similarly, the CMC values for these surfactants in pure water follow the order 12-s-12 > 14-s-14 > 16-s-16, indicating an increase in hydrophobicity with increasing alkyl chain length. The higher cytotoxicity of the longer-chain gemini TAS surfactants is partly attributable to their higher hydrophobicity, which allows greater contact of the surfactant with the plasma membrane.
The synthesis of new quaternary ammonium methacrylates (QAMs) from 2-dimethylaminoethyl methacrylate (DMAEMA) was reported by Li et al. [84] through the addition of the tertiary amine to organo-halides of varying chain lengths. A cytotoxicity test was then performed on the developed QAMs using human gingival fibroblasts (HGF) and odontoblast-like mouse MDPC-23 cells. According to the findings, cell viability was maintained upon exposure to QAMs at 0.5 g/mL, and the cell viability with 2-hydroxyethyl methacrylate (HEMA) and triethylene glycol dimethacrylate (TEGDMA) matched that of control media without monomer. The viability of HGF and odontoblast-like cells decreased as the monomer content in the medium increased, showing that increases in both monomer concentration and chain length raise cytotoxicity.
Apart from the MTT assay, previous studies have assessed the in vitro cytotoxicity of TAS using other methods (Table 1; section on other assays). Previously, Dzhemileva et al. [85] demonstrated the first large-scale cytotoxic analysis of various types of TAS, including cholinium and ammonium derivatives. A total of seven human cell lines (HEK293, A2780, U937, A549, K562, HL60, and Jurkat) were employed to identify the relationship between the structural components of TAS and the cytotoxicity level, expressed as the concentration required to decrease proliferative activity by two-fold (CC50). The test substances were evaluated for cytotoxicity using the PrestoBlue Cell Viability Reagent. Based on the findings, the cytotoxic impact of the alkyl chain length in the TAS was nearly independent of the type of TAS cation and anion, as well as of the cell line tested. The presence of an oxygen atom in the side alkyl chain showed no significant effect on the cytotoxicity of TAS.
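The CC50 used by Dzhemileva et al., the concentration halving proliferative activity, can be estimated directly from a viability curve by log-linear interpolation between the two concentrations bracketing 50 %. The data below are made up for illustration.

```python
import numpy as np

# CC50: concentration that halves proliferative activity relative to the
# untreated control. Quick estimate by log-linear interpolation between
# the two concentrations bracketing 50 % -- hypothetical data.
conc_um = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
prolif_pct = np.array([96.0, 81.0, 55.0, 24.0, 6.0])

i = np.where(prolif_pct < 50.0)[0][0]          # first point below 50 %
x0, x1 = np.log10(conc_um[i - 1]), np.log10(conc_um[i])
y0, y1 = prolif_pct[i - 1], prolif_pct[i]

# Linear interpolation in log-concentration space
cc50 = 10 ** (x0 + (50.0 - y0) * (x1 - x0) / (y1 - y0))
print(f"CC50 ~ {cc50:.1f} µM")
```

Interpolating on a log axis matches the roughly log-linear midsection of sigmoidal viability curves and avoids a full curve fit when only a point estimate is needed.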
For instance, Erfurt et al. [86] applied the WST-1 test to assess the cytotoxicity of ILs in the rat leukemia IPC-81 cell line. The phase-transfer catalysis (PTC) method was proposed to produce 2-chloro-1,3-butadiene (chloroprene) using D-glucose-based TAS as the catalyst. Bromide-based TAS with R1 = CH3, C12H25, and C16H33 were then selected to evaluate the effect of the lipophilicity of the produced salts on their cytotoxicity. Based on the EC50 results, the [GlcO(CH2)2N(CH3)3]Br TAS showed no cytotoxic effect on the viability of the IPC-81 cell line up to a concentration of 584 μmol/L. However, the EC50 values of the other compounds with longer side chains, in the range of 9.5–85.1 μmol/L, indicated a considerable cytotoxic effect. Additionally, the EC50 value of the TAS derivative with a dodecyl side chain and a bromide anion was an order of magnitude greater than that of the derivative with a hexadecyl side chain. Interestingly, the toxicity of the [GlcO(CH2)2N(CH3)2(C16H33)] cation in combination with either bromide or bis(trifluoromethanesulfonyl)imide (NTf2) was similar. The increase in cytotoxicity with extension of the alkyl substituent demonstrates the strong correlation between the cytotoxicity and hydrophobicity of TAS, and cytotoxicity was significantly reduced when the phenyl ring was replaced by the more hydrophilic D-glucose.
The minimum concentration required to cause 50 % cell death (MTS assay) and the minimum concentration required to release 50 % of LDH from cells (LDH assay) have also been employed to assess the cytotoxicity of TAS. Perinelli et al. [70] performed MTS and LDH assays on the human epithelial cell lines Caco-2 (intestinal) and Calu-3 (airway) to analyse and compare the cytotoxicity profiles of TAS derived from methionine and leucine and esterified with fatty acids of different chain lengths (C10, C12, and C14) against commercial BAC. The MTS colorimetric assay assessed the cell viability of the synthesized TAS at various concentrations, while the LDH method evaluated the impact of TAS on membrane disruption as a responsive and material-relevant test. EC50 values were then related to the hydrophobicity of the TAS, which is largely governed by the length of the hydrocarbon chain; the EC50 values decreased as hydrophobicity increased. The LDH assay recorded marginally higher values, while no significant discrepancies in cytotoxicity were observed between the two cell lines. Beyond the expected correlation between cytotoxicity and hydrophobicity, the study also found that the EC50 values (from both MTS and LDH assays) were much lower than the corresponding CMC values, demonstrating the ability of TAS to extract and solubilize phospholipids from mammalian epithelial cell membranes, thereby driving membrane destruction and the formation of mixed micelles.
Furthermore, Inacio et al. [88] studied cell membrane permeabilization using the lactate dehydrogenase (LDH) leakage assay in the columnar epithelial (MDCK) cell line exposed in vitro to decyltrimethylammonium bromide (C10TAB/[N1,1,1,10+][Br−]) at different concentrations for 3 h. The cell viability of the MDCK cells started to decline at a C10TAB concentration of 1.3 mM (CMC/30), with an LD50 of 3.1 mM, whereas membrane damage only occurred at 2.7 mM (CMC/15), with an LD50 of 3.8 mM. Although the cells eventually died, membrane leakage was the probable cause of death rather than a direct impact of the TAS. Carpenter et al. [89] also investigated the toxicity of TAS-functionalized silica nanoparticles, with and without nitric oxide (NO), on L929 fibroblasts [65]. Fibroblast cells serve as a standard benchmark for cytotoxicity tests given their major role in immune response and wound healing. The inclusion of primary amines in control N-(6-aminohexyl)aminopropyltrimethoxysilane (AHAP) particles resulted in considerable toxicity at a higher particle dosage of 6 mg/mL. When 8 mg/mL of methyl QA particles were used, the cytotoxicity was significantly reduced as the primary amines were converted to trimethyl QA groups, corresponding to a 40 % reduction in cell viability. However, the cytotoxicity of the QA-functionalized particles against fibroblasts at their minimum bactericidal concentration (MBC) increased with alkyl chain length. Accordingly, fibroblast viability was reduced by 21 % and 86 % when treated with 4 and 16 μg/mL BAC, respectively. That free BAC produced harmful effects at substantially lower concentrations than particle-bound TAS indicates an advantage of the particle-bound form.
In vitro genotoxicity
In vitro toxicity can also be assessed through genotoxicity assays, which involve successive procedures to evaluate induced DNA damage affecting the structure, segregation, or content of DNA, and which are not necessarily related to mutagenicity [90]. As such, three primary endpoints (structural chromosome aberrations, numerical chromosome aberrations, and gene mutation) should be investigated for an appropriate genotoxic analysis; each of these events is implicated in heritable diseases and carcinogenesis. The standard in vitro test battery consists of the bacterial reverse mutation test (OECD TG 471), the mammalian chromosomal aberration test (OECD TG 473), the mammalian cell gene mutation tests (OECD TG 476 and TG 490), and the mammalian cell micronucleus test (OECD TG 487). Any confirmatory in vivo follow-up test must cover the same endpoint that produced positive results [91].
One of the most commonly performed genotoxicity tests is the bacterial reverse mutation (Ames) test. The test identifies mutations of the kind that underlie several human genetic disorders and play a crucial role in tumour initiation and growth (Table 2). The bacterial strains carry different mutations that inactivate a gene involved in the synthesis of a critical amino acid, namely tryptophan (Escherichia coli) or histidine (Salmonella), so that the strains can only grow in culture media supplemented with that amino acid. Detectable mutations include substitutions of particular base pairs or frameshift mutations triggered by the deletion or insertion of DNA [92].
Previously, Reid et al. [93] reported the mutagenicity index (MI) of protic ionic liquids (PILs) comprising TAS cations based on the Ames assay using Salmonella typhimurium strains TA100 and TA98. The results showed that increasing the length of the alkyl side chain on the cation, and thereby its lipophilicity, increased the MI with the TA98 strain. Despite having a longer aliphatic chain in its structure than the other hydroxyl-bearing cations in this sample, the TAS cation behaved differently, indicating that hydroxyl groups could offset the increase in MI that would otherwise result from the longer alkyl chain. Since the inclusion of polar functional groups in TAS cation complexes has been associated with reduced toxicity, it is reasonable that the measured mutagenicity similarly decreased.
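The mutagenicity index behind these comparisons is the ratio of revertant colony counts on treated plates to those on solvent-control plates. A minimal sketch with hypothetical TA98 plate counts, not data from Reid et al., follows.

```python
def mutagenicity_index(treated_counts, control_counts):
    """Mean revertant colony count of treated plates divided by the mean
    of the solvent-control plates."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated_counts) / mean(control_counts)

# Hypothetical TA98 revertant colony counts per plate (triplicates).
control = [22, 25, 24]
treated = [61, 58, 66]

mi = mutagenicity_index(treated, control)
print(f"MI = {mi:.2f}")
# A common screening rule scores MI >= 2 (with a dose response) as positive.
```

Computing MI per dose level and checking for a monotonic dose-response is how chain-length effects like those above are ranked across a PIL series.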
On the other hand, Lavorgna et al. [94] carried out acute and chronic toxicity tests and the alkaline Comet assay to assess the genotoxicity of TAS in Ceriodaphnia dubia and Daphnia magna exposed to BAC. The Comet assay, also known as the single-cell gel electrophoresis assay, detects strand breaks and alkali-labile sites arising from the interaction of toxic intermediates with DNA after exposure to a broad spectrum of genotoxins, such as UV radiation. The findings indicate strong induction of DNA migration by BAC, quantified from the lesions and DNA single-strand breaks that form. The percentage of DNA in the tail (tail intensity, %) revealed significant damage in both D. magna and C. dubia after 24 h of exposure, beginning at 0.4 ng/L. In addition, Ferk et al. [95] published the first study on the genotoxic effects of benzalkonium chloride (BAC) and didecyldimethylammonium chloride (DDAC) in primary rat hepatocytes using the Comet assay. The findings revealed that didecyldimethylammonium bromide (DDAB, structurally known as DMDODAB/[N1,1,18,18+][Br−]) exhibited a significantly greater effect in the Comet assay than BAC.
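The "% DNA in tail" endpoint used in these Comet studies is the tail fluorescence expressed as a fraction of total (head plus tail) comet fluorescence. The sketch below uses invented per-comet intensities, not measurements from either study.

```python
import numpy as np

def percent_tail_dna(head_intensity, tail_intensity):
    """% DNA in tail = tail fluorescence / (head + tail fluorescence) * 100."""
    return 100.0 * tail_intensity / (head_intensity + tail_intensity)

# Hypothetical per-comet fluorescence intensities (arbitrary units);
# higher tail fractions indicate more strand breakage.
heads = np.array([900.0, 850.0, 700.0, 400.0])
tails = np.array([100.0, 150.0, 300.0, 600.0])

tail_pct = percent_tail_dna(heads, tails)
print("per-cell % tail DNA:", np.round(tail_pct, 1))
print(f"mean = {tail_pct.mean():.2f} %")
```

Treated-vs-control comparisons are then made on the per-cell distributions (typically 50-100 comets per slide), not just the means.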
In vivo toxicity studies
Animal studies and established in vitro models are frequently employed to assess the possible adverse effects of specific compounds on human health, domestic pets, and livestock [74]. Although the primary goal of toxicity analysis is to determine the harmful health effects of xenobiotics, the approach may be supplemented by more complex biomolecular techniques aimed at elucidating the mechanisms of action of specific compounds. In view of this, laboratory animals have become a significant and well-established instrument for analysing the in vivo toxicological profile of newly developed chemicals and medicinal products. Initially proposed to predict acute systemic toxicity in animals, in vivo toxicology analyses have adopted increasingly complex, highly precise, multispecies techniques with well-defined objectives and experimental methodologies, notably for regulatory testing. Acute toxicity studies are often conducted as preliminary evaluations in the development of novel substances and to detect possible toxicity; they also serve as the first analytical tool to determine the harmful impact of a single drug dose delivered over a limited period. These highly versatile acute toxicity studies have therefore often been applied to evaluate the possible toxicity of ILs, with morbidity or mortality as the most common endpoints. The conventional acute oral toxicity study thus evaluates the median lethal dose (LD50), the dose of an IL that induces mortality in 50 % of the model species, and the median effective dose (ED50), the dose that induces any specified effect in 50 % of the model species [96]. Acute dermal or inhalation tests are conducted when laboratory animals are intensely exposed to ILs in water, with the results expressed as the concentration of IL that induces mortality (LC50) or another specified effect (EC50) in 50 % of the model species. The outcomes of acute toxicity studies not only establish the degree of toxicity of ILs and classify them as hazardous materials, but also set IL dosages for sub-lethal disinfectant applications, support chronic toxicity evaluations, allow toxicity comparisons among IL family members, and assist in identifying low-toxicity ILs for potential applications.
Recognizing the high risk of human exposure to TAS, epidemiological findings that associate TAS with allergic disease, and the demand for extensive dermal toxicological evidence, Shane et al. [97] assessed the skin sensitization and irritancy potential of DDAB in a murine model to evaluate its role in the development of allergic disease. The local lymph node assay (LLNA) was utilized to estimate the sensitization capacity at concentrations of 0.0625–2 %. DDAB caused major irritancy in female BALB/c mice, as indicated by ear swelling, and after four and 14 days of DDAB exposure a substantial increase in gene expression was reported in the draining lymph nodes (DLN) and ear. The findings demonstrated the potential for hypersensitivity and inflammatory reactions to DDAB following dermal exposure, and raised questions regarding the impact of exposure duration on these hypersensitivity reactions. Furthermore, Lee et al. [98] evaluated the acute toxicity of BAC on target organs in mice and the LD50 following intratracheal instillation and oral intake, prior to a repeated-dose toxicity test. BAC is a common TAS and has been linked to toxic effects on the eyes, skin, and airways. Most of the mice died within 24 h of oral BAC intake at 400 mg/kg, while none were affected by exposure to 100 mg/kg over 14 days; dose-dependent mortality was observed at 150–300 mg/kg, with the number of dead mice increasing over time. BALB/c mice subjected to a single oral dose of BAC yielded an LD50 of 241.7 mg/kg, which fell to 8.5 mg/kg after intratracheal administration, indicating that the lung could be the primary site of toxicity.
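LD50 values like those reported by Lee et al. are estimated by fitting a dose-mortality curve to the observed group outcomes. The sketch below fits a two-parameter log-logistic model to invented mortality fractions shaped only to echo the reported pattern (no deaths at 100 mg/kg, all dead at 400 mg/kg); it is not a reconstruction of the actual data.

```python
import numpy as np
from scipy.optimize import curve_fit

def mortality(dose, ld50, slope):
    """Two-parameter log-logistic dose-mortality curve (fraction dead)."""
    return 1.0 / (1.0 + (ld50 / dose) ** slope)

# Hypothetical oral dose groups (mg/kg) and mortality fractions.
doses = np.array([100.0, 150.0, 200.0, 250.0, 300.0, 400.0])
dead_fraction = np.array([0.0, 0.1, 0.3, 0.6, 0.8, 1.0])

# Constrain both parameters to positive, physically sensible ranges
popt, _ = curve_fit(mortality, doses, dead_fraction, p0=[240.0, 5.0],
                    bounds=([1.0, 0.1], [1000.0, 50.0]))
ld50, slope = popt
print(f"LD50 ~ {ld50:.0f} mg/kg")
```

Regulatory studies more often use probit analysis on per-animal counts, but the log-logistic fit illustrates the same principle: LD50 is the dose at which the fitted curve crosses 50 % mortality.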
In vivo toxicity studies on TAS also include invertebrates, with D. magna the most frequently studied species [99]. Recently, Mori et al. [100] performed an acute toxicity test of tetraalkylammonium cations used in the manufacture of thin-film-transistor liquid crystal displays, batteries, and electrolytes, namely TMAH, TEAA, TBAH, and NH4Cl, using a 24-h D. magna immobilization test (Daphtox kit F). Compounds containing larger alkyl groups, such as TEAA and TBAH, were more toxic to D. magna than TMAH, while NH4Cl was the least toxic; a unique synergistic action of iodide was also observed compared to the other halide salts, namely bromide, chloride, and fluoride. In contrast, in vivo data on the toxicity of TAS to vertebrates remain limited. Li et al. [101] reported the toxicity of three quaternary-ammonium-based ILs, 1-hydroxyethyl-3-methylimidazolium tetrafluoroborate ([HOEMIm]BF4), 1-methoxyethyl-3-methylimidazolium tetrafluoroborate ([MOEMIm]BF4), and 1-aminoethyl-3-methylimidazolium tetrafluoroborate ([C2NH2MIm]BF4), in zebrafish. In the acute toxicity test, the 96-h median lethal concentrations (LC50) of the three ILs in zebrafish were 3,086.7 mg/L, 2,492.5 mg/L, and 143.8 mg/L, respectively. In a separate study, Mori et al. [102] investigated TAS toxicity in Medaka fish (Oryzias latipes) following the OECD 203 test procedure (OECD, 1992); TAS exposure severely affected fish growth, and the 96-h lethality test recorded an LC50 of 154 mg/L.
Insight and conclusions
This review presented a significant number of studies with varying views on the toxicity and safe use of TAS with respect to human health, a broad range of living organisms, and the environment. While certain researchers have claimed that TAS materials are biocompatible in various biomedical investigations, others have refuted this claim by demonstrating undesirable biological reactions to TAS, including cytotoxicity [103–109]. The contradictory findings may arise from notable differences among the studies, including different research groups, specific cellular or animal models, and differing physicochemical characterizations of TAS. Nevertheless, the performance and competitiveness of many science-based and industrial processes, including nanotechnology, organic synthesis, electrochemistry, catalysis, and analytical chemistry, have benefited from the recycling capabilities of TAS.
Researchers are therefore compelled to develop new approaches that exploit the non-volatile properties of TAS, which conventional media do not offer. The in vivo biocompatibility of TAS for human consumption or other biomedical applications must be addressed with in-depth toxicity analyses. Furthermore, since this material has emerged as an important component of the solar photovoltaic industry, especially perovskite solar cells, whose performance has roughly tripled within a decade, detailed research on the possible hazards of perovskite solar cell materials to human health and the environment is required.
Addressing the potential toxicity of PSCs is crucial, and numerous life cycle analysis (LCA) studies have been conducted to evaluate their environmental impacts precisely [110].While the leaching of Pb from PSC panels during their lifespan results in a low contamination level compared to background values of Pb in urban areas, anticipating the environmental impact of the abundance of obsolete PSCs at the end of their shelf life is imperative.This remains a critical concern, especially with the widespread implementation of commercial PSC products.Therefore, proactive attention should be given to urban mining approaches, incorporating waste recycling, resource recovery, and circular technological solutions.As of now, solar cells, including PSC, are regulated under waste electrical and electronic equipment guidelines in the EU and China, which could potentially accelerate recycling and recovery initiatives [111].
Although TAS is estimated to contribute ~1 % of the composition, this figure does not fully reflect the diverse roles of TAS as a precursor, additive, solvent, interfacial layer, and protective layer in procedures for preparing high-performance PSC devices.
On the contrary, a majority of PSC components can be easily removed through dissolution in polar organic solvents, alkaline aqueous solutions, and ionic eutectic solvents, presenting promising prospects for industrializing the recycling and reuse of TCO/ETL substrates from PSCs. However, the inevitable generation of waste and end-of-life devices poses serious environmental concerns. Therefore, the proactive development of recycling and recovery technologies for perovskite solar cells, in which individual components can be recycled and recovered using various physical and chemical methods, is necessary. For instance, precious metal contacts such as gold (Au), silver (Ag), and platinum (Pt) can be recovered through electrochemical extraction. Purification and recycling processes can then be applied to the HTM component and the perovskite layer, utilizing methods such as dissolution, solvent extraction, column chromatography, precipitation, and crystallization, while washing, sonication, and annealing can be employed for the ETL and FTO components [110]. As mentioned earlier, TAS can be recovered and purified using the same processes applied to the HTM component and the perovskite layer in the perovskite mixture. Despite the remarkable versatility of TAS in various aspects of solar cell applications, regulatory bodies often mandate cytotoxicity testing as an integral component of safety assessments, which is particularly crucial when considering the commercial application of TAS compounds. Moreover, the extensive application of TAS has raised the risk of harmful chemical leakage into the environment. Regardless of their green credentials, the ecotoxicity of freshly synthesized TAS intended for industrial application should be determined, since various reports have shown the potential harm of TAS to the environment. Encouragingly, the rapid advancement of green technology facilitates the innovative development of TAS with biodegradable and environmentally friendly features. Toxicity testing of bio-based TAS has been recognized as a significant step toward developing more sustainable, biodegradable, and ecologically beneficial TAS derivatives. The utilization of natural and bioactive chemicals, such as fatty acids, amino acids, choline, and other natural acids, has sparked new interest in the development of bio-based TAS. Hence, it is critical to establish a standard guideline that regulates the synthesis of non-toxic and biodegradable materials from sustainable sources, such as choline, sugars, bicyclic monoterpene moiety derivatives, and amino acids.
To the best of the authors' knowledge, only a few studies on the toxicity of bio-based TAS for solar industry usage, mainly TAS produced from fatty acids, have been reported so far. Toxicity information for conventional TAS used in solar cells is available only from previous studies and other sources, for example TBACl, IC50 (cell lines) = 1.79–27.47 mM [85], and TBAI, LD50 oral (rat) = 1,990 mg/kg and LD50 oral (mouse) = 112 mg/kg [113]. Fatty acids are formed through the hydrolysis of triacylglycerol molecules and are naturally found in vegetable oils, such as soy and sunflower oils; they also cause acidity and off-flavour in crude vegetable oils. Because they derive from renewable resources, the synthesis and use of bio-based TAS ionic liquids are receiving increasing attention. Taken together, this review highlights the need to determine the toxicity and hazards of materials prepared from new sources, so that they are neither harmful to human health nor polluting to the environment.
explored the impact of incorporating tetrabutylammonium iodide (TBAI/[N4444+][I−]) as a precursor into the MAPbI3 solution, forming MA(1−x)TBA(x)PbI3 thin films. This investigation aimed to examine how different percentages of TBAI incorporation affected the structure-property relationship of MAPbI3. The findings revealed that introducing TBAI increased crystallinity and grain size, improved surface morphology without pin-holes, and enhanced roughness in the resulting MAPbI3 thin films. Furthermore, the MA(1−x)TBA(x)PbI3 thin film exhibited superior stability, particularly at a relative humidity of approximately 60 % after 15 days, compared to the pure MAPbI3 thin film. Poli et al. [33] also observed an increased hydrophobicity, which produced excellent moisture stability, by modifying the perovskite absorber layer through the addition of tetrabutylammonium iodide (TBAI/[N4444+][I−]) and methylammonium iodide (MAI/[NH3,1+][I−]). Since TBAI/[N4444+][I−] is unreactive with water at room temperature, it possesses exceptional thermal and thermodynamic stability,
Fig. 2. The summary of the TAS-based solar cell efficiency.
N.M.Mustafa et al.
Table 2. In vitro genotoxicity assays of TAS.
Commodity risk assessment of Juglans regia plants from Turkey
Abstract The European Commission requested the EFSA Panel on Plant Health to prepare and deliver risk assessments for commodities listed in Commission Implementing Regulation (EU) 2018/2019 as ‘High risk plants, plant products and other objects’. This Scientific Opinion covers the plant health risks posed by 2‐year‐old grafted bare rooted plants for planting of Juglans regia imported from Turkey, taking into account the available scientific information, including the technical information provided by Turkey. The relevance of any pest for this Opinion was based on evidence following defined criteria. Two EU quarantine pests, Anoplophora chinensis and Lopholeucaspis japonica, and three pests not regulated in the EU, two insects (Garella musculana, Euzophera semifuneralis) and one fungus (Lasiodiplodia pseudotheobromae), fulfilled all relevant criteria and were selected for further evaluation. For these pests, the risk mitigation measures proposed in the technical dossier from Turkey were evaluated by considering the possible limiting factors. For these pests, an expert judgement was given on the likelihood of pest freedom taking into consideration the risk mitigation measures acting on the pests, including uncertainties associated with the assessment. While the estimated degree of pest freedom varied among pests, Lasiodiplodia pseudotheobromae was the pest most frequently expected on the commodity. The expert knowledge elicitation indicated, with 95% certainty, that 9,554 or more grafted bare rooted plants per 10,000 will be free from Lasiodiplodia pseudotheobromae.
Background and Terms of Reference as provided by European Commission

1.1.1. Background

The new Plant Health Regulation (EU) 2016/2031, on the protective measures against pests of plants, has been applied from December 2019. Provisions within the above Regulation are in place for the listing of 'high risk plants, plant products and other objects' (Article 42) on the basis of a preliminary assessment, to be followed by a commodity risk assessment. A list of 'high risk plants, plant products and other objects' has been published in Regulation (EU) 2018/2019. Scientific opinions are therefore needed to support the European Commission and the Member States in the work connected to Article 42 of Regulation (EU) 2016/2031, as stipulated in the terms of reference.
Terms of Reference
In view of the above and in accordance with Article 29 of Regulation (EC) No 178/2002, the Commission asks EFSA to provide scientific opinions in the field of plant health.
In particular, EFSA is expected to prepare and deliver risk assessments for commodities listed in the relevant Implementing Acts as 'High risk plants, plant products and other objects'. Article 42, paragraphs 4 and 5, establishes that a risk assessment is needed as a follow-up to evaluate whether the commodities will remain prohibited, be removed from the list with additional measures applied, or be removed from the list without any additional measures. This task is expected to be ongoing, with a regular flow of dossiers containing the data required for the risk assessment being sent by the applicant.
Therefore, to facilitate the correct handling of the dossiers and the acquisition of the required data for the commodity risk assessment, a format for the submission of the required data for each dossier is needed.
Furthermore, a standard methodology for the performance of 'commodity risk assessment' based on the work already done by Member States and other international organisations needs to be set.
In view of the above and in accordance with Article 29 of Regulation (EC) No. 178/2002, the Commission asked EFSA in December 2019 to provide scientific opinion in the field of plant health for Juglans regia from Turkey taking into account the available scientific information, including the technical dossier provided by Turkey.
Interpretation of the Terms of Reference
The EFSA Panel on Plant Health (hereafter referred to as 'the Panel') was requested to conduct a commodity risk assessment of Juglans regia from Turkey following the Guidance on commodity risk assessment for the evaluation of high-risk plant dossiers (EFSA PLH Panel, 2019).
The EU quarantine pests that are regulated as a group in the Commission Implementing Regulation (EU) 2019/2072 were considered and evaluated separately at species level.
Annex II of Implementing Regulation (EU) 2019/2072 lists certain pests as non-European populations or isolates or species. These pests are regulated quarantine pests. Consequently, the respective European populations, or isolates, or species are non-regulated pests.
• Did not assess the effectiveness of measures for Union quarantine pests for which specific measures are in place for the import of the commodity from Turkey in Commission Implementing Regulation (EU) 2019/2072 and/or in the relevant legislative texts for emergency measures and if the specific country is in the scope of those emergency measures. The assessment was restricted to whether or not the applicant country implements those measures.
• Assessed the effectiveness of the measures described in the Dossier for those Union quarantine pests for which no specific measures are in place for the importation of the commodity from Turkey and other relevant pests present in Turkey and associated with the commodity.
Risk management decisions are not within EFSA's remit. Therefore, the Panel provided a rating based on expert judgement regarding the likelihood of pest freedom for each relevant pest, given the risk mitigation measures proposed by the Ministry of Agriculture and Forestry (MAF) of Turkey.
Data provided by MAF of Turkey
The Panel considered all the data and information (hereafter called 'the Dossier') provided by MAF of Turkey, including the additional information provided by MAF of Turkey in April 2021, after EFSA's request. The Dossier is managed by EFSA.
The structure and overview of the Dossier is shown in Table 1. The number of the relevant section is indicated in the Opinion when referring to a specific part of the Dossier.
The data and supporting information provided by the MAF of Turkey formed the basis of the commodity risk assessment.
The list below shows the data sources used by MAF of Turkey to compile the pest list associated with J. regia.
3) CABI Invasive Species Compendium (online)
The Invasive Species Compendium is an encyclopaedic resource including science-based information and detailed datasheets on pests, diseases, weeds, host crops and natural enemies, drawing on reliable sources (scientists, specialists, independent scientific and specialist organisations, images, maps, bibliographic databases and full-text articles).
4) European and Mediterranean Plant Protection Organization Global Database EPPO (online)
This is a Global Database providing pest-specific information on host range, distribution ranges and pest status. Available online: https://gd.eppo.int/
5) Plant Protection Bulletin (Journal, available online)
The Plant Protection Bulletin has been published by the Plant Protection Central Research Institute since 1952. The journal is published four times a year with original research articles, in English or Turkish, on plant protection and health. It includes research on biological, ecological, physiological, epidemiological and taxonomic studies and on methods of protection against the diseases, pests and weeds that cause damage to plants and plant products, as well as their natural enemies. In addition, it covers studies on residue, toxicology and formulations of plant protection products.

Fauna Europaea (online)
Fauna Europaea is Europe's main zoological taxonomic index. Scientific names and distributions of all living, currently known, multicellular, European land and freshwater animal species are available in one authoritative database. The index was used to verify the taxonomic position of the insects.

International Plant Protection Convention (online)
The International Plant Protection Convention (IPPC) is an international plant health agreement, established in 1952, that aims to protect cultivated and wild plants by preventing the introduction and spread of pests. The IPPC provides an international framework for plant protection that includes developing International Standards for Phytosanitary Measures (ISPMs) for safeguarding plant resources. Available online: https://www.ippc.int/en/core-activities/standards-setting/ispms/

10) Journals and other sources
Journals and bibliographic databases containing research articles on plant pests were used to complete the pest list and the required relevant information on pests. National and EU legislation was used to determine pest status in Turkey and in the EU, respectively.
Additional information used by MAF of Turkey to compile the Dossier and details on literature searches along with full list of references can be found in the Dossier Sections 1.0 and 3.1.
2.2. Literature searches performed by EFSA

The following general searches were combined: (i) a general search to identify pests of Juglans regia in different databases and (ii) a general search to identify pests associated with Juglans as a genus. The general searches were run between 6 August and 1 September 2020 using the databases indicated in Table 2. No language, date or document type restrictions were applied in the search strategy.
The search strategy and search syntax were adapted to each database listed in Table 2, according to the options and functionalities of the different databases and the CABI keyword thesaurus.
For Web of Science, the literature search was performed using a specific, ad hoc established search string (see Appendix B). The string was run in 'All Databases' with no range limits for time or language filters.
Finally, the pest list assessed included all the pests associated with J. regia and all EU quarantine pests associated with Juglans as a genus.
Methodology
When developing the Opinion, the Panel followed the EFSA Guidance on commodity risk assessment for the evaluation of high-risk plant dossiers (EFSA PLH Panel, 2019).
In the first step, pests associated with the commodity in the country of origin (EU-regulated pests and other pests) were identified. Pests not regulated in the EU and not known to occur there were selected based on evidence of their potential impact in the EU. After the first step, all the relevant pests that may need risk mitigation measures were identified.
In the second step, the overall efficacy of the proposed risk mitigation measures for each pest was evaluated. A conclusion on the pest freedom status of the commodity for each of the relevant pests was reached and uncertainties were identified. Pest freedom was assessed by estimating the number of infested plants per 10,000 plants arriving in the EU.
Commodity data
Based on the information provided by the MAF of Turkey the characteristics of the commodity were summarised.
Identification of pests potentially associated with the commodity
To evaluate the pest risk associated with the importation of J. regia from Turkey a pest list was compiled. The pest list is a compilation of all identified plant pests associated with J. regia based on information provided in the Dossier Sections 1.0 and 3.1 and on searches performed by the Panel. In addition, all EU quarantine pests associated with any species of Juglans were added to the list.
The scientific names of the host plants (i.e. Juglans regia and Juglans) were used when searching in the EPPO Global Database and CABI Crop Protection Compendium. The same strategy was applied to the other databases excluding EUROPHYT and Web of Science.
EUROPHYT was investigated by searching for interceptions associated with J. regia commodities imported from Turkey from 1995 to May 2020 and TRACES-NT was used for interceptions from May 2020 to January 2021.
The search strategy used for the Web of Science Database was designed combining English common names for pests and diseases, terms describing symptoms of plant diseases and the scientific and English common names of the commodity and excluding pests which were identified using searches in other databases. The established search string is detailed in Appendix B and was run on 6 August 2020.
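The actual search string is given in Appendix B; as an illustration only, the combination described above (commodity names AND symptom terms, excluding pests already identified in other databases) can be sketched programmatically. All terms below are invented examples, not the Opinion's real string.

```python
# Hypothetical sketch of assembling a Web of Science-style topic search:
# commodity names AND symptom terms NOT already-identified pests.
# The term lists here are illustrative, not those used in Appendix B.
commodity_terms = ['"Juglans regia"', "walnut"]
symptom_terms = ["canker", "dieback", "leaf spot", "root rot"]
excluded_pests = ['"Anoplophora chinensis"', '"Lopholeucaspis japonica"']


def group(terms, op):
    """Join a list of terms with a boolean operator inside parentheses."""
    return "(" + f" {op} ".join(terms) + ")"


def build_search_string(commodity, symptoms, excluded):
    """Combine term groups with boolean operators (TS = topic search field)."""
    query = f"TS={group(commodity, 'OR')} AND TS={group(symptoms, 'OR')}"
    if excluded:
        query += f" NOT TS={group(excluded, 'OR')}"
    return query


print(build_search_string(commodity_terms, symptom_terms, excluded_pests))
```

Keeping the exclusion list separate mirrors the strategy described above, where pests already found via EPPO, CABI and other databases are removed from the bibliographic search rather than re-screened.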
The titles and abstracts of the scientific papers retrieved were screened and the pests associated with J. regia were included in the pest list.
Finally, the list was complemented by those pests mentioned in the Dossier if they were not found using the source of information listed above.
The compiled list (see Microsoft Excel® file in Appendix D) includes all agents reported in association with J. regia, potentially including natural enemies of insects and non-harmful microorganisms, and all quarantine pests that use Juglans as a host. The list was then complemented with other relevant information (e.g. EPPO codes, taxonomic information, categorisation, distribution) useful for selecting the pests relevant for this Opinion.
The evaluation of the compiled pest list was carried out in two steps: first, the relevance of the EU quarantine pests was evaluated (Section 4.1); and second, the relevance of any other plant pests was evaluated (Section 4.2).
Pests for which limited information was available on one or more criteria used to identify them as relevant for this Opinion are listed in Appendix C (List of potential pests not further assessed).
Listing and evaluation of risk mitigation measures
The proposed risk mitigation measures were listed and evaluated for the commodity. When evaluating the potential pest freedom of the commodity, the following types of potential infection sources for J. regia plants in the export nursery and the relevant risk mitigation measures were considered (see also Figure 1).

The risk mitigation measures proposed by MAF of Turkey were evaluated. Information on the biology of each pest, the likelihood of its entry into the export nursery and of its spread inside the nursery, and the effect of the measures on the specific pest on the commodity was summarised in pest sheets for each pest selected for further evaluation (see Appendix A).
Expert knowledge elicitation
To estimate the level of pest freedom of the commodities, a semi-formal expert knowledge elicitation (EKE) was performed following Annex B.8 on semi-formal EKE of the EFSA Opinion on the principles and methods behind EFSA's Guidance on Uncertainty Analysis in Scientific Assessment (EFSA Scientific Committee, 2018). The specific question for the semi-formal EKE was defined as follows: 'Taking into account i) the risk mitigation measures listed in the Dossier, and ii) other relevant information, how many of 10,000 J. regia plants will be infested with the relevant pest/pathogen when arriving in the EU?'. The EKE question was common for all the pests that were assessed.
The uncertainties associated with the EKE (expert judgements) on the pest freedom of the commodity for each pest were taken into account and quantified in the probability distribution applying the semi-formal method described in Section 3.5.2 of the EFSA PLH Guidance on quantitative pest risk assessment (EFSA PLH Panel, 2018). Finally, the results were reported in terms of the likelihood of pest freedom. The lower 5% percentile of the uncertainty distribution reflects the opinion that pest freedom is with 95% certainty above this limit.
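As a purely illustrative sketch (not part of the Dossier or the EKE itself), the way such a summary is read off an uncertainty distribution can be shown with Monte Carlo samples: the 95th percentile of the number of infested plants per 10,000 gives the level that is not exceeded with 95% certainty, and pest freedom holds for the remainder. The distribution below is hypothetical.

```python
import numpy as np

# Hypothetical stand-in for the elicited uncertainty distribution of the
# number of infested plants per 10,000 (the actual EKE distribution from
# the Opinion is not reproduced here).
rng = np.random.default_rng(0)
infested_per_10000 = rng.gamma(shape=2.0, scale=80.0, size=100_000)

# With 95% certainty the number of infested plants does not exceed the
# 95th percentile, so at least (10,000 - that percentile) plants are free.
upper_95_infested = np.quantile(infested_per_10000, 0.95)
pest_free_lower_bound = 10_000 - upper_95_infested

print(f"With 95% certainty, at least {pest_free_lower_bound:.0f} "
      f"plants per 10,000 are pest free")
```

Under this reading, the Opinion's statement that 9,554 or more plants per 10,000 will be free from Lasiodiplodia pseudotheobromae corresponds to an elicited 95th percentile of 446 infested plants per 10,000.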
The risk assessment uses individual plants as the most suitable granularity, for the following reasons:
i) There is no quantitative information available regarding the clustering of plants during production.
ii) For the pests under consideration, cross-contamination during transport is not likely.
iii) The walnut plants are delivered to fruit producers or nurseries.
Description of the commodity
The commodity to be exported to the EU consists of 2-year-old Juglans regia (common name: walnut; family: Juglandaceae) grafted bare rooted plants for planting, referred to as saplings, without leaves (Dossier Sections 1.0 and 3.1). Juglans regia is bud grafted on J. regia rootstock. The production of plants is carried out in soil in production plots in the open air. Before export, the roots are washed to remove soil. At the moment of export, the diameter at the collar of the saplings is 1.5-2 cm and the height of the saplings is 120-150 cm. Walnut saplings are delivered to fruit producers or nurseries (Dossier Sections 1.0 and 3.1).
3.2. Description of the production areas

According to Dossier Section 1.0, walnut saplings are produced in 36 provinces in Turkey, although the production is mainly concentrated in the Balıkesir, Bursa, Denizli, İzmir, Samsun and Yalova provinces. Balıkesir ranks first in the production of walnut saplings (see Figure 2).
According to Dossier Section 3.1, walnut production in Turkey is certified and the same standards are applied for domestic and international trade. Certified walnut saplings are officially checked annually during production, and all certified walnut saplings have export potential. In 2020, certificates were issued for 8,500,000 walnut saplings grown by 162 sapling producers. Manufacturers and production sites for export to the EU are currently unknown due to the ban imposed by the EU (Dossier Section 3.1).
Based on the above information, the Panel considers in its assessment all 36 provinces where walnut saplings are produced as potential places of production of walnut saplings to be exported to the EU.
The production areas are surrounded by wire or stone wall or left empty (Dossier Section 3.1).
Figure 2: Production of walnut plants for planting in Turkey (Dossier Section 1.0). In the legend of the map, the Panel interprets the term 'city border' as 'province border' and the figures after the colour boxes as the number of saplings produced each year.

According to the rules described in Table 4, a distance of at least 20 m is left between the nurseries and other woody plants (Dossier Section 3.1). There is no information on the species composition of the woody plants in the surroundings.
According to Dossier Section 3.1, there are generally no woody plants other than walnut mother plants and walnut saplings at a distance of less than 2 km from the nursery plots, although photographs provided in the Dossier Section 1.0 support that woody plants are present near to production plots.
According to Dossier Section 3.1, there is distance of 5-10 km between the nurseries and urban areas.
According to Dossier Section 3.1, the vast majority of walnut sapling producers only produce walnut saplings. Annual production is 8-10 million walnut saplings in Turkey (see Figure 2). In addition to the production of walnut saplings, only a few producers also produce other fruit saplings in different plots and leave at least 8 m distance between the production plots of walnut and other species.
According to the Köppen-Geiger climate classification, the main climates present in Turkey belong to the classes B, C, D and E (Yılmaz and Çiçek, 2018). The tropical climate zone (A) is not present in Turkey and the polar climate (E) is restricted only to high mountain areas. The temperate climate (C) is the most widely distributed in Turkey (Yılmaz and Çiçek, 2018).
3.3. Production and handling processes
Growing conditions
The production of plants is carried out in the soil in production plots in the open air. After a 2-year break in the sapling production plot (i.e. cultivation of cereals in the first year and fallow in the second year after the saplings are pulled), the same plot can be used again as a production parcel (Dossier Sections 1.0 and 3.1).
The isolation distances for sapling production plot and for mother blocks (i.e. an area that includes the plants from which the propagation materials are obtained) from gardens are specified in Table 3.
Walnut saplings to be exported to the EU fall into the category 'Nursery gardens' (Dossier Section 3.1).
Soil samples are taken from the area where mother blocks will be established and, if the soil is free from nematodes (root-knot nematodes, Meloidogyne spp.), saplings with a basic certificate must be planted in the mother block facility.

The maximum and minimum plant densities are six and four saplings per m² in nurseries (Dossier Section 3.1).
Source of planting material
The propagation material is obtained from the producer's own or another producer's mother block (Dossier Section 1.0). Ninety-one per cent of the certified walnut sapling production is made using buds obtained from the sapling producer's own mother plants (Dossier Section 3.1).

Table 3 (extract), isolation rules for WALNUT:
- It must be in a screenhouse or at least 500 m away from non-certification material.
- It must be at least 100 m away from material outside certification; if the required isolation distance cannot be provided, reproduction should be made in a screenhouse.
- It must be at least 20 m away from material outside certification.
- It must be at least 8 m away from material outside certification.
Most producers purchase certified seeds from a few producers. Less than 4% of the rootstocks are produced via tissue culture methods (Dossier Section 3.1).
If the production materials used, namely buds, seeds, seedlings and clonal rootstocks, have been previously certified by the Ministry, they can be used in sapling productions. It is forbidden to use noncertified material in sapling production. If uncertified materials are used, the saplings produced with these materials cannot be certified, and they are destroyed, and the producer is penalised by the Ministry (Dossier Section 3.1).
The mother plants are approximately 10 years old, but they can be used up to an age of 25 years in accordance with the Phytosanitary Standards Instruction in Fruit and Vine Saplings and Propagation Materials. Plants used for seed production are mostly 20-25 years old (Dossier Section 3.1).
Management of mother plants
Before establishing the mother block, soil samples are taken by the official inspector and officially analysed to confirm the absence of quarantine organisms. Phytosanitary inspections are carried out on mother plants by Ministry experts three times per year in spring, summer and autumn. Certificate and labels are issued by the MAF of Turkey and sent to the producer for the buds to be taken from the mother plants that meet the requirements of the Phytosanitary Standards Instruction. The producer either uses the certified buds in his own sapling production or sells them to another sapling producer (Dossier Section 3.1).
The walnut mother plants should be at least 20 m away from other orchards or plants. If the isolation distance is sufficient, the mother plants are visually inspected for the presence of harmful organisms specified in Table 4 and if in doubt samples are taken and analysed in the laboratory. If the isolation distance is not sufficient, samples are taken from 1/5 of the mother plants every year and analysed in the laboratory (Dossier Section 3.1).
Mother plants that produce seeds are officially inspected in the same way as mother plants that produce buds (Dossier Section 3.1).
If plants are free from the organisms specified in Table 4, the Ministry issues certificates and labels for the propagation materials to be taken from plants in the mother blocks (Dossier Section 1.0).
Production cycle
Before sapling production, an officer takes soil samples from the parcels. The samples are analysed for nematodes by the Ministry Quarantine Agency. If the growing medium is found to be free from nematodes, the production of saplings is started (Dossier Section 1.0).
Before the rootstock planting, burnt animal manure or worm manure is applied to the growing area. In November, seeds or clonal rootstocks of J. regia are sown/planted in the sapling production parcel or growing medium. Peters brand 30.10.10 fertiliser is given by drip irrigation after the seeds germinate and the seedlings start to sprout in the spring and, if needed, spraying against thrips is done. However, no further information was provided on these treatments (Dossier Section 1.0).
According to Dossier Section 3.1, in the production of walnut saplings, seedlings are mostly used as rootstocks. The seeds are sown in the sapling production plot in November. Rootstocks produced by tissue culture methods, which are rarely used in sapling production, are planted in the plot at the beginning of spring. Clonal rootstocks are rootstocks produced by tissue culture methods and the rate of use in total walnut sapling production is lower than 4%. Only producers authorised by the Ministry can produce rootstocks via tissue culture methods. In the propagation of rootstocks via tissue culture methods, producers transfer the shoot tips or buds taken from their mother plants to the in vitro culture. The productions made by these producers are also under the control and inspection of the Ministry. As a result of the official inspections, certificates and labels are issued by the Ministry for rootstocks that are true to type and that meet the requirements of the phytosanitary legislation. After the rootstocks are certified, they can be sold to sapling producers.
According to Dossier Section 3.1, patch bud grafting is used in the vast majority of walnut sapling production. Patch bud grafting is performed in August-September of the following year; at the time of grafting, the rootstock is 9-10 months old (the time between sowing and grafting). The graft wound is protected against infections using copper solutions, and the graft is then wrapped with grafting tape. Tools are disinfected with chemical compounds containing 10% chlorine prior to grafting. The period between patch bud grafting and the export of walnut saplings is 16 months (Dossier Section 3.1).
Chip budding is rarely used in the production of walnut saplings and it is performed in April-May. At the time of grafting, rootstock is 5-6 months old. Grafted bud will burst in the spring following the year of grafting (Dossier Section 3.1).
According to Dossier Section 3.1, drip irrigation system is used in almost all of the nurseries. No treatment is made to the irrigation water. During the vegetation period, irrigation is generally done every 4 days (Dossier Section 3.1).
In May, spraying against fungal diseases is done twice with copper products at an interval of 10 days. The same fertilisation as described above is repeated in June, and potassium nitrate is used for fertilisation from July. Spraying against Empoasca spp. and red spider mite (Panonychus ulmi) is done twice from July [according to Dossier Section 3.1, using 80% sulfur (400 g/100 litres of water), which is licensed for fruit trees]. In August, potassium nitrate is applied and, if necessary, spraying against Empoasca spp. and red spider mite is continued. If there is a micro-element deficiency, fertilisation is adjusted accordingly. When 50% of the leaves are lost in autumn, the copper spraying is repeated. In the spring, the fertilisation and spraying schedule of the previous year is applied exactly.
According to Dossier Section 1.0, the general rules on the production of walnut saplings/production material in Turkey are specified as follows:
a) The producer must obtain a 'Sapling Producer Certificate' from the MAF of Turkey before starting production. Afterwards, with this certificate of authorisation, operator registration is made in the plant passport system.
b) Production, certification and marketing in Turkey are only permitted for registered varieties.
c) If the analysis results are clear in terms of quarantine factors (see Table 4) in the production area/environment, production of saplings and materials is permitted only in this area.
d) At the beginning of the production process, the producer applies to the Provincial Directorate of the MAF of Turkey. After official inspections conducted by the Ministry experts, an approval that 'certificate and certification label and plant passport can be issued' is given for saplings and materials that are healthy and meet certain growing standards. A 'Sapling Certificate', 'Production Material Certificate' and 'Certification-Plant Passport Label' are issued by the Ministry only for these productions and sent to the producer.
e) Certificates and labels are not issued for productions that the Ministry experts have not approved, and marketing of them is not permitted in Turkey or abroad.
f) Production may not be permitted for a certain period in an area containing productions in which contamination with a quarantine factor has been detected; possible risks are prevented by controls and analyses, with surveys of productions made within a certain distance from the area where contamination was detected.
g) The certification system is subject to the same conditions as the EU certification system. In this system, following the provisions of the Instruction on Plant Health in Fruit and Vine Sapling and Propagation Materials, the isolation distances from other areas for preliminary basic, basic and certified productions of walnut are given in Table 3.
h) Permission is given by the Ministry to export saplings and propagation materials that are allowed to be sold in Turkey, namely those having a 'Sapling Certificate' or 'Production Material Certificate' and a 'Certification-Plant Passport Label'; without a certificate and label, marketing of them domestically and abroad is not permitted. If any production is performed without the stated requirements, these productions are seized and destroyed by the Ministry. This certificate is a 'marketability' certificate showing that the sapling and the material are healthy and true to name, issued by the Ministry organisations DSRC (Directorate of Seed Registration and Certification) or SSTD (Sapling and Seedling Test Directorate). A 'Certificate of Origin' and a 'Plant Health Certificate' are also issued for the export of saplings and materials.
i) Following the Certification Regulations, breeding No. 1 is established with plant breeder material by the variety holder authorised by DGOPP (Directorate General of Plant Production) or by organisations authorised for the variety. Preliminary basic productions are made from that breeding. Health controls are performed by macroscopic and mostly laboratory analyses for the factors subject to certification in these productions (quarantine factors and harmful organisms affecting quality, see Table 4); the amount that can be certified is determined, a production material certificate and certification-plant passport label are obtained from DSRC or SSTD, and the materials can then be sold domestically and abroad.
j) In the basic and certified productions, variety control is performed in breedings No. 2 and 3 by the organisation expert authorised by the Ministry, and a 'Breeding Variety Identification Report' is issued for approved breedings.
k) In standard sapling production under the Certification Regulations, if production material (rootstock-slip) is purchased, submission of the material certificate is mandatory.
l) If the producer uses production material obtained from his own breeding for sapling production in the standard class, there is no requirement for the material to have a certificate. However, this does not mean that the material has not been inspected in terms of plant health. In these cases, there is no plant health risk from this type of sapling, as the producer's breeding has been inspected for quarantine factors (see Table 4) through annual plant health inspections by experts of the Provincial/District Directorate of Agriculture in the operator's production areas.
Pest monitoring during production and official inspections
According to Dossier Section 3.1, annual inspections of saplings and mother plants are carried out by the Ministry inspectors. If the isolation distances specified in Table 3 are provided, inspections are done visually. If required by the Ministry, additional laboratory analyses are performed. Plants around the production areas are also inspected annually by the Ministry experts for quarantine organisms. If these plants are contaminated with harmful organisms subject to quarantine, plants and saplings in this area are destroyed. According to Dossier Section 3.1, the inspections focus on the harmful organisms listed in Table 4, which also includes a description of the methods of detection of the relevant pests. Soil analysis is done once, in spring or autumn, before planting; if soil analysis and harvesting from the area to be cultivated are not in place, analysis is done at most every 4 years. During harvesting, the roots are visually inspected for the presence of symptoms.
Virus
- Cherry leaf curl disease (Cherry leaf roll nepovirus): the plants are visually inspected at least once a year; laboratory analysis is performed.

Fungus
- Phytophthora root and crown rot disease (Phytophthora spp.): visual inspection is done at least once during the vegetation period, and isolation and microscopic examination are done from suspected samples.
- Cytospora canker (Cytospora spp.): visual inspection is done at least once during the vegetation period, and isolation and microscopic examination are done from suspected samples.
- Rosellinia root rot (Rosellinia necatrix): visual inspection is done once or twice a year, and isolation and microscopic examination are done from suspected samples.
- Armillaria root rot (Armillaria mellea): visual inspection is done once or twice a year, and isolation and microscopic examination are done from suspected samples.
Laboratory methods of detection and identification used for different categories of pests
The Dossier Section 3.1 specifies the laboratory methods of detection and identification used for the different categories of pests as follows:
- For insects and mites: visual macroscopic and microscopic analyses are conducted by the entomology experts.
- For nematode identification: morphological diagnosis under a microscope and, when necessary, PCR tests are used. In particular, before establishing new nurseries, soil samples are collected and analysed for the presence of Meloidogyne spp.
- For quarantine organisms: there is a guideline on standard diagnostic protocols for analysing quarantine pests in Plant Health Diagnostic Laboratories under the MAF of Turkey. The protocols in this guideline have been prepared based on EPPO standard diagnostic protocols; where no EPPO protocol existed for a pest concerning walnut, diagnostic protocols were prepared using IPPC or other well-known scientific methods.
- For other pests, e.g. regulated non-quarantine pests:
a) For bacteria: generally, a standard isolation process and plating on semi-specific or general artificial media are used (NA, YDCA, King's B, etc.). Suspicious colonies are then selected and purified. After that, biochemical, serological, biological or molecular methods (PCR, real-time PCR), according to the target pathogen, are used. Finally, pathogenicity tests on the hosts concerned or on the proposed host are performed.
b) For viruses: enzyme-linked immunosorbent assay (ELISA) methods based on serology are used. Any suspicious or positive result is confirmed using molecular methods (RT-PCR).
c) For fungi: symptoms on plants are checked visually. Isolation procedures for fungi and morphological and microscopic identification are then performed. If needed, molecular methods (PCR) are used. A pathogenicity test on the host may also be performed.
Post-harvest processes and export procedure
Saplings are removed from the soil after shedding their leaves, in November-December (Dossier Sections 1.0 and 3.1). The saplings are then transported to the warehouses by trailers. Roots are washed in washing areas near the warehouses, cleaned of soil and taken into the warehouses. High-pressure water is not used for washing to avoid damaging the roots.
Roots are sprayed with fungicides, traditionally with Thiram, although this fungicide is no longer usable (Dossier Section 3.1). No further information is provided on whether Thiram has been replaced with other fungicides. Finally, roots are wrapped in gelatine (Dossier Section 3.1). Official inspections before export are carried out by the Ministry quarantine inspector. A phytosanitary certificate is issued for saplings found suitable.
Saplings to be exported are grouped in bundles of ten (Dossier Sections 1.0 and 3.1). Before export, bundles are wrapped in plastic sheets and loaded into crates. After packaging, they are kept in storage at 2-4°C and 85% relative humidity until the date of export. In general, after packaging for export, the saplings are immediately loaded onto trucks. Saplings are transported under conditions suitable for the buyer's request and the sales agreement. Generally, transportation is carried out in refrigerated trucks. The relative humidity in the loaded trailer must be between 85% and 95%, and the trailer temperature between 2°C and 4°C.
The export takes place from November to March.
Identification of pests potentially associated with the commodity
The compiled pest list (see Microsoft Excel ® file in Appendix D) including all agents associated with J. regia and all EU quarantine pests associated with Juglans yielded 704 pests. This list also included 27 RNQPs and 3 deregulated pests that were subsequently excluded from the evaluation as indicated in Section 1.2.
Selection of relevant EU-quarantine pests associated with the commodity
The EU listing of Union quarantine pests and protected zone quarantine pests (Commission Implementing Regulation (EU) 2019/2072) is based on assessments concluding that the pests can enter, establish, spread and have potential impact in the EU.
Twenty-six EU-quarantine pests that are reported to use J. regia or Juglans as a host plant were evaluated (Table 5) for their relevance of being included in this Opinion.
The relevance of an EU-quarantine pest for this Opinion was based on evidence that: 1) the pest is present in Turkey; 2) Juglans regia or other species in the genus Juglans are hosts of the pest; 3) one or more life stages of the pest can be associated with the specified commodity.
Pests that fulfilled all three criteria were selected for further evaluation.
Of the 26 EU quarantine pests evaluated, two pests (i.e. Anoplophora chinensis and Lopholeucaspis japonica), present in Turkey and known to be associated with the commodity were selected for further evaluation (see Table 6).
Selection of other relevant pests (not regulated in the EU) associated with the commodity
The information provided by MAF of Turkey, integrated with the search performed by EFSA, was evaluated to assess whether there are other potentially relevant pests of J. regia present in the country of export. For these potential pests not regulated in the EU, pest risk assessment information on the probability of introduction, establishment, spread and impact is usually lacking. Therefore, these pests that are potentially associated with J. regia were also evaluated to determine their relevance for this Opinion based on evidence that: 1) the pest is present in Turkey; 2) the pest is (i) absent or (ii) has a limited distribution in the EU and either official phytosanitary measures are in place in at least one EU MS or there is evidence of a recent introduction of the pest; 3) Juglans regia is a host of the pest; 4) one or more life stages of the pest can be associated with the specified commodity; 5) the pest may have an impact in the EU.
Pests that fulfilled all five criteria were selected for further evaluation. Based on the information collected, 678 potential pests not regulated in the EU, known to be associated with J. regia were evaluated for their relevance to this Opinion. Pests were excluded from further evaluation when at least one condition listed above (1-5) was not met. Details can be found in the Appendix D (Microsoft Excel ® file). Of the evaluated pests not regulated in the EU, two insects (i.e. Garella musculana and Euzophera semifuneralis) and one fungus (Lasiodiplodia pseudotheobromae) were selected for further evaluation because they met all of the selection criteria. More information on these three pests can be found in the pest datasheets (Appendix A).
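The five-criteria screening described above can be pictured as a simple boolean filter over pest records. The sketch below is purely illustrative: the record fields, names and example entries are hypothetical, not taken from the actual pest list in Appendix D.

```python
from dataclasses import dataclass

@dataclass
class PestRecord:
    """Hypothetical record for one candidate pest (fields mirror criteria 1-5)."""
    name: str
    present_in_turkey: bool          # criterion 1
    absent_or_limited_in_eu: bool    # criterion 2 (incl. official measures or recent introduction)
    host_is_juglans_regia: bool      # criterion 3
    associated_with_commodity: bool  # criterion 4: a life stage can travel on the commodity
    potential_eu_impact: bool        # criterion 5

def selected_for_further_evaluation(p: PestRecord) -> bool:
    """A pest is retained only if all five criteria are met."""
    return all([
        p.present_in_turkey,
        p.absent_or_limited_in_eu,
        p.host_is_juglans_regia,
        p.associated_with_commodity,
        p.potential_eu_impact,
    ])

# Example: a pest failing any single criterion is excluded from further evaluation.
candidates = [
    PestRecord("Example pest A", True, True, True, True, True),
    PestRecord("Example pest B", True, False, True, True, True),
]
retained = [p.name for p in candidates if selected_for_further_evaluation(p)]
```

In the Opinion, applying this kind of all-criteria filter to 678 candidate pests left only three for further evaluation.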
Overview of interceptions
Data on the interception of harmful organisms on plants of J. regia can provide information on some of the organisms that can be present on J. regia despite the measures taken.
According to EUROPHYT online (accessed on 1 September 2020) and TRACES-NT online (accessed on 5 February 2021), there were no interceptions of plants for planting of J. regia from Turkey destined for the EU Member States due to the presence of harmful organisms between 1995 and January 2021.
In 2020, 2,250,000 walnut saplings were planned for export to the EU.
List of potential pests not further assessed
From the list of pests not selected for further evaluation, the Panel highlighted seven pests (see Appendix C) for which there was uncertainty about at least one criterion used to select them for further evaluation. The detailed reason is provided in Appendix C for each species.
Summary of pests selected for further evaluation
Five pests reported to be present in Turkey while having potential for association with the commodity destined for export to the EU are listed in Table 6. The effectiveness of the risk mitigation measures proposed for the commodity by Turkey was evaluated for these selected pests.
Risk mitigation measures
For each of the selected pests (see Table 6), the Panel assessed the possibility that it could be present in the exporting nurseries and assessed the probability that pest freedom is achieved by the proposed risk mitigation measures.
The information used in the evaluation of the effectiveness of the risk mitigation measures is summarised in a pest data sheet (see Appendix A).
Possibility of pest presence in the export nurseries
For each relevant pest, the Panel evaluated the likelihood that the pest could be present in the export nurseries by assessing the possibility that J. regia saplings are infested either by: • introduction of the pest (e.g. insects, spores) from the environment surrounding the nursery, • introduction of the pest with new plants/seeds, • spread of the pest within the nursery.
Risk mitigation measures proposed by MAF of Turkey
With the information provided by the MAF of Turkey (Dossier Sections 1.0, 3.1 and 3.5), the Panel summarised the risk mitigation measures (see Table 7) proposed by MAF of Turkey for J. regia plants designated for export to the EU.
The descriptions of the risk mitigation measures in Table 7 are fully consistent with the original wording used in the Dossier. The target species in the table are those indicated in the Dossier Sections 1.0, 3.1 and 3.5. While most of the target species are not relevant, the Panel assessed the risk mitigation measures described in the table with reference to the pests retained for further evaluation in this Opinion in the Appendix A. All nurseries producing J. regia plants for planting are required to respect the 'Regulation on the plant passport system and registration of operators', 'Regulation on the certification and marketing of young fruit plants and propagation materials' and 'Instructions on plants health in fruit and vine saplings and propagation material', where the phytosanitary standards for fruit saplings and propagation materials are described (Dossier Section 3.5).
Physical isolation
The production areas are surrounded by a wire fence or stone wall, or left empty (Dossier Section 3.1).

3 Soil analyses
Target pest species: Xiphinema diversicaudatum.
Timing of the treatment: spring or autumn, once before planting.
Specification for Xiphinema diversicaudatum: before the nursery is established, soil analysis for X. diversicaudatum is done in vine, almond, plum, apricot, cherry, peach and olive fields.
Soil analysis: analysis is done once, in spring or autumn, before planting; if no soil analysis and harvesting has taken place in the area to be used for production, the analysis is repeated at a maximum interval of 4 years.
Target species: Meloidogyne spp. In particular, before establishing new nurseries, soil samples are collected and analysed for the presence of Meloidogyne spp. (root knot nematodes). Soil samples are taken from the land where nurseries will be established. Sampling is performed using a soil probe, 10-30 cm deep. Subsamples taken from 60 different points representing 1 ha of area are mixed, and one single sample is taken from this mixture to form the final sample. One sample contains 1 kg of soil. The sample arriving at the laboratory is mixed homogeneously and 200 cm3 of soil is taken and analysed. With regard to the extraction method, the quarantine laboratories and the nematology laboratories in the research institutes use a combination of methods. All laboratories use the methods found in the EPPO diagnostic protocols (PM 7/119), and a modified Baermann funnel method is used in soil analysis (EPPO, 2013) (Dossier Section 3.1).

Timing of the treatment: all year.
Specification for Anoplophora chinensis: all trees infested with the pest were marked, cut into chips and then destroyed by burning. Various tree species, such as maple, willow and poplar, were destroyed within the scope of the eradication work. In the period between May and October, when the adults of the pest are active, trees with adult emergence are marked. In the period from November to March, when the pest is inactive, the previously marked trees are cut down. The root parts of the infested trees were also destroyed.
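The composite-sampling arithmetic described in the Dossier (60 subsample points per hectare, one 1 kg composite sample, of which 200 cm3 is analysed) can be sketched as follows. The scaling by whole started hectares and the function name are assumptions for illustration, not part of the Dossier.

```python
import math

# Dossier figures for Meloidogyne spp. composite soil sampling
POINTS_PER_HA = 60     # subsample points representing 1 ha
COMPOSITE_KG = 1.0     # one composite sample of 1 kg soil per hectare unit
ANALYSED_CM3 = 200     # volume actually analysed per composite sample

def sampling_plan(area_ha: float) -> dict:
    """Sketch of the sampling workload for a nursery of the given area.

    Assumes (for illustration only) one composite sample per started hectare.
    """
    ha_units = math.ceil(area_ha)
    return {
        "composite_samples": ha_units,
        "subsample_points": ha_units * POINTS_PER_HA,
        "soil_collected_kg": ha_units * COMPOSITE_KG,
        "soil_analysed_cm3": ha_units * ANALYSED_CM3,
    }
```

For example, a 2.5 ha nursery would, under this assumption, require three composite samples built from 180 probe points.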
Specification for Quadraspidiotus perniciosus: -Soil cultivation, irrigation, fertilisation, pruning and other cultural measures should be done in a timely and duly manner in orchards contaminated with San-Jose scale. -Fruit trees that are heavily contaminated with San Jose scale should be pruned shortly before the buds awaken, and the leftovers from pruning should be put away from the orchard for parasitoid emergence. -While establishing an orchard, certified and clean saplings should be used.
-Sticks taken from infested trees should not be used as support for healthy trees.
-Bud eye and scions should not be taken from contaminated trees.
- If other host plants at the edge of the orchard are infested with the pest, they should be sprayed.
- Infested fruits should be destroyed.

Pest surveillance and monitoring during production and official inspections
According to Dossier Section 3.1, annual inspections of saplings and mother plants are carried out by the Ministry inspectors. If the isolation distances specified in Table 4 are provided, inspections are done visually. If required by the Ministry, additional laboratory analyses are performed. Plants around the production areas are also inspected annually by the Ministry experts for quarantine organisms. If these plants are contaminated with harmful organisms subject to quarantine, these plants and the saplings in this area are destroyed.
Specification for the surveillance of target pest species: timing: April to October. Specification for inspection and trapping of target pest species: timing: all year.
Specification for Anoplophora chinensis: in places where the pest's presence is unknown, surveys should be carried out at least once a year, at any time of the year; September-October can be preferred. During the surveys, pupae are expected to be seen in April-May, young larvae in June-July and mature larvae in September-October.
Plants that host this insect are examined by visual inspection. This insect is difficult to detect on plants; however, severe larval damage may be noticed. The presence of the pest can be detected more easily by the adults themselves or by the exit holes formed when the mature adults emerge from the tree trunk. Adult exit holes can be 10-15 mm in diameter. Feeding signs and sawdust residues in the shoots may indicate the presence of the larvae. If in doubt, a branch of the tree at ground level or below is cut and the galleries formed by the larvae are examined. If a larva is present, a sample is taken and sent for diagnosis.
Specification for Erschoviella musculana (= Garella musculana): for the early count, a total of 100 shoots (10 shoots from each of at least 10 randomly selected trees in an orchard of 100 trees) are checked by visual examination. If there are more than 100 trees or saplings in the orchard, the same process is repeated for each additional 100 trees/saplings. If no larval infestation is found or detectable in the early period, fruit infestation should be checked. For the fruit count, a total of 100 fruits (10 fruits from different directions and heights on each of at least 10 randomly chosen trees in an orchard of 100 trees) are visually inspected. Since its shoot and fruit damage can also be confused with apple jerky worm damage, if in doubt, shoot and fruit samples should be taken and sent to the relevant Institute for diagnosis. Newly established nurseries and orchards with seedlings and trees grafted with foreign varieties should be examined. In new shoots, it should be checked whether a gallery has been opened, especially where the petiole is located. Corrugated cardboard should be used as a trap band against the larvae of the pest. The pupae of the pest gathered under the bark should be checked. Fallen fruits and pruning residues should be examined. The pest can be carried to new areas especially with transplanted plants, scions and residual pruned branches; for this reason, scions of foreign varieties in particular should be examined carefully.

9 Weed management
Weeds are controlled by mechanical means once a month from March to September-October (Dossier Section 3.1).
10 Chemical treatments during production
In May, spraying against fungal diseases is carried out with copper products, twice with an interval of 10 days.
Spraying is carried out against Empoasca spp. and red spider mite twice from July using 80% sulfur (400 g/100 litres of water), which is licensed for fruit trees.
If needed, spraying against thrips is carried out.
The graft wound is protected against infections using copper solutions.
11 Washing the roots
Before exporting, the roots of the saplings are washed so that no soil remains. High-pressure water is not used for washing to avoid damaging the roots.
12 Official inspections before export
Official inspections before export are carried out by the Ministry quarantine inspector. A phytosanitary certificate is issued for saplings found suitable (Dossier Section 1.0).
13 Chemical treatments before export
Before loading, the roots of the saplings are sprayed with fungicide (Dossier Section 1.0). According to Dossier Section 3.1, Thiram has no longer been used in Turkey since 31 July 2020. No further information is provided in the Dossier on whether Thiram has been replaced with other fungicides.
Evaluation of the current measures for the selected relevant pests including uncertainties
The relevant risk mitigation measures acting on the selected pests were identified. Any limiting factors on the effectiveness of the measures were documented.
All the relevant information including the related uncertainties deriving from the limiting factors used in the evaluation are summarised in a pest datasheet provided in Appendix A. Based on this information, for each relevant pest, an expert judgement is given for the likelihood of pest freedom of commodity taking into consideration the risk mitigation measures acting on the pest and their combination.
An overview of the evaluation of each relevant pest is given in the sections below (Sections 5.3.1-5.3.5). The outcome of EKE on pest freedom after the evaluation of the proposed risk mitigation measures is summarised in Section 5.3.6.
The explanation of pest freedom categories used to rate the likelihood of pest freedom in Sections 5.3.1-5.3.5 is shown in Table 8.
Table 8: Pest freedom categories and the corresponding percentile of the distribution (number of pest-free plants out of 10,000):
- Sometimes pest free: < 5,000
- More often than not pest free: 5,000 to < 9,000
- Frequently pest free: 9,000 to < 9,500
- Very frequently pest free: 9,500 to < 9,900
- Extremely frequently pest free: 9,900 to < 9,950
- Pest free with some exceptional cases: 9,950 to < 9,990
- Pest free with few exceptional cases: 9,990 to < 9,995
- Almost always pest free: 9,995 to 10,000

Overview of the evaluation of Anoplophora chinensis

Overview of the evaluation of Anoplophora chinensis for grafted bare rooted plants. Rating of the likelihood of pest freedom: Pest free with some exceptional cases (based on the Median). Possibility that the pest could become associated with the commodity: Anoplophora chinensis is a polyphagous wood-boring beetle that attacks living trees. It is reported to be 'transient and under eradication' in Turkey. In İstanbul, A. chinensis was first detected in 2014 in nurseries producing ornamental plants, and was detected mostly in public parks, home gardens and recreation areas, which are all environments rich in potential host trees. Both males and females can fly up to 2 km. Juglans regia is a host of A. chinensis, although it is not listed as a preferred host. As the walnut saplings intended for export to the EU are produced in 36 provinces including İstanbul, it cannot be excluded that populations of A. chinensis are present in the environment neighbouring the export nurseries. Plants are grown in open fields and adult A. chinensis can enter from the surrounding environment. Oviposition occurs in the bark on the lower part of stems with a diameter larger than 1 cm, making the commodity a pathway.
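The pest freedom categories can be read as a simple lookup from the elicited number of pest-free plants per 10,000 to a category label. The sketch below is illustrative only; the function name and data structure are not part of the Opinion.

```python
# Upper bounds (exclusive) of each pest freedom category, per Table 8,
# expressed as pest-free plants out of 10,000.
THRESHOLDS = [
    (5000, "Sometimes pest free"),
    (9000, "More often than not pest free"),
    (9500, "Frequently pest free"),
    (9900, "Very frequently pest free"),
    (9950, "Extremely frequently pest free"),
    (9990, "Pest free with some exceptional cases"),
    (9995, "Pest free with few exceptional cases"),
]

def pest_freedom_category(pest_free_per_10000: int) -> str:
    """Return the Table 8 category for a count of pest-free plants out of 10,000."""
    for upper, label in THRESHOLDS:
        if pest_free_per_10000 < upper:
            return label
    return "Almost always pest free"  # 9,995 to 10,000
```

For example, the elicited lower bound of 9,554 for Lasiodiplodia pseudotheobromae falls in the 'very frequently pest free' category, consistent with the conclusions.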
Measures taken against the pest and their efficacy
The relevant applied measures are: (i) regular inspections in the nurseries (at least 1 inspection per year); (ii) export inspections; (iii) surveillance at national level. Eradication (roguing) is also performed.
Interception records
In the EUROPHYT/TRACES-NT database, there are no records of notification of J. regia plants for planting, from Turkey or from other countries, due to the presence of A. chinensis between 1995 and January 2021 (EUROPHYT/TRACES-NT, online).
Shortcomings of current measures/procedures
Eradication through roguing is unlikely to involve asymptomatic plants. Therefore, the measure will not be fully effective. There is no clear indication of other risk mitigation measures in place in the exporting nurseries and surrounding environments, effective against A. chinensis.
Main uncertainties
The pest prevalence in the surrounding environment is unknown.
In general, the information provided was either poorly detailed or not specifically adapted to nurseries. There is uncertainty on whether the risk mitigation measures indicated by Turkey are mandatory or only general recommendations.
Overview of the evaluation of Garella musculana
Overview of the evaluation of Garella musculana for grafted bare rooted plants
Rating of the likelihood of pest freedom
Extremely frequently pest free (based on the Median). Possibility that the pest could become associated with the commodity: Garella musculana is a tuft moth native to Central Asia, strictly associated with walnut. Garella musculana is reported to have been introduced into Turkey in the Bartın area, where it is abundant in walnut orchards. The pest can fly, and as walnut saplings intended for export to the EU are produced in areas close to Bartın, it cannot be excluded that populations of G. musculana could enter the export nurseries. Oviposition occurs on the shoots and the larva is a shoot miner, while the pupa is formed on the bark, making the commodity a pathway.
Measures taken against the pest and their efficacy
The relevant applied measures are: (i) regular inspections in the nurseries (at least 1 inspection per year); (ii) export inspections; (iii) surveillance at national level. Roguing is also performed.
Interception records
In the EUROPHYT/TRACES-NT database, there are no records of notification of J. regia plants for planting, from Turkey or from other countries, due to the presence of G. musculana between 1995 and January 2021 (EUROPHYT/TRACES-NT, online).
Shortcomings of current measures/procedures
Roguing is unlikely to be applied to recently infested plants; therefore, the measure will not be fully effective. Except for biological control, there is no clear indication of other risk mitigation measures, effective against G. musculana, in place in the exporting nurseries and surrounding environments. The biological control strategy is described only superficially, hampering a thorough assessment.
Main uncertainties
Since walnut is the main host of the pest, there is uncertainty about the frequency of walnut orchards in the surrounding environment.
In general, the information provided was either poorly detailed or not specifically adapted to nurseries.
There is uncertainty on whether the risk mitigation measures indicated by Turkey are mandatory or only general recommendations.
Overview of the evaluation of Euzophera semifuneralis
Overview of the evaluation of Euzophera semifuneralis for grafted bare rooted plants. Rating of the likelihood of pest freedom: Pest free with some exceptional cases (based on the Median). Possibility that the pest could become associated with the commodity: the pest can enter the production fields by flying. Juglans regia is reported as a host. Euzophera semifuneralis overwinters as a mature larva in a typical white silken cocoon under the bark. Young trees and saplings may also be infested.
Measures taken against the pest and their efficacy
The relevant applied measures are: (i) regular inspections in the nurseries (at least 1 inspection per year); (ii) export inspections.
Interception records
In the EUROPHYT/TRACES-NT database, there are no records of notification of J. regia plants for planting, from Turkey or from other countries, due to the presence of E. semifuneralis between 1995 and January 2021 (EUROPHYT/TRACES-NT, online).
Shortcomings of current measures/procedures
There is no clear indication of a pesticides scheme or any other risk mitigation measure in place in the exporting nurseries and surroundings, effective against E. semifuneralis on J. regia.
Main uncertainties
The presence of the pest in the surrounding environment of the nurseries is uncertain.
There is uncertainty on whether the risk mitigation measures indicated by Turkey are mandatory or only general recommendations.

Overview of the evaluation of Lasiodiplodia pseudotheobromae

Overview of the evaluation of Lasiodiplodia pseudotheobromae for grafted bare rooted plants. Rating of the likelihood of pest freedom: Very frequently pest free (based on the Median). Possibility that the pest could become associated with the commodity: the pathogen was reported from the Mersin and Adana provinces of Turkey. The pathogen normally enters the plant through wounds (usually from pruning), which is the main way of spreading, although, as for other fungi in the family Botryosphaeriaceae, endophytic stages are also reported. Pycnidia are produced on diseased plant tissues, and conidia are spread by wind, rain or insects. Pathogen inoculum could also be spread by contaminated pruning and grafting tools. The presence of host species in the environment of the nurseries is an important factor for the possible migration of inoculum into the nursery. The pathogen overwinters in diseased twigs or in plant debris in the soil. Juglans regia is a host and plants for planting are a pathway.
Measures taken against the pest and their efficacy
The relevant applied measures are: (i) regular inspections in the nurseries (at least 1 inspection per year); (ii) export inspections.
Interception records
In the EUROPHYT/TRACES-NT database, there are no records of notification of J. regia plants for planting, from Turkey or from other countries, due to the presence of L. pseudotheobromae between 2008 (the year of description of the fungus) and January 2021 (EUROPHYT/TRACES-NT, online).
Shortcomings of current measures/procedures
Due to the potential dormant phase, the visual inspection is insufficient. There is no clear indication of a pesticides scheme or any other risk mitigation measure in place in the exporting nurseries and surroundings, effective against L. pseudotheobromae.
Main uncertainties
The presence of the pathogen and suitable hosts in the surroundings of the nurseries is uncertain. The infection potential of the fungus in its endophytic stage is not known.
There is uncertainty on whether the risk mitigation measures indicated by Turkey are mandatory or only general recommendations.

Overview of the evaluation of Lopholeucaspis japonica

Overview of the evaluation of Lopholeucaspis japonica for grafted bare rooted plants. Rating of the likelihood of pest freedom: Pest free with some exceptional cases (based on the Median). Possibility that the pest could become associated with the commodity: the pest can enter the production fields as crawlers, either with air currents, transported accidentally by human activities or hitchhiking on animals. Females adhere to the bark of trees, including plants for planting.
Measures taken against the pest and their efficacy
The relevant applied measures are: (i) regular inspections in the nurseries (at least 1 inspection per year); (ii) export inspections.
Interception records
In the EUROPHYT/TRACES-NT database, there are no records of notification of J. regia plants for planting, from Turkey or from other countries, due to the presence of L. japonica between 1995 and January 2021 (EUROPHYT/TRACES-NT, online).
Shortcomings of current measures/procedures
There is no clear indication of a pesticides scheme or any other risk mitigation measure in place in the exporting nurseries and surroundings, effective against L. japonica on J. regia.
Main uncertainties
The presence of the pest in the surrounding environment of the nurseries is uncertain.
There is uncertainty on whether the risk mitigation measures indicated by Turkey are mandatory or only general recommendations.
5.3.6. Outcome of expert knowledge elicitation

Table 9 and Figure 3 show the outcome of the EKE regarding pest freedom after the evaluation of the currently proposed risk mitigation measures for all the evaluated pests. Figure 4 provides an explanation of the descending distribution function describing the likelihood of pest freedom after the evaluation of the proposed risk mitigation measures for J. regia plants designated for export to the EU, based on the example of Lasiodiplodia pseudotheobromae.

In Table 9, the pest freedom categories are those of Table 8 (from 'Sometimes pest free', < 5,000 plants per 10,000, up to 'Almost always pest free', 9,995 to 10,000 plants per 10,000), marked with L where the category includes the elicited lower bound of the 90% uncertainty range, M where it includes the elicited median and U where it includes the elicited upper bound of the 90% uncertainty range.

The Panel is 95% sure that:
- 9,554 or more grafted bare rooted plants per 10,000 will be free from Lasiodiplodia pseudotheobromae;
- 9,817 or more grafted bare rooted plants per 10,000 will be free from Garella musculana;
- 9,907 or more grafted bare rooted plants per 10,000 will be free from Anoplophora chinensis;
- 9,908 or more grafted bare rooted plants per 10,000 will be free from Lopholeucaspis japonica;
- 9,916 or more grafted bare rooted plants per 10,000 will be free from Euzophera semifuneralis.

Figure 3: Elicited certainty (y-axis) of the number of pest-free Juglans regia grafted bare rooted plants (x-axis; log-scaled) out of 10,000 produced in Turkey and designated for export to the EU, for all evaluated pests, visualised as descending distribution functions. Horizontal lines indicate the percentiles (from the bottom: 5%, 25%, 50%, 75% and 95%).
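The descending distribution function of Figure 4 can be illustrated numerically. The sketch below linearly interpolates between the three elicited points reported for Lasiodiplodia pseudotheobromae (95% certainty of at least 9,554 pest-free plants per 10,000, 50% of at least 9,837, 5% of at least 9,988). The linear interpolation and the function name are assumptions for illustration, not the elicitation method actually used by the Panel.

```python
# Elicited points (x = pest-free plants out of 10,000, p = certainty that at
# least x plants are pest free) for Lasiodiplodia pseudotheobromae.
POINTS = [(9554, 0.95), (9837, 0.50), (9988, 0.05)]

def certainty_at_least(x: float) -> float:
    """Interpolated certainty that at least x plants out of 10,000 are pest free.

    Outside the elicited range, the nearest elicited certainty is returned
    (an assumption for this sketch).
    """
    if x <= POINTS[0][0]:
        return POINTS[0][1]
    if x >= POINTS[-1][0]:
        return POINTS[-1][1]
    for (x0, p0), (x1, p1) in zip(POINTS, POINTS[1:]):
        if x0 <= x <= x1:
            return p0 + (p1 - p0) * (x - x0) / (x1 - x0)
```

As expected of a descending distribution function, the certainty decreases as the required number of pest-free plants increases.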
Conclusions
There are five pests relevant for this Opinion that are associated with grafted bare rooted plants of Juglans regia: Anoplophora chinensis, Garella musculana, Euzophera semifuneralis, Lasiodiplodia pseudotheobromae and Lopholeucaspis japonica.
For these pests, the likelihood of the pest freedom after the evaluation of the proposed risk mitigation measures relevant for the commodity of J. regia designated for export to the EU was estimated.
For Anoplophora chinensis, the likelihood of pest freedom for grafted bare rooted plants following evaluation of proposed risk mitigation measures was estimated as 'pest free with some exceptional cases' with the 90% uncertainty range spanning from 'extremely frequently pest free' to 'almost always pest free'. The EKE indicated, with 95% certainty, that between 9,907 and 10,000 plants per 10,000 will be free from A. chinensis.
For Garella musculana, the likelihood of pest freedom for grafted bare rooted plants following evaluation of proposed risk mitigation measures was estimated as 'extremely frequently pest free' with the 90% uncertainty range spanning from 'very frequently pest free' to 'pest free with few exceptional cases'. The EKE indicated, with 95% certainty, that between 9,817 and 10,000 plants per 10,000 will be free from G. musculana.
For Euzophera semifuneralis, the likelihood of pest freedom for grafted bare rooted plants following evaluation of proposed risk mitigation measures was estimated as 'pest free with some exceptional cases' with the 90% uncertainty range spanning from 'extremely frequently pest free' to 'almost always pest free'. The EKE indicated, with 95% certainty, that between 9,916 and 10,000 plants per 10,000 will be free from E. semifuneralis.
For Lasiodiplodia pseudotheobromae, the likelihood of pest freedom for grafted bare rooted plants following evaluation of proposed risk mitigation measures was estimated as 'very frequently pest free' with the 90% uncertainty range spanning from 'very frequently pest free' to 'pest free with some exceptional cases'. The EKE indicated, with 95% certainty, that between 9,554 and 10,000 plants per 10,000 will be free from L. pseudotheobromae.

For Lopholeucaspis japonica, the likelihood of pest freedom for grafted bare rooted plants following evaluation of proposed risk mitigation measures was estimated as 'pest free with some exceptional cases' with the 90% uncertainty range spanning from 'extremely frequently pest free' to 'almost always pest free'. The EKE indicated, with 95% certainty, that between 9,908 and 10,000 plants per 10,000 will be free from L. japonica.

Figure 4: Explanation of the descending distribution function describing the likelihood of pest freedom after the evaluation of the proposed risk mitigation measures for plants designated for export to the EU, based on the example of Lasiodiplodia pseudotheobromae. The Panel is 95% certain that at least 9,554 plants out of 10,000 are free of L. pseudotheobromae, 50% certain that at least 9,837 plants out of 10,000 are free, and 5% certain that at least 9,988 plants out of 10,000 are free.
Control (of a pest): Suppression, containment or eradication of a pest population (FAO, 1995, 2017).
Entry (of a pest): Movement of a pest into an area where it is not yet present, or present but not widely distributed and being officially controlled (FAO, 2017).
Establishment (of a pest): Perpetuation, for the foreseeable future, of a pest within an area after entry (FAO, 2017).
Impact (of a pest): The impact of the pest on the crop output and quality and on the environment in the occupied spatial units.
Introduction (of a pest): The entry of a pest resulting in its establishment (FAO, 2017).
Pathway: Any means that allows the entry or spread of a pest (FAO, 2017).
Phytosanitary measures: Any legislation, regulation or official procedure having the purpose to prevent the introduction or spread of quarantine pests, or to limit the economic impact of regulated non-quarantine pests (FAO, 2017).
Protected zones (PZ): An area recognised at EU level to be free from a harmful organism which is established in one or more other parts of the Union.
Quarantine pest: A pest of potential economic importance to the area endangered thereby and not yet present there, or present but not widely distributed and being officially controlled (FAO, 2017).
Regulated non-quarantine pest: A non-quarantine pest whose presence in plants for planting affects the intended use of those plants with an economically unacceptable impact and which is therefore regulated within the territory of the importing contracting party (FAO, 2017).
Risk mitigation measure: A measure acting on pest introduction and/or pest spread and/or the magnitude of the biological impact of the pest should the pest be present. A risk mitigation measure may become a phytosanitary measure, action or procedure according to the decision of the risk manager.
Spread (of a pest): Expansion of the geographical distribution of a pest within an area (FAO, 2017).
Regulated status
Anoplophora chinensis is included in the EPPO A2 list (EPPO, online_a). It is a quarantine pest in Morocco, Mexico and Tunisia (EPPO, online_b).
Pest status in Turkey
Anoplophora chinensis is reported as transient, under eradication, in Turkey (EPPO, online_c). It is on the A2 list of Turkey (EPPO, online_b).
Host status on Juglans regia
Juglans regia is reported as a host of A. chinensis (Ge et al., 2014).
PRA information
Pest risk assessments available:
- Pest risk analysis for Anoplophora chinensis (van der Gaag et al., 2008)
- Scientific Opinion on the commodity risk assessment of Robinia pseudoacacia plants from Turkey (EFSA PLH Panel, 2021)
Other relevant information for the assessment
Biology
Anoplophora chinensis is a longhorn beetle native to China, Japan and Korea (CABI, online).
Anoplophora chinensis life history includes four stages: egg, larvae of various instars, pupae and adults.
Oviposition occurs at the base of the trunk or on emerging roots, whereas the eggs are laid rarely on higher parts of trunks and main branches (van der Gaag et al., 2010).
Depending on temperature, larvae hatch about 10 days after oviposition. First and second instar larvae feed in the phloem; later instars bore deeply into the wood. The minimum diameter of branches/trunks suitable for infestation and larval development is 1 cm (EPPO, 2013). Larvae develop deep downwards in the trunk of the host trees and many also reach the roots (Hérard et al., 2005), where about 90% of the population can be found (Hérard et al., 2006). Both in the native countries (Adachi, 1994) and in southern Europe (Hérard and Maspero, 2019), larvae need 1 or 2 years to complete their development. In colder regions, however, A. chinensis has a longer life cycle (van der Gaag et al., 2008).
Pupation occurs in late spring-summer inside the wood, usually in the upper part of the larval feeding areas (CABI, online).
After metamorphosis, adult emergence occurs between April and September, depending on latitude and local temperature, and adults may survive from 30 (recorded in China) to 70 days (recorded in Japan) (CABI, online). Adults emerge through circular holes with a mean diameter of 10-15 mm, usually smaller in males than in females, located about 25 cm below the oviposition site (Haack et al., 2010).
After emergence and before copulation, teneral adults need a maturation feeding of about 10-15 days on twigs and leaf petioles (Haack et al., 2010). Adults then continue nutritional feeding for their whole life, so that egg laying is distributed homogeneously over spring and summer (Haack et al., 2010). Once sexually mature, both males and females mate polygamously. Mating occurs in summer (from May to August) on trunks and main branches, usually at least 60 cm from the trunk collar (CABI, online).
Anoplophora chinensis spread capacity is reported to be low, and the distance covered naturally by adults falls generally within a few hundred metres from the tree from which they emerged (Adachi, 1990). Most adults are assumed to disperse by walking and remain near their natal tree unless conditions are unfavourable, although some adults were shown to be able to travel distances of 2 km (Adachi, 1990). In Lombardy, Italy, the maximum distances between infestations in urban and agricultural areas were calculated to be about 500 and 663 m, respectively (Cavagna et al., 2013). However, 97.0% and 99.2% of new cases were found within 200 and 400 m, respectively (Cavagna et al., 2013). EFSA (2019) estimated the maximum distance of natural spread in one year to be approximately 194 m (with a 95% uncertainty range of 42-904 m), for a population with a 2-year life cycle.
Concerning human-assisted spread, the main pathway for A. chinensis dispersal was identified as the international trade of woody plants for planting (including bonsai) with a stem or root diameter > 1 cm, which are infested in the nurseries during the production process (Haack et al., 2010; EPPO, 2013; CABI, online). Larvae of A. chinensis have also been intercepted in wood packaging material (WPM) arriving from Asia, although this is a less common pathway of dispersal (Haack et al., 2010; Hérard and Maspero, 2019).
Main type of symptoms
Most symptoms caused by A. chinensis are due to the feeding activity of the larvae within the wood, although a few characteristic symptoms are also produced by adults during maturation feeding and oviposition. Detailed descriptions of A. chinensis symptoms specific to Juglans regia are not available in the literature. Nevertheless, the symptoms induced by A. chinensis colonisation are similar in most hosts (CABI, online).
The main symptoms caused by newly emerged adults on plants are foliage wilting and discoloration, twig deformation and bark erosion (EFSA, 2019). Females engrave characteristic 'T-shaped' oviposition pits into the bark, a very characteristic symptom of tree colonisation by A. chinensis (Hérard and Maspero, 2019). Furthermore, in the first weeks after oviposition it is possible to observe sap coming out of the freshly cut slits (EPPO, 2016). The main symptoms caused by feeding larvae are a gradual and progressive canopy decline, desiccation of the main branches due to larval tunnelling concentrated in the lower part of the stem (EFSA, 2019), galleries under the bark, frass at the base of the tree and exit holes (Hérard and Maspero, 2019; CABI, online). The exit holes are large and circular, with an average diameter of about 10-15 mm, smaller for males and larger for females (Haack et al., 2010). They can be seen mainly around the lower trunk, on emerging roots, or below ground level (EFSA, 2019; CABI, online).
Presence of asymptomatic plants
Although there is no specific report of asymptomatic infested plants, past introductions through plants for planting suggest that early infestations with few symptoms can be present and go undetected.
Confusion with other pests
Crown wilting, stem discoloration and branch desiccation are nonspecific symptoms of infestation, common to many wood-boring beetles (Haack et al., 2010).
Symptoms produced by A. chinensis (frass emission, emergence holes, maturation feeding) may be confused with those of other longhorn beetles of similar size, especially species belonging to the same subfamily Lamiinae, such as other Anoplophora species (Pennacchio et al., 2012).
Host plant range
Juglans regia is also reported as a host of A. chinensis (Ge et al., 2014).
Reported evidence of impact
Anoplophora chinensis is listed as an EU quarantine pest (Annex II, Part B of Commission Implementing Regulation (EU) 2019/2072).
Pathways and evidence that the commodity is a pathway
Plants for planting: the main pathway for A. chinensis dispersal was identified as the international trade of woody host plants for planting (including bonsai) with a stem or root diameter > 1 cm (Haack et al., 2010; EPPO, 2013; CABI, online).
Surveillance information
Anoplophora chinensis is recorded in the Dossier Sections 1.0 and 3.1 as a pest occurring in Turkey and is reported in the list of pests potentially associated with walnut plants for planting in Turkey.
Anoplophora chinensis is included in the official surveillance programme of the Ministry and has been under the national survey and monitoring programme for the last 5 years. Survey instructions were prepared, and control and eradication measures were applied in İstanbul, Antalya and Bartın provinces. In Bartın and Antalya, A. chinensis was reported as eradicated (Dossier Section 3.1). To date, A. chinensis has not been found on walnut nor reported as a pest of walnut in Turkey. Surveillance is still ongoing in the infested area of İstanbul until A. chinensis is eradicated from Turkey (Dossier Section 1.0).
A.1.2. Possibility of pest presence in the nursery
A.1.2.1. Possibility of entry from the surrounding environment
Anoplophora chinensis was found in Turkey as an invasive alien species in İstanbul, Antalya and Bartın provinces. In Bartın and Antalya, A. chinensis was later reported as eradicated (Dossier Section 3.1). To date, the only A. chinensis infestation known for Turkey is in İstanbul, where the pest was first detected in 2014 in nurseries producing ornamental plants (Dossier Section 3.1). The species probably arrived through international trade of plants for planting from China or Italy (Dossier Section 3.1). In İstanbul, at least three infested areas were found spread over the town (Dossier Section 3.1).
It has also been reported that the points where A. chinensis was detected in İstanbul are mostly public parks, home gardens and recreation areas, all environments rich in potential host trees, such as Acer sp., Salix caprea, Fagus orientalis, Aesculus hippocastanum, Platanus orientalis, Populus nigra and Salix babylonica (Dossier Section 3.1). Anoplophora chinensis is a largely polyphagous longhorn beetle able to infest both weakened and healthy woody broadleaves (Haack et al., 2010; EFSA, 2019). Both males and females can fly up to 2 km (Adachi, 1990).
The production areas are surrounded by wire or stone wall or left empty (Dossier Section 3.1). According to the rules, a distance of at least 20 m is left between the nurseries and other woody plants (Dossier Section 3.1). There is no information on the species composition of the woody plants in the surroundings.
According to Dossier Section 3.1, there are generally no woody plants other than walnut mother plants and walnut saplings at a distance of less than 2 km from the nursery plots, although pictures provided in Dossier Section 1.0 indicate that woody plants are present near production plots. According to Dossier Section 3.1, there is a distance of 5-10 km between the nurseries and urban areas.
Considering these two pest characteristics (polyphagy and flight capacity), A. chinensis can be present and reproduce in various ornamental trees growing around the infested areas of the town of İstanbul, and can then move to nurseries through adult dispersal. At the moment of export, the diameter at the collar of walnut saplings is 1.5-2 cm (Dossier Section 3.1), and therefore compatible with stem colonisation by A. chinensis entering from the surrounding environment.
Uncertainties:
- No information is available about the density and distribution of the A. chinensis population in the infested areas surrounding the nurseries of İstanbul.
Taking into consideration the above evidence and uncertainties, the Panel considers that it is possible for the pest to enter the nursery. The pest can be present in the surrounding areas, and the transfer rate could be enhanced by the dispersal capacity of A. chinensis: males and females fly, the species is highly polyphagous and potential hosts grow in wild or domestic areas close to the nurseries.
A.1.2.2. Possibility of entry with new plants/seeds
In both the province of İstanbul (where the infestation is still occurring) and Bartın (where the infestation has been eradicated), A. chinensis was first detected in nurseries producing ornamental plants (Dossier Section 3.1), suggesting that A. chinensis may enter nurseries with new plants.
In the Dossier Section 1.0, it is stated that in Turkey there are no plant protection products registered for walnut against A. chinensis. In addition, in the Dossier Section 3.1, it is clearly stated that no chemical treatment is performed against A. chinensis in nurseries.
Since A. chinensis is a largely polyphagous longhorn beetle infesting woody broadleaves (Haack et al., 2010; EFSA, 2019), the pest may enter the nurseries with new infested plant material (including species other than walnut) arriving in Turkey through international or national trade of plants for planting, or with rootstocks bought from other nurseries (Dossier Section 3.1), and then move onto walnut.
Uncertainties:
-While the majority of plants (95%) are produced on-site (Dossier Section 3.1), the origin of the remaining planting material (about 5%) is unknown.
Taking into consideration the above evidence and uncertainties, the Panel considers that the pest could enter the nursery with new plants, as it already happened in the past.
A.1.2.3. Possibility of spread within the nursery
In Turkey, 162 nurseries produce walnut saplings certified for export (Dossier Section 3.1). Anoplophora chinensis is known to be able to infest walnut (Ge et al., 2014) and many other hosts (Haack et al., 2010; EFSA, 2019). Both males and females of A. chinensis can fly up to 2 km (Adachi, 1990). At the moment of export, the diameter at the collar of walnut saplings is 1.5-2 cm (Dossier Section 3.1), therefore compatible with A. chinensis stem colonisation. No specific procedure/treatment is applied against A. chinensis in the export nurseries. No licensed plant protection products against A. chinensis, nor a specific protocol for pest control in the nurseries, are currently available (Dossier Sections 1.0 and 3.1). Therefore, A. chinensis can spread within the nursery if present.
Uncertainties:
- It is unknown whether inspections before export are targeted at the pest, and what their procedures are (Dossier Section 3.1).
- The pest status of A. chinensis within the infested nurseries is unknown.
Taking into consideration the above evidence and uncertainties, the Panel considers that the transfer of the pest within the nursery is possible, as both males and females fly, the pest is polyphagous and potentially able to shift among hosts, including walnut, which has a size suitable for colonisation.
A.1.3. Information from interceptions
In the EUROPHYT/TRACES-NT database, there are no records of notification of J. regia plants for planting, either from Turkey or from other countries, due to the presence of A. chinensis between 1995 and January 2021 (EUROPHYT/TRACES-NT, online).
A.1.4. Evaluation of the risk mitigation measures
In the table below, all risk mitigation measures indicated in the Dossier from Turkey are listed and a description of their effectiveness on A. chinensis is provided. Information on the risk mitigation measures is provided in Table 7.
Roguing and pruning
Effect on the pest: Yes
Evaluation and uncertainties (1): Information provided is poorly detailed. However, eradication through roguing is unlikely to involve asymptomatic plants. Therefore, the measure will not be fully effective.
Chemical treatments during production
Effect on the pest: Yes
Evaluation and uncertainties (1): The proposed chemical treatments with 80% sulfur have no effect on the pest. The proposed treatments against thrips are performed only if thrips are detected. Such treatments have no effect on the pest present inside plants.
Uncertainties:
- There is no information on the active substances and timing of treatments against thrips.
- There is uncertainty on whether treatments against thrips may have some effect on adults of A. chinensis.
11. Washing the roots
Effect on the pest: No
Evaluation and uncertainties (1): Not applicable.
12. Official inspections before export
Effect on the pest: Yes
Evaluation and uncertainties (1): Information is not sufficient to judge the quality of inspections.
Uncertainties:
- It is unclear whether the suggested method allows detection of plants showing initial symptoms.
- It is unclear whether there is a method to detect asymptomatic plants.
- It is unclear how big the sample size is.
13. Chemical treatments before export
Effect on the pest: No
Evaluation and uncertainties (1): Not applicable.
(1): Based on the description provided by the applicant country and summarised in Table 7, for all risk mitigation measures there is uncertainty on whether the risk mitigation measures indicated by Turkey are mandatory or only general recommendations.
A.1.5. Overall likelihood of pest freedom for grafted bare rooted plants
A.1.5.1. Reasoning for a scenario which would lead to a reasonably low number of infested grafted bare rooted plants
The scenario assumes that most exports will come from nurseries far away from outbreak areas of A. chinensis and that outbreaks are efficiently controlled. The scenario also assumes that no woody plants are present within 2 km of the nurseries, that nurseries are specialised in J. regia and that J. regia is a minor host. Inspection before export by Ministry staff is effective in detecting infestations. The scenario assumes that risk mitigation measures are implemented.
A.1.5.2. Reasoning for a scenario which would lead to a reasonably high number of infested grafted bare rooted plants
The scenario assumes that some exports will come from nurseries close to outbreak areas of A. chinensis and that the outbreaks are not sufficiently controlled. The scenario also assumes that woody plants are present in the surroundings of the nurseries, that nurseries are not specialised in the production of J. regia and that J. regia is a host allowing full development of the pest. Inspection before export by Ministry staff is not sufficiently effective in detecting infestations. The scenario assumes that risk mitigation measures are not implemented.
A.1.5.3. Reasoning for a central scenario equally likely to over- or underestimate the number of infested grafted bare rooted plants (Median)
A.1.5.5. Elicitation outcomes of the assessment of the pest freedom for Anoplophora chinensis on grafted bare rooted plants
The following tables show the elicited and fitted values for pest infestation/infection (Table A.1) and pest freedom (Table A.2).
Based on the numbers of estimated infested plants, the pest freedom was calculated (i.e. 10,000 minus the number of infested plants per 10,000). The fitted values of the uncertainty distribution of the pest freedom are shown in Table A.2.
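The arithmetic behind the pest-freedom statements can be sketched as follows. This is an illustrative helper, not the Panel's actual EKE software; the quantile values are chosen to reproduce the Figure 4 annotations for Lasiodiplodia pseudotheobromae (9,554 / 9,837 / 9,988 pest-free plants at 95% / 50% / 5% certainty).

```python
# Illustrative sketch: given an uncertainty distribution over the number of
# infested plants per 10,000, derive the pest-freedom figures used in the
# opinion, i.e. pest_free = 10,000 - infested. The statement "the Panel is
# p% certain that at least N plants out of 10,000 are pest free" is read off
# the upper p-quantile of the infested-plant distribution.

def pest_free_at_certainty(infested_quantiles, certainty):
    """infested_quantiles: mapping quantile -> infested plants per 10,000.
    Returns the 'at least N plants pest free' value for that certainty."""
    return 10_000 - infested_quantiles[certainty]

# Quantiles of infested plants per 10,000 (illustrative; consistent with
# the Figure 4 annotations for L. pseudotheobromae).
infested = {0.05: 12, 0.50: 163, 0.95: 446}

print(pest_free_at_certainty(infested, 0.95))  # 9554 (95% certainty)
print(pest_free_at_certainty(infested, 0.50))  # 9837 (50% certainty)
print(pest_free_at_certainty(infested, 0.05))  # 9988 (5% certainty)
```

Read this way, the descending distribution function of Figure 4 is simply the pest-free counts plotted against decreasing certainty levels.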
Regulated status
Garella musculana is not regulated in the EU. It is reported in the EPPO A2 list and recommended for regulation as a quarantine pest (EPPO, online_a).
Pest status in Turkey
Garella musculana was reported for the first time in 2015 in the city of Bartın, Turkey (Bostancı et al., 2019;EPPO, online_c).
Pest status in the EU
Garella musculana is present in Bulgaria, but only in some parts of the country. It was reported for the first time in Varna in 2016 and more recently (2019) in the province of Burgas (municipality of Kableshkovo) (Bostancı et al., 2019; EPPO, online_e).
Host status on Juglans regia
Juglans regia is a host to G. musculana (EPPO, online_d, g; Robinson et al., online); G. musculana is reported as a major pest for English walnut, causing severe damage to fruits and young shoots (Bostancı et al., 2019;Gull et al., 2019;CABI, online).
Biology
Garella musculana has four life stages (egg, four larval instars, pupa and adult) and has one to four generations per year depending on altitude. Only one generation per year occurs at higher altitudes (EPPO, online_g). Four generations have been observed at sea level in Bartın, Turkey (Bostancı et al., 2019).
When more than one generation occurs, the adults of the first generation fly in April and May, while the second and third generations are observed in June-July and in August, respectively. Females lay 30-120 eggs on young fruits, buds, leaf axils and one-year-old shoots. On Juglans nigra, young larvae enter the shoots and bore tunnels up to 6 cm long (2 cm in the leaf axil); after 15 days of feeding, the shoots are emptied and die (Bostancı et al., 2019). The attack on the shoots generally occurs in years of low nut production (EPPO, online_f). When attacking the fruits, the young larvae penetrate through the petiole and start feeding on the green husk. During the feeding period, which lasts from 25 to 40 days, the larvae pass from one fruit to another, and some fruits can host even 2-3 larvae (Gull et al., 2019; EPPO, online_g). The larvae of the last generation, in late summer-autumn, are unable to enter the lignified nut and therefore can feed only on the pericarp (EPPO, online_g). At the end of development, the larvae leave the fruit to pupate on the tree stem and branches. The pupal stage usually lasts 10 days, and the insect spends the winter as a mature larva or pupa inside a cocoon (EPPO, online_g).
In Turkey, up to four generations per year have been observed. It is confirmed that in Turkey G. musculana on J. regia spends the winter at the pupal stage in bark cracks and under loose bark (Bostancı et al., 2019).
Although no data about the flight distance of G. musculana adults are available, its 'capacities for natural spread' are reported to be 'rather limited' (EPPO, online_f). According to Bostancı et al. (2019), 'due to the biology of G. musculana, there is no risk of transport by saplings and scion wood between November and March. We did not detect any form of pest (eggs, larvae, pupae or adult) in the control of saplings and scion wood (Juglans regia) between November and March (stagnant period), and no harm was observed.'
Symptoms
Main type of symptoms
On J. regia green husk fruits, the main symptom is the emergence of dark frass at the entry holes of the larvae. Larval feeding in the pericarp causes obvious deformation and discoloration of the fruit. Infested fruits can also show the round emergence holes of mature larvae, larger than the entry holes (Gull et al., 2019; EPPO, online_g). Dark brown frass and internal feeding are symptoms of pest attack on young English walnut shoots (CABI, online; EPPO, online_g). Dying of J. nigra shoots was observed after 15 days of larval feeding in a 6-cm-long gallery (Bostancı et al., 2019). Yellowing and dying of infested shoots are also reported on J. regia (EPPO, online_g).
On walnut wood with bark, the occurrence of the pest can be suspected by finding aggregation of moth cocoons in bark crevices (CABI, online; EPPO, online_g).
Symptoms are usually easy to detect on both walnut fruits and shoots. Living insects, if found, need to be examined by specialists.
Presence of asymptomatic plants
No report was found on the presence of asymptomatic plants.
Confusion with other pests
Symptoms on green walnut fruits can be confused with those of the codling moth Cydia pomonella, while damaged young shoots show symptoms similar to those caused by the cossid Zeuzera pyrina (Yıldız et al., 2018).
Host plant range
Garella musculana is a pest of Juglans regia and J. nigra (Bostancı et al., 2019; CABI, online; EPPO, online_d). It is also reported as a serious pest of both walnut and almond in Uzbekistan (Esonbaev et al., 2020). Furthermore, Populus species are also reported as hosts (Robinson et al., online).
Reported evidence of impact
Garella musculana is considered a primary pest for English walnut orchards in Central Asia (mainly Kyrgyzstan, Tajikistan and Uzbekistan) causing up to 70-80% yield loss of fruits, with considerable economic impact (Yildiz et al., 2018;Esonbaev et al., 2020;EPPO, online_f). Important infestations on shoots in years of low nut production cause severe damage in young plantations; in mountain forests, the reduced walnut seed production can compromise the natural regeneration (EPPO, online_f).
In Turkey, in the Bartın province, the recent appearance of G. musculana in walnut plantations is of great concern due to its potential to cause serious losses in the nut production, of which Turkey is the fourth largest producer in the world (Yıldız et al., 2018; Bostancı et al., 2019). Recent surveys have shown that the damage rate of G. musculana in walnut orchards in Bartın varies from 8% to 90%, with 22% of walnut trees infested and a 15% damage rate found on shoots (Dossier Section 3.1).
Commodity risk assessment of Juglans regia plants from Turkey (www.efsa.europa.eu/efsajournal)
Pathways and evidence that the commodity is a pathway
Four pathways are potentially associated with the risk of introduction of G. musculana: plants for planting, cut branches, fruits (nuts) and wood with bark (EPPO, online_f). In Bartın, Turkey, the pest was found in orchards less than ten years old, so it is believed that it was introduced with walnut varieties such as Chandler, Ferno and Fernette (Dossier Section 3.1). The pest is able to spread at all stages of its development. Adults can fly only over short distances (EPPO, online_f). During the growing season, eggs and larvae can be transported in green husk fruits, potted seedlings, cut branches, plants for planting and grafts. Pupae can spread throughout the year with transported trunks and branches of walnut with bark (Bostancı et al., 2019; EPPO, online_f). While damaged fruits are considered low risk, as they are not profitable, cut branches and plants for planting are at higher risk because they may carry eggs and living larvae (Bostancı et al., 2019; EPPO, online_f). The pest overwinters at the larval or pupal stage inside the cocoon. Moth cocoons can be carried on walnut trunks with bark; this pathway is considered high risk for transport between countries (Yildiz et al., 2018).
Surveillance information
Garella musculana is recorded in the Dossier as a pest occurring in Turkey, potentially associated with walnut plants for planting and subject to official control and certification of the commodity. Garella musculana is also included in the official surveillance programme of the MAF of Turkey.
A.2.2.
Possibility of pest presence in the nursery
A.2.2.1. Possibility of entry from the surrounding environment
Garella musculana is currently present in Turkey in Bartın province only. A recent national survey carried out throughout the territory of Turkey has excluded the presence of the pest elsewhere (Dossier Section 3.1). According to Dossier Section 3.1, there are no nurseries in Bartın growing walnut plants for planting for export; however, the pest is widespread in walnut orchards in various districts of the province of Bartın, so its presence and damage rate are continuously surveyed (Dossier Section 3.1). Adult moths of G. musculana have a limited active flight capacity for natural dispersal (EPPO, online_f); other life stages of the pest (eggs and larvae on living plants, overwintering larvae/pupae on wood with bark) need human assistance for spreading.
The production areas are surrounded by wire or stone wall or left empty (Dossier Section 3.1). According to the rules, a distance of at least 20 m is left between the nurseries and other woody plants (Dossier Section 3.1). There is no information on walnut and other potential host plants in the surroundings.
Uncertainties:
- Although considered a species feeding exclusively on walnut, G. musculana has recently been reported as harmful also to almonds (Esonbaev et al., 2020). Populus is another tree genus recorded as a host (Robinson et al., online). This implies the risk that the pest could enter the nursery from the surrounding environment also through hosts other than walnut.
- Walnut plants for planting may also be found in ornamental plant nurseries in Bartın, even if not intended for export. Trading of this material for non-professional purposes (e.g. domestic orchards) from the province of Bartın to the surroundings of production areas of walnut plants for planting for export could favour the spread and entry of the pest.
- Adults and cocoons may be accidentally introduced into the production areas by transporting walnut logs or branches with bark for different purposes (e.g. firewood), so that the pest could enter the nursery from the surrounding environment. The Dossier gives no information on provisions prohibiting the transport of walnut wood with bark outside the province of Bartın. In addition, a progressive spread of the pest through the bordering provinces cannot be excluded, also taking into account that the effective control of G. musculana in the province of Bartın is considered very difficult (Dossier Section 1.0).
- The abundance of walnut trees in the surroundings of the nurseries is not known.
Taking into consideration the above evidence and uncertainties, the Panel considers that there is a possibility for the pest to enter the nursery despite its limited active flight capacity. The lack of any regulation banning the export of walnut wood with bark outside the province of Bartın is critical in this regard.
A.2.2.2. Possibility of entry with new plants/seeds
As stated before, the Bartın province is not an area of production of live walnut plants for export. The propagation material (seeds, buds, etc.) mainly comes from mother plants located within the nurseries or in their immediate vicinity. Mother plants are in turn subject to inspections by Ministry inspectors (Dossier Section 3.1).
Uncertainties:
- Although most plants (95%) are produced from propagation material coming from mother plants growing on-site (Dossier Section 3.1), the origin of the remaining propagation material (about 5%) is unknown.
- There is uncertainty about the possibility that other species of fruit/ornamental plants also grow in walnut nurseries; this should be considered a potential risk factor, given the recent findings on the feeding habits of the pest.
Taking into consideration the above evidence and uncertainties, the Panel considers that the pest could enter the nursery with new plants/seeds.
A.2.2.3. Possibility of spread within the nursery
Feeding on walnut as its main host and having up to four generations per year in Turkey (Bostancı et al., 2019), the pest is able to spread naturally (by adult flight) within the nursery. This is also confirmed by the high infestation rate in walnut orchards in Bartın, where the species is present (Bostancı et al., 2019). Planting distances and other growing practices appear not to be relevant in this regard. No licensed plant protection products against G. musculana and no specific protocol for pest control in the plants for planting nurseries are currently available (Dossier Sections 1.0 and 3.1).
Uncertainties:
- No uncertainties
Taking into consideration the above evidence and uncertainties, the Panel considers that the spread of the pest within the nursery is possible once entered.
A.2.3. Information from interceptions
In the EUROPHYT/TRACES-NT database, there are no records of notification of J. regia plants for planting, either from Turkey or from other countries, due to the presence of G. musculana between 1995 and January 2021 (EUROPHYT/TRACES-NT, online).
A.2.4. Evaluation of the risk mitigation measures
In the table below, all risk mitigation measures indicated in the Dossier from Turkey are listed and a description of their effectiveness on G. musculana is provided. Information on the risk mitigation measures is provided in Table 7.
Risk mitigation measure: Roguing and pruning
Effect on the pest: Yes
Evaluation: Information provided is poorly detailed. However, roguing is unlikely to remove the plants recently colonised by the larvae. Therefore, the measure will not be fully effective.
Uncertainties:
- It is unclear how measures are applied, as no specific information is provided for the species.

Risk mitigation measure (6): Biological control and behavioural manipulation
Effect on the pest: Yes
Evaluation: The biological control application is superficially described, hampering a thorough assessment. Furthermore, biological control is usually used for population control to a low level, not for eradication. Some of the species may not be commercially available.

Risk mitigation measure: Chemical treatments during production
Effect on the pest: Yes
Evaluation: The proposed chemical treatments with 80% sulfur have no effect on the pest. The proposed treatments against thrips are performed only if thrips are detected. These types of treatments are expected to have little effect on the pest present inside the shoots.
Uncertainties:
- There is no information on the active substances and timing of treatments against thrips.
- There is uncertainty on whether treatments against thrips may have some effect on adults of G. musculana.

Risk mitigation measure (11): Washing the roots
Effect on the pest: No
Evaluation: Not applicable.

Risk mitigation measure (12): Official inspections before export
Effect on the pest: Yes
Evaluation: Information is not sufficient to judge the quality of inspections.
Uncertainties:
- It is uncertain whether the pupae can be detected by visual observation.
- It is unclear what the sample size of the official control is.

Risk mitigation measure (13): Chemical treatments before export
Effect on the pest: No
Evaluation: Not applicable.

(1): Based on the description provided by the applicant country and summarised in Table 7, for all risk mitigation measures there is uncertainty on whether the risk mitigation measures indicated by Turkey are mandatory or only general recommendations.
A.2.5. Overall likelihood of pest freedom for grafted bare rooted plants
A.2.5.1. Reasoning for a scenario which would lead to a reasonably low number of infested grafted bare rooted plants
The scenario assumes that most export comes from nurseries far away from the outbreak areas and that there is a low pest population density in the surroundings. The scenario also assumes that young plants and nurseries are poorly attractive, that mother plants are grown exclusively in the nurseries and that infestation is easy to detect by larval frass at the early stage of infestation, leading to prompt detection. The scenario also assumes that inspection before export by Ministry staff is effective in detecting pupae. The scenario assumes that risk mitigation measures are implemented.
A.2.5.2. Reasoning for a scenario which would lead to a reasonably high number of infested grafted bare rooted plants
The scenario assumes that some export comes from nurseries close to the outbreak areas and that there is a high pest population density in the surroundings. The scenario also assumes that young plants and nurseries can be attractive, that infested scions are introduced into the nurseries for grafting, and that infestations with eggs and small larvae are difficult to detect. The scenario also assumes that inspection before export by Ministry staff is not effective enough in detecting infestations. The scenario assumes that risk mitigation measures are not implemented.
A.2.5.3. Reasoning for a central scenario equally likely to over- or underestimate the number of infested grafted bare rooted plants (Median)
Regarding the uncertainties on the frequency of orchards in the surroundings of the nurseries, but taking into account the certification system used and that the pest is reported only in Bartın, the Panel assumes a lower central scenario, which is equally likely to over- or underestimate the number of infested J. regia plants.
A.2.5.4. Reasoning for the precision of the judgement describing the remaining uncertainties (first and third quartile/interquartile range)
A.2.5.5. Elicitation outcomes of the assessment of the pest freedom for Garella musculana on grafted bare rooted plants
The following tables show the elicited and fitted values for pest infestation/infection (Table A.3) and pest freedom (Table A.4).
Based on the numbers of estimated infested plants, the pest freedom was calculated (i.e. 10,000 − number of infested plants per 10,000). The fitted values of the uncertainty distribution of the pest freedom are shown in Table A.4.
EKE results (fitted values): 9,799; 9,807; 9,817; 9,833; 9,852; 9,873; 9,892; 9,925; 9,955; 9,968; 9,980; 9,994; 9,997; 9,998.
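The conversion from elicited counts of infested plants to the pest-freedom values reported in the tables can be sketched as follows (the function name is illustrative; the sample counts are chosen so that the outputs match a subset of the fitted values reported above):

```python
# Sketch (not part of the assessment) of the pest-freedom conversion used
# throughout this appendix: pest freedom = 10,000 - infested plants per 10,000.

def pest_freedom(infested_per_10000):
    """Convert elicited counts of infested plants per 10,000 exported
    plants into the corresponding pest-freedom values."""
    return [10_000 - n for n in infested_per_10000]

# Illustrative counts, chosen to reproduce a subset of the fitted
# values reported above for G. musculana:
print(pest_freedom([201, 148, 75, 20, 6, 2]))
# → [9799, 9852, 9925, 9980, 9994, 9998]
```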
Biology
Euzophera semifuneralis is a pyralid moth native to North America, reported from the United States, Canada and Mexico (CABI, online). It was initially described from specimens from South America (Colombia), but currently there is no confirmation of the presence of the species south of Mexico (Biddinger and Howitt, 1992; CABI, online). Outside its native range, it is only present in Turkey (Atay and Öztürk, 2010).
Euzophera semifuneralis has four stages of development: egg, larva (no data were found on the number of larval instars), pupa and adult (Blakeslee, 1915).
Euzophera semifuneralis has two or more generations per year (Solomon and Payne, 1986;Connell et al., 2005). The adults emerge in April and May. After mating the females lay 12-74 eggs singly on the twigs/young stems, or in small groups in the cracks/crevices of the bark, and in bark with small mechanical or pruning wounds, recent grafts, frost damage or disease cankers. The eggs hatch after 8-14 days. The young larvae bore into bark and mine irregular and shallow galleries in the cambium, expelling considerable frass. Larval feeding lasts 4-6 weeks, then larvae pupate under the bark. The pupal stage in summer lasts 10-18 days. Due to the frequent overlapping of generations, the larvae can be observed at any time of the year. Euzophera semifuneralis overwinters as mature larva in a typical white silken cocoon under the bark. The pupal stage in spring lasts about 20-30 days (Blakeslee, 1915;Solomon and Payne, 1986).
There are no specific data on the flight distance of E. semifuneralis adults, but species belonging to genus Euzophera are commonly considered unable to fly long distances (Korycinska, 2018).
Recent interceptions (2020) on Tilia and Liriodendron tulipifera from the USA are likely referable to wood products (TRACES, online). Wood with bark is also considered a suitable pathway for E. semifuneralis, as it was associated with the import of Prunus wood with bark from the USA in 2017 (Korycinska, 2018).
Main type of symptoms
Specific descriptions of symptoms of E. semifuneralis on Juglans regia were not found. Nevertheless, symptoms on other trees belonging to the family Juglandaceae, such as pecan (Carya illinoinensis) and hickory (Carya sp.), have been reported. Symptoms may be observed on stems and branches of various sizes but are usually seen in the lower part of the stem (Solomon and Payne, 1986). The main symptom is a remarkable accumulation of frass on the bark. Frass is mostly formed by masses of larval excrement mixed with sap exudates and silky threads. By removing the bark, larval galleries full of frass, larvae and/or white silken cocoons can easily be observed (Solomon and Payne, 1986).
In pomegranate, it has been determined that E. semifuneralis generally feeds by opening galleries, sometimes locally and sometimes all around, especially in the part of the stem close to the root collar of young trees and saplings, and under the bark of the trunks and branches of old trees (Atay and Öztürk, 2010). In general, it can be assumed that the symptoms are quite easy to detect.
Presence of asymptomatic plants
No report was found on the presence of asymptomatic plants.
Confusion with other pests
Symptoms caused by E. semifuneralis are not specific; e.g. sesiid borers feeding on Juglandaceae, such as Synanthedon scitula, show similar symptoms (Solomon and Payne, 1986).
For a reliable identification of E. semifuneralis symptoms, visual inspection may not be satisfactory, and careful observation by specialists of larvae, cocoon or another insect stage may be needed.
Reported evidence of impact
Euzophera semifuneralis is generally known as a pest of trees showing mechanical injuries or infected by canker diseases (Connell et al., 2005). The larvae are usually unable to attack trees with undamaged bark. Larval feeding in the cambium often causes girdling of stems and death in young trees (Blakeslee, 1915; Solomon and Payne, 1986; Biddinger and Howitt, 1992).
The pest is also known as a vector of Ceratocystis fungi. Larval feeding is reported as a possible means of introducing Ceratocystis spores into the host (Connell et al., 2005). Euzophera semifuneralis is known as a serious pest mainly of plum and cherry orchards in the USA. It was also noted as a pest of pruning wounds of pecan and walnut ('walnut girdler'), but the insect is usually considered unable to infest healthy, uninjured trees (Biddinger and Howitt, 1992).
Euzophera semifuneralis is reported as a sporadic pest in young almond orchards. Vigorous trees rarely suffer serious damage, but heavily infested branches can break under the action of the wind (Pollack, 1998).
Pathways and evidence that the commodity is a pathway
In pomegranate, it has been determined that E. semifuneralis generally feeds by opening galleries, sometimes locally and sometimes all around, especially in the part of the stem close to the root collar of young trees and saplings (Atay and Öztürk, 2010).
Therefore, the Panel cannot exclude that the commodity is a pathway.
Surveillance information
No surveillance information for this pest is currently available from the MAF of Turkey.
A.3.2. Possibility of pest presence in the nursery
A.3.2.1. Possibility of entry from the surrounding environment
In Turkey, the pest has so far been reported only in pomegranate orchards in the provinces of Adana and Osmaniye (Atay and Öztürk, 2010). The Dossier states that 'There is no nursery growing walnut plants for planting intended to be exported from Adana and Osmaniye provinces to the EU' (Dossier Section 3.1).
However, E. semifuneralis is a polyphagous species, feeding on 22 genera of woody and herbaceous plants, including J. regia. Girdling of young walnut plants by E. semifuneralis larvae has been recorded in the USA, where the pest is still considered of minor importance as it is able to infest only trees with mechanical wounds or affected by canker diseases (Biddinger and Howitt, 1992; Connell et al., 2005). The pest can spread naturally only by flight of adult moths; although no precise data on the flight distance of adults are available, it is known that all Euzophera species can fly only short distances (Korycinska, 2018). The possibility that the pest can reach walnut orchards or nurseries through the transport of pomegranate plants for planting (or trunks/cut branches) among the provinces cannot be excluded.
The production areas are surrounded by wire or stone wall or left empty (Dossier Section 3.1). According to the rules, a distance of at least 20 m is left between the nurseries and other woody plants (Dossier Section 3.1). There is no information on the species composition of the woody plants in the surroundings.
According to Dossier Section 3.1, there are generally no woody plants other than walnut mother plants and walnut saplings at a distance of less than 2 km from the nursery plots, although pictures provided in the Dossier Section 1.0 show that woody plants are present near production plots. According to Dossier Section 3.1, there is a distance of 5-10 km between the nurseries and urban areas.
Uncertainties:
- Data available on the biology, life cycle and number of generations of E. semifuneralis only refer to North America. The lack of biological data referable to the ecological and climatic context of Turkey is a factor of uncertainty about the real risk posed by the pest.
- There is uncertainty about the situation of the Adana province as a production area of walnut plants for export. The map of Turkey showing a production of up to 100,000 plants for Adana does not seem to correspond to what was stated in the Dossier (Dossier Section 3.1; see above). This suggests that infested pomegranate orchards (as well as other fruit orchards, given the polyphagy of the pest) may occur within a flight distance sufficient for the pest to reach walnut nurseries.
- There is uncertainty whether some nurseries could also grow pomegranate.
- During the surveys on damage caused by E. semifuneralis carried out in the provinces of Adana and Osmaniye, the pest was found in about 20 localities and over 30 pomegranate orchards (Atay and Öztürk, 2010). This indicates a relevant presence of the pest, but there is no information on the possibility that pomegranate plants for planting (or cut branches, etc.) from Adana and Osmaniye could be transported within the Turkish territory to areas surrounding walnut nurseries in the provinces of main production of plants for planting for export.
-There is no information on abundance of pomegranates and other host plants in the surroundings of the nurseries.
Taking into consideration the above evidence and uncertainties, the Panel considers that there is the possibility for the pest to enter the nursery, by: natural spread within the province of Adana; accidental introduction of infested pomegranate (or other host) plants for planting in walnut production areas; transport of cut branches or trunks carrying larvae or cocoons of the pest in walnut plants for planting production areas.
A.3.2.2. Possibility of entry with new plants/seeds
There are no data on walnut as a host plant for E. semifuneralis in Turkey so far. The propagation material used in the nurseries mainly comes from mother plants growing in the immediate vicinity; this material is subject to phytosanitary control by Ministry inspectors and certified (Dossier Section 3.1).
Uncertainties:
- Most of the plants (95%) are produced from propagation material coming from mother plants growing on-site (Dossier Section 3.1), but the origin of the remaining propagation material (about 5%) is unknown.
- It is not clear whether other species of fruit or ornamental plants can also be grown in walnut nurseries; this should be considered a potential risk factor, given the remarkable polyphagy of the pest.
Taking into consideration the above evidence and uncertainties, the Panel considers that the pest could enter the nursery with new plant material.
A.3.2.3. Possibility of spread within the nursery
It is known that E. semifuneralis is able to attack only plants having mechanical wounds or bark damage caused by canker disease. It is also known that the pest is able to infest stems and branches of various sizes (Solomon and Payne, 1986). Once entered, there is therefore the possibility that the pest can spread naturally (by adult flight) within the nursery by attacking young plants accidentally damaged by machinery (for example during weed management operations, grafting, or other). However, it should be considered that the likelihood of damaged plants being found in nurseries is rather low. Moreover, the spread of the pest could also be enhanced by the lack of specific control protocols.
Pruning of mother plants is expected to increase the likelihood of infestation of these plants, therefore increasing the population density in the nurseries, if present.
Uncertainties:
- Lack of data on the behaviour of the insect in the Turkish ecological and climatic context, which differs from those in which the species has been studied so far.
Taking into consideration the above evidence and uncertainties, the Panel considers that the spread of the pest within the nursery is possible once entered.
A.3.3. Information from interceptions
In the EUROPHYT/TRACES-NT database, there are no records of notification of J. regia plants for planting, either from Turkey or from other countries, due to the presence of E. semifuneralis between 1995 and January 2021 (EUROPHYT/TRACES-NT, online).
A.3.4. Evaluation of the risk mitigation measures
In the table below, all risk mitigation measures indicated in the Dossier from Turkey are listed and a description of their effectiveness on E. semifuneralis is provided. Information on the risk mitigation measures is provided in Table 7.
Risk mitigation measure: Biological control and behavioural manipulation
Effect on the pest: Yes
Evaluation: The biological control application is superficially described, hampering a thorough assessment. Furthermore, biological control is usually used for population control to a low level, not for eradication. Some of the species may not be commercially available.
Uncertainties:
- It is unclear how measures are applied, as no specific information is provided for the species.

A.3.5. Overall likelihood of pest freedom for grafted bare rooted plants
A.3.5.1. Reasoning for a scenario which would lead to a reasonably low number of infested grafted bare rooted plants
The scenario assumes that saplings of J. regia are minor hosts, that most nurseries are specialised in Juglans and are located far from the infested areas in southern Turkey, and that the surroundings of the nurseries are free from alternative hosts, e.g. pomegranate. The scenario also assumes that mother plants are well inspected and protected. Finally, the scenario assumes that frass is detected by staff at sorting and infested plants are destroyed, that sorting decreases the infestation level and that official inspections will detect infestations before export, due to the presence of frass. The scenario assumes that risk mitigation measures are implemented.
A.3.5.2. Reasoning for a scenario which would lead to a reasonably high number of infested grafted bare rooted plants
The scenario assumes that saplings of J. regia are suitable hosts for infestation because of the presence of injuries, that the nurseries also include alternative hosts, e.g. pomegranate, that most nurseries are located close to the infested areas in southern Turkey, and that alternative hosts, e.g. pomegranate, are present in the surroundings of the nurseries. The scenario also assumes that mother plants can attract the pest and increase the pest population after pruning. Finally, the scenario assumes that infestation is not detected by staff during handling for export, that late infestations with fewer symptoms will not be detected, and that official inspection will not detect infestations before export, due to the cleaning of saplings. The scenario assumes that risk mitigation measures are not implemented.
A.3.5.3. Reasoning for a central scenario equally likely to over-or underestimate the number of infested grafted bare rooted plants (Median)
Regarding the uncertainties on the surroundings of the nurseries, but taking into account that the pest is reported only in some areas in the south of Turkey, the Panel assumes a lower central scenario, which is equally likely to over- or underestimate the number of infested J. regia plants.
A.3.5.4. Reasoning for the precision of the judgement describing the remaining uncertainties (first and third quartile/interquartile range)
The first quartile describes the highest uncertainty, reflecting uncertainty on most of the information available. The third quartile describes high uncertainty, although lower than that expressed by the first quartile, reflecting the limited reported distribution of the pest in Turkey.
A.3.5.5. Elicitation outcomes of the assessment of the pest freedom for Euzophera semifuneralis on grafted bare rooted plants The following tables show the elicited and fitted values for pest infestation/infection (Table A.5) and pest freedom (Table A.6).
Based on the numbers of estimated infested plants, the pest freedom was calculated (i.e. 10,000 − number of infested plants per 10,000). The fitted values of the uncertainty distribution of the pest freedom are shown in Table A.6.
Pest status in the EU
Reported in the Netherlands on Rosa sp. (Alves et al., 2008) and in Spain in pistachio orchards (López-Moral et al., 2020).
Host status on Juglans regia
Juglans regia is reported as a host of L. pseudotheobromae (Li et al., 2016).
PRA information
Pest Risk Assessments available:
- Scientific Opinion on the commodity risk assessment of Persea americana from Israel (EFSA PLH Panel, 2021).
Other relevant information for the assessment
Biology
Species of Botryosphaeriaceae cause cankers and fruit rots, and they survive as saprophytes, parasites and even as endophytes in symptomless tissues (McDonald and Eskalen, 2011). The pycnidia or fruiting bodies of L. pseudotheobromae are produced on diseased plant tissues. In the summer, conidia are spread by wind, rain or insects. Conidia can be produced all year round depending on the climatic region, but the disease spreads more rapidly during the summer, when the temperature is around or even higher than 30°C. The pathogen normally enters the plant through wounds (usually pruning wounds), which are the main way of spreading (Liang et al., 2019). The pathogen overwinters in diseased twigs or in plant debris in soil.
Symptoms
Main type of symptoms
The main symptoms on J. regia are cankered stems, blighted branches and decayed kernels.
Symptoms on fruits:
- Buff to brown, leathery area (Mangifera indica, Citrus limon, Persea americana).
- Leathery area encircling the stem end of the fruit.
Regarding the Botryosphaeriaceae family, all plant parts, seeds included, have been recorded as asymptomatic carriers of latent pathogens. The fungi can live endophytically for long periods of time in healthy plants (Slippers and Wingfield, 2007).
Uncertainties:
- The taxon is a recently described cryptic species; therefore, its distribution is only partially known and may be wider in Turkey than currently reported.
- No information on the presence of the pathogen and potential host plants in the surroundings of the nurseries.
Taking into consideration the above evidence and uncertainties, the Panel considers that it is possible for the pathogen to enter the nursery. The pathogen can be present in the surrounding areas, and its transfer could be enhanced by wind and insect movement.
A.4.2.2. Possibility of entry with new plants/seeds
Grafted material is among the main carriers of the pathogen, but there is no information about the movement of saplings or rootstocks from the region where the pathogen has been reported in Turkey (Dossier Section 3.1).
Uncertainties:
- While the majority of plants (95%) are produced on-site (Dossier Section 3.1), the origin of the remaining plant material (about 5%) is unknown.
- Because L. pseudotheobromae has a large number of hosts, and because no specific treatment is applied before new plants enter the nursery, the pathogen may enter with new plants of species other than walnut, through national or international trade.
- The pathogen can be present in asymptomatic form or could be difficult to identify.
Taking into consideration the above evidence and uncertainties, the Panel considers that the pathogen could enter the nursery with plant material, the pathogen being difficult to identify and possibly on asymptomatic plants.
A.4.2.3. Possibility of spread within the nursery
Within the nursery the pathogen can be spread by conidia and infect plants through wounds, including grafting and pruning wounds. Inoculum could be transported by grafting and pruning tools. If overwintering in soil on plant debris, the fungus could sporulate and produce conidia and start new infections the following year.
Taking into consideration the above evidence and uncertainties, the Panel considers that the spread of the pathogen within the nursery is possible, naturally by dissemination of conidia and subsequent infection through wounds, or by inoculum transported on grafting and pruning tools.
A.4.3. Information from interceptions
In the EUROPHYT/TRACES-NT database, there are no records of notification of J. regia plants for planting, either from Turkey or from other countries, due to the presence of L. pseudotheobromae between 2008 (year of description of the fungus) and January 2021 (EUROPHYT/TRACES-NT, online).
A.4.4. Evaluation of the risk mitigation measures
In the table below, all risk mitigation measures indicated in the Dossier from Turkey are listed and a description of their effectiveness on L. pseudotheobromae is provided. Information on the risk mitigation measures is provided in Table 7.
Evaluation: Protection of graft wounds with copper solutions is expected to reduce the risk of infection by the pathogen. Treatments with 80% sulphur have no effect on the pathogen present inside plant tissues. Spraying against thrips has no effect on the pathogen. Treatments with copper products have no effect on the pathogen inside plant tissues but may prevent new infections, limited to the month of May, when applications are carried out.

A.4.5. Overall likelihood of pest freedom for grafted bare rooted plants
A.4.5.1. Reasoning for a scenario which would lead to a reasonably low number of infected grafted bare rooted plants
The scenario assumes that most nurseries are located far from the infected areas in the south of Turkey and that there is a limited presence of host trees, as well as of facilities handling fruits, in the surroundings of the nurseries. The scenario also assumes that heavy outbreaks will be recognised by symptoms and the fungus will be identified, that mother plants are well protected and controlled and that removal of weak plants will reduce the infestation level. Finally, the scenario assumes that official inspection would detect infections before export. The scenario assumes that risk mitigation measures are implemented.
Uncertainties:
- No uncertainties
A.4.5.2. Reasoning for a scenario which would lead to a reasonably high number of infected grafted bare rooted plants
The scenario assumes that most nurseries are located close to the infected areas in the south of Turkey and that host trees, as well as facilities handling fruits, are present in the area surrounding the nurseries. The scenario also assumes a slow spread of the disease that can go undetected, that mother plants are not managed correctly, and that the removal of weak plants will not reduce the infestation level, due to inoculum remaining in the soil. Finally, the scenario assumes that official inspection will not detect infections before export. The scenario assumes that risk mitigation measures are not implemented.
A.4.5.3. Reasoning for a central scenario equally likely to over- or underestimate the number of infected grafted bare rooted plants (Median)
Considering the uncertainties on the pest pressure outside and inside the nurseries and the endophytic behaviour of the pathogen, but considering that the pathogen has been reported only a few times in Turkey, the Panel assumes a lower scenario, which is equally likely to over- or underestimate the number of infected J. regia plants.
A.4.5.4. Reasoning for the precision of the judgement describing the remaining uncertainties (first and third quartile/interquartile range)
The first and third quartiles describe the highest uncertainty, reflecting uncertainty on most of the information available.
A.4.5.5. Elicitation outcomes of the assessment of the pest freedom for Lasiodiplodia pseudotheobromae on grafted bare rooted plants The following tables show the elicited and fitted values for pest infestation/infection (Table A.7) and pest freedom (Table A.8).
Based on the numbers of estimated infected plants, the pest freedom was calculated (i.e. 10,000 − number of infected plants per 10,000). The fitted values of the uncertainty distribution of the pest freedom are shown in Table A.8.
The pest is included in the EPPO A2 list (EPPO, online_a).
Other relevant information for the assessment
Biology
Lopholeucaspis japonica is an oyster shell-shaped armoured scale, originating from the Far East and spreading to tropical and semitropical areas (CABI, online).
Females and males have different life cycles. The life stages of the female are egg, two larval instars and adult, while the male has two additional stages, called pre-pupa and pupa (CABI, online). Males are small and have wings (Bienkowski, 1993), while females are sessile, enclosed in a chitinous 'puparium' (Tabatadze and Yasnosh, 1999). The colour of females, eggs and crawlers is lavender. The wax covering the body of the scales is white (Fulcher et al., 2011). Each female lays on average 25 eggs, which are laid underneath the female body (Fulcher et al., 2011; Addesso et al., 2016).
Lopholeucaspis japonica has one or two overlapping generations per year (Addesso et al., 2016). It was reported in Georgia that occasionally there can be a third generation (Tabatadze and Yasnosh, 1999). In India, first generation crawlers were observed from late March until the end of April. Females and male pupae were present from June until the end of August. Second generation crawlers occurred in September and matured females in October (Harsur et al., 2018).
Lopholeucaspis japonica overwinters as an immature stage on trunks and branches in Tennessee (Fulcher et al., 2011) and as second-instar males and females in Maryland (Gill et al., 2012). In addition, it has been reported to overwinter as fertilised females in Tokyo, Japan (Murakami, 1970) and in Pennsylvania (Stimmel, 1995). It can endure temperatures of -20 to -25°C (EPPO, 1997).
Symptoms
Main type of symptoms
Lopholeucaspis japonica is usually on bark of branches and trunk but can be found also on leaves (Gill et al., 2012) and sometimes on fruits (EPPO, 1997).
The scale feeds on plant storage cells, which causes them to collapse (Fulcher et al., 2011). When the population is high, the main symptoms on plants are premature leaf drop, dieback of branches and death of plants (Fulcher et al., 2011;Gill et al., 2012).
Symptoms observed on pomegranate in India were yellowing of leaves, poor fruit set and stunted plant growth (Harsur et al., 2018).
Presence of asymptomatic plants
No report was found on the presence of asymptomatic plants.
Confusion with other pests
Lopholeucaspis japonica can be confused with other armoured scales.
Lopholeucaspis japonica is similar to L. cockerelli but can be differentiated by the number of macroducts (García Morales et al., online). Another similar scale is Pseudaulacaspis pentagona (Fulcher et al., 2011).
Host plant range
Lopholeucaspis japonica is a polyphagous armoured scale and feeds on plants belonging to 38 families (García Morales et al., online), including Juglans regia (Batsankalashvili et al., 2017).
Reported evidence of impact
Lopholeucaspis japonica is listed as an EU Quarantine pest (Annex II, Part A of Commission Implementing Regulation (EU) 2019/2072).
Pathways and evidence that the commodity is a pathway
Possible pathways of entry for L. japonica are plants for planting (excluding seeds), bonsai, cut flowers and cut branches (EFSA PLH Panel, 2018). There were two interceptions of L. japonica on Acer sp. bonsai plants and one on Zelkova serrata bonsai plants from China, indicating that trade of plants for planting can be a pathway for the pest (EUROPHYT, online).
Surveillance information
No surveillance information for this pest is currently available from the MAF of Turkey.
A.5.2. Possibility of pest presence in the nursery
A.5.2.1. Possibility of entry from the surrounding environment
Lopholeucaspis japonica was first found in the Ankara region of Turkey in a 1949 study (Dossier Section 3.1) and is still listed as present in that region in the EPPO distribution list (EPPO, online_c), based on Miller and Davidson (2005), as well as in the Black Sea region (Kaydan et al., 2013), where the Samsun province (one of the main J. regia production areas) is located. It is also listed as present in the neighbouring countries Iran, Azerbaijan and Georgia (EPPO, online_c).
The pest is transported by wind and animals in its first mobile stage, and by transport of planting material, budwood and cut branches in the fixed phase.
The production areas are surrounded by wire or stone wall or left empty (Dossier Section 3.1). According to the rules, a distance of at least 20 m is left between the nurseries and other woody plants (Dossier Section 3.1). There is no information on the abundance of walnut trees and on the species composition of the woody plants in the surroundings.
According to Dossier Section 3.1, there are generally no woody plants other than walnut mother plants and walnut saplings within 2 km of the nursery plots, although pictures provided in Dossier Section 1.0 show that woody plants are present near the production plots. According to Dossier Section 3.1, there is a distance of 5–10 km between the nurseries and urban areas.
Uncertainties:
- There is no specific information available about the present distribution of the pest in Turkey.
- The identification of the pest is very difficult, as diaspid scales (unidentified) are present in many lists of pest interceptions (EPPO, 1997).
- Differences in the pest biology between different countries (Tabatadze and Yanosh, 1999) stress the uncertainty about the pest biology in Turkey.
- There is no information on the abundance of walnut trees and on the species composition of the woody plants in the surroundings.
Taking into consideration the above evidence and uncertainties, the Panel considers that it is possible for the pest to enter the nursery. The pest could be present in the surrounding area and in its mobile stage is transported by wind, animals, and humans.
A.5.2.2. Possibility of entry with plant material
At the stage transferred with new plants (the static stage), the pest is visible and usually easy to detect and intercept as a diaspid scale, but it is hard to identify specifically as L. japonica.
Uncertainties:
- While the majority of plants (95%) are produced on-site (Dossier Section 3.1), the origin of the rest of the walnut sapling production (about 5%) is unknown.
- Because of the number of L. japonica hosts, and because no specific treatment is applied before new plants enter the nursery, the pest may enter with new plants of species other than walnut, through national or international trade.
- While the pest is listed as a quarantine pest in Turkey, the level of attention paid to this specific pest in the country seems low (Dossier Section 3.1), considering the new findings of the species reported in the international literature.
Taking into consideration the above evidence and uncertainties, the Panel considers that the pest could enter the nursery with plant material, the pest is difficult to identify and its presence in the country could be underestimated.
A.5.2.3. Possibility of spread within the nursery
Lopholeucaspis japonica spreads by wind, animals, and humans, and its movement within the nursery on plants, cut branches, tools and machinery cannot be excluded.
Uncertainties:
-There is no information in the Dossier about specific treatments or procedures against L. japonica.
Taking into consideration the above evidence and uncertainties, the Panel considers that the spread of the pest within the nursery is possible through the movement of plants and cut branches, tools, and machinery.
A.5.3. Information from interceptions
In the EUROPHYT/TRACES-NT database, there are no records of notifications of J. regia plants for planting, from Turkey or from other countries, due to the presence of L. japonica between 1995 and January 2021 (EUROPHYT/TRACES-NT, online).
A.5.4. Evaluation of the risk mitigation measures
Below, all risk mitigation measures indicated in the Dossier from Turkey are listed, with a description of their effectiveness against L. japonica. Information on the risk mitigation measures is provided in Table 7.

Roguing and pruning (effect on the pest: Yes). The information provided is poorly detailed. Moreover, roguing is unlikely to remove recently infested plants; therefore, the measure will not be fully effective.
Uncertainties: It is unclear how the measures are applied, as no specific information is provided for this species.

6. Biological control and behavioural manipulation (effect on the pest: Yes). The biological control application is only superficially described, hampering a thorough assessment. Furthermore, biological control is usually used to keep a population at a low level, not for eradication. Some of the species may not be commercially available.
Uncertainties: It is unclear how the measures are applied, as no specific information is provided for this species.

7. Physical treatments on consignments or during processing (effect on the pest: Yes). Physical treatments on consignments or during processing could have an effect.
Uncertainties: It is unclear how the brushing applied against the target species may affect L. japonica, and how the brushing is applied to small walnut plants in the nurseries.

8. Pest surveillance and monitoring during production and official inspections (effect on the pest: Yes). The measure can have an effect.
Uncertainties: It is uncertain whether the methods can detect L. japonica at low density on plants without magnification.

9. Weed management (effect on the pest: No). Not applicable.

10. Chemical treatments during production (effect on the pest: Yes). The proposed chemical treatments with 80% sulfur have no effect on the pest. The proposed treatments against thrips are performed only if thrips are detected; such treatments might have little effect on the pest under its wax cover.
Uncertainties: There is no information on the active substances and timing of the treatments against thrips, and it is uncertain whether those treatments have any effect on crawlers of L. japonica.

11. Washing the roots (effect on the pest: No). Not applicable.

12. Official inspections before export (effect on the pest: Yes). The information provided is not sufficient to judge the quality of the inspections.
Uncertainties: It is uncertain whether the methods can detect L. japonica at low density on plants without magnification.

13. Chemical treatments before export (effect on the pest: No). Not applicable.

Note: Based on the description provided by the applicant country and summarised in Table 7, for all risk mitigation measures there is uncertainty on whether the measures indicated by Turkey are mandatory or only general recommendations.
A.5.5. Overall likelihood of pest freedom for grafted bare rooted plants
A.5.5.1. Reasoning for a scenario which would lead to a reasonably low number of infested grafted bare rooted plants
The scenario assumes that J. regia is a minor host of the pest and that most exports will come from nurseries far from the Black Sea area. It also assumes that little spread occurs from the surroundings, that the nurseries are specialised in walnut, and that the pest does not spread within the nursery by natural means or handling. Finally, it assumes that the inspection before export is effective in detecting the pest and that the risk mitigation measures are implemented.
A.5.5.2. Reasoning for a scenario which would lead to a reasonably high number of infested grafted bare rooted plants
The scenario assumes that J. regia is a good host of the pest and that some exports will come from nurseries close to the Black Sea area. It also assumes that substantial spread occurs from the surroundings, that the nurseries have diverse production including other host plants, and that the pest spreads within the nursery by natural means or handling. Finally, it assumes that the inspection before export is insufficient to detect the pest and that the risk mitigation measures are not implemented.
A.5.5.5. Elicitation outcomes of the assessment of the pest freedom for Lopholeucaspis japonica on grafted bare rooted plants
The following tables show the elicited and fitted values for pest infestation/infection (Table A.9) and pest freedom (Table A.10). Based on the estimated numbers of infested plants, the pest freedom was calculated (i.e. pest freedom = 10,000 minus the number of infested plants per 10,000). The fitted values of the uncertainty distribution of the pest freedom are shown in Table A.10.
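The pest freedom calculation above can be sketched in a few lines of Python (the elicited counts below are placeholders for illustration, not values from the assessment):

```python
def pest_freedom(infested_per_10000):
    """Pest freedom = 10,000 minus the number of infested plants per 10,000."""
    return 10_000 - infested_per_10000

# Hypothetical elicited quantiles of the number of infested plants (NOT the
# assessment's values): 5th, 50th and 95th percentiles.
elicited_infested = {"q05": 2.0, "q50": 20.0, "q95": 200.0}
freedom = {q: pest_freedom(v) for q, v in elicited_infested.items()}
# Note that the quantile order reverses: the 95th percentile of infestation
# corresponds to the 5th percentile of pest freedom.
```

This also illustrates why the fitted pest-freedom distribution is simply a mirror image of the infestation distribution.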
Somatic embryogenesis for mass propagation of elite Spruce families: effect of storage time on somatic embryogenesis initiation
Background
Somatic embryogenesis (SE) is the only clonal propagation method that has potential for large scale production of elite conifer plants from the breeding programs. Methods that support bioreactor-based methods for SE propagation are developed [1,2]. Samples of somatic embryos can be stored indefinitely under liquid nitrogen for future plant production. Somatic embryo cultures are also studied as model systems for conifer embryo development to address fundamental research questions, or used as material for genetic transformation to study gene function in conifers.
In addition to utilizing SE for masspropagation of known elite clones previously tested in field tests, the SE technology offers an opportunity to directly capture and increase the value of small samples of elite seeds from the breeding programs. Furthermore, by direct masspropagation of families through SE, the value of the elite seed is increased; however without the cost of clonal testing. This is arguably an alternative approach to the traditional approach of only utilizing clonal field-tested material for SE masspropagation [3].The aim of this project was to investigate the effect from seed storage time on the rate of somatic embryo initiation for the purpose of optimizing the use for SE over time of small valuable seed samples. This was done by isolating ZE from seeds of Norway spruce that had been stored for various times, and were collected from different parts of Sweden.
Plant material
Nineteen batches of Norway spruce (Picea abies) seeds from commercial seed orchards in southern, middle and northern parts of Sweden were provided by the forest companies supporting the project.
Initiation of Somatic Embryogenesis
The spruce seeds were sterilized with 95% ethanol followed by 30% (v/v) commercial bleach and Tween 20. The bleach was discarded and the seeds were rinsed three times with sterile distilled water and left to imbibe overnight at room temperature. After imbibition, ZE were dissected from the female gametophyte under a dissecting microscope and cultured on half-strength LP medium supplemented with 10 µM 2,4-dichlorophenoxyacetic acid and 4.4 µM benzyladenine for SE initiation. In total, 90 ZE were isolated from each seed batch. SE initiation was monitored on a weekly basis.
Maturation of Somatic Embryos
One cell line per seed batch was tested for embryo differentiation from pro-embryogenic masses (PEMs) on DKM containing no plant growth regulators (PGRs) and for maturation on DKM supplemented with 30 µM abscisic acid.
Results
In total, we tested 19 seed batches of Norway spruce; 90 ZE were isolated per seed batch and placed on ½ LP medium containing PGRs for SE initiation. Three weeks after isolation of the ZE, callus formation was observed. Embryogenic callus is composed of PEMs that have a white and translucent appearance (Fig. 1) and are mostly produced from the hypocotyl region of the ZE. When whitish callus reached a size of 5 × 5 mm, it was isolated from the primary explants and placed on proliferation medium for continuous growth.
All 19 seed batches showed SE initiation, although at different frequencies (Fig. 2). The initiation frequency did not vary notably between seeds from different parts of Sweden, and there was no difference in initiation frequency related to storage time. The seeds tested had been collected between 1984 and 2007; the highest initiation frequency was observed in seed batch FP 444, collected in 1992, and the lowest in seed batch Saleby, collected in 2006.
One of the initiated and established cell lines from each seed batch was subjected to maturation medium to examine whether the cultures of PEMs could produce mature somatic embryos. We observed that 11 out of 19 tested cell lines produced mature somatic embryos (data not shown). Since only one cell line from each seed batch was tested for maturation, we cannot exclude that the remaining 8 seed batches were capable of producing mature somatic embryos. However, similar to the initiation process, the maturation stage did not appear to be related to the storage time or the geographical origin of the seed.

Figure 2: SE initiation rates for seeds from different seed batches and collection years. The SE initiation rate for each seed batch is shown as the percentage of seeds tested (90 per batch) that produced PEMs that could be isolated and cultured. Only one cell line per seed was recorded.
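The rates in Fig. 2 are simple percentages over the 90 isolated zygotic embryos per batch. As a minimal sketch (the counts below are hypothetical, not the study's data):

```python
def initiation_rate(n_initiated, n_isolated=90):
    """Percentage of isolated zygotic embryos that produced culturable PEMs."""
    return 100.0 * n_initiated / n_isolated

# Hypothetical counts for two batches, for illustration only.
rates = {"batch A": initiation_rate(45), "batch B": initiation_rate(9)}
# → {'batch A': 50.0, 'batch B': 10.0}
```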
Conclusion
We have demonstrated that it is possible to propagate small batches of Norway spruce seeds stored for up to 25 years through somatic embryogenesis. All initiated cell lines established cultures of PEMs and most cell lines tested produced mature somatic embryos. Thus we conclude that SE can provide a promising method for amplifying small valuable batches of elite seeds even if the seeds have been stored for up to 25 years.
memeBot: Towards Automatic Image Meme Generation
Image memes have become a widespread tool used by people for interacting and exchanging ideas over social media, blogs, and open messengers. This work proposes to treat automatic image meme generation as a translation process, and further present an end to end neural and probabilistic approach to generate an image-based meme for any given sentence using an encoder-decoder architecture. For a given input sentence, an image meme is generated by combining a meme template image and a text caption where the meme template image is selected from a set of popular candidates using a selection module, and the meme caption is generated by an encoder-decoder model. An encoder is used to map the selected meme template and the input sentence into a meme embedding and a decoder is used to decode the meme caption from the meme embedding. The generated natural language meme caption is conditioned on the input sentence and the selected meme template. The model learns the dependencies between the meme captions and the meme template images and generates new memes using the learned dependencies. The quality of the generated captions and the generated memes is evaluated through both automated and human evaluation. An experiment is designed to score how well the generated memes can represent the tweets from Twitter conversations. Experiments on Twitter data show the efficacy of the model in generating memes for sentences in online social interaction.
Introduction
An Internet meme commonly takes the form of an image and is created by combining a meme template (image) and a caption (natural language sentence). The image typically comes from a set of popular image candidates and the caption conveys the intended message through natural language. Over the internet, information exists in the form of text, images, video, or a combination of these. However the information existing as a combination of image or video and short text often gets viral. Image memes are a combination of image, text, and humor, making them a powerful tool to deliver information. The image memes are popular because they portray the culture and social choices embraced by the internet community and they have a strong influence on the cultural norms of how specific demographics of people operate. For example, in Figure 2, we present the memes used by an online deep learning community to ridicule how the new pre-training methods are outperforming the previous state-of-the-art models.
The popularity of image-based memes can be attributed to the fact that visual information is easier to process and understand compared to reading large blocks of text, and this fact is evident in Figure 2.

Figure 2: Memes used by the online deep learning community on social media to ridicule the state-of-the-art pre-training models.

The key role played by image memes in shaping the popular culture of the internet community makes automatic meme image generation a demanding research topic to delve into. Davison (2012) separates a meme into three components: Manifestation, Behavior, and Ideal. In an image meme, the Ideal is the idea that needs to be conveyed, the Behavior is selecting a suitable meme template and caption to convey that idea, and the Manifestation is the final meme image with a caption conveying the idea. Wang and Wen (2015) and Peirson et al. (2018) focus on the behavior and manifestation of a meme; little importance is given to its ideal. Their approach to image meme generation is limited to selecting the most appropriate meme caption or generating a meme caption for a given image and template name. In this work, we aim to automatically generate an image meme to represent a given input sentence (the Ideal), as illustrated in Figure 1, which is a challenging NLP task with immediate practical applications for online social interaction.
Taking a closer look at the process of image meme generation, we propose to relate it to natural language translation. To translate a sentence from a source to a target language, one has to decode the meaning of the sentence in its entirety, analyze that meaning, and then encode it into the target sentence. Similarly, a sentence can be translated into a meme by encoding the meaning of the sentence into a pair of image and caption capable of conveying the same meaning or emotion as the sentence. Motivated by this intuition, we develop a model that goes beyond the known approaches and extends image meme generation to generating memes for a given input sentence. We summarize our contributions as follows:
• We present an end-to-end encoder-decoder architecture to generate an image meme for any given sentence.
• We compiled the first large-scale Meme Caption dataset.
• We design experiments based on human evaluation and provide a thorough analysis of the experimental results on using the generated memes for online social interaction.
Related Work
There are only a few studies on automatic image meme generation and the existing approaches treat meme generation as a caption selection or caption generation problem. Wang and Wen (2015) combined an image and its text description to select a meme caption from a corpus based on a ranking algorithm. Peirson et al. (2018) extends Natural Language Description Generation to generating a meme caption using an encoder-decoder model with an attention (Luong et al., 2015) mechanism.
Although there is not much work on automatic meme generation, the task of meme generation can be closely aligned with tasks like Sentiment Analysis, Neural Machine Translation, Image Caption Generation and Controlled Natural Language Generation.
In Natural Language Understanding (NLU), researchers have explored classifying a sentence based on its sentiment (Socher et al., 2013). We extend this idea to classify a sentence based on its compatibility with a meme template. The idea of creating an encoded representation and decoding it into a desired target is well established in Neural Machine Translation and Image Caption Generation. Bahdanau et al. (2014) and Vaswani et al. (2017) use an encoder-decoder model to encode and decode a sentence from a source to a target language. Vinyals et al. (2015), Xu et al. (2015) and Karpathy and Fei-Fei (2015) encode the visual features of an image and use a decoder to generate a natural language description of the image. Fang et al. (2020) generate natural language descriptions containing common-sense knowledge from the encoded visual inputs.
Our proposed model of meme generation shares a similar spirit with the problems mentioned above: we encode the given input sentence into a latent space and then decode it into a meme caption that can be combined with the meme image to convey the same meaning as the input sentence. However, the generated meme caption should represent the input sentence through the selected meme template, making this a conditioned or controlled text generation task. Controlled natural language generation with desired emotions, semantics and keywords has been studied previously: prior work generates text with desired emotions by embedding emotion representations into Seq2Seq models; Hu et al. (2017) concatenate a control vector to the latent space of their model to generate text with designated semantics; and Su et al. (2018) and Miao et al. (2019) generate a sentence with desired emotions or keywords using sampling techniques. While these approaches to controlled text generation involve relatively complex conditioning factors, we implement a transformer-based (Vaswani et al., 2017) encoder-decoder model to generate a meme caption conditioned on both the input sentence and the selected meme template.
Our Approach
In this section, we describe our approach: an endto-end neural and probabilistic architecture for meme generation. Our model has two components. First, a meme template selection module to identify a compatible meme template (image) for the input sentence. Second, a meme caption generator as illustrated in Figure 3.
Meme Template Selection Module
Pre-trained language representations from transformer based architectures like BERT (Devlin et al., 2019), XLNet (Yang et al., 2019) and Roberta (Liu et al., 2019) are being used in a wide range of Natural Language Understanding tasks. Devlin et al. (2019), Yang et al. (2019) and Liu et al. (2019) show that these models can be fine-tuned specifically to a range of NLU tasks to create state-of-theart models.
For the meme template selection module, we fine-tune the pre-trained language representation models with a linear neural network on the meme template selection task. In training, the probability of selecting the correct template for a given sentence is maximized using the formulation given below:

θ1* = arg max_θ1 log P(T | S; θ1)

where θ1 denotes the parameters of the meme template selection module, T is the template and S is the input sentence.
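As a sketch of this objective (not the authors' code): the fine-tuned classifier produces one logit per candidate template, and training minimizes the negative log-probability of the correct template, which maximizes log P(T | S; θ1). A pure-Python illustration with made-up logits:

```python
import math

def template_nll(logits, correct_idx):
    """Negative log P(T | S): softmax over per-template logits, then the
    negative log of the probability assigned to the correct template.
    Minimizing this loss maximizes log P(T | S; theta_1)."""
    m = max(logits)                              # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return -math.log(exps[correct_idx] / z)

# Made-up logits over 3 of the 24 templates, for illustration only.
loss = template_nll([2.0, 0.5, -1.0], correct_idx=0)
```

A confident, correct prediction (a large logit on the correct template) drives this loss toward zero.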
Meme Caption Generator
We train the meme caption generator by corrupting the input caption, borrowing from denoising autoencoders (Vincent et al., 2008). We extract the parts of speech of the input caption using a part-of-speech tagger (POS tagger) (Honnibal and Montani, 2017). Using the POS vector, we mask the input caption so that only the noun phrases and verbs are passed as input to the meme caption generator. We corrupt the data to help our model learn meme generation from existing captions and to generalize the process of meme generation to arbitrary input sentences during inference.
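The masking step can be sketched as follows. The paper uses spaCy's tagger (Honnibal and Montani, 2017); here the (token, tag) pairs are supplied directly so the sketch stays self-contained, and the kept tag set and mask symbol are assumptions:

```python
KEEP_TAGS = {"NOUN", "PROPN", "VERB"}  # nouns and verbs (assumed tag set)

def mask_caption(tagged_tokens):
    """Replace every token whose POS tag is not a noun/verb with a mask
    symbol, corrupting the caption as in a denoising autoencoder setup."""
    return [tok if tag in KEEP_TAGS else "<mask>" for tok, tag in tagged_tokens]

# A caption from the dataset, pre-tagged by hand for illustration.
tagged = [("carries", "VERB"), ("the", "DET"), ("laundry", "NOUN"),
          ("didn't", "AUX"), ("drop", "VERB"), ("a", "DET"),
          ("single", "ADJ"), ("sock", "NOUN")]
masked = mask_caption(tagged)
# → ['carries', '<mask>', 'laundry', '<mask>', 'drop', '<mask>', '<mask>', 'sock']
```

Token-level tag filtering is a simplification: the paper keeps noun phrases, which in spaCy would come from `doc.noun_chunks` rather than per-token tags.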
The meme caption generator uses a transformer architecture inspired by Vaswani et al. (2017). Our transformer encoder creates a meme embedding for a given sentence by performing multi-head scaled dot-product attention on the selected meme template and the input sentence. The transformer decoder first performs masked multi-head attention on the expected caption and then performs multi-head scaled dot-product attention between the encoded meme embedding and the outputs of the masked multi-head attention, as shown in Figure 3. This enables the meme caption generator to learn the dependency between the input sentence, the selected meme template and the expected meme caption. We optimize the transformer using the formulation given below:

θ2* = arg max_θ2 log P(C | M; θ2)

where θ2 denotes the parameters of the meme caption generator, C is the meme caption and M is the meme embedding obtained from the transformer encoder.
Meme Caption Dataset
To enable and validate the aforementioned technical framework, we collect a dataset that lets us learn the dependency between a meme template and a meme caption. We adopt the open online resource imgflip, one of the most commonly used meme generators, and developed a web crawler to collect the memes automatically.
We observe that only a few meme templates dominate the collection. We investigate these dominating memes along with the factors that can make a meme popular. Replication of a meme depends on the mental processes of observation and learning within the group of people across which it is shared (Davison, 2012). Popular meme templates make content shareable and are replicated frequently because of their capability to make content go viral. To this end, we experiment with image meme generation using the popular meme templates.
Our dataset has 177,942 meme captions from 24 templates. The distribution of meme captions across the meme templates is presented in Figure 4. The dataset consists of meme template (image and template name) and meme caption pairs. A sample from the dataset is illustrated in Table 1. To add diversity to the generated memes, we use various images for the same meme template. The meme template figures used and a sample of the additional images used for the meme templates are presented in Appendices A and B.
Twitter Dataset
We collect tweets from Twitter to evaluate the efficacy of our model in generating memes for sentences used in online social interaction. We randomly sampled 6,000 tweets for the query "Corona virus" and pruned the sample to 1,000 tweets by selecting only those with non-negative sentiment using VADER-Sentiment-Analysis (Hutto and Gilbert, 2014). Twitter is an open domain and may contain tweets that could affect people's beliefs and sentiments; to retain control over our model's output, we remove tweets with negative sentiment. The goal is to prompt our model to generate an image meme from a tweet and evaluate whether the generated meme is relevant to the tweet.
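The pruning step amounts to keeping only tweets whose VADER compound score is non-negative (the exact threshold is our reading of "non-negative sentiment"). A sketch with precomputed compound scores; in the paper they would come from VADER's SentimentIntensityAnalyzer:

```python
def keep_non_negative(tweets_with_scores, threshold=0.0):
    """Keep tweets whose sentiment compound score is >= threshold.
    VADER compound scores range from -1 (most negative) to +1 (most positive)."""
    return [tweet for tweet, score in tweets_with_scores if score >= threshold]

# Hypothetical tweets and compound scores, for illustration only.
scored = [("Stay safe everyone!", 0.66),
          ("This situation is awful", -0.48),
          ("Daily update on case counts", 0.0)]
kept = keep_non_negative(scored)
# → ['Stay safe everyone!', 'Daily update on case counts']
```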
Table 1: A sample from the meme caption dataset (meme template and captions).

Template image: Leonardo Dicaprio Cheers
Captions:
• to those who have been fortunate enough to have known true love
• cheers to us making the future bright
• when you see your cousin at a family gathering

Template image: Success Kid
Captions:
• carries the laundry didn't drop a single sock
• when you win your first fortnite game
• late to work and boss was even later
• when she gives you her phone number
Experiments and Results
We train our model on the meme caption dataset (Section 4.1). The train, validation and test splits contain 142,341, 17,802 and 17,799 samples, respectively. We evaluate the performance of the meme template selection module in selecting a compatible template, the effectiveness of the caption generator in generating captions similar to the input captions from the meme caption test set, and the efficacy of the model in generating memes for real-world examples (tweets) through human evaluation.
Meme Template Selection Module
We fine-tune the pre-trained language representation models (BERT base (Devlin et al., 2019), XLNet base (Yang et al., 2019) and Roberta base (Liu et al., 2019)) using a single-layer linear neural network with 768 units on the meme template selection task using the meme caption dataset. The performance of the meme template selection module on the meme caption test data using variants of pre-trained language representation models is reported in the results table. We adopt the best-performing model, with a fine-tuned Roberta base, as the meme template selection module in the meme generation pipeline.
Figure 5: Memes generated by the caption generator variants (MT2MC, SMT2MC-NP, SMT2MC-NP+V) for the given input sentence.
Meme Caption Generator
For meme caption generation, we experiment with two variants. The first, Meme Template to Meme Caption (MT2MC), takes the selected meme template as input and generates a meme caption. The second, Sentence and Meme Template to Meme Caption (SMT2MC), takes the input sentence along with the selected meme template and generates a meme caption. The two variants form an ablation study demonstrating that using input-sentence features enables our model to generate memes relevant to the input sentence.
We also experiment with two variants of SMT2MC. The first uses only the noun phrases from the input sentence, while the second uses the verbs along with the noun phrases. We experiment with noun phrases alone in order to study to what extent adding verbs directs the context of the generated meme towards that of the input sentence. The SMT2MC and MT2MC architectures follow the same notation as Vaswani et al. (2017). We report the hyperparameters used in Table 3. We use residual dropout (P_drop) (Srivastava et al., 2014) for regularization and the Adam optimizer (Kingma and Ba, 2014) with β1 = 0.9, β2 = 0.98 and ε = 1e-9, together with a cosine annealing scheduler with warm restarts (Loshchilov and Hutter, 2016). During inference we generate meme captions using beam search with a beam of size 6 and length penalty α = 0.7. We stop caption generation when a special end token is produced or the maximum length of 32 tokens is reached.
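The decoding setup (beam size 6, length penalty α = 0.7, stop on an end token or at 32 tokens) can be sketched as below. This is an illustrative simplification, not the authors' implementation: `next_token_logprobs` is a stand-in for the trained decoder, and the GNMT-style length penalty is an assumption about the exact penalty form.

```python
import math

# Minimal beam-search sketch with a GNMT-style length penalty and an
# end token, mirroring the decoding setup described above.

def length_penalty(length, alpha=0.7):
    return ((5.0 + length) / 6.0) ** alpha

def beam_search(next_token_logprobs, beam_size=6, max_len=32, end_token="<end>"):
    beams = [([], 0.0)]                      # (tokens, summed log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            for tok, lp in next_token_logprobs(tokens).items():
                candidates.append((tokens + [tok], score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for tokens, score in candidates:
            if tokens[-1] == end_token:
                finished.append((tokens, score / length_penalty(len(tokens))))
            elif len(beams) < beam_size:
                beams.append((tokens, score))
        if not beams:                        # every hypothesis has ended
            break
    if not finished:                         # fall back to unfinished beams
        finished = [(t, s / length_penalty(len(t))) for t, s in beams]
    return max(finished, key=lambda c: c[1])[0]
```

The length penalty keeps the search from preferring very short captions, since raw summed log-probabilities always decrease with length.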
A sample of memes generated by the caption generator variants is presented in Figure 5. MT2MC generates a random caption for the given meme template that is irrelevant to the input sentence, while the meme captions generated by the SMT2MC variants are contextually relevant to the input sentence. Among the SMT2MC variants, the caption generated using noun phrases and verbs as inputs better represents the input sentence.
Evaluation Metrics
We use the BLEU score (Papineni et al., 2002) to evaluate the quality of the generated captions. What makes a meme good is subjective and varies among people. To the best of our knowledge, there are no automatic evaluation metrics for the quality of a meme. A fairly reliable technique is to have a set of raters evaluate the quality of a meme on a subjective score.

Figure 6: Human evaluation scores, including (c) the User Likes score distribution.
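For reference, sentence-level BLEU (modified n-gram precision with a brevity penalty) can be sketched as below. This is an illustrative implementation; the reported scores would come from a standard toolkit such as NLTK.

```python
import math
from collections import Counter

# Minimal sentence-level BLEU sketch: clipped n-gram precision up to
# 4-grams, geometric mean, and a brevity penalty.

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())      # clipped counts
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:                      # no smoothing in this sketch
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_avg)
```

The brevity penalty discounts candidates shorter than the reference, since short outputs can otherwise reach high precision trivially.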
In machine translation, adequacy and fluency (Snover et al., 2009) are used to subjectively rate the correctness and fluency of a translation. Inspired by adequacy and fluency, we define two metrics, Coherence and Relevance, to evaluate the generated memes: • Coherence: Can you understand the message conveyed through the meme (image + text)? • Relevance: Is the meme contextually relevant to the text?
The Coherence score captures the quality (fluency) of the generated meme and the Relevance score captures how well the generated meme represents the input sentence (correctness). We also ask the raters whether they like the meme, to gauge whether the generated memes are good. Relevance and Coherence are scored on a range of 1 to 4. The User Likes score is the percentage of raters who liked the meme. To collect these scores, we set up an Amazon Mechanical Turk (AMT) experiment.
Caption Generation Results
The scores for the caption generator variants are reported in Table 4. The SMT2MC variants produce meme captions textually similar to the input sentence. Among them, the variant that takes verbs along with noun phrases scores better: adding verbs enables the caption generator to produce captions more relevant to the input sentence than the variant that uses noun phrases alone. We use the best-performing SMT2MC-NP+V to generate memes for the Twitter data.
Human Evaluation Task Setup
We choose Amazon Mechanical Turk (AMT) to evaluate the generated memes because of its easy-to-use platform and the ready availability of a large worker pool with the required skills. An example AMT questionnaire is presented in Appendix C. Each sample was rated by two raters; in case of disagreement, we take their average score as the final score.
In our AMT evaluation setup, we design a two-stage process to evaluate each meme. We first display the meme image and ask the workers to score the Coherence metric based only on their understanding of the meme. We then display the tweet, ask them to read it, and ask them to score the Relevance metric based on their comprehension of the tweet and the meme. We expect AMT workers to be capable of visually understanding an image, of semantically and contextually understanding a sentence, and of comparing context across different information sources. We assume an adult human being is well qualified to meet these expectations.
Human Evaluation Results
The performance of the SMT2MC-NP+V model on the human evaluation metrics is reported in Table 5. Before interpreting the scores, we review the image meme generation task. It requires the ability to semantically and contextually understand the input sentence along with contextual knowledge of image memes. Even with this understanding, one must possess good fluency in natural language to generate a meme caption compatible with the meme image, and the generated meme must also be relevant to the input sentence. We analyze the performance of our model under the assumption that a human-generated meme would receive a perfect score across all metrics.
From the score distribution in Figure 6, we infer that more than 60% of the generated memes are coherent and relevant to the input tweets. From Table 5, we see that 65% of the raters liked the meme shown to them, and the like percentage correlates with the coherence and relevance scores. We infer that raters liked a meme when they understood the information it conveyed and found it relevant to the input tweet. Quantitatively, our model generates coherent memes with 66.5% confidence and relevant memes with 66.25% confidence. Our model performs with good confidence on the challenging image meme generation task using only the language features of the image meme during training.
Inter Rater Reliability
We use Cohen's Kappa (κ) to measure reliability among the raters. Cohen's Kappa is defined as κ = (p_o - p_e) / (1 - p_e), where p_o is the relative observed agreement and p_e is the hypothetical probability of chance agreement among the raters, estimated from each rater's marginal label frequencies over the N rated samples. The Inter-Rater Reliability (IRR) score among the raters on the different metrics is reported in the accompanying table. The raters have higher than 60% agreement on all the metrics, which establishes good consistency among the raters in evaluating the quality of the generated image memes.
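A minimal two-rater implementation of κ = (p_o - p_e) / (1 - p_e), with p_e estimated from each rater's marginal label frequencies over the N samples, can be sketched as follows (an illustrative sketch, not the paper's analysis code):

```python
from collections import Counter

# Cohen's kappa for two raters over the same N samples.
# p_o: fraction of samples where the raters agree.
# p_e: chance agreement from each rater's label frequencies.

def cohens_kappa(rater1, rater2):
    assert len(rater1) == len(rater2)
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (p_o - p_e) / (1 - p_e)   # undefined when p_e == 1 (degenerate case)
```

κ = 1 indicates perfect agreement and κ = 0 agreement no better than chance.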
Controlling Meme Generation
Corrupting the input data during training enables our model to learn from the meme caption dataset and scales to any input sentence during inference, as shown in Figure 8. In an ideal scenario, the user might want to select the meme template themselves. During the experiments, we observed that information abstraction during training enabled our model to create a meme caption conditioned on any given meme template. We probed this further by forcing the caption generator to generate captions for an input sentence conditioned on different meme templates. The generated memes are presented in Figure 9; our model is capable of generating a meme for an input sentence conditioned on a selected or given meme template.
Conclusion and Future Work
We have presented memeBot, an end-to-end architecture that can automatically generate a meme for a given sentence. memeBot is composed of two components: a module to select a meme template and an encoder-decoder to generate a meme caption. The model is trained on a meme caption dataset to maximize the likelihood of selecting a template given a caption and of generating a meme caption given the input sentence and the meme template. Automatic evaluation on the meme caption test data and human evaluation on Twitter data show promising performance in generating image memes for sentences used in online social interaction.
The notion of meme quality varies widely among people and is hard to evaluate with a set of pre-defined metrics. In real-world scenarios, if an individual likes a meme, they share it with others; if a group of individuals like the same meme, it can become viral or trending. Future work includes evaluating a meme by introducing it into a social media stream and rating it based on its transmission among people. The meme transmission rate and the groups of people it spreads across can be used as a reinforcement signal to generate more creative, higher-quality memes.
Protein phosphatase 2A inactivation induces microsatellite instability, neoantigen production and immune response
Microsatellite-instable (MSI), a predictive biomarker for immune checkpoint blockade (ICB) response, is caused by mismatch repair deficiency (MMRd) that occurs through genetic or epigenetic silencing of MMR genes. Here, we report a mechanism of MMRd and demonstrate that protein phosphatase 2A (PP2A) deletion or inactivation converts cold microsatellite-stable (MSS) into MSI tumours through two orthogonal pathways: (i) by increasing retinoblastoma protein phosphorylation that leads to E2F and DNMT3A/3B expression with subsequent DNA methylation, and (ii) by increasing histone deacetylase (HDAC)2 phosphorylation that subsequently decreases H3K9ac levels and histone acetylation, which induces epigenetic silencing of MLH1. In mouse models of MSS and MSI colorectal cancers, triple-negative breast cancer and pancreatic cancer, PP2A inhibition triggers neoantigen production, cytotoxic T cell infiltration and ICB sensitization. Human cancer cell lines and tissue array effectively confirm these signaling pathways. These data indicate the dual involvement of PP2A inactivation in silencing MLH1 and inducing MSI.
6. Page 14, line 315: The authors claim that all MSI CRC belong to consensus molecular subtype I. This is an overstatement, as in Guinney et al. 2015 only 75% of MSI CRC were classified as subtype I. Similarly, the MSS tumors are distributed over the subtypes, with only a smaller proportion showing features of subtype IV. The statement needs to be amended. 7. The following section (page 15, line 324), "Therefore, MSI CRC has the tumour microenvironment of type I CMS ...", contains confident statements about combination therapies, claiming for two scenarios that "there is no need to combine Treg inhibition". Those are theoretical considerations, reasonable or not, and should be indicated as such. In this context, studies demonstrating clinical benefit from the combination of anti-PD-1 and anti-CTLA4 should be considered. 9. Figure 3g: The differences in E2F1 levels are very small; however, dramatic differences in DNMT and MLH1 levels are observed. How can this be explained?
Reviewer #2: Remarks to the Author: The study shows an interesting connection between PP2A inactivation and MLH1 silencing and MSI, and demonstrates that PP2A knockout promotes the therapeutic response to ICB. The study began with a mouse model showing that depletion of Ppp2r1a, which encodes the PP2A A scaffold subunit, increases immunogenicity, T cell infiltration and induction of MSI status. These mouse-model data convincingly support the claim that PP2A indeed plays a role in cancer immunity. The authors went on to examine whether the finding obtained from the mouse is clinically relevant. Using the TCGA database, they claimed that mRNA levels of endogenous PP2A inhibitors such as CIP2A and SET are significantly higher in CRC samples with MSI compared to those with MSS, while PPP2R1A is lower. Protein analysis using a CRC tissue array further shows that PPP2R1A is indeed lower in MSI tumors while CIP2A and SET are higher. A subsequent experiment using the mouse model explored the mechanism: PPP2R1A co-immunoprecipitated HDAC2 and Rb1, and PPP2R1A loss resulted in phosphorylation of HDAC2 and Rb1, leading to increased DNMT expression and MLH1 methylation and silencing. Finally, in a CT26 syngeneic CRC model, they showed that Ppp2r1a knockdown increased the response to anti-PD1, and a combination of a chemical inhibitor of PP2A with anti-PD1 also showed somewhat better growth inhibition. Major concerns: Although the mouse data documenting a role of PP2A (via the PPP2R1A A structural subunit) in regulating MLH1 expression are convincing, the clinical data supporting the relevance of PP2A-PPP2R1A loss to MSI status in CRC are questionable. First, as endogenous inhibitors of PP2A, CIP2A and SET regulate PP2A activity, not PPP2R1A expression.
The authors provided data showing that CIP2A and SET are weakly upregulated in MSI tumors compared to MSS tumors, but this does not necessarily warrant the claim that PP2A activity is lower in MSI tumors. Showing that PPP2R1A is lower in MSI tumors is strange, as PPP2R1A expression in general is not downregulated in CRC. Instead, other B subunits of PP2A are widely silenced by DNA methylation in 90% of CRC. This suggests that the majority of CRCs have PP2A dysfunction as a tumor suppressor. Given that MSI tumors account for only 10-15% of CRC, it is hard to believe that only the MSI tumors have PP2A inactivation while most MSS tumors do not. The authors need to address this more carefully. For the mechanistic study, biochemical evidence that HDAC2 and Rb1 are PP2A substrates is lacking. Although they are co-immunoprecipitated, it is necessary to demonstrate that they are direct substrates of PP2A. A PPP2R1a pulldown for an in vitro phosphatase assay using CT26 cell lysate as a substrate would substantiate the mechanistic claim.
The final data showing a PP2A inhibitor sensitizing PD1 therapy are not impressive. I find it hard to appreciate the claim of using a PP2A inhibitor as a strategy to induce MSI and then sensitize to ICB. As PP2A is widely inactivated in human CRC, it is not convincing that inhibition of PP2A can be a strategy for cancer therapy. Also, inhibiting PP2A as a tumor suppressor can trigger many oncogenic signals, including RB/E2F1 and other known oncogenes. I am afraid the therapeutic implication of using a PP2A inhibitor in CRC has limited potential (though it has been reported before). Also, the study focuses on colorectal cancer, and it is not clear why the in vivo studies also included the triple-negative 4T1 and pancreatic models.
Reviewer #3:
Remarks to the Author: The antigenicity and hence the immunogenicity of tumours is likely to be a major limiting factor in response to immunotherapy. Altering immunogenicity is a major challenge and this study is potentially important because it attempts to rise to this challenge by evaluating a pathway which may give rise to new antigens.
Whilst the experiments examining the impact of targeting PP2A on MSI status are compelling, the experiments examining effects on the immune response fall short of providing definitive answers. As such, the data as it stands is over-interpreted.
The histology shown in Figure 1 needs some improvement. It is difficult to understand why no CD4+ cells are observed when Foxp3+ cells are seen. What are the cells stained in the lower right panel?
In studies of human CRC (and other cancers), CD8+ T cells and Tregs are normally reported as positively correlated, because Tregs are induced when there is an immune response to suppress. This does not appear to be the case in the analysis shown here. Could the authors comment?
The tumour growth curves in Figure 4 show significant differences however these are only assessed for a short period of time (up to day 21). Did the study extend beyond this time-point? The comparison of lymphocyte numbers by IF staining must have been carried out on very small tumours where PP2A is absent. Smaller tumours often have more lymphocytes / g tumour compared to larger tumours thus the authors should provide more details and/or normalise for tumour size.
It is impossible to conclude that there are more neoantigens generated in these tumours without exome sequencing. This is the key missing piece of data. The number of TCRs alone is not sufficient evidence of neoantigen-driven clonal expansions.
The data with the small-molecule inhibitor are less compelling. Also, despite the authors' claim that there is an effect beyond an impact on Tregs, this is not proven by the experiment carried out, as it does not include the use of PI-3065 plus anti-PD1 alone as a control. In addition, the authors should note that PI-3065 has effects beyond direct effects on Tregs: it also directly affects effector T cells as well as monocytes. This experiment needs a re-think to include all necessary controls as well as a "cleaner" method of targeting Tregs.
1. In the legend to Figure 5, the authors state that treatment with LB-100 was performed for 2 days. Length differences for BAT25 between LB-100-treated cells and untreated controls are extremely pronounced, with more than 10 basepairs difference. Acknowledging previous studies and estimated basepair deletion rates per cell division, this cannot be explained. Previous studies suggested mononucleotide peak patterns (precisely of the markers used in the present study) as molecular clocks. These previous studies consistently reported the emergence of MSI-indicating peaks only months after the onset of MMR deficiency, which is in stark contrast to the data presented here. As this experiment is crucial to convincingly demonstrating the functional connection between PP2A inactivation and MSI, thorough re-analysis is required. To assess this point, the authors should perform standardized time-scale experiments, quantitatively examining changes of the peak patterns in parallel after knockout and LB-100 treatment. Response: Thank you very much for your great comments. We have now provided more data to prove the causal link between PP2A loss and MSI induction and its relevance to human CRC. Western blotting analysis showed that treatment with different shRNAs against PPP2R1A or with different PP2A inhibitors (LB100 and LB102) for 2 or 7 days decreased MLH1 protein levels in SW620 and another human MSS CRC cell line, HT29 (Fig. 5d: western blot analysis of SW620 and HT29 transfected with the indicated shRNAs and treated with vehicle control or LB100/LB102). We have also now used a commonly recognized method and definition to determine whether a tumour was MSI 1. The panel of markers included D2S123, D5S346, D17S250 and BAT25. Compared with control cells, HT29 cells treated with PP2A inhibitors for 2 or 7 days or with different shRNAs against PPP2R1A showed changes in the length of all markers used (Fig. 5e and Fig. S11, S12).
Moreover, the profiles of marker length changes caused by treatment with PP2A inhibitors (LB100) for 2 or 7 days or by the different shRNA treatments against PPP2R1A were very similar. These data suggest that MSI induction caused by PP2A inhibition or knockdown occurs very rapidly and shares a similar profile of MSI marker length changes. These findings are supported by MSI induction caused by hypoxia 2 and chemical agents 3, where MSI induction occurred 2 and 3 days after treatment, respectively. Together, these data suggest a causal link between PP2A loss and MSI induction and its relevance to human CRC.
The reason that we did not use knockout to perform this experiment is that complete Aα loss has no transformative properties. Additionally, in cancer patients, Aα is found to be inactivated in a haploinsufficient manner. CRISPR/Cas9-mediated homozygous Aα deletion resulted in decreased colony formation and tumour growth across multiple colorectal and endometrial cancer cell lines. This study further uncovered a mechanism by which PP2A Aα regulates Aβ protein stability and activity and suggests why homozygous loss of Aα is rarely seen in cancer patients 28 .
2. Results of MSI analyses are presented poorly. Response: Thank you very much for the great comments. We have now provided clearer MSI data in Fig. 2d. We divided the data for each marker into three panels and used different colors to clearly indicate the size changes, matching the colors used to mark the treatment types. In addition, the scale unit has been clearly labelled at the top of each marker.
3. Abstract, Page 2, line 32: "however, the mechanism of MSI status development is unclear". This statement disregards the history of MSI cancer research and the fact that genetic and epigenetic alterations responsible for the MSI phenotype have been largely clarified; according to some recent studies, epigenetic silencing of MLH1, somatic biallelic MMR gene mutations, and the combination of first and second hit of the same MMR gene in Lynch syndrome together can explain the vast majority of MSI cancers. The authors only present correlation data suggesting a link between PP2A and MSI; no evidence is presented that PP2A inactivation is responsible for the natural occurrence of the MSI phenotype in human cancers. The increased proportion of MSI tumours among endometrial cancers harboring PP2A, SET, and CIP2A mutations may be coincidental or related to the fact that MMR deficiency is generally associated with high mutation burden (so that MMR deficiency is cause, not consequence). Response: Thank you very much for the great comments. 1. We have revised the abstract as "Microsatellite-instable (MSI) tumours are one of the few cancers that respond to immune checkpoint blockade (ICB) with genetic and epigenetic alterations well clarified; however, the mechanism of MSI status development is not well understood." Please refer to Page 2, lines 32 to 35 in the revised version. 2. Protein phosphatase 2A (PP2A) is a tumour suppressor that regulates many signaling pathways [29][30][31], and its loss of function has been associated with cell transformation 32. PP2A has been directly implicated in the negative regulation of double-strand break DNA repair proteins 33. Consistent with the idea that protein phosphatases are not just negative regulators of DNA repair signaling, selective inhibition of PP2A activity impairs DNA repair 34,35. PP2A has been suggested or confirmed to dephosphorylate over 300 substrates including MLH1, PMS1, and PMS2 36.
We also use experiments to prove that pp2a inhibitor can reduce MLH1 expression and MSI status (Fig. 5e, Fig. S11 and Fig. S12).
3. We further demonstrated the positive correlation between mutation count and CIP2A and SET mRNA levels (Unshown Figure 1). The correlation coefficients were r = 0.13 (P = 0.003) and r = 0.18 (P < 0.00005) for CIP2A and SET, respectively.
Unshown Figure 1. The mRNA levels of CIP2A and SET (Y-axis) positively correlate with total mutation count (mean) (X-axis) for TCGA colorectal cancer samples.
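For illustration, the Pearson correlation coefficient r underlying values such as those above can be computed as below. The data here are made-up placeholders; the actual analysis used the TCGA samples, and significance testing is omitted from this sketch.

```python
import math

# Pearson correlation coefficient between two equal-length series:
# r = cov(x, y) / (sd(x) * sd(y)).

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)   # undefined for constant series
```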
We used classification and regression trees (CART), a powerful approach for optimizing the cutoff points of independent variables to predict a dependent variable, widely used in medical data sets 37, to divide the CIP2A and SET data into high-level and low-level subgroups. The cut-off values for CIP2A and SET were calculated as 440.953 and 11716.08, respectively. The percentages of the subgroups with high and low levels of CIP2A were 16.22% and 83.78%, respectively. In addition, the mutation count of the subgroup with high CIP2A was significantly higher than that of the subgroup with low CIP2A (P = 0.015) (Unshown Table 1). Similarly, the percentages of the subgroups with high and low levels of SET were 3.42% and 96.58%, respectively. Moreover, the mutation count in the high SET subgroup was higher than that in the low SET subgroup, although not significantly (Unshown Table 1). In the MSK-IMPACT cohort, including the clinical and genomic data of 1,661 advanced cancer patients treated with ICB 38, tumours with PPP2R1A mutation accounted for 1.4%, which was associated with increased tumour mutation burden score and mutation count and better overall survival status (Fig. S15). Together, these data indicate that PP2A is not widely inactivated in colorectal tumours, and inhibition of PP2A may be a strategy for colorectal cancer treatment. The conclusion that PP2A mutation status may help to predict ICB therapy response based on these data is not justified. Response: Thank you very much for the great comments.
Unshown Table 1.
A previous study analyzed the clinical and genomic data of 1,661 advanced cancer patients treated with ICB, whose tumours underwent targeted next-generation sequencing (MSK-IMPACT) 38 , and showed that higher somatic tumour mutational burden (highest 20% in each histology) can predict survival after immunotherapy across multiple cancer types. Given its clinical and practical importance, we have undertaken a reanalysis of these data and showed that PPP2R1A mutation (1.4%) was associated with an increased tumour mutation burden score and mutation count, and a better overall survival status ( Fig. S15a, b). Moreover, the median survival time and the univariate Cox regression hazard ratio of patients with PPP2R1A-mutated tumours were much better than those of patients with PPP2R1A-non-mutated tumours (Fig. S15c). The pan-cancer nature of this biomarker probably reflects the fundamental mechanisms by which ICB functions. These data also support the hypothesis that PPP2R1A, SET, and CIP2A mutations or altered mRNA levels are associated with higher mutation burden and MSI status and help predict response to ICB.
Minor points: 4. The authors use the cell lines SW-480 and SW-620 as examples of human CRC cell lines. Both cell lines are derived from the same patient and tumor, but at different time points. Therefore, one can expect that both lines will behave similarly regarding treatment response. What was the reasoning behind this selection?
Response: Thank you very much for your great comments. We apologize for this mistake and explain the reason for using these two cell lines. Although the SW480 (primary) and SW620 (lymph node) cell lines are derived from the primary tumour and a metastasis of the same patient and carry identical mutation profiles, they have epigenetic differences 39. Following your comments, we have also now repeated the use of PP2A inhibitors, not only LB-100 but also LB-102, to induce MSI in another MSS human CRC cell line, HT29. As shown in Fig. 5e, Fig. S11 and S12, treatment with LB-100 or LB-102 induced MSI at both 2 and 7 days. In response to your question about the combination of LB-100 and anti-CTLA4, we believe that this combination may improve the therapeutic effect.
Reviewer #2 (Remarks to the Author):
The study shows an interesting connection of PP2A inactivation to MLH1 silencing and MSI and demonstrates that PP2A knockout promotes the therapeutic response to ICB. (1) This study "found that PPP2R2B, encoding B55β, is the only subunit that is consistently downregulated or silenced in all examined CRC cell lines, but not in the normal colon mucosa samples". Specifically, Figure 1B in the Cancer Cell paper shows that B55β (PPP2R2B) is one of the few genes upregulated in primary tumour compared with normal tissue. However, the PPP2R1A A subunit, the PPP2CA C subunit, and other B subunits, such as PPP2R2A (B55α), PPP2R1B (PR65β), PPP2R3B (PR70), PPP2R5C (B56γ), and PPP2R5D (B56δ), are obviously upregulated in primary tumour compared with normal tissue 4. These data show that, compared with normal tissues, the main A and C subunits and most of the B subunits of PP2A are upregulated in primary tumours. Because RNAi against Aα of PP2A decreased total PP2A activity 5,6, these data may imply an increase in PP2A activity in tumours 4. However, this comparison was made between the primary tumour and each corresponding normal tissue, and has nothing to do with individual tumour differences. For example, SET was found overexpressed in 13 out of the 21 tumour samples compared with corresponding normal tissue 7; however, SET overexpression was only detected in 15.4% of 247 CRC patients without metastatic disease at diagnosis 8. It should be noted that the observed PP2A B subunit (PPP2R2B) silencing in the Cancer Cell paper was a result of comparison between the primary tumour and each corresponding normal tissue 4.
(2) Instead, we used TGCA data to examine the correlations of different immune cell infiltration with PPP2R1A and endogenous PP2A inhibitors in different individual tumours. We found that the expression of endogenous PP2A inhibitors, CIP2A, and SET correlated positively with the infiltration of CD8+ T cells and CD20+ B cells and negatively with FOXP3+ Treg cells (Fig. 1c). Contrastingly, the expression of PPP2R1A correlated negatively with CD8+ T cells but positively with FOXP3+ Treg cells (Fig. 1c). Similar findings were observed in human rectal cancers (TCGA) (Fig. 1d).
(3) Moreover, long-term culture may induce changes in DNA methylation 9, even though the cells studied there were human mesenchymal stem cells. Because PPP2R2B hypermethylation causes acquired apoptosis deficiency in activated T cells in systemic autoimmune diseases 10, it is possible that such clones are selected during long-term culture. Indeed, this argument is supported by the data shown in the Cancer Cell paper 4, where obvious PPP2R2B hypermethylation has been observed in CRC cell lines (Fig. 1E: HCT116, RKO…, which exhibited only the methylated band) compared to the primary CRC tumour samples (Fig. 1F: 2T, 4T, 12T…, which exhibited both unmethylated and methylated bands). Therefore, the increase in PPP2R2B DNA methylation levels in CRC cell lines compared with the primary tumour cells can readily be explained as a contribution of long-term culture. As expected, frequent aberrant DNA methylation of PPP2R2B was observed in primary tumour tissues of ductal carcinoma in situ and early invasive breast cancer. PPP2R2B DNA has been shown to be hypomethylated in cancers with TP53 mutations 11. These data suggest that long-term culture induces changes in DNA methylation, which may help select cells with acquired apoptosis resistance through PPP2R2B hypermethylation. (4) The survival and proliferation of established CRC cells (SW620 and HT29 lines) and primary human colon cancer cells can be suppressed by LB-100, which inhibited PP2A activity and activated AMPK signaling both in vitro and in vivo 12. In addition, the self-renewal and sphere formation of the HT29 cell line and primary human colon cancer cells can be suppressed by silibinin 13, which inhibited the PP2Ac/AKT Ser473/mTOR pathway. Similarly, we used the PP2A inhibitors LB100 and LB102 to suppress PP2A activity (Fig. S10) and thereby induced MSI status (Fig. 5e and Fig. S11, S12).
Together, these data indicate that PP2A activity and its downstream pathways of CRC cell lines and primary human CRC cells can still be manipulated by agents that inhibit PP2A activity.
For the mechanistic study, the biochemical evidence showing that HDAC2 and Rb1 are PP2A substrates is lacking. Although they are co-immunoprecipitated, it is necessary to demonstrate that they are direct substrates of PP2A. PPP2R1a pulldown for an in vitro phosphatase assay using CT26 cell lysate as a substrate would substantiate the mechanistic claim. Response: Thank you very much for your great comments. We have now demonstrated that HDAC2 and Rb1 are direct substrates of PP2A. CT26 cells were treated without (CTR) and with LB100, a small-molecule inhibitor of PP2A, followed by PPP2R1a pulldown of the cell lysates for an in vitro phosphatase assay and western blotting. The data showed that the PPP2R1A pulldown from LB100-treated cell lysate exhibited decreased PP2A activity (Fig. S5a) and increased pRb1 and pHDAC2 levels (Fig. S5b). This biochemical evidence shows that HDAC2 and Rb are direct PP2A substrates. We have also revised the manuscript as follows: "To further demonstrate that Rb and HDAC2 were direct substrates of PP2A, CT26 cells were treated without (CTR) and with LB100, a small-molecule inhibitor of PP2A 14 , followed by Ppp2r1a pulldown of the cell lysates for an in vitro phosphatase assay and western blotting. The data showed that the Ppp2r1a pulldown from LB100-treated cell lysate exhibited decreased PP2A activity (Fig. S5a) and increased pRb and pHDAC2 levels (Fig. S5b). This biochemical evidence shows that HDAC2 and Rb1 are direct PP2A substrates." Please refer to Page 7, lines 159-164, in the revised version.
The final data showing the PP2A inhibitor sensitizing PD1 therapy are not impressive. I find it hard to appreciate the claim of using a PP2A inhibitor as a strategy to induce MSI and then sensitize ICB. As PP2A is widely inactivated in human CRC, it is not convincing that inhibition of PP2A can be a strategy for cancer therapy. Response: Thank you very much for your great comment. We have used TCGA colorectal cancer samples (n=594) to demonstrate that, compared with MSS cancers, MSI cancers have increased CIP2A and SET mRNA levels but reduced PPP2R1A mRNA levels (Fig. 2i). We further demonstrated a positive correlation between mutation count and CIP2A and SET mRNA levels (Unshown Figure 1). The correlation coefficients were r = 0.13 (P = 0.003) and r = 0.18 (P < 0.00005) for CIP2A and SET, respectively.
Unshown Figure 1. The mRNA levels of CIP2A and SET (Y-axis) correlate positively with the total mutation count (mean) (X-axis) in TCGA colorectal cancer samples.
We used classification and regression trees (CART), a powerful approach for optimizing the cut-off points of independent variables when predicting a dependent variable in medical data sets 37 , to divide the CIP2A or SET data into high-level and low-level subgroups. The cut-off values for the CIP2A and SET data were calculated as 440.953 and 11716.08, respectively. The percentages of the subgroups with high and low levels of CIP2A were 16.22% and 83.78%, respectively. In addition, the mutation count of the subgroup with high CIP2A was significantly higher than that of the subgroup with low CIP2A (P = 0.015) (Unshown Table 1). Similarly, the percentages of the subgroups with high and low levels of SET were 3.42% and 96.58%, respectively. Moreover, the mutation count in the high-SET subgroup was higher than that in the low-SET subgroup, although not significantly (Unshown Table 1). In the MSK-IMPACT cohort, comprising the clinical and genomic data of 1,661 advanced cancer patients treated with ICB 38 , tumours with PPP2R1A mutation accounted for 1.4% and were associated with increased tumour mutation burden score and mutation count and better overall survival status (Fig. S15). Together, these data indicate that PP2A is not widely inactivated in colorectal tumours, and inhibition of PP2A may be a strategy for colorectal cancer treatment.
…including RB, E2F1, and many other known oncogenes. I am afraid that the therapeutic implication of using a PP2A inhibitor in CRC has limited potential (though it has been reported before). Response: Thank you very much for your great comment. Although the safety of LB100 has been partly addressed in the phase I-II clinical trial 14 , there is a concern that inhibiting PP2A, a tumour suppressor, can trigger substantial oncogenic signaling. We have addressed the limitations of the current study in the discussion section and proposed that tumour-targeting and controlled-release drug carrier systems, such as liposomes or nanocages, can overcome these limitations.
We have now further addressed this issue in the discussion section: "To be noted, inhibiting PP2A, a tumour suppressor, can trigger oncogenic signals in normal tissues, thereby limiting the therapeutic potential of PP2A inhibition in cancer treatment. Similarly, this problem can be solved by controlled delivery of therapeutic agents to tumours." Please refer to Page 16, lines 368-370.
Also, the study focuses on colorectal cancer, and it is not clear why the in vivo studies also included the triple-negative 4T1 and pancreatic models. Response: Thank you very much for your great comment. Both triple-negative breast cancer 32 and pancreatic cancer 49 have a low incidence of dMMR/MSI-H tumours and respond poorly to monotherapy with antibodies against PD1 or PD-L1 50,51 .
However, immune checkpoint blockade is an FDA-approved tissue-agnostic therapy for the treatment of MSI-high solid tumours 52 . Pancreatic cancer was also among the 12 tumour types in the cohort of 86 MSI-high tumour patients that responded to anti-PD1 monotherapy 53 . We therefore expanded the application of the current results to the treatment of the triple-negative 4T1 and pancreatic models. Our results show that the combined use of LB100 in the 4T1 and Pan-18 tumour models increased the sensitivity to ICB by increasing the mutation burden and inducing MSI status (Fig. 5a and Fig. S8).
Reviewer #3 (Remarks to the Author):
The antigenicity and hence the immunogenicity of tumours is likely to be a major limiting factor in response to immunotherapy. Altering immunogenicity is a major challenge and this study is potentially important because it attempts to rise to this challenge by evaluating a pathway which may give rise to new antigens.
Whilst the experiments examining the impact of targeting PP2A on MSI status are compelling, the experiments examining effects on the immune response fall short of providing definitive answers. As such, the data as it stands is over-interpreted. Response: Thank you very much for your great comments. We have answered the following questions, which will help solve the problems you raised here.
The histology shown in Figure 1 needs some improvement. It is difficult to understand why no CD4+ cells are observed when Foxp3+ cells are seen. What are the cells stained in the lower right panel?
Response: Thank you very much for your great comments. We apologize for this mistake. The original figure for Foxp3 was mistakenly replaced with a picture taken from the distal small intestine, which is filled with Treg cells attracted by CCR5 15 . The other panels in Fig. 1a were from the colon (large intestine). In the revised Fig. 1a, all pictures were taken from the colon. The new figures show that very few CD4+ or Foxp3+ cells were observed in the histology of control tissues (Fig. 1a). The cells stained in the lower right panel are CD20+ B cell aggregates, which were only observed in the tissue of ppp2r1a-loss tumours. Tumour-infiltrating B cells have recently been identified as cellular components of tertiary lymphoid structures (TLSs) in tumours, which are associated with a better response to immunotherapy 19 . CD20+ B cell aggregates have been identified in melanoma and sarcoma (three papers published side by side in Nature) and are associated with an increased chance that patients' tumours will respond to immunotherapy [16][17][18] . We further chose some figures published in the Nature papers and merged them into the following figure for your reference.
In studies of human CRC (and other cancers), CD8+ T cells and Tregs are normally reported as positively correlating. This is because Tregs are induced when there is an immune response to suppress. This does not appear to be the case in the analysis shown here. Could the authors comment?
Response: Thank you very much for the great comments. We used TCGA colorectal cancer data, which mainly comprise the transcriptome of the tumour per se. In other words, we analyzed the intratumoural CD8+ T cells and Tregs. In fact, the ratio of intratumoural CD8+ T/FOXP3+ cells varies widely among colorectal tumours and is a predictive marker for the survival of colorectal cancer patients [54][55][56] . Intraepithelial lymphocytes were defined as lymphocytes located within tumour cell nests, which may be the area used for transcriptome analysis. It has been noted that CD8 expression is detected in the epithelium in 100% of cases, while FOXP3 expression is absent or only sporadically present in the tumour epithelium 55 . Therefore, the lack of a positive correlation between CD8+ T cells and Tregs in our data may be due to the specimens used for analysis.
The tumour growth curves in Figure 4 show significant differences; however, these are only assessed for a short period of time (up to day 21) … to cells transfected with control shRNAs (WT CT26) (Fig. 4c). Although WT CT26 did not respond to anti-PD1 treatment (Fig. 4d), CT26 with Ppp2r1a knockdown did (Fig. 4e), suggesting that Ppp2r1a knockdown sensitises CT26 to anti-PD1 treatment. Moreover, increased levels of CD8+ tumour-infiltrating T cells were found in tumours formed by CT26 cells with Ppp2r1a knockdown compared to those formed by WT CT26 cells (Fig. 4f).
We further provided detailed data on CD8+ tumour-infiltrating T cells in tumours formed by CT26 with Ppp2r1a knockdown at different time-points from 21 to 35 days, when the tumours were found to become larger. After immunofluorescence staining, we processed the images to analyse the numbers of positive signals using TissueQuest software (TissueGnostics) 20 . In order to normalise for tumour size, we also divided the numbers of CD8+ tumour-infiltrating T cells by the tumour weights (Fig. 4f, right) 21,22 . However, these data did not show that smaller tumours had more lymphocytes per gram of tumour compared to larger tumours.
It is impossible to conclude that there are more neoantigens generated in these tumours without exome sequencing. This is the key missing piece of data. The number of TCRs alone is not sufficient evidence of neoantigen-driven clonal expansions.
Response: Thank you very much for the great comments. Although there is a plethora of neoantigen discovery pipelines based on genetic information to predict epitopes, the current pipelines are human-centered and therefore mainly designed for clinical use. Recently, NAP-CNB 27 , a novel bioinformatic pipeline, has been developed to estimate H-2 peptide ligands directly from murine tumour samples, and its area under the curve (AUC) is equal to or better than that of state-of-the-art methods. Moreover, this pipeline also has a neural-network model for binding affinity prediction. We therefore used the NAP-CNB 27 pipeline to identify potential tumour neoantigens. The detailed method has been added to the "Materials and methods" section. We have also modified the manuscript as below: "To demonstrate that Ppp2r1a knockdown converted cold tumours into hot tumours by increasing neoantigens, we submitted the RNA-seq data of CT26-shppp2r1a and CT26-scr tumour samples, integrated in the fastq.gz files, and applied NAP-CNB 27 to predict neoantigens. A total of 270 missense transcripts, corresponding to 220 genes, shared by the three CT26-shppp2r1a tumours but not found in the CT26-scr tumour were identified (Fig. 4g). The software also generated a ranking of putative neoantigens that are common to the three CT26-shppp2r1a tumour samples. The 30 top-scoring putative neoepitopes are shown in Table S5." Please refer to Page 9, lines 193-200, in the revised version.
The data with the small-molecule inhibitor are less compelling. Also, despite the authors' claim that there is an effect beyond an impact on Tregs, this is not proven by the experiment carried out, as it does not include PI-3065 plus anti-PD1 alone as a control. In addition, the authors should note that PI-3065 has effects beyond just direct effects on Tregs. It also affects effector T cells directly, as well as monocytes. This experiment needs a re-think to include all necessary controls as well as a "cleaner" method of targeting Tregs. Response: Thank you very much for the great comments. We have now added PI-3065 plus anti-PD1 alone as a control. The data showed that there is no significant difference between PI-3065 and PI-3065 plus anti-PD1 (Fig. 5b).
We have also strengthened the rationale for using PI-3065 to block mouse Treg-mediated immunosuppression.
(1) The reason we did not use Foxp3-mutant scurfy mice or Foxp3-null mice for this study is that these mice suffer from a lethal lymphoproliferative autoimmune syndrome and become moribund at approximately 4 weeks of age 23 . Although depletion of Treg cells by neonatal thymectomy, adoptive transfer of naive T cell samples depleted of Treg cells into lymphopenic hosts, or treatment of mice with antibodies specific for CD25 results in a much milder and more slowly progressing disease 24 , Treg cells are also critical for self-tolerance and prevent catastrophic autoimmunity throughout the lifespan of mice. We therefore did not pursue the methods mentioned above. (2) Instead, we chose to use p110δ inactivation, which has been successfully demonstrated to block Treg-mediated immune suppression in mice carrying solid tumours 25 . Notably, long-term administration of PI-3065, a small-molecule inhibitor with selectivity for p110δ, was well tolerated by mice and did not induce weight loss 25 . (3) There are concerns that the inactivation of p110δ in Treg cells will indirectly release CD8 cytotoxic T cells and induce tumour regression, and that the inactivation of p110δ will also block the intrinsic immunosuppression of PMN-MDSCs (Ly6G high ), leading to reduced tumour growth. However, it has also been reported that inhibiting p110δ in cancer might impair cytotoxic T cells and negatively impact cancer immune surveillance 26 . Previous data 25 show that although p110δ blockade reduces the effectiveness of cytotoxic T cells, it also overrides Treg- and probably also MDSC-mediated suppression of anti-tumour immune responses, enabling even weakened CTLs to successfully attack tumours. Thus, p110δ is apparently more essential for Treg than for effector T-cell responses against cancer cells.
(4) To show specific blocking of Tregs using the p110δ inhibitor PI-3065, we first demonstrated that p110δ was expressed only in Tregs (Foxp3+), and not in CD8+ cells or PMN-MDSCs (Ly6G high ), in the CT26 tumour microenvironment. We then used PI-3065 to block regulatory T cell-mediated immune suppression in mice 25 , allowing us to study the effect of anti-PD1 plus PP2A inhibition on tumour killing without Treg interference. "To prove that LB100 sensitised tumour cells to ICB therapies regardless of its Treg inhibitory activity, we first showed the expression of p110δ in Tregs (Foxp3+), but not in CD8+ cells or polymorphonuclear myeloid-derived suppressor cells (PMN-MDSCs) (Ly6G high ), in the CT26 tumour microenvironment (Fig. S9). We then used the p110δ inhibitor PI-3065 to block Treg-mediated immune suppression in mice 25 , and showed that the therapeutic effects of the combination of LB100 and anti-PD1 on reducing tumour growth and enhancing survival were also observed in the presence of PI-3065 (Fig. 5b)." Please refer to Page 10, lines 222-229, in the revised version.
…again (that is what we mean by "double-checked") and found that when we extended the exposure time from 4 sec to 20 sec (data shown below), E2F1 knockdown had a more pronounced effect on the E2F1 level, with a stronger correlation with changes in downstream gene levels. The left panel is shown in the revised version, while the right panel was shown in the original version.
5. Prior to the use of immune checkpoint blockade, MSI-H/dMMR was historically associated with poorer outcomes in advanced (stage IV) colorectal cancer (which is inverted compared to its association with favorable outcomes in early stage I-III disease). In light of this, the statements on page 3 line 54-55 and 64 ("High incidence of somatic mutations can lead to MSI tumours of a less aggressive nature") are not entirely accurate and should be revised.
Response: Thank you very much for your great comments. We have now revised the Main Text as "MSI is associated with better stage-adjusted prognosis in early stage I-III colorectal cancer 6 and response to immune checkpoint blockade (ICB) 7 than microsatellite-stable (MSS) tumours, leading to the urgent need to investigate the mechanisms causing MSI tumour development." Please refer to Page 3, lines 56-59.
We have also deleted "High incidence of somatic mutations can lead to MSI tumours of a less aggressive nature".
6. The authors' claim that PPP2R1A, SET, and CIP2A mutations "help to predict responses to ICB" is not strongly supported by the results. Namely, univariate Cox regression analysis of survival differences between PPP2R1A-mutated and PPP2R1A-non-mutated tumors is insufficient and lacks adjustment for other prognostic/predictive factors. MSI status and tumor mutation burden (TMB) are themselves predictive biomarkers, but the authors haven't demonstrated that PPP2R1A-mutation status is independent of these.
Response: Thank you very much for your great comments. Based on your comment, we checked the correlation between patient survival and all parameters of the cohort. Among them, total mutation burden (TMB) and PPP2R1A mutation were found to significantly increase the survival rate of patients and reduce the hazard ratio (HR). We then compared TMB between PPP2R1A-mutated and PPP2R1A-non-mutated tumors. We further compared TMB between tumors with and without mutations in several driver genes, such as TP53, PIK3CA, and KRAS. We found that TMB was significantly higher in PPP2R1A-mutated than in PPP2R1A-non-mutated tumors (p = 0.00026). TMB was also significantly higher in the TP53 (p = 2.92 × 10-6) and PIK3CA (p = 0.007) mutation groups than in the non-mutation groups (Unshown Table 1). There was a tendency for TMB in KRAS-mutated tumors to be higher than in KRAS-non-mutated tumors. These data indicate that tumors with high TMB are more likely to have some key driver mutations than tumors with low TMB. When we performed univariate Cox regression analysis of survival differences between gene-mutated and gene-non-mutated tumors, we found that only PPP2R1A mutation reduced the HR, to 0.4296 (p = 0.03), while TP53 and PIK3CA mutations significantly increased the HR, to 1.473 and 1.31, respectively (Unshown Table 2).
There was a tendency for KRAS mutation to increase the HR to 1.31. These data indicate that, except for PPP2R1A, most single-gene mutations did not reduce the HR and cannot be used as favorable prognostic markers to help predict the response to ICB (Unshown Figure 1). We further adjusted for the PPP2R1A mutation and other prognostic/predictive factors. The PPP2R1A mutation still reduced the HR, to 0.6142, although the p-value was not significant (Unshown Table 3).
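To make the direction of these hazard-ratio comparisons concrete, the toy sketch below computes a crude incidence-rate ratio, which approximates a hazard ratio when hazards are roughly constant over follow-up. The event counts and follow-up times are hypothetical; this is an illustration, not the Cox regression actually used in the analysis.

```python
# Crude hazard comparison: events per unit follow-up time in two groups.
# All numbers are hypothetical; a ratio < 1 would indicate a lower hazard
# in the mutated group, as reported for PPP2R1A above.

def rate(events, person_months):
    """Crude incidence rate: events per person-month of follow-up."""
    return events / person_months

mutated = (4, 400.0)      # hypothetical: (deaths, person-months), mutated group
non_mutated = (30, 1200.0)  # hypothetical: non-mutated group

crude_hr = rate(*mutated) / rate(*non_mutated)
print(round(crude_hr, 2))
```

A proper Cox model additionally adjusts for covariates and censoring, which is why the adjusted HR in the response differs from the univariate one.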
In addition, we showed in mouse tumor models that loss of PPP2R1A led to increased TMB and MSI status, and in mouse and human tumor cell models, PP2A inactivation also led to loss of MLH1 and induced MSI status (Fig. 5). Therefore, these data suggest a causal relationship between PPP2R1A mutation and TMB.
Assessment of Untreated Coffee Wastes for the Removal of Chromium (VI) from Aqueous Medium
Industrial discharges loaded with heavy metals present several problems for aquatic ecosystems and human health. In this context, the present study aims to evaluate the potential of raw spent coffee grounds to remove chromium from an aqueous medium. A structural and textural study of coffee grounds was carried out by FTIR, XRD, and TGA analysis. The optimum conditions for the removal of Cr(VI), for a solution with an initial concentration of 100 mg/l, were adsorbent dose 2.5 g/l, pH 4.0, and contact time 90 min. The adsorption equilibrium results show that the Langmuir isotherm best describes the process, with an adsorption capacity of 42.9 mg/g, and that the adsorption kinetics follows the pseudo-second-order model. The calculated thermodynamic parameters showed that the adsorption is exothermic and spontaneous. The activation energy value (Ea) indicated that the retention is physisorptive in nature. The regeneration of the adsorbent was carried out with three eluents, among which HCl was the best. Finally, a brief cost estimation showed the great potential of coffee grounds as a low-cost adsorbent.
Introduction
Urbanisation, industrialisation, and globalisation are leading to increasing water pollution [1]. Water pollution is one of the most important issues facing scientists today. Among the major water pollutants, chromium is the highest priority toxic pollutant, according to the US Environmental Protection Agency [2]. In nature, chromium generally has two oxidation states, Cr(III) and Cr(VI) [3]. Cr(III) is less toxic and essential for the human body [4]. The toxicity of Cr(VI) is much higher than that of Cr(III). Cr(VI) is toxic, carcinogenic, and mutagenic and associated with reduced plant growth and changes in plant morphology [5]. Cr(VI) is generally used in the manufacture of pigments, in the treatment of metal surfaces, and in the chemical industry as an oxidising agent [6]. Untreated effluent from these industries may contain 10 to 100 mg/L of Cr(VI) [7].
According to current WHO standards [8], the permissible concentration of Cr(VI) in drinking water is 0.05 mg/L, and 0.1 mg/L for surface water. For Cr(III), a concentration of 5 mg/L is the admissible limit. Therefore, the removal of Cr(VI) from wastewater is very important to make the environment safe and clean. For this purpose, several methods have been used, for example, filtration, electrochemical precipitation, and ion exchange [9]. These processes have many limitations, such as high cost, formation of toxic byproducts, and production of sludge [10]. Compared to all of them, adsorption has been proven to be efficient in economic and operational terms [11]. Commercial activated carbon cannot be used in the case of effluent treatment because of its high cost and its difficult regeneration [12]. For this reason, relatively effective, inexpensive, and readily available alternatives are in high demand today [13]. Research has focused on the use of waste as a bioadsorbent, for example, waste potato peels [14], Lathyrus sativus husk [15], orange peels [16], tea waste [17], rice husk [18], citrus peels [19], Cortaderia selloana flower spikes [20], garlic straw [21], foxtail millet shell [22], corncobs [23], and olive cake waste [24]. However, there is a lack of studies on the use of spent coffee grounds (SCGs) as an adsorbent. Coffee is the world's second most important commodity after petroleum and the largest agricultural product in terms of volume [25]. According to the International Coffee Organisation, world coffee production amounts to 7.4 billion kilos per year. The main producers are Brazil (40%), Vietnam (20%), Colombia (10%), Indonesia (7%), and Ethiopia (5%). Almost all of these quantities are discharged as solid waste [26]. The valorisation of this residue presents very important environmental and socioeconomic advantages. This study is part of this perspective and examines the possibility of using SCGs to remove Cr(VI) from aqueous media.
The parameters of isotherms, kinetics, and thermodynamics are analysed, as well as the factors influencing adsorption. In addition, the regeneration of the SCGs and a cost estimate were made to show the cost-effectiveness of using the SCGs as an adsorbent.
Preparation and Characterisation of Adsorbent.
SCGs used in this study were collected from a cafeteria in the city of Meknes (Morocco). The raw material was washed several times with hot distilled water (60°C) and then dried at 105°C in an oven for 24 hours. The dry product was then passed through a sieve to retain a grain size of <250 μm and was stored in clean, dry glass flasks. The adsorbent was used in all experiments without any further treatment. The adsorbent characterisation is an important factor in explaining the adsorption mechanism. The raw SCGs used in this study were subject to several measurements. FTIR: the chemical functional groups present on the surface were identified by Fourier Transform Infrared Spectroscopy, using an infrared spectrometer (Shimadzu, JASCO 4100). The samples were analysed as well-dried KBr pellets of about 4% (w/w). The spectra were recorded from 4000 to 400 cm−1 with a resolution of 4 cm−1 and 16 scans per sample.
TGA: thermogravimetric analyses were carried out in the TA60 SHIMADZU equipment. The measurements were carried out on a 20 mg sample, between 25 and 600°C, with a linear ramp of 10°C/min in open air.
XRD: the crystalline phases of the SCGs were evaluated by X-ray diffraction using a diffractometer (Brucker-AXS D8) with a copper tube (λ = 1.5406 Å). The radiation was generated at 40 mA and 40 kV. The diffraction angle 2θ was measured from 10° to 70° with a step size of 0.04° and an exposure of 1 s per step. pHpzc: the point of zero charge was determined by the salt addition technique [27]. To a series of beakers, each containing 40 ml of NaCl solution (0.1 M), a mass of 0.2 g of SCGs was added. The pHi was adjusted with solutions of HCl (0.1 M) and NaOH (0.1 M). The pHf values were measured after 24 hours. pHpzc was obtained from the plot of ΔpH (= pHf − pHi) vs. pHi at ΔpH = 0.
Preparation of Cr(VI) Solutions.
In this study, analytical-grade potassium dichromate K2Cr2O7 (Sigma Aldrich, p.a. ≥ 99.0%; molecular weight 294.19 g/mol) was used to prepare a stock solution of Cr(VI) by dissolving 2.828 g in 1000 ml of distilled water. Experimental solutions of the required concentrations were obtained by diluting the stock solution with distilled water.
Adsorption Experiments.
The adsorption tests were carried out in batch mode and at room temperature. A dose of SCGs was mixed with 20 ml of synthetic Cr(VI) solution in 50 ml beakers. Agitation was performed by a magnetic stirrer at 200 rpm. The effect of different parameters on the removal of Cr(VI) was studied by varying contact time (5-180 min), pH (1-8), adsorbent dose (0.5-7 g/L), and temperature (25-50°C). The adsorption kinetic studies were carried out on a solution with a concentration of 100 mg/L, at pH 4 and the optimal adsorbent dose of 2.5 g/L, at different temperatures (20, 25, 30, and 40°C). Adsorption isotherm experiments were performed by contacting a fixed dose of SCGs (2.50 g/L) with 20 ml of Cr(VI) solution with different initial concentrations, from 10 to 200 mg/L, at pH 4 and a temperature of 25°C.
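The equilibrium data from these isotherm experiments are described by the Langmuir model, q_e = q_max·K_L·C_e/(1 + K_L·C_e). A minimal sketch of that model, using the adsorption capacity of 42.9 mg/g reported for this system; the Langmuir constant K_L below is an assumed, illustrative value (the fitted constant is not given in this excerpt):

```python
# Langmuir isotherm sketch for the SCG/Cr(VI) system.
Q_MAX = 42.9  # mg/g, adsorption capacity reported in this study
K_L = 0.05    # L/mg, HYPOTHETICAL Langmuir constant for illustration

def langmuir_qe(ce, q_max=Q_MAX, k_l=K_L):
    """Equilibrium uptake q_e (mg/g) at residual concentration c_e (mg/L)."""
    return q_max * k_l * ce / (1 + k_l * ce)

# Uptake rises with concentration and saturates toward q_max:
for ce in (10, 50, 200):
    print(ce, round(langmuir_qe(ce), 2))
```

The saturating shape (uptake approaching q_max at high C_e) is what distinguishes Langmuir behaviour from, e.g., the unbounded Freundlich form.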
After each experiment, the adsorbent was separated from the solution by centrifugation at 3000 rpm for 20 min. The residual concentration of Cr(VI) in the solution was measured with a UV/visible spectrometer (Shimadzu, UV1240), using 1,5-diphenylcarbazide as a complexing agent in an acidic medium at a wavelength of 540 nm. The adsorbed amount of Cr(VI), qt (mg/g), and the percent removal, Rt (%), were determined by the following equations [28]: qt = (C0 − Ct) × V/m and Rt (%) = ((C0 − Ct)/C0) × 100, where C0 is the initial Cr(VI) concentration and Ct is the concentration at time t (mg/L), V is the solution volume (L), and m is the adsorbent mass (g).
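The uptake and removal are computed as qt = (C0 − Ct)·V/m and Rt(%) = (C0 − Ct)/C0 × 100. The sketch below evaluates them with the operating values of this study (C0 = 100 mg/L, V = 20 ml, dose 2.5 g/L, i.e. m = 0.05 g) and the residual concentration corresponding to the 88.8% removal reported in the pH study:

```python
# Uptake and removal equations from the text, with the study's operating values.

def uptake_mg_per_g(c0, ct, volume_l, mass_g):
    """q_t = (C0 - Ct) * V / m, in mg of Cr(VI) per g of adsorbent."""
    return (c0 - ct) * volume_l / mass_g

def removal_percent(c0, ct):
    """R_t(%) = (C0 - Ct) / C0 * 100."""
    return (c0 - ct) / c0 * 100

c0, v, m = 100.0, 0.020, 0.05  # mg/L, L, g (2.5 g/L dose in 20 ml)
ct = 11.2                      # mg/L residual, i.e. 88.8% removal as reported
print(round(uptake_mg_per_g(c0, ct, v, m), 2))  # mg/g
print(round(removal_percent(c0, ct), 1))        # %
```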
In order to evaluate the performance and validity of the kinetic and isotherm models, the coefficient of determination (R2) and the chi-square test (χ2 = Σ(qexp − qcal)2/qcal) were used [29], where qcal and qexp are the calculated and experimental adsorption capacities, respectively (mg/g).

Adsorbent Characterisation.

FTIR analysis (Figure 1(a)) revealed that the SCGs have absorption bands typical of lignocellulosic materials. The broadest band of the spectrum, centred at 3441 cm−1, corresponds to the stretching of the O-H bonds [30] of the phenolic compounds composing the waste. The bands at 2934 cm−1 and 2852 cm−1 are attributed to the stretching vibration of aliphatic C-H bonds [30]. The band at 1740 cm−1 is due to the stretching vibration of nonconjugated C=O bonds. These vibrations correspond mainly to the aldehyde, ketone, ester, and carboxylic acid functions of pectin and hemicelluloses and to xanthene derivatives such as caffeine [31,32]. According to the literature [31,33], peaks around 1644, 1465, 1379, and 1242 cm−1 indicate the presence of COO, CO, and COO− groups on the adsorbent surface. The other bands between 1200 and 1000 cm−1 are attributed to the stretching vibration of the C-O bonds of the aromatic compounds and the acetyl and carboxylic acid functions [34].
The TGA curves (Figure 1(b)) show the weight loss of a 20 mg sample of SCGs when heated from 20 to 600°C. The evolution can be divided into three steps. The first begins at about 60°C and corresponds to a slight weight loss of about 10.1% due to evaporation of water (dehydration of the sample). In the second stage (290°C < T < 390°C), the greatest loss of mass occurs: depolymerisation and decomposition of polysaccharides take place, resulting in a weight loss of 50.2%. The third and final stage corresponds to the carbonisation of the SCGs (390°C < T < 600°C), with a weight loss of 26.8%.
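As a quick arithmetic check of these three stages, the residue left at 600°C follows directly from the reported losses (10.1%, 50.2%, and 26.8% of the initial mass):

```python
# Mass balance over the three TGA stages reported above.
sample_mg = 20.0  # initial sample mass used in the TGA runs
losses = {           # % of initial mass lost per stage, from the text
    "dehydration": 10.1,
    "depolymerisation": 50.2,
    "carbonisation": 26.8,
}
residue_pct = 100.0 - sum(losses.values())
print(round(residue_pct, 1))                    # % remaining at 600 C
print(round(sample_mg * residue_pct / 100, 2))  # mg of char/ash remaining
```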
To evaluate the crystallinity of the SCGs, the cellulose spectra from the International Centre for Diffraction Data Database (ICDD) were used as a reference. The SCGs are mainly composed of cellulose, lignin, and hemicelluloses. The latter two polymers being amorphous, the peaks at 15.7° and 22.6° in the X-ray diffraction spectra (Figure 1(c)) are evidence of the crystallinity of the cellulose. These values are characteristic of the (110) and (200) planes of cellulose, respectively [35]. The XRD pattern is similar to that of other lignocellulosic wastes [36,37]. The point of zero charge, pHpzc, is the pH value at which the surface charge of the adsorbent is zero [38]. pHpzc determines the working pH range that favours electrostatic attraction between the adsorbent and the adsorbate. Figure 1(d) shows that the pHpzc of the SCGs is equal to 5.3.
This value is comparable to those reported by other researchers [32,39].
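In the salt-addition method described earlier, pHpzc is read off where ΔpH = pHf − pHi crosses zero. The sketch below locates that crossing by linear interpolation; the (pHi, ΔpH) pairs are hypothetical, chosen so the crossing falls at the reported value of 5.3.

```python
# Locating pH_pzc as the zero crossing of delta_pH vs. pH_i (toy data).

def ph_pzc(ph_i, delta_ph):
    """Interpolate the pH at which delta_pH changes sign."""
    points = list(zip(ph_i, delta_ph))
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        if y1 == 0:
            return x1
        if y1 * y2 < 0:  # sign change between consecutive points
            return x1 + (x2 - x1) * (-y1) / (y2 - y1)
    raise ValueError("no zero crossing in the data")

ph_i = [2.0, 4.0, 5.0, 6.0, 8.0]          # initial pH values (hypothetical)
deltas = [1.6, 0.9, 0.3, -0.7, -2.1]      # measured pH_f - pH_i (hypothetical)
print(round(ph_pzc(ph_i, deltas), 2))
```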
Effect of Solution pH.
The effect of pH on the adsorption of Cr(VI) was studied at an initial concentration of 100 mg/l and an adsorbent dose of 2.5 g/l. Figure 2(a) shows that the maximum elimination corresponds to a pH value of about 4 (88.8%), with decreasing values on either side of this pH. The effect of pH on metal adsorption is strongly related to two main factors: the chemistry of the metal in solution and the ionic state of the surface functional groups [41]. As the pHpzc is equal to 5.3, the surface is positively charged when the pH is below 5.3 and negatively charged when the pH is above 5.3. Furthermore, Cr(VI) exists in solution in different ionic forms (Figure 2(b)). At pH 2-5, HCrO4− ions are predominant in the solution, diffusing and adsorbing more easily and in greater quantities due to the strong attraction exerted by the surface. At pH above 5.3, the surface of the adsorbent becomes negatively charged and there is an electrostatic repulsion, which explains the drop in Cr(VI) removal. Similar patterns have been reported for the adsorption of Cr(VI) on various wastes [42][43][44]. This suggests that adsorption is controlled by electrostatic forces (physisorption).
Effect of Adsorbent Dose.
The effect of adsorbent dose was studied in the range of 0.5 to 7 g/L for an initial Cr(VI) concentration of 100 mg/L at pH 4. The curve in Figure 3 shows that increasing the adsorbent dose increases the Cr(VI) removal rate up to a dose of 2.5 g/L, beyond which it remains unchanged. This plateau may be due to the reduction of the concentration gradient between the Cr(VI) ions on the adsorbent surface and the Cr(VI) ions in the liquid solution. Therefore, the optimal dose is determined to be about 2.5 g/L. Nur-E-Lam [45] studied the removal of Cr(VI) from leather industry wastewater by adsorption on tea leaf waste and showed that 14 g/L is required to adsorb 95.42% of the Cr(VI). In another study, Hakan Çelebi [42] tested the efficiency of three tea wastes (black tea waste (WBT), green tea waste (WGT), and rooibos tea waste (WRT)) in the adsorption of Cr(VI); the experimental results showed that the optimal doses are 1 g/L, 1.5 g/L, and 3.5 g/L for WBT, WGT, and WRT, respectively.
Effect of Contact Time and Temperature.
The results of the effect of contact time on adsorption are shown in Figure 4. At all four temperatures, adsorption proceeds in two stages, the first fast and the second slow. This type of two-phase adsorption has also been reported in other studies [46][47][48]. Equilibrium is reached after 90 minutes at all temperatures.
The curves also show that increasing the temperature is unfavourable to the removal of Cr(VI), indicating that the adsorption is exothermic [49].
Adsorption Kinetics.
Pseudo-first-order (PFO) and pseudo-second-order (PSO) models are commonly used to fit experimental data and to calculate kinetic parameters. Equations (5) and (6) represent the nonlinear forms of the PFO and PSO models, respectively [50]:

qt = qe [1 − exp(−k1 t)] (5)

qt = (k2 qe² t)/(1 + k2 qe t) (6)

where qt (mg/g) and qe (mg/g) are the amounts adsorbed at time t and at equilibrium, and k1 (min−1) and k2 (g·mg−1·min−1) are the rate constants of the pseudo-first-order and pseudo-second-order models, respectively. The two models are plotted in Figure 5, and the values of the kinetic parameters are given in Table 1. A comparison of the values of the error functions χ2 and R2 obtained at all temperatures clearly shows that the pseudo-second-order model is the most suitable for describing the adsorption kinetics. The initial adsorption rate h (mg·g−1·min−1) at each temperature was calculated using the following equation [51]:

h = k2 qe² (7)

Table 1 shows that an increase in temperature leads to an increase in the initial adsorption rate. In agreement with other studies, the adsorption of Cr(VI) on other lignocellulosic wastes also follows pseudo-second-order kinetics [52,53]. The activation energy Ea of Cr(VI) adsorption on the SCGs can be calculated from the Arrhenius equation (8) [54]:

ln(k2) = ln(A) − Ea/(RT) (8)

where k2 is the pseudo-second-order rate constant (g·mg−1·min−1), T is the absolute temperature (K), R is the universal gas constant (8.314 J·mol−1·K−1), A is the pre-exponential factor, and Ea is the activation energy (kJ·mol−1). By plotting ln(k2) versus 1/T (figure not shown), Ea was obtained from the slope of the linear plot and found to be equal to 10.92 kJ·mol−1. The value of Ea gives an idea of the type of adsorption; according to the literature [55], this adsorption is of the physisorption type.
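As an illustrative sketch of how such kinetic fits are typically performed, the nonlinear PFO and PSO models can be fitted with SciPy's curve_fit. The data points and initial guesses below are synthetic assumptions, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Pseudo-first-order: q_t = qe * (1 - exp(-k1 * t))
def pfo(t, qe, k1):
    return qe * (1.0 - np.exp(-k1 * t))

# Pseudo-second-order: q_t = (k2 * qe^2 * t) / (1 + k2 * qe * t)
def pso(t, qe, k2):
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

# Synthetic kinetic data: contact time (min) and uptake (mg/g)
t = np.array([5, 10, 20, 30, 45, 60, 90, 120], dtype=float)
q = np.array([12.0, 18.5, 25.0, 28.5, 31.0, 32.5, 34.0, 34.2])

popt_pfo, _ = curve_fit(pfo, t, q, p0=[35.0, 0.05])
popt_pso, _ = curve_fit(pso, t, q, p0=[35.0, 0.002])

def r_squared(model, popt):
    resid = q - model(t, *popt)
    return 1.0 - np.sum(resid**2) / np.sum((q - q.mean())**2)

# Initial adsorption rate for the PSO model: h = k2 * qe^2 (equation (7))
qe2, k2 = popt_pso
h = k2 * qe2**2
print(r_squared(pfo, popt_pfo), r_squared(pso, popt_pso), h)
```

Comparing the two R² values (and, as in the paper, a χ² error function) is what selects the better kinetic model for a given temperature.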
Adsorption Isotherms.
In order to describe the phenomenon governing the retention of Cr(VI) on the SCGs and to calculate the maximum adsorption capacity, the study of the adsorption isotherm is essential. The adsorption isotherm is the relationship between the quantity adsorbed at equilibrium and the concentration remaining in solution at constant temperature and pH. The experimental results were analysed with the Langmuir, Freundlich, and Temkin models, expressed by the following equations, respectively [56]:

qe = qm KL Ce/(1 + KL Ce)

qe = KF Ce^(1/n)

qe = (RT/b) ln(A Ce)

where qe (mg/g) is the amount of Cr(VI) adsorbed at equilibrium; Ce (mg/L) is the equilibrium concentration; qm (mg/g) is the maximum adsorption capacity; KL (L/mg) is the Langmuir constant; KF ((mg·g−1)(L·mg−1)^(1/n)) and n are the Freundlich constants; A and b are the Temkin constants; R is the universal gas constant (8.314 J·mol−1·K−1); and T is the absolute temperature (K). Figure 6 shows the nonlinear curves of these models, and Table 2 gives the nonlinear regression constants of the three models. According to the table, the Langmuir model has the largest value of R2 and the smallest value of χ2.
This indicates that the Langmuir model is the most adequate for describing the Cr(VI) adsorption equilibrium on the SCGs. Retention therefore occurs on homogeneous adsorption sites, without interaction, and in the form of a monolayer [56]. The maximum Cr(VI) adsorption capacity on the SCGs (qm = 42.9 mg/g) is better than those reported in the literature for other adsorbents, as shown in Table 3. The separation factor RL is characteristic of the Langmuir isotherm, and its value can be determined from KL according to the following equation [61]:

RL = 1/(1 + KL C0)

where KL is the Langmuir constant and C0 is the highest initial Cr(VI) concentration (mg/L). The RL value for Cr(VI) adsorption on the SCGs is 0.014, indicating that the adsorption is favourable (0 < RL < 1).
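A minimal sketch of the nonlinear Langmuir fit and the separation factor RL, using synthetic equilibrium data (the concentrations, uptakes, and initial guesses are illustrative assumptions, not the paper's values):

```python
import numpy as np
from scipy.optimize import curve_fit

# Langmuir isotherm: qe = qm * KL * Ce / (1 + KL * Ce)
def langmuir(Ce, qm, KL):
    return qm * KL * Ce / (1.0 + KL * Ce)

# Synthetic equilibrium data: Ce in mg/L, qe in mg/g
Ce = np.array([2.0, 5.0, 10.0, 25.0, 50.0, 100.0])
qe = np.array([14.0, 24.0, 31.0, 38.0, 41.0, 42.5])

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=[45.0, 0.1])

# Separation factor at the highest initial concentration C0 (mg/L)
C0 = 100.0
RL = 1.0 / (1.0 + KL * C0)
print(qm, KL, RL)  # 0 < RL < 1 indicates favourable adsorption
```

The fitted qm plays the role of the monolayer capacity reported in Table 3, and RL is evaluated exactly as in the equation above.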
Adsorption Thermodynamics.
Temperature is an important factor affecting the adsorption process, and its effect can be explained by evaluating the thermodynamic parameters. These parameters include the Gibbs free energy change (∆G°), the standard enthalpy change (∆H°), and the standard entropy change (∆S°), and they were calculated using the following equations [53]:

∆G° = −RT ln(Kd)

ln(Kd) = ∆S°/R − ∆H°/(RT)

where Kd = qe/Ce is the distribution constant, R is the universal gas constant (8.314 J/mol·K), and T is the absolute temperature. The thermodynamic study was carried out at 25, 30, 40, and 50°C. The tests were performed on mixtures of 20 mL of solution at a concentration of 100 mg/L and pH 4 with 2.5 g/L of SCGs. The values of ∆H° and ∆S° were determined from the slope and intercept, respectively, of the plot of ln(Kd) versus 1/T (Figure 7). These values are collected in Table 4. The negative value of ∆G° at all temperatures indicates that the adsorption of Cr(VI) on the SCGs is spontaneous. The negative value of ∆H° confirms that the adsorption is exothermic, while the positive value of ∆S° reflects the increase in disorder at the solution-solid interface during adsorption [62]. The results of many studies in the literature are consistent with the present study [36,63].
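A hedged sketch of the van 't Hoff evaluation described above. The Kd values are synthetic stand-ins chosen to decrease with temperature (consistent with exothermic adsorption), not the data of Table 4:

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

# Synthetic distribution constants Kd = qe/Ce at four temperatures (K)
T = np.array([298.0, 303.0, 313.0, 323.0])
Kd = np.array([8.0, 6.5, 4.5, 3.2])  # decreasing with T -> exothermic

# van 't Hoff: ln(Kd) = dS/R - dH/(R*T); fit ln(Kd) vs 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(Kd), 1)
dH = -slope * R           # J/mol; negative -> exothermic
dS = intercept * R        # J/(mol K)
dG = -R * T * np.log(Kd)  # J/mol at each temperature; negative -> spontaneous
print(dH, dS, dG)
```

With these synthetic Kd values the fitted ∆H° comes out negative and every ∆G° is negative, mirroring the spontaneous, exothermic behaviour reported in the text (the ∆S° sign depends entirely on the assumed data).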
Regeneration Studies.
The desorption study helps confirm the adsorption mechanism and the possibility of reusing the adsorbent and recovering the adsorbate. The reversibility of the sorption depends on the nature of the bond established between the adsorbent and the adsorbate. The regeneration potential of the SCGs was tested using three different eluents: distilled water (H2O), NaOH (0.5 M), and HCl (0.5 M). A mass of SCGs (2.5 g) was mixed with 1 L of Cr(VI) solution at a concentration of 100 mg/L under stirring at pH 4 for 6 hours. After the adsorption experiment, the SCGs were collected by centrifugation at 3000 rpm for 20 min and dried in an oven for 12 h at 105°C. Afterwards, the SCGs were transferred to the different eluents. Desorption was carried out for 120 min with stirring (200 rpm). Consecutive adsorption-desorption cycles were repeated 4 times with each of the three eluents. The results are shown in Figure 8. It is clear that HCl is the most efficient eluent for the regeneration of the SCGs adsorbent. The loss in adsorbent efficiency between the first and fourth cycles can be attributed to the degradation of the material under these extremely acidic conditions [60] and to the progressive blocking of the active sites by impurities from the untreated adsorbent. Furthermore, the easy desorption of Cr(VI) shows that the adsorbent-adsorbate bond is weak and confirms that the adsorption is of the physisorption type.
Cost Estimation.
Cost estimation of the biosorbent (SCGs) was made based on the methodology reported in recent works [64]. The estimate indicates that SCGs are markedly cheaper than commercial activated carbon for an equivalent treatment [64]. Moreover, the regeneration of this adsorbent is easy and can be done by a simple acid wash, since the adsorbent-adsorbate interactions are mainly of a physical nature.
Conclusion
This work highlights a new inexpensive adsorbent for the removal of Cr(VI) from aqueous media. The characterisation of the SCGs showed that they have a structure typical of lignocellulosic materials and a surface rich in functional groups that serve as adsorption sites. The parameters influencing adsorption were studied, and the results show that 2.5 g/L of SCGs is sufficient to remove 88.8% of the Cr(VI) from a solution with an initial concentration of 100 mg/L, at pH 4 and a temperature of 298 K. The experimental data showed good agreement with the Langmuir isotherm and the pseudo-second-order kinetic model. The thermodynamic study indicated that the retention of Cr(VI) on the SCGs is feasible, spontaneous, and exothermic in nature. The activation energy value (Ea) showed that the adsorption is physical in nature.
This was confirmed by the easy regeneration of the SCGs with HCl. Finally, a cost estimate showed that SCGs are 15 times more economical than activated carbon. Taking all these results into account, it can be concluded that SCGs can be considered an economical alternative to the more expensive adsorbents used for the removal of Cr(VI) in wastewater treatment processes.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Saline-induced changes of epicuticular waxy layer on the Puccinellia tenuiflora and Oryza sativa leave surfaces
Background The epicuticular waxy layer of plant leaves enhances tolerance of extreme environmental stress. However, the relationship between the waxy layer and saline tolerance has not been well established. The epicuticular waxy layer of rice (Oryza sativa L.) was studied under NaHCO3 stress. In addition, the strongly saline-tolerant Puccinellia tenuiflora was chosen for comparative studies. Results Scanning electron microscope (SEM) images showed significant changes in the waxy morphologies of the rice epicuticular surfaces, but no remarkable changes on the P. tenuiflora epicuticular surfaces. The NaHCO3-induced morphological changes of the rice epicuticular surfaces appeared as enlarged silica cells, swollen corn-shapes, and leaked salt columns under high stress. Energy dispersive X-ray (EDX) spectroscopic profiles supported that the changes were caused by significant increment and localization of [Na+] and [Cl−] in the shoot. Atomic absorption spectra showed that [Na+]shoot/[Na+]root for P. tenuiflora remained stable as the saline stress increased, whereas that for rice increased significantly. Conclusion In rice, NaHCO3 stress induced localization and accumulation of [Na+] and [Cl−], appearing as enlarged silica cells (MSC), swollen corns (S-C), and leaked columns (C), while no significant changes occurred in P. tenuiflora.
Background
Soil salination has become an important factor restricting agricultural development across the globe. Saline soil makes up 37 % of the world's arable land [1]. Saline regions in China are mostly composed of Na2CO3 and NaHCO3. To date, researchers have focused salt-tolerance studies on NaCl, but rarely on alkali salts. The threats posed by alkali salts are much more complex and destructive to the ecosystem than those posed by neutral salts [2].
The waxy layer that covers plant surfaces plays an important role as natural packaging, serving as the first barrier protecting plants against threats from the external environment [2,3]. The waxy layer helps protect plants against stresses such as non-stomatal water loss, insect intrusion [4], bacterial invasion, ultraviolet radiation, and frost [5]. This natural wax mechanism brings new insight not only for environmental and agricultural applications but also for industrial applications in biomimetic packaging. The waxy surface varies from plant to plant [6][7][8]. The wax content of a plant is determined not only genetically but is also influenced by the environment, as environmental factors affect the biochemical process of wax synthesis. A correlation between epidermal wax deposits and drought tolerance has been found in various plants [8,9]. However, there has been little research on the relationship between plants' waxy layer and their saline tolerance [10].
P. tenuiflora is a perennial grass of the Gramineae family with extremely strong saline tolerance, and it is used as a pioneer plant in the improvement of saline soils. Whether the waxy surface of P. tenuiflora leaves acts as a salt-secreting structure has long been a controversial issue [10]. The excess salt in P. tenuiflora could be discharged through the formation of the waxy layer. However, it is unclear how exactly the changes of the waxy layer respond to different degrees of saline stress.
Rice (O. sativa L.) is one of the most widely consumed foods and the second most produced food crop in the world. Rice has a medium saline tolerance. The epicuticular surface of rice shoots is composed of epidermis cells, stomatal guard cells, trichomes, and wart-like protuberances (silica cells) with crystalline wax layers. There is a salt-tolerant wild rice that can discharge excess salts, but no cultivated rice can [11].
In this work, we selected P. tenuiflora and rice for further study of the dynamic characteristics of the epicuticular waxy formations under different exposures to NaHCO3 stress. The changes in the waxy ornamentation of the epicuticular surfaces of P. tenuiflora and rice leaves under NaHCO3 stress were visualized using environmental scanning electron microscopy (ESEM), and their chemical composition was analyzed by X-ray diffraction (XRD). The relationship between the epicuticular waxy layer and saline tolerance was explored based on these observations.

Figure 1 shows typical ESEM images of the epicuticular surfaces of rice and P. tenuiflora leaves. The epicuticular surface of rice leaves (Fig. 1a and b) contained epidermis cells (EC), stomatal guard cells (GC), stomatal subsidiary cells (SbC), trichomes (TC), and wart-like protuberances (silica cells, SC). Crystalline wax covered the epicuticular surfaces; the wax crystals appeared randomly distributed over the surfaces (Fig. 1b), with no specific orientation, their planes standing at acute angles to the epicuticular surface. The random orientation of the small crystalline waxes formed micro-networks. The heights of the platelet wax crystals were less than 0.2 μm. There was no noticeable difference in the crystalline wax layers on the EC, GC, SbC, and SCs, but there was none on the TC. Fig. 1c and d show similar wax crystals on the epicuticular surface of the P. tenuiflora leaves. There appeared to be two differently sized crystals in the waxy networks (Fig. 1d): smaller crystals formed denser networks within the networks formed by bigger crystals. There was no distinction in waxy morphology between the EC, GC, and SbC, and there were no SC or TC on the P. tenuiflora leaves. The density of the wax crystal networks in P. tenuiflora was higher than that in rice.
NaHCO 3 stress induced changes of rice epicuticular surfaces
As NaHCO3 stress increased, the epicuticular morphologies of the rice samples changed (Fig. 2). Figure 2b shows that wart-like protuberance silica cells merged and enlarged into big protuberances (MSC). The distribution density of the wax crystal networks decreased, and the networks disappeared on the apex surfaces of the MSC. At 100 mM NaHCO3 with 7 days of exposure, leaked columns (C) and/or swollen corn-shapes (S-Cs) appeared on the surface (Fig. 2c). Interestingly, wax crystals remained on the surfaces of the S-Cs. The diameter of the leaked columns was 2~5 μm, while the size of the swollen corns was bigger than 10 μm. A cracked side view of the S-Cs revealed cubic crystals (arrow in Fig. 2d), indicating NaCl crystals. There were also solidified particles underneath the cell wall (marked number 6 in Fig. 2d).
EDX element analysis of the epicuticular surface of rice leaves
EDX microanalysis spectra were obtained from the different spots over the epicuticular surfaces marked with numbers in Fig. 2. For the control rice leaf surfaces, there were no significant differences among the EDX spectra obtained from the SbC, EC, and SC. C and O elements dominated (Fig. 3a and b). Traces of other elements, including gold, were also detected; the relatively high gold peak came from the gold coating. The level of silicon accumulation was low on both the epidermal region and the silica cells. For the NaHCO3-exposed rice, there were significant changes in the Na and Cl counts at the points on the merged and enlarged silica cells (MSC); Na and Cl on the MSC were counted as 12.5 % and 0.3 %, respectively. EDX spectra from the localized swollen corn surfaces showed that the concentrations of Na and Cl were 20~30 times higher than those from the normal control surfaces. The particles underneath the cell wall also showed high counts of Na and Cl (Fig. 3f). The cubic crystals on the cross-section surface of the swollen corn indicated NaCl crystals. At the higher saline stress, condensed Na and Cl leaked through the ruptured surfaces to form the NaCl columns.
There was more Cl− than Na+ on the swollen corns, while there was more Na+ than Cl− on the leaked columns. We scanned over the surfaces to visualize the morphology-dependent Na+ distribution using ESEM. Figure 4 shows Na and K contour maps over the surfaces after 7 and 9 days of exposure to 100 mM NaHCO3. Compartmented Na+ was found underneath the epicuticular surfaces, but no K+. The surface morphologies over the high Na+ accumulations were different from those over the control surfaces. It seemed that the degree of Na+ accumulation was associated with the morphological changes of the epicuticular surface.
Absorption comparison of cytosolic Na+ and K+ in rice and P. tenuiflora
Plants usually balance at a low cytosolic [Na+] and a cytosolic [K+]/[Na+] > 1 [12]. Figure 5 shows that Na+ influx from the high external [NaHCO3] altered the [K+]/[Na+] in rice. The Na+ distribution ratio of shoot to root for rice increased significantly from 0 mM to 150 mM NaHCO3 stress, appearing as [Na+]shoot/[Na+]root > 1 (Fig. 5a). It seems the Na+ ions absorbed by the root were transported to the shoot. Consequently, the [K+]/[Na+] ratios in rice shoots decreased gradually to below 1 (Fig. 5b), and the transported Na+ ions accumulated to toxic levels in the rice shoot. At 200 mM NaHCO3 stress, [Na+]shoot dropped dramatically (# marked in Fig. 5a and b). This [Na+] decrement may have been caused by a dysfunctional rice plant (yellowish shoot) due to high toxicity. The localized NaCl swollen corn shapes and columns, formed by rupturing and/or leaking of highly accumulated NaCl, were correlated with the decreased cytosolic [Na+] at extremely high NaHCO3 stress.
For P. tenuiflora, the Na+ concentrations of both root ([Na+]root) and shoot ([Na+]shoot) were always well balanced at a very low level. This stable [Na+]root/[Na+]shoot indicated that the external saline stress barely affected P. tenuiflora, in contrast to rice. Figure 5b shows that the ratio of [K+]/[Na+] decreased for both P. tenuiflora and rice, but for P. tenuiflora [K+] remained twice as high as [Na+] as the NaHCO3 stress increased, while [K+] became almost 5 times lower than [Na+] for rice.
[Fig. 3 caption fragment: (e) C under 100 mM NaHCO3 for 7 days of exposure (#5 spot in Fig. 2c), and (f) the solidified particles underneath the cell wall under 200 mM NaHCO3 for 5 days of exposure (#6 spot in Fig. 2d).]
Surface morphology and EDX profiles for P. tenuiflora
Figure 6 shows the epicuticular morphologies of P. tenuiflora with EDX characterization. Interestingly, the epicuticular surface morphology of P. tenuiflora showed no remarkable changes after experiencing NaHCO3 stress (Fig. 6). The morphology of the waxy crystal network on the P. tenuiflora epidermis surfaces was similar to that on the control rice leaf surfaces. Even at high NaHCO3 stress (150 mM for 21 days of exposure), the wax and surface morphologies showed no remarkable changes, and the EDX profiles likewise showed no remarkable changes in the element concentrations, including Na+ and K+.
Discussion
The waxy layer on the leaf surfaces prevents both molecular uptake and efflux. For both rice and P. tenuiflora, crystalline wax was distributed randomly to form micro-networks over the epicuticular surfaces. There was no distinction in the structure and density of the wax crystal networks between the different types of epicuticular cells, but there was no wax crystal network coverage on the TC surfaces (Fig. 1a).
As both rice and P. tenuiflora were exposed to NaHCO3 stress, no noticeable changes of the crystalline wax networks were observed. It seemed that the external NaHCO3 stress did not alter the wax synthesis metabolism. However, the surface morphologies of rice leaves changed significantly as Na+ localization increased. The surface deformation might be caused mainly by Na+ accumulation under NaHCO3 stress, appearing as protruded surface bands, MSC, C, and S-C in rice. The observed C and S-C on the disrupted waxy surfaces of the rice epidermis were composed mainly of NaCl. In addition, the excessive Na+ could be toxic to plant metabolism, affecting development and growth.
Under NaHCO3 stress, atomic absorption spectra showed no noticeable increment of [Na+]root for either rice or P. tenuiflora. It seemed that the root had the capability to limit Na+ accumulation. However, for rice, [Na+]shoot increased as the external salt stress increased, in terms of both exposure time and NaHCO3 concentration. The Na+ ions absorbed by the root seemed to be transported to the shoot and accumulated in the tissue cells of the shoot. Excessive Na+ in the rice shoot caused the increment of [Na+]shoot/[K+]shoot, which was toxic to its growth. Consequently, dysfunction of rice under NaHCO3 stress began not from the root but from the shoot. In general, high [Na+]shoot in halophytes implies compartmentation into the vacuole to maintain ion homeostasis. Our results showed compartmented Na+ over the epicuticular surfaces of NaHCO3-treated rice, but no K+ compartmentation (Fig. 4). However, it was not clear whether the compartmentations were in the vacuole. EDX element analysis showed no homogeneous [Na+] enhancement over the epicuticular surface of rice leaves as NaHCO3 stress increased. Initially, wart-like protuberance silica cells were enlarged and merged into big protuberance silica cells (MSC) (Fig. 2b). This manner of morphological change is very similar to that under silicon treatments due to the accumulation of silicon [13,14]. Silicon is predominantly deposited in the wart-like protuberance silica cells of the epidermis. Our X-ray microanalysis spectra showed that Na was highly present on the enlarged/merged silica cells (Fig. 3c), with low Na X-ray counts around the stomatal guard cell areas. It seemed that excessive Na accumulated in the silica cells in a manner similar to Si accumulation.
Further enhanced NaHCO3 stress induced the swollen corn-shaped (S-C) dumps and/or columns (C) on the epicuticular surface (Fig. 2c). EDX element spectra showed that these localized morphologies contained mostly NaCl.
[Fig. 6 caption: ESEM images of P. tenuiflora leaf surfaces after 0 mM, 100 mM NaHCO3 for 3 days, and 150 mM NaHCO3 for 21 days; EDX spectra over the different spots (marked boxes on the ESEM images) showed no significant changes.]
The cubic crystals that appeared on the cross-section surface were confirmed to be NaCl crystals (arrow mark in Fig. 2d). Interestingly, the surface of the swollen (S-C) NaCl localization was covered with the wax crystal networks, but that of the NaCl column (C) was not. This suggests that the swollen localizations were formed slowly, without disrupting the wax crystalline networks. The holes in the surface of the swollen localization indicated the remaining silica cells without swelling. The NaCl columns, in contrast, were formed by NaCl leakage from the silica cells. Highly accumulated NaCl was also observed underneath the cell wall where the swollen localizations were found (Figs. 2d and 3f). The localization or secretion of highly accumulated NaCl on the epicuticular surfaces may correspond to the sudden recovery of [Na+]shoot/[K+]shoot at 200 mM NaHCO3 shown in Fig. 5.
Na accumulation dominated in the enlarged silica cells, while both Na and Cl dominated in the swollen corns and columns. There were noticeable differences in the element composition: Na > Cl for the leaked NaCl columns and Na < Cl for the swollen NaCl. This difference may be correlated with the different manners of NaCl secretion. A portion of the free Na+ accumulated in the intra-cuticular cell wall, and the NaCl crystals were excreted in the form of columns and swollen dumps. It seemed that the NaCl secretion occurred after dysfunction. It would be a great challenge to trigger NaCl secretion before dysfunction in order to enhance the salt tolerance of rice leaves.
The results obtained from atomic absorption spectroscopy showed that the highly concentrated Na+ ions on the rice leaves might be transported from root to shoot. For rice, the transported Na+ ions accumulated in the shoot, appearing as an increment of [Na+]shoot with [Na+]shoot/[Na+]root > 1. For P. tenuiflora, we did not observe secreted Na+ ions on the surface of its leaves, but there might be some mechanism to maintain ionic homeostasis, with [Na+]shoot/[K+]shoot < 1.
Conclusions
With the increase in NaHCO3 stress concentration and exposure time, there were no significant changes in the morphology of the waxy crystal networks for either rice or P. tenuiflora epidermis; however, the epicuticular morphology of rice leaves altered dramatically. MSC, S-C, and C appeared as the NaHCO3 stress increased. These new morphologies were correlated with the Na+ and Cl− accumulations.
Plant cultivation and treatment
P. tenuiflora and O. sativa L. cv. Nipponbare rice were cultivated in hydroponics at a temperature of 25~28°C, light exposure of 6000 lx, a photoperiod of 16/8 h (day/night), and relative humidity of 60 %. The water was changed every 5 days. During cultivation, 1/4-strength Hoagland solution was used for 2 to 3 days, and distilled water was used for the rest of the time.
Ninety P. tenuiflora and rice seedlings at the trefoil stage were selected and divided randomly into three groups of 30 each. The roots were washed with distilled water. The seedlings were then placed under 0 mM, 50 mM, 100 mM, 150 mM, and 200 mM NaHCO3 stress for 1, 3, 5, 7, 9, and 21 days for P. tenuiflora, and under the same NaHCO3 concentrations for 1, 3, 5, 7, and 9 days for rice.
ESEM and EDX observations
The middle sections of the second true leaves of P. tenuiflora and rice seedlings were taken randomly from the treatment and control groups. They were cut into 3~5 mm segments and quickly fixed in 3 % glutaraldehyde. The dehydrated samples were dried in a vacuum dryer and coated with fine gold particles using a sputter coater (SCD005, Bal-Tec GmbH, Germany). The epicuticular surfaces of the leaves were then visualized with an environmental scanning electron microscope (ESEM, Quanta-200, FEI Co., USA). The wax composition and epicuticular chemical composition were recorded by EDX during ESEM imaging. X-rays were collected with a detector at a takeoff angle of 30°.
[Na + ] and [K + ] measurements
Saline stress was applied at 50 mM, 100 mM, 150 mM, and 200 mM NaHCO3. Sample groups were cultivated for 5 days, after which the sample materials were removed from the stress solution and washed twice with distilled water to remove surface salt ions. The prepared shoots and roots were placed on dry filter paper to absorb moisture and dried in an oven at 105°C for 10 min. The dried samples were ground and digested in 10 mL of nitric acid and 1 mL of perchloric acid. [Na+] of root and shoot were measured using a 220FS atomic absorption spectrophotometer (Varian, USA).
Evaluation of the Percolation Sensitivity of Loose Sandstone Using Digital Core Technology
Methods: We take the core of a loose sandstone gas reservoir as the research object and begin by scanning the core samples with a CT scanner. A three-dimensional image of the core is obtained, the digital information is extracted, the pore structure of the porous medium is mapped directly to a network, and a digital core is established using the principles of fractal geometry. The three-dimensional pore network model is then extracted. Next, the model results are compared against and corrected with real core experimental results, yielding an objective and effective digital core model.
BACKGROUND
Traditional rock sensitivity research primarily depends on laboratory flooding experiments, covering velocity sensitivity, salt sensitivity, acid sensitivity, alkali sensitivity, and stress sensitivity. In some special cases, however, the limitations of sampling and experimental conditions make the experimental process and results impossible to achieve. In recent years, with the development of computer science and high-resolution image processing technology [1-3], digital core technology has gradually been introduced into physical reservoir research, resulting in a new research method [4,5]. Using a 3D digital core that reflects the real rock pore space, reconstructed from the rock microstructure [6-8], the research scope covers microscopic percolation mechanisms, core displacement simulation experiments, prediction of macroscopic core conductivity, evaluation of oil displacement effects, production dynamic simulation of reservoirs, and determination of the boundaries of oil and gas field development technology policies [9]. Compared with traditional physical rock experiments, digital core samples are readily available, fast, and low-cost, and they make it possible to quantitatively evaluate rock properties and calculate physical quantities that are difficult to measure in conventional physical experiments [10]. Currently, digital cores have been widely used for carbonate reservoirs, shale gas, and tight oil and gas reservoirs [11-15] with good results.
A loose sandstone gas reservoir is characterized by loose, easily hydrated rock [16]. It is difficult to core during drilling, and the rock stress sensitivity is pronounced. A small number of samples cannot reflect the real downhole situation, given the high clay content, fine particle size, easily slurried core samples, and sand production during displacement. Therefore, as conventional displacement experiments are difficult to carry out, it is difficult to evaluate the sensitivity of the rock under coring conditions accurately, which increases the difficulty of developing gas reservoir development technology policy [17]. As digital core technology has been successfully applied to all types of complex hydrocarbon reservoirs, it is introduced here for the physical analysis of a loose sandstone gas reservoir, to examine the sensitivity of the rock through the establishment of a digital core of loose sandstone.
It is difficult to prepare a loose sand sample, and the sample tends to be destroyed during the experiment, so conventional sensitivity tests cannot be completed. A digital core avoids these problems. At the same time, evaluating the percolation sensitivity of loose sandstone with digital core technology supports reasonable control of the production pressure drawdown and the study of water invasion.
Setup of Digital Core
The core sample was taken from the SeBei No. 2 gas reservoir in the Qaidam Basin, Qinghai Province, China; it belongs to the Quaternary Pleistocene series, at a depth of 1320.16 m to 1327.24 m. X-ray diffraction analysis shows the core consists of mudstone and silty mudstone, containing on average 48.7% clay and 32.9% silt, and most of the constituent grains are finer than silt. The absolute clay content ranges from 31.1% to 92% in the mudstone and from 13.5% to 31.1% in the sandstone. The clay minerals include 36.7% illite, 10.31% chlorite, and 2.48% mixed-layer illite-smectite. The illite-smectite mixed layer damages the reservoir when it swells in water; in addition, kaolinite, illite, and chlorite disperse and migrate on contact with water and plug the smaller pore-throat channels. First, the core sample is scanned with a CT scanner to obtain a three-dimensional image of the core and extract its digital information; the pore structure of the porous medium is mapped directly to a network, and a digital core is established using the principles of fractal geometry. A three-dimensional pore network model is then extracted and used to calculate percolation parameters under different stress states and water cut conditions, focusing on permeability, relative permeability, and capillary pressure. The workflow for constructing the digital core model is shown in Fig. (1).
Core Scanning
A CT scanner is used to scan the full-diameter core. The rock sample is 14.5 cm long, and the cored interval is 1320.16-1327.24 m. The core is loose; based on the log interpretation, the clay content is 42.04%, the porosity is 28.42%, the permeability is 9.87×10⁻³ μm², the gas saturation is 45.20%, and the irreducible water saturation is 54.53%. The core sample is shown in Fig. (2).
Image Digitization
A Micro-XCT scanner is used to scan the sample and construct the digital core. The process includes rock sample scanning, image reconstruction, filtering, and image segmentation. From the CT output images, a 1.5 cm × 1.5 cm area is chosen as the image analysis element; the pixels are converted, an isosurface is set up, and stacking the slices forms a true three-dimensional digital core. The process is shown in Fig. (3).
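Segmentation turns the grey-level CT slices into a binary pore/grain image. The paper does not name its thresholding method, so as an illustration the sketch below uses Otsu's classic global threshold (maximizing between-class variance); which phase is dark depends on the scan, and here dark voxels are taken as pore.

```python
def otsu_threshold(pixels):
    """Pick a global grey-level threshold by Otsu's method, i.e. the
    level that maximizes the between-class variance of the histogram.
    `pixels` is a flat list of 8-bit grey values (0..255)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = 0.0            # cumulative intensity of the low (background) class
    w_b = 0                # low-class pixel count
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b
        m_f = (sum_all - sum_b) / w_f
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(image, threshold):
    """Mark voxels at or below the threshold as pore (1), else grain (0);
    the dark-is-pore convention is an assumption about the scan."""
    return [[1 if px <= threshold else 0 for px in row] for row in image]
```

On a strongly bimodal histogram the threshold lands between the two modes, after which the binary image can be stacked slice by slice into the 3D digital core.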
Porosity Modelling
The topological structure of the pore space is characterized with the maximum ball algorithm. First, the largest ball that fits in the pore space is found and placed at the pore space center. Second, the largest balls are defined as pores and the remaining balls as pore-throat channels, and a network grid is built to connect and accurately depict the pore space. Third, the characteristic parameters of every element, such as radius, volume, and shape factor, are counted. Finally, the pore network model is constructed with a network extraction algorithm, serving as the platform for multiphase flow research.
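As a toy illustration of the algorithm's first step, the largest inscribed ball of a binary pore image can be found by brute force from distances to the solid phase; production codes work on full 3D CT volumes with fast distance transforms, so this 2D sketch (our own simplification) only shows the idea.

```python
import math

def max_inscribed_ball(pore):
    """pore: 2D list of 0/1 values (1 = pore, 0 = grain).  Returns
    (radius, (row, col)) of the largest disc fully inside the pore
    phase -- brute force, intended only for tiny demonstration grids."""
    n, m = len(pore), len(pore[0])
    solid = [(r, c) for r in range(n) for c in range(m) if not pore[r][c]]
    best = (0.0, None)
    for r in range(n):
        for c in range(m):
            if pore[r][c]:
                # distance to the nearest grain voxel bounds the ball radius
                d = min(math.dist((r, c), s) for s in solid) if solid else float("inf")
                if d > best[0]:
                    best = (d, (r, c))
    return best
```

Repeating this search on the remaining uncovered pore voxels yields the hierarchy of balls that the algorithm then classifies into pores and throats.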
The basic parameters of the pore network can be extracted from the digital core, including the network model size, pore-throat ratio, pore-throat radius distribution, coordination number, and shape factor, as shown in Table 1; the pore network model is then used to calculate the gas-water relative permeability curves and capillary pressure curves.
PERMEABILITY MODEL
The absolute permeability of loose sandstone is influenced by the pore-throat radius, the coordination number, and the pore shape factor, so these parameters are chosen as the main parameters of the permeability model.
Curve fitting of the data in Figs. (4 & 6) yields an exponential relation and the data model; Fig. (6) shows the relationship between the average pore-throat ratio and the fitting coefficient.
Based on the analysis of how the pore structure parameters influence absolute permeability, the absolute permeability calculation model is constructed, where C_a is the average coordination number (dimensionless), r is the throat radius (μm), G_a is the pore-throat shape factor (dimensionless), a is a constant coefficient determined by the core (μm⁻²), and b is the absolute permeability index (dimensionless).
According to the fitting, the relation between C_a and S_wi can be expressed as follows, and the average pore-throat shape factor G_a and the formation factor F follow a linear relationship.
Compared with the Timur, Dziuba, and SDR models, the results of our permeability model are more accurate, as shown in Fig. (7).
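Models of this kind are typically calibrated by least squares in log-log space. The sketch below fits a generic power law k = a·x^b; it is illustrative only, since the paper's own predictor combines C_a, r, and G_a and its exact form is not reproduced in the extracted text.

```python
import math

def fit_power_law(x, k):
    """Least-squares fit of k = a * x**b, done linearly in log-log
    coordinates.  `x` is the (hypothetical) combined pore-structure
    predictor and `k` the measured permeabilities."""
    lx = [math.log(v) for v in x]
    lk = [math.log(v) for v in k]
    n = len(x)
    mx, mk = sum(lx) / n, sum(lk) / n
    # slope of the log-log regression line gives the exponent b
    b = sum((u - mx) * (v - mk) for u, v in zip(lx, lk)) \
        / sum((u - mx) ** 2 for u in lx)
    # intercept recovers the prefactor a
    a = math.exp(mk - b * mx)
    return a, b
```

Model comparison (against Timur-, Dziuba-, or SDR-style correlations) then reduces to computing each model's prediction error on the same core data.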
Relative Permeability Model
The relationship between the gas-phase relative permeability K_rg and the water saturation S_w of the SeBei loose sandstone at different pore-throat ratios can be fitted with a quadratic relation (5), where A_g and B_g are the gas-phase coefficients (Fig. 7). The relationship between the water-phase relative permeability K_rw and S_w at different pore-throat ratios can likewise be fitted with a quadratic relation (6), where A_w and B_w are the water-phase coefficients (Fig. 5). Because the saturation is less than 1, and A_g is the coefficient of the squared term of the quadratic while B_g is the coefficient of the linear term, the influence of B_g on the gas-phase relative permeability is greater than that of A_g. The relationship between the gas-phase coefficient B_g and the pore-throat ratio p satisfies an exponential function, as shown in Fig. (6). The relation between K_rg and S_w of the SeBei loose sandstone satisfies a quadratic function, as does the relation between K_rw and S_w (Fig. 7). Based on this analysis, the relative permeability model is constructed: the gas-phase coefficient B_g and the water-phase coefficients A_w and B_w can each be expressed as a linear combination of the pore structure parameters (8).
CORE MODEL CALIBRATION
After the digital core model is established (Figs. 9-11), it must be further corrected and tested to determine whether it objectively reflects the true percolation characteristics of the reservoir rock. Accordingly, we compare a small amount of displacement experiment data with the calculated digital core results.
From the experimental results on core displacement permeability under different effective stress conditions (Fig. 12), when the effective stress increases from 4.16 MPa to 10.84 MPa, the permeability decreases by 43.1%, and when the effective stress increases from 10.84 MPa to 24.10 MPa, the permeability decreases by 66.7%. The permeability of the digital core decreases by 42.9% and 64.7%, respectively, over the same two pressure changes, a very similar result. Comparing the permeability changes under different clay content conditions, the error between the digital core and the real core stays within 10% (Fig. 13), which is small. The digital core parameters, such as the clay content, pore-throat radius, pore-throat ratio, shape factor, and coordination number, are adjusted, and the gas-phase relative permeability K_rg and water-phase relative permeability K_rw are calculated at different water saturations S_w; the relative permeability curves of the real core and the digital core are then matched until they essentially coincide (Fig. 14). At this point the percolation characteristics of the digital core and the real core are essentially consistent, indicating that the established loose sandstone digital core is highly reliable and can be used to study the sensitivity of loose sandstone.
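The calibration check reduces to comparing relative permeability losses between the lab core and the digital core; a sketch of the arithmetic (function names are ours). With the reported figures, the lab core loses 43.1% and the digital core 42.9% over the first stress step, a gap of only 0.2 percentage points.

```python
def percent_decrease(k_before, k_after):
    """Permeability loss over a stress step, in percent."""
    return 100.0 * (k_before - k_after) / k_before

def calibration_error(lab_pct, digital_pct):
    """Gap between the lab and digital-core losses, in percentage points;
    the paper accepts the calibration when this stays within 10."""
    return abs(lab_pct - digital_pct)
```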
Fig. (14). Comparison of real core and digital core relative permeability curves.
Stress Sensitivity Evaluation
Due to the stress sensitivity of loose sandstone reservoirs, the porosity and permeability decrease in response to formation pressure changes, which influences the percolation behavior of the fluid flow and changes the shape of the relative permeability curve [18].
In evaluating the stress sensitivity of loose sandstone with the digital core, in addition to the network model input parameters used as initial values, we must also input the initial porosity and permeability of the core, the proportions of the different interface shape factors, the half-angle values inside the triangular interfaces, the stress sensitivity index, and other parameters.
First, the changes in the input parameters and the effective stress are determined. The dynamic stress sensitivity model is then used to calculate the reservoir porosity and permeability after the pressure change.
Then, using the porosity and permeability combined with the characteristic parameters of the dynamic model, we calculate the characteristic parameters after the pressure change: the average pore-throat radius, average shape factor, average coordination number, and average pore-throat ratio (Table 2). Finally, the pore network model program is used to calculate the relative permeability curve after the pressure change (Fig. 15). The results show that as the effective stress increases from 2 MPa to 20 MPa, the rock is steadily compressed, the pore-throats narrow, the capillary pressure grows, the two-phase percolation region narrows, and the relative flowability of the non-wetting phase increases by more than 43.2%, while the relative flowability of the wetting phase decreases by more than 43.2%. Accordingly, in actual gas reservoir development, as the reservoir pressure drops the effective stress increases and gas production falls significantly because of rock compression. The drawdown should therefore be slowed to achieve a balanced exploitation mode during depletion.
Sensitivity Assessment of Formation Water Damage
After water invades a loose sandstone gas reservoir, prolonged contact between the water, the water-sensitive clay minerals, and the sand causes the clay minerals to swell, detach, and migrate, reducing the permeability of the rock [19]. The soaking time of the formation affects the degree of damage to the rock permeability.
In evaluating formation water damage with the digital core, in addition to the network model input parameters used as initial values, we must also input the initial porosity and permeability of the core, the proportions of the different interface shape factors, the half-angle values inside the triangular interfaces, the clay content, the different water soaking times, the plasticity index, and so on.
First, the input parameters and the different water soaking times are determined, and the clay content is used to calculate the clay expansion coefficient; combined with the dynamic clay expansion model, the reservoir porosity and permeability are calculated for the same clay content and different water soaking times. Then, using the porosity and permeability combined with the characteristic parameters of the dynamic model, the characteristic parameters after the change are calculated: the average pore-throat radius, average shape factor, average coordination number, and average pore-throat ratio (Table 3). Finally, the pore network model program is used to calculate the relative permeability curve after the change (Fig. 16).
Table 3 reports, for each contact time (days), the average pore-throat radius (μm), average shape factor (dimensionless), average pore-throat ratio (dimensionless), and average coordination number (dimensionless). The results show that as the water immersion time of the rock varies from 10 days to 30 days, the clay in the sandstone gradually expands, the pore-throats narrow, the capillary pressure grows, the two-phase percolation region narrows, and the relative flowability of the non-wetting phase increases by more than 5.7%, whereas the relative flowability of the wetting phase decreases by more than 5.7%. When the core is in prolonged contact with water, up to 300 days, the change in the pore structure parameters is very small. This result indicates that gas well productivity in the SeBei field would decline significantly after water invasion and soaking, so we must take effective waterproofing, water control, and flood control measures during development.
CONCLUSION
1. Using CT scanning and digital image processing technology, a digital core model of the loose sandstone in the SeBei gas reservoir is established. After correction of the digital core model, comparison with the experimental data shows that the error is kept within 10%, so the digital core model is highly reliable. We can thus overcome the restrictions of difficult coring and experimental conditions in loose sandstone gas reservoirs.
2. Stress sensitivity studies performed with the digital core indicate strong stress sensitivity characteristics in the SeBei loose sandstone gas reservoir. As the effective stress increases, the rock is markedly compressed and the reservoir rock permeability decreases by 43.2%. We should therefore slow the drawdown to achieve a balanced exploitation mode during depletion.
3. Digital core research on formation water damage indicates strong water sensitivity characteristics in the SeBei loose sandstone gas reservoir. After soaking in water for 30 days, the reservoir rock permeability decreases by more than 5.7%. Water invasion of the reservoir and prolonged soaking should be avoided as far as possible during development.
4. The loose sandstone is fragile rock and is easily decomposed by water. Digital core technology can evaluate the stress sensitivity and water sensitivity of the real core, solving the problem that conventional core sensitivity experiments are hard to complete, and it saves much of the experimental cost.
5. Compared with the stress-sensitivity experimental results, the digital core accuracy is above 98%, which means the digital core simulation is exact. It provides a scientific basis for the development of loose sandstone reservoirs.
Fig. (13). Comparison of permeability influenced by clay content between the real core and digital core.
Colorful combinatorics and Macdonald polynomials
The non-negative integer cocharge statistic on words was introduced in the 1970's by Lascoux and Schützenberger to combinatorially characterize the Hall-Littlewood polynomials. Cocharge has since been used to explain phenomena ranging from the graded decomposition of Garsia-Procesi modules to the cohomology structure of the Grassmann variety. Although its application to contemporary variations of these problems had been deemed intractable, we prove that the two-parameter, symmetric Macdonald polynomials are generating functions of a distinguished family of colored words. Cocharge adorns one parameter and the second measures its deviation from cocharge on words without color. We use the same framework to expand the plactic monoid, apply Kashiwara's crystal theory to various Garsia-Haiman modules, and to address problems in K-theoretic Schubert calculus.
Introduction
Kostka-Foulkes polynomials K_{λμ}(t) ∈ N[t] describe the connections between characters of GL_n(F_q) and the Hall-Steinitz algebra [Gre55], give characters of cohomology rings of Springer fibers for GL_n [Spr78, HS77], and are graded multiplicities of modules for the general linear group obtained by twisting functions on the nullcone by a line bundle [Bry89]. Lusztig [Lus83, Lus81] showed they are the t-analog of the weight multiplicities in the irreducible representations of the classical Lie algebras,

K_{λμ}(t) = Σ_{σ∈W} (−1)^{ℓ(σ)} P_t( σ(λ + ρ) − (μ + ρ) ),

obtained from a t-deformation P_t of Kostant's partition function P defined by the positive roots α. Algebraically, Kostka-Foulkes polynomials are the entries in transition matrices between the Schur basis and the Hall-Littlewood basis {H_μ(x; t)} for the algebra Λ of symmetric functions in variables x = x_1, x_2, ..., over the field Q(t). In fact, this reflects the graded decomposition of a simple quotient of the coinvariant ring viewed as a graded S_n-module. The two-parameter q,t-Kostka coefficients have since been a matter of great interest. Rich theories were born from the compelling feature that the q,t-Kostka coefficients reduce to the Kostka-Foulkes polynomials at q = 0. In [GH93], Garsia and Haiman introduced S_n-modules R_μ, for μ a partition of n, given by the space of polynomials in the variables x_1, ..., x_n; y_1, ..., y_n spanned by all derivatives of a certain simple determinant Δ_μ. They conjectured that the dimension of R_μ equals n!, and that the modules provide a representation-theoretic framework for (2). Their interpretation was designed to imply the Macdonald positivity conjecture. Haiman spent years putting together algebraic-geometric tools which ultimately led him to prove the conjectures in [Hai01].
Formula (1) set the gold standard for defining Macdonald polynomials, but cocharge was abandoned after efforts to give a manifestly positive formula for generic H̃_μ(x; q, t) led no further than the most basic examples. In 2004, an explicit formula for Macdonald polynomials was established by Haglund-Haiman-Loehr. Rather than using Young tableaux and cocharge, the formula involves the major index and an intricate inversion-like statistic:

H̃_μ(x; q, t) = Σ_F q^{inv(F)} t^{maj(F)} x^F,    (3)

over all Z_+-valued functions (fillings) F on the partition μ. The Schur expansion was expected to come shortly behind this breakthrough, but it took another decade even to recover the Hall-Littlewood case. In [Rob17], Austin Roberts converted the q = 0 case of (3) into a new Schur expansion formula (4), over a mysterious subset U of fillings (see § 4.3). Roberts' questioning of the comparison of his formula with the earlier formulation (1) sparked our interest and led us to revive the study of cocharge. We discovered that the classical combinatorics of cocharge supports Macdonald polynomials as naturally as it does the less intricate setting surrounding Hall-Littlewood polynomials. The key idea is a broadening of the plactic monoid [LS81, LLT02] whereby each letter in a word is colored. Of particular importance is the subset of tabloids, words with an increasing condition used by Young to define (Specht) modules. We prove that Macdonald polynomials are colored tabloid generating functions, weighted by cocharge and a betrayal statistic which measures the variation of cocharge on colored words from its value on usual words.

Theorem. For any partition μ,

H̃_μ(x; q, t) = Σ_T q^{betrayal(T)} t^{cocharge(T)} x^{shape(T)},

over colored tabloids T with μ_1 ones, μ_2 twos, and so forth.

Further applications of colored words are geometrically inspired. The classical example in Schubert calculus addresses the cohomology of the Grassmann variety, where the structure constants c^ν_{λμ} count Yamanouchi tableaux.
Schubert calculus vastly expanded with efforts to characterize the structure of K-theory and (quantum) cohomology of other varieties; the problems are a combinatorial search for alternative, or more refined notions, of Yamanouchi. Thus, the combinatorial ideas surrounding the plactic monoid are often revisited in Schubert calculus. In fact, c ν λµ can be viewed as the number of skew tableaux with zero cocharge and the broader scope of colored words fits in well.
We extend Van Leeuwen's approach [vL01] to the Yamanouchi condition using Young tableaux companions. We show that colored tabloids serve as companions for the generic Z_+-valued functions used in the Macdonald polynomials (3). From this point of view, a super-Yamanouchi condition arises and is applicable to K-theoretic Schubert calculus problems as well as Kostka-Foulkes polynomials. The companion map c simultaneously gives relations between
• the formulas (1) and (4) for q = 0 Macdonald polynomials,
• genomic tableaux of Pechenik-Yong [PY17] and set-valued tableaux of [Buc02], introduced to study K-theoretic problems in Schubert calculus, and
• cocharge and the Lenart-Schilling statistic [LS13] for computing the (negative of the) energy function on affine crystals.
Colored words also support equivariant K-theory of Grassmannians and Lagrangians, but details are sequestered in a forthcoming paper. We investigate representation theoretic lines with the theory of crystal bases, introduced by Kashiwara [Kas90,Kas91] in an investigation of quantized enveloping algebras U q (g) associated to a symmetrizable Kac-Moody Lie algebra g. Integrable modules for quantum groups play a central role in two-dimensional solvable lattice models. When the absolute temperature is zero (q = 0), there is a distinguished crystal basis with many striking features. The most remarkable is that the internal structure of an integrable representation can be combinatorially realized by associating the basis to a colored oriented graph whose arrows are imposed by the Kashiwara (modified root) operators. From the crystal graph, characters can be computed by enumerating elements with a given weight, and the tensor product decomposition into irreducible submodules is encoded by the disjoint union of connected components. Hence, progress in the field comes from having a natural realization of crystal graphs.
A double crystal structure on colored tabloids using only the type-A crystal operators and jeu-de-taquin provides a lens giving clarity to problems in Macdonald theory and in Schubert calculus. Several crystal graphs arise simultaneously through different colored tabloid manifestations of tabloids. From these, we deduce Schur expansion formulas for dual Grothendieck polynomials and the q = 0, 1 cases of Macdonald polynomials. The q = 1 result perfectly mimics the classical formula (1) for q = 0. In particular,

Theorem. For any partition μ, H̃_μ(x; 1, t) is given by the t^{cocharge} generating function over colored tabloids with column-increasing entries.
Macdonald [Mac95] proved the existence of another basis of polynomials, P_λ(x; q, t), also unitriangularly related to the monomials, but orthogonal with respect to a q,t-deformation of the Hall inner product ⟨ , ⟩. Of interest to combinatorialists, but not apparent from the definition, he conjectured that the P_λ(x; q, t) have certain transition coefficients lying in Z_{≥0}[q, t]. Garsia modified P_λ(x; q, t) into polynomials H̃_λ(x; q, t) to rephrase Macdonald's conjecture as one about Schur positivity of the q,t-Kostka coefficients in H̃_λ(x; q, t). Garsia's approach appealed to a broader audience. Namely, results of Frobenius dictate that a positive sum of Schur functions models the decomposition of an S_n-representation into its irreducible submodules. For σ ∈ S_n and λ ⊢ n, the value of the irreducible character χ^λ of S_n at σ arises in

p_{τ(σ)}(x) = Σ_{λ⊢n} χ^λ(σ) s_λ(x),

where τ(σ) is the cycle type of σ. Define the linear Frobenius map from class functions on S_n to symmetric functions of degree n by

F(χ) = (1/n!) Σ_{σ∈S_n} χ(σ) p_{τ(σ)}(x),

and consider the Frobenius image of a doubly-graded S_n-module M = ⊕_{r,s} M^{r,s}. The function F_{char(M)}(x; q, t) is thus a positive sum of Schur functions with coefficients in Z_{≥0}[q, t] by (7). So launched the search for a bi-graded module M for which H̃_λ(x; q, t) is the Frobenius image. Garsia and Procesi settled the q = 0 case and gave the perfect guide [GP92]. In particular, they gave an algebraic approach to Hotta and Springer's result that K_{λμ}(0, t) describes the multiplicities of the S_n-characters χ^λ in the graded character of the cohomology ring of a Springer fiber B_μ. The cohomology ring H*(B_μ) can be defined by a particular quotient, R_μ(y) = C[y_1, ..., y_n]/I_μ, of the coinvariant ring R_{1^n}(y) = C[y_1, ..., y_n]/⟨e_1, ..., e_n⟩. R_μ(y) is the Garsia-Procesi module under the natural S_n-action permuting variables; they proved that the ideal I_μ is generated by the Tanisaki generators, defined to be the elementary symmetric functions e_k(S) in the variables S = {y_{i_1}, ..., y_{i_r}} ⊂ {y_1, ..., y_n} when r > k > |S| − #{cells of μ weakly east of column r}. The simplicity of Garsia and Procesi's definition led them to an algebraic proof that K_{λμ}(0, t) ∈ N[t] and offered an attack on the q,t-Kostka polynomials. Given that the Frobenius image of R_μ(y) is H̃_μ(x; 0, t), the task was to define an S_n-module R_μ(x; y) = C[x_1, ..., x_n; y_1, ..., y_n]/J_μ, under the diagonal S_n-action simultaneously permuting the x and y variables, so that its bigraded Frobenius image is H̃_μ(x; q, t) (8). Garsia and Haiman found just the candidate; it is the ideal J_μ of polynomials whose associated differential operators annihilate Δ_μ, where Δ_μ is a generalization of the Vandermonde determinant defined using a graphical depiction of μ. A lattice square (i, j) lies in the ith row and jth column of N × N. The (Ferrers) shape of a composition α = (α_1, ..., α_ℓ) ∈ Z^ℓ_{≥0} is the subset of N × N made up of α_i lattice squares left-justified in the ith row, for 1 ≤ i ≤ ℓ. A lattice square inside a shape α is called a cell. Given μ ⊢ n with cells {(r_1, c_1), ..., (r_n, c_n)},

Δ_μ = det( x_j^{c_i} y_j^{r_i} )_{1 ≤ i, j ≤ n}.

Although the construction of the modules R_μ(x; y) is quite simple, the proof of (8) required sophisticated geometric techniques developed by Haiman [Hai01].
Cocharge
How R_μ(x; y) decomposes into irreducible submodules remains an open problem. It is particularly intriguing in light of the perfect description for decomposing R_μ(y) in terms of the following statistic on words. Given a word w in the alphabet A, w_B is the subword of w restricted to letters of B ⊂ A. When B = {i}, we simply write w_i = w_B. The weight of a word w is the composition α, where α_i is the number of times i appears in w. A word with weight (1, ..., 1) is called standard. The cocharge of a standard word w ∈ S_n is defined by writing w counter-clockwise on a circle with a ⋆ between w_1 and w_n, attaching a label to each letter, and summing these labels. The labels are determined iteratively, starting by labeling 1 with a zero. Letter i is then given the same label as i − 1 as long as ⋆ lies between i − 1 and i (reading clockwise), and the label is otherwise incremented by 1. The cocharge of a word w with weight μ ⊢ n is defined by writing w counter-clockwise on a circle and computing the cocharge of μ_1 standard subwords of w. Letters of the ith standard subword are adorned with a subscript i, and this subword is determined iteratively from i = 1 as follows: clockwise from ⋆, choose the first occurrence of the letter 1 and proceed on to the first occurrence of the letter 2. Continue in this manner until μ′_1 letters have been given the index 1. Start again at ⋆ with i + 1, repeating the process on the letters without a subscript. The cocharge of w is the sum of the cocharges of its standard subwords. The charge of a word w of weight μ is charge(w) = n(μ) − cocharge(w), where n(μ) = Σ_i (i − 1)μ_i. Kostka-Foulkes polynomials require only words coming from Young tableaux. Use α ⊨ n to denote that α is a composition of degree n = |α| = α_1 + α_2 + ···. For compositions α and β with α_i ≤ β_i for all i, we say α ⊆ β. The skew shape β/α for α ⊆ β is defined by the set-theoretic difference of their cells and has degree |β| − |α|.
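Reading clockwise from ⋆ around a word written counter-clockwise amounts to scanning the word right to left, cyclically. A minimal Python sketch of the cocharge computation (the list-of-integers interface is our own choice); for example, the reading word 2 1 1 of the tableau of shape and weight (2, 1) gets cocharge 1.

```python
def cocharge(word):
    """Cocharge of a word with partition weight, following the circular
    definition: standard subwords are extracted clockwise from the star,
    i.e. scanning the word right to left, cyclically."""
    n = len(word)
    used = [False] * n
    total, remaining = 0, n
    while remaining:
        # Extract one standard subword: take the first 1, then the first
        # 2 past it, and so on, wrapping around when needed.
        subword_pos = []
        pos, letter = n - 1, 1
        while True:
            found = None
            for step in range(n):
                p = (pos - step) % n
                if not used[p] and word[p] == letter:
                    found = p
                    break
            if found is None:
                break
            used[found] = True
            subword_pos.append(found)
            remaining -= 1
            pos, letter = found - 1, letter + 1
        if not subword_pos:
            raise ValueError("word must have partition weight")
        # Labels within the standard subword: label(1) = 0, and label(i)
        # equals label(i - 1), incremented when i sits to the left of i - 1
        # (equivalently, when the star is not crossed reading clockwise).
        label = 0
        for i in range(1, len(subword_pos)):
            if subword_pos[i] < subword_pos[i - 1]:
                label += 1
            total += label
    return total
```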
A (semi-standard) tableau is the filling of a skew shape with positive integers which increase up columns and are not decreasing along rows (from west to east).
Definition 2. The reading order of any collection S ⊂ N × N is the total ordering on elements in S defined by saying that lattice squares decrease from left to right, starting in the highest row and moving downward.
Given a tableau T, the reading word w = word(T) is defined by taking w_i to be the letter in the ith cell of T, where cells are taken in decreasing reading order. The weight of a tableau T is the weight of its reading word, and T is called standard when word(T) is standard. For a skew shape λ/μ of degree n and γ ⊨ n, the set of tableaux of shape λ/μ and weight γ is denoted SSYT(λ/μ, γ). Lascoux and Schützenberger [LS78] proved, for partitions λ and μ of the same degree,

K_{λμ}(0, t) = Σ_{T ∈ SSYT(λ, μ)} t^{cocharge(T)},    (1)

where the cocharge of a tableau T is defined by cocharge(word(T)). A similarly beautiful formula for the q,t-Kostka polynomials has been actively pursued for decades. Because K_{λμ}(1, 1) = |SSYT(λ, 1^n)|, the endgame is to establish a formula for K_{λμ}(q, t) by attaching a q-weight and a t-weight to each standard tableau.
Macdonald polynomials
Although the Schur expansion of Macdonald polynomials still eludes us, Jim Haglund made a breakthrough in 2004 by proposing a combinatorial formula for H̃_μ(x; q, t). Rather than using semi-standard tableaux and cocharge, different statistics are associated to arbitrary fillings.
A filling F of shape β/α and weight γ ⊨ |β| − |α| is any placement of the letters of a word with weight γ into the shape β/α. The entry in row r and column c of F is denoted F_(r,c), and the set of fillings of shape β/α and weight γ is F(β/α, γ). Immediate from the definition is the inclusion SSYT(β/α, γ) ⊆ F(β/α, γ). For a filling F of partition shape λ, an inversion triple is a triple of entries (r, t, s) arranged in a collection of cells of F with r and t in the same row and s directly below t,

r · · · t
        s

and meeting the criterion that r = s < t or some cycle of (r, t, s) is strictly increasing. If the cells containing r and t are in the first row, we envision that s = 0. The inversion statistic is the number inv(F) of inversion triples in F. The major index of F is

maj(F) = Σ_{F_(r,c) > F_(r−1,c)} (λ′_c − r + 1),

where λ′_c is the number of cells in column c of λ. Every partition λ has a conjugate λ′, given by reflecting the shape λ about the line y = x. Alternatively, a filling F has a descent at cell (r, c) when F_(r,c) > F_(r−1,c), and maj(F) is the number of descents of F, each weighted by the number of cells appearing weakly above it in F.
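The descent formulation gives a short computation of maj. In the sketch below, a filling of a partition shape is stored as a list of rows, bottom row first, an indexing convention of our own; each descent cell contributes the number of cells lying weakly above it in its column.

```python
def maj(filling):
    """Major index of a filling: filling[r][c] holds the entry in row
    r + 1 (rows listed bottom-up) and column c + 1."""
    total = 0
    for c in range(len(filling[0])):
        column = [row[c] for row in filling if c < len(row)]
        height = len(column)                  # lambda'_c, cells in column c
        for r in range(1, height):
            if column[r] > column[r - 1]:     # descent: entry exceeds the one below
                total += height - r           # cells weakly above the descent cell
    return total
```

For instance, a two-row filling whose top row strictly dominates its bottom row column by column has a descent in every column, each weighted by 1.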
Colored words and circloids
A colored letter x_i is a letter x in an alphabet A adorned with a subscript (its color) i from A. A colored word w is a string of distinct colored letters. The weight of w is a skew composition recording the colors which adorn each letter: weight(w) = α/β, where {β_x + 1, ..., α_x} are the colors attached to the letter x in w. When β ⊨ 0, we simply say the weight of w is α. Colored words also come equipped with shapes, which are assigned using the prismatic order on colored letters: u_v > r_c when r < u, or r = u and c > v. A strict composition (one without zero entries) γ ⊨ n is a shape admitted by a colored word w = w_n ··· w_1 if w_1 > ··· > w_{γ_1}, w_{γ_1+1} > ··· > w_{γ_1+γ_2}, and so forth. A weak composition γ′ ⊨ n is a shape admitted by w when the strict composition obtained by removing the zeroes from γ′ is a shape admitted by w.
A circular representation of colored words is convenient when attaching statistics. We write a colored word w counter-clockwise on a circle and separate its letters into sectors to give a concept of shape.
Definition 6. A circloid C of shape γ | = n is a placement of n distinct colored letters on the perimeter of a subdivided circle such that, reading clockwise from a distinguished point ⋆, γ x colored letters lie in decreasing prismatic order in sector x, for x = 1, . . . , ℓ(γ).
Each circloid C is uniquely associated to a colored word w of the same shape by reading the letters of C in counter-clockwise order. The weight of C is defined to be weight(w). The set of circloids of weight α/β and shape γ is denoted by C(γ, α/β). Note that the letter b appears in a circloid C ∈ C(·, α/β) exactly α_b − β_b times, since there are α_b − β_b colors needed to adorn the set of b's.
Example 7. Circloids C_1 ∈ C((3, 3, 2, 1), (3, 3, 3)) and C_2 ∈ C((3, 1, 2, 2, 1), (3, 3, 3)) share the underlying colored word w = 3_2 1_3 3_1 1_1 2_2 2_1 1_2 2_3 3_3. If unspecified, entries and positions of a circloid are always taken clockwise. For example, 3_2 and 3_3 lie between 1_3 and 2_3 in C_1 since 3_2 and 3_3 are passed when reading clockwise from 1_3 to 2_3. We also consider restrictions of a circloid C; in particular, C_i denotes the restriction of C to the letters of color i. For the traditionalists, we interpret circloids as fillings of shapes. For compositions γ and α/β, a colored tabloid T ∈ CT(γ, α/β) is a filling of shape γ with colored letters so that row entries increase from west to east under the prismatic order and the colors adorning the letter x in T are {β_x + 1, . . . , α_x}. As expected, α/β is called the weight of T. It is straightforward to see that a bijection ι : C(γ, α/β) → CT(γ, α/β) is given by the map which puts the colored letters of sector r of a circloid C into row r of the shape γ so that the row increases from west to east, for each r = 1, . . . , ℓ(γ).
Circloid statistics
The cocharge statistic on words naturally extends to circloids. Macdonald polynomials turn out to be generating functions of circloids, weighted by cocharge and a second statistic which measures the variation of cocharge from the Lascoux-Schützenberger statistic.
For a circloid C ∈ C(·, µ) of partition weight µ, the cocharge of C is defined colorwise: cocharge(C) = Σ_i cocharge(C_i), where each color restriction C_i is read as a standard subword counter-clockwise from ⋆.

Remark 8. Any word w with partition weight µ can be uniquely identified with a circloid C ∈ C(1^n, µ) of the same cocharge. Place the letters of w counter-clockwise on a circle and color according to the labeling for standard subwords. That is, moving clockwise from ⋆, color the first 1 encountered with i = 1. By iteration, the first x + 1 encountered in the clockwise reading from x_1 is colored 1. Once µ′_1 letters have been colored 1, repeat with color 2 on the uncolored letters, and so on.
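The cocharge statistic that Remark 8 transports to circloids is the classical Lascoux–Schützenberger statistic on words. The following Python sketch (an illustration under our conventions, with the word listed left to right as printed, so the rightmost letter is w_1) extracts standard subwords by cyclic right-to-left scans and labels them as in the remark.

```python
def cocharge(word):
    """Lascoux-Schutzenberger cocharge of a word with partition content.

    Standard subwords are extracted by repeated right-to-left scans that
    wrap cyclically; within a subword, letter 1 has label 0 and letter
    r+1 gets the label of r, plus 1 exactly when r+1 sits to the left
    of r in the word.  Cocharge is the sum of all labels.
    """
    w = list(word)
    n = len(w)
    used = [False] * n

    def pick(target, frm):
        # first unused `target` strictly left of position `frm`,
        # scanning leftward and wrapping around the right end
        for step in range(1, n + 1):
            p = (frm - step) % n
            if not used[p] and w[p] == target:
                return p
        return None

    total = 0
    while not all(used):
        pos = pick(1, n)                 # rightmost unused 1 starts a subword
        if pos is None:
            raise ValueError("content of the word is not a partition")
        used[pos] = True
        label, target = 0, 2
        while True:
            nxt = pick(target, pos)
            if nxt is None:
                break                    # standard subword complete
            used[nxt] = True
            if nxt < pos:                # target lies left of target-1
                label += 1
            total += label
            pos, target = nxt, target + 1
    return total
```

Consistent with Remark 14 below, Yamanouchi words attain the maximal value n(µ) = Σ (i − 1)µ_i, i.e. zero charge.
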
The second statistic measures how different a coloring is from the standard subword labeling. When choosing which letter x to color j, each candidate passed over in clockwise order increases the statistic by 1. Precisely,

betrayal(C) = Σ_{i,j} s_{i,j},

where s_{i,j} is the number of i_ℓ with ℓ > j lying between (i − 1)_j and i_j in C, with the understanding that 0_j = ⋆ for all j = 1, . . . , µ_1.
Example 9. The circloids in the previous example have a betrayal of 2 and cocharge of 4.
It is through the lens of circloids that we can prove cocharge is as fundamental to the q, t-Macdonald setting as it is to Kostka–Foulkes and Hall–Littlewood polynomials. We show that a Macdonald polynomial is none other than the shape generating function of circloids weighted by cocharge and betrayal. Moreover, the result follows straightforwardly from a correspondence between circloids and skew fillings.
Theorem 12. For any partition µ,

H̃_µ(x; q, t) = Σ_{C ∈ C(·, µ)} q^{betrayal(C)} t^{cocharge(C)} x^{shape(C)}.

Proof. Consider any compositions γ and β ⊆ α where |γ| = |α| − |β|. We first establish that f is a bijection from C(γ, α/β) to F(α/β, γ), where f(C) is the filling whose entry in row r and column c is the sector of C containing the colored letter r_c. Given a circloid C, let f = f(C). Each letter r in sector x of C corresponds to an entry x in row r of f, so if f(C) = f(D) then the set of colored letters in sector x of C is also the set of colored letters in sector x of D. Since colored letters lie in a unique decreasing prismatic order within sectors, every sector of C and D is the same and we see that C = D. That f is bijective then follows by noting that the number of fillings of weight γ matches the number of circloids C ∈ C(γ, ·).

We next restrict our attention to circloids C ∈ C(γ, λ) for λ ⊢ n and claim that maj(f) = cocharge(C) and inv(f) = betrayal(C) for f = f(C). Since maj is computed on columns, we need only verify that cocharge(C_i) equals the maj of column i in f to prove that maj(f) = cocharge(C). A slight reinterpretation of the cocharge definition gives cocharge(C_i) = Σ_r L_r where L_r = λ′_i − r + 1 when r_i occurs (clockwise) between (r − 1)_i and ⋆ in C, and otherwise L_r = 0. In fact, since r_i > (r − 1)_i and sectors are prismatic order decreasing, L_r ≠ 0 if and only if r_i lies in a sector y strictly larger than the sector x containing (r − 1)_i. On the other hand, the action of f dictates that r_i is in sector y and (r − 1)_i is in sector x of C precisely when y lies in cell (r, i) above x in cell (r − 1, i) of f.

We next claim that betrayal(C) = inv(f), using the observation that betrayal(C) = Σ_{i,j} |I_{i,j}| where I_{i,j} = {ℓ > j : i_ℓ lies between (i − 1)_j and i_j}, with the convention that 0_j = ⋆ for all j. For any j, 1_j is in sector x of C if and only if the entry x lies in cell (1, j) of f. Note that ℓ ∈ I_{1,j} implies 1_ℓ lies in a sector y < x since 1_ℓ < 1_j, and therefore ℓ ∈ I_{1,j} corresponds uniquely to an inversion of x with the entry y in cell (1, ℓ) of f.
When i > 1, for each pair of (i − 1)_j in sector x and i_j in sector y of C, one of the following relations concerning the sector z containing i_ℓ, for ℓ ∈ I_{i,j}, must be true: x = y and z ≠ x, or x < z < y, or y < x < z, or z < y < x. Correspondingly, the entries x, y, and z in cells (i − 1, j), (i, j), and (i, ℓ) of f, respectively, form an inversion triple.
We can extend the definitions of cocharge and betrayal to colored tabloids through the bijection ι. Immediately following from Theorem 12 are an expression using charge and one using fixed-weight colored tabloids.
Colorful companions
The subset of Young tableaux with an additional Yamanouchi condition is of particular importance; its cardinality gives tensor product multiplicities of GL n , the Schur expansion coefficients in a product of Schur functions, and the Schubert structure constants in the cohomology of the Grassmannian Gr(k, n) of k-dimensional subspaces of C n . For partition λ, T λ denotes the unique tableau of shape and weight λ. A word w is λ-Yamanouchi when w · word(T λ ) = b n · · · b 2 b 1 has the property that the weight of each suffix b j · · · b 2 b 1 is a partition. A filling is λ-Yamanouchi when its reading word is λ-Yamanouchi and a circloid is λ-Yamanouchi when the counter-clockwise reading of its letters is λ-Yamanouchi. A ∅-Yamanouchi object is simply called Yamanouchi.
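The suffix-by-suffix description of the Yamanouchi condition is easy to test directly. Below is a small Python sketch for the ∅-Yamanouchi case (ours; the general λ-Yamanouchi test reduces to it by appending word(T_λ)), with the word listed left to right as printed so that suffixes are trailing segments.

```python
def is_yamanouchi(word):
    """Check the (empty-lambda) Yamanouchi condition: the content of
    every suffix w_j ... w_1, i.e. every trailing segment of the word
    as printed, must be a partition (weakly decreasing letter counts).
    """
    for j in range(len(word)):
        suffix = word[j:]
        counts = [suffix.count(i) for i in range(1, max(suffix) + 1)]
        if any(counts[k] < counts[k + 1] for k in range(len(counts) - 1)):
            return False
    return True
```

For example, 2 1 and 3 2 1 1 are Yamanouchi, while 1 2 is not since its last letter alone has weight (0, 1).
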
Remark 14. Since a word of weight µ has zero charge only when every standard subword is the maximal length permutation, zero charge matches the Yamanouchi condition.
Because many open problems in representation theory, geometry, and symmetric function theory involve a search for contemporary notions of Yamanouchi and tableaux to characterize mysterious invariants, the Yamanouchi condition has been revisited often from different viewpoints. The combinatorics of circloids naturally captures several of these simultaneously.
Companions and the Yamanouchi condition
Van Leeuwen addresses the classical Littlewood-Richardson rule by rephrasing the Yamanouchi condition on skew tableaux P in terms of companion tableaux. A companion of P is any skew tableau Q such that the entries in row x match the row positions of letters x ∈ P and are aligned to meet the condition that entries increase up columns. He proves that a Yamanouchi tableau P always has a companion tableau of (straight) partition shape µ.
We forsake the column increasing condition and instead view a companion as the tabloid where rows are uniquely aligned into a straight shape. Such a companion of a semi-standard tableau P is precisely the tabloid obtained by ignoring the colors of ι ∘ f⁻¹(P). This approach opens the door to a more inclusive study allowing for companions of arbitrary fillings.
Definition 15. The companion map is the bijection

c = ι ∘ f⁻¹ : F(ν/λ, µ) → CT(µ, ν/λ).

The companion of a filling F ∈ F(ν/λ, µ) is the unique colored tabloid c(F).
Following directly from the definitions of f and ι, the action of c on a filling F takes the entry e in cell (r, c) to the colored letter r_c placed in row e of T, arranged so that each row of T increases in prismatic order. Companions give a valuable mechanism to study Yamanouchi related problems.
Definition 16. A filling F is super-Yamanouchi when the filling obtained by rearranging the entries within each row into non-decreasing order is Yamanouchi.
Proposition 17. Given partitions µ and ν/λ, consider a filling F ∈ F(ν/λ, µ) and its companion T = c(F).

(1) F is Yamanouchi if and only if the columns of T increase in prismatic order.
(2) F is super-Yamanouchi if and only if the letters of T increase up columns.
(3) The letters of F increase in columns if and only if T is λ-Yamanouchi.
Proof. (1) F is Yamanouchi if and only if each letter x in a cell (r, c) of F can be paired uniquely with an x − 1 in some cell (r̂, ĉ) occurring after (r, c) in reading order. Equivalently, each entry r_c in row x of T pairs uniquely with an entry r̂_ĉ ≤ r_c in row x − 1 (in prismatic order). Since row entries in a colored tabloid lie in increasing prismatic order, such a pairing can occur exactly when the entry immediately below r_c is smaller in prismatic order.
(2) Consider a colored tabloid T where the letters do not strictly increase up some column. If the columns of T are not prismatic increasing, then F is not Yamanouchi by (1). Otherwise, we can choose b to be the rightmost column of T with an r_c in row x and an r_ĉ in row x − 1 where c < ĉ. Correspondingly, F has an x in cell (r, c) and an x − 1 in cell (r, ĉ). Since the entries in rows of T are prismatic non-decreasing and r_c and r_ĉ lie in column b, the subset of cells in F weakly smaller than (r, ĉ) in reading order contains b x − 1's and b x's. However, the filling F̂ obtained by rearranging the letters in row r of F into weakly increasing order is not Yamanouchi, since the x in column c < ĉ of F moves to the east of all x − 1's in that row.
On the other hand, a colored tabloid T with letters increasing up columns has prismatic increasing columns, and therefore F is Yamanouchi by (1). Suppose that F has an x and an x − 1 in cells (r, c) and (r, ĉ), respectively, such that when the letters in row r are put into weakly increasing order, the resulting filling is not Yamanouchi. Since F is Yamanouchi, this can only happen if ĉ > c and there is an equal number of x − 1's and x's in the subset of cells of F occurring weakly after (r, ĉ) in the reading order of cells. However, under the f-correspondence, T has an r_c in row x and an r_ĉ in row x − 1 lying in the same column. The violation of increasing columns establishes the claim.
(3) A letter in an arbitrary cell (r, c) of F̂ is larger than the letter in cell (r − 1, c) if and only if the entry r_c occurs in a later sector than (r − 1)_c of Ĉ = f⁻¹(F̂). This is equivalent to the Yamanouchi condition on Ĉ: each letter r in Ĉ can be paired with a letter r − 1 which occurs earlier than it. The claim follows by noting that the counter-clockwise reading of the letters from the first |λ| sectors of Ĉ is word(T_λ).
Reverse companions
The initial study of companions involved only the subset of fillings which are semi-standard tableaux. Proposition 17 pinpoints that dropping the row condition and requiring only that letters increase in columns of a filling imposes the λ-Yamanouchi condition on its companion circloid (or colored tabloid). On the other hand, it is also natural to examine the subset of fillings which are tabloids, that is, fillings which are non-decreasing in rows from west to east. Let T (α, β) be the set of tabloids of shape α and weight β.
A distinguished coloring on circloids comes to light under these conditions. A circloid is reverse colored when the colors adorning letter x increase clockwise from ⋆, for each fixed letter x. A tabloid T is reverse colored if ι −1 (T ) is a reverse colored circloid.
Remark 18. Since the reverse coloring uniquely assigns a color to each letter of a tabloid, reverse colored circloids are a manifestation of tabloids.
Proposition 19. Given compositions γ and β ⊆ α, the companion T of a filling F ∈ F (α/β, γ) is reverse colored if and only if F is a tabloid.
Proof. By the definition of f, a filling F has the property that the letter in cell (r, c) is not smaller than the letter in (r, c − 1) if and only if r_c does not occur before r_{c−1} in f⁻¹(F).
In the combinatorial theory of K-theoretic Schubert calculus, tableaux are replaced by more intricate combinatorial objects such as reverse plane partitions, set-valued tableaux, and genomic tableaux. The latter were introduced recently by Pechenik and Yong [PY17] to solve a difficult problem concerning the equivariant K-theory of the Grassmannian. We have discovered that reverse-colored companions are closely related to genomic tableaux and carry out the details separately. A glimpse of this application is given in § 6.3, where a crystal structure on reverse colored circloids is used to study representatives for K-homology classes of the Grassmannian.
Faithful companions
Another useful manifestation of tabloids arises from a second distinguished circloid coloring. A circloid C is faithfully colored when, for each i ≥ 1, if entries of color j < i are ignored, then the closest 1 to ⋆ (moving clockwise) has color i, and the closest x + 1 to x_i has color i, for x ≥ 1. A colored tabloid is defined to be faithfully colored if it is the ι-image of a faithfully colored circloid.

Proposition 23. A filling F is inversionless if and only if f⁻¹(F) is faithfully colored.

Proof. The number of inversion triples in a filling F matches the betrayal of f⁻¹(F) by (15). It thus suffices to note, by definition, that restricting the set of circloids to those with zero betrayal gives the subset of faithfully colored elements.

Theorem 24. For partitions λ and µ,

K̃_{λµ}(t) = Σ t^{maj(F)},

where the sum is over the inversionless, super-Yamanouchi fillings F ∈ F(µ, λ).

Proof. Consider an inversionless filling F ∈ F(µ, ·) which is super-Yamanouchi. Note that the weight of F must be a partition λ ⊢ |µ| since F is Yamanouchi. Propositions 17 and 23 give that the c-image of F is a faithfully colored tabloid with letters which increase up columns. Since each tabloid has a unique faithful coloring, ignoring colors gives the bijection

{F ∈ F(µ, λ) : inv(F) = 0 and F super-Yamanouchi} ↔ SSYT(λ, µ).
Moreover, maj(F) = cocharge(f⁻¹(F)) by (15), and any faithfully colored circloid C ∈ C(λ, µ) has the same cocharge as that of its manifest tabloid T by Remark 8. The result then follows from (9).
The comparison of Theorem 24 to (4) suggests that the set U defined by Roberts is related to the super-Yamanouchi condition. Roberts' formula requires inversionless, Yamanouchi fillings with an additional property imposed upon entries in a pistol configuration, one made up of the cells in row r lying in columns 1, . . . , c together with the cells of row r + 1 lying in columns c, . . . , µ_{r+1}, for any fixed r, c. In our language, a filling is jammed if its reverse coloring results in a pistol containing both x_y and (x + 1)_{y+1} for some letter x with color y. When a filling is not jammed, we say it is jamless.
Lemma 25. The set of inversionless, super-Yamanouchi fillings is the same as the set of inversionless, jamless, Yamanouchi fillings.
Proof. Suppose an inversionless, super-Yamanouchi filling F is jammed and, for convenience, consider its reverse coloring. Since F is jammed, it has rows r and r + 1 with a pistol containing x_y and (x + 1)_{y+1}. If x_i does not lie in a lower row than (x + 1)_j in a super-Yamanouchi filling, then j < i. Therefore, x_y must lie in a cell (r, c) of F and (x + 1)_{y+1} in a cell (r + 1, ĉ), for some ĉ ≥ c. Moreover, x_{y+1} lies in row r of F. Consider the minimal color i adorning x in the set of rows higher than row r. Since x_{i−1}, x_{i−2}, . . . , x_{y+1} lie west of column ĉ in row r, and F is inversionless, an x + 1 lies above each of these x's. Therefore, (x + 1)_i lies in row r + 1, contradicting that F is super-Yamanouchi.
On the other hand, suppose that a filling F is jamless, inversionless, and Yamanouchi but not super-Yamanouchi. Then there is some row r and letter x + 1 in F where the number y of x + 1's weakly below row r is greater than the number of x's below row r. In particular, y is the color adorning the leftmost x + 1 in row r in the reverse coloring of F. Further, the x colored y must lie after this (x + 1)_y in reading order since F is Yamanouchi; as F is not super-Yamanouchi, x_y lies in row r. Since F is inversionless and has (x + 1)_y west of x_y in row r, an x must lie below (x + 1)_y. When reverse colored, this x has color z < y. In turn, z < y implies (x + 1)_{z+1} must lie weakly after (x + 1)_y in reading order. However, the Yamanouchi condition requires that (x + 1)_{z+1} lies before x_z. Therefore, the pistol based at x_z contains (x + 1)_{z+1}, contradicting that F is jamless.
Crystals
The quantum enveloping algebra U_q(sl_{n+1}) is the Q(q)-algebra generated by elements e_i, f_i, t_i, t_i^{−1}, for 1 ≤ i ≤ n, subject to certain relations. For a U_q(sl_{n+1})-module M and λ ∈ Z^{n+1}, the weight vectors (of weight λ) are the elements of the set M_λ = {u ∈ M : t_i u = q^{λ_i − λ_{i+1}} u}. A weight vector is said to be primitive if it is annihilated by the e_i's. A highest weight U_q(sl_{n+1})-module is a module M containing a primitive vector v such that M = U_q(sl_{n+1})v. The irreducible highest weight module with highest weight λ is denoted V_λ.
Kashiwara [Kas90, Kas91] introduced a powerful theory whereby combinatorial graphs are used to understand finite-dimensional integrable U_q(sl_n)-modules M. The crystal of M is a set B equipped with a weight function wt : B → {x^α : α ∈ Z^∞_{≥0}} and operators ẽ_i, f̃_i : B → B ∪ {0} satisfying, for a, b ∈ B, that f̃_i(a) = b if and only if ẽ_i(b) = a. The set of highest weight elements of B of weight γ, those annihilated by every ẽ_i, is denoted Y(B, γ).
The tensor product crystal graph B_1 ⊗ · · · ⊗ B_k has vertices in the Cartesian product: (b_1, . . . , b_k) ∈ B_1 × · · · × B_k is denoted b = b_1 ⊗ · · · ⊗ b_k. Its weight function is defined by wt(b_1 ⊗ · · · ⊗ b_k) = wt(b_1) · · · wt(b_k).
Lascoux and Schützenberger anticipated the necessary ingredients for the Kashiwara type-A crystal in their development of the plactic monoid on words [LS81, LLT02]. It is given by the set
of words in the alphabet B(1) = [n]; the crystal actions ẽ_i, f̃_i are defined on b ∈ B(1)^n by changing a single i (or i + 1) to an i + 1 (or i) in the restriction of b to the subword w_{i,i+1}. Regarding each letter as a parenthesis, i + 1 as a left and i as a right, adjacent pairs of parentheses "()" are matched and declared to be invisible until no more matching can be done. It is a letter in the remaining subword, z = i^p (i + 1)^q for some p, q ∈ Z_{≥0}, which is changed. Precisely, ẽ_i(b) = 0 when q = 0, f̃_i(b) = 0 when p = 0, and otherwise ẽ_i(b) is the word formed from w by replacing the subword z with i^{p+1} (i + 1)^{q−1} and f̃_i(b) is formed by replacing z with i^{p−1} (i + 1)^{q+1}. Remark 27. The parenthesis pairing of any b ∈ B(1)^n has the property that every adjacent (i + 1) i is paired, and the first i in any adjacent pair i i is never the rightmost unpaired entry. Therefore, descents of such pairs are preserved by the action of ẽ_i, f̃_i and Des(ẽ_i(b)) = Des(f̃_i(b)) = Des(b) when b is not annihilated.
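The parenthesis rule above can be sketched as follows; the function name and return convention (the pair (ẽ_i(b), f̃_i(b)), with None for an annihilated result) are ours.

```python
def crystal_ops(word, i):
    """Raising and lowering operators on words via the parenthesis rule.

    Reading left to right, each i+1 is a left parenthesis and each i a
    right one; matched "()" pairs are invisible, and the unmatched
    letters read i^p (i+1)^q.  e_i turns the leftmost unmatched i+1
    into an i; f_i turns the rightmost unmatched i into an i+1.
    """
    open_pos, closed_pos = [], []      # unmatched i+1's / unmatched i's
    for p, x in enumerate(word):
        if x == i + 1:
            open_pos.append(p)         # a new left parenthesis
        elif x == i:
            if open_pos:
                open_pos.pop()         # matches the nearest left paren
            else:
                closed_pos.append(p)   # an unmatched right parenthesis
    e = None
    if open_pos:                       # q > 0
        e = list(word)
        e[open_pos[0]] = i             # leftmost unmatched i+1 -> i
    f = None
    if closed_pos:                     # p > 0
        f = list(word)
        f[closed_pos[-1]] = i + 1      # rightmost unmatched i -> i+1
    return e, f
```

For instance, on the word 1 2 nothing matches, so ẽ_1 gives 1 1 and f̃_1 gives 2 2, whereas 2 1 is fully matched and both operators annihilate it.
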
For µ ⊢ n, since ẽ_i annihilates only the Yamanouchi words, the highest weights of B = B(1)^n of weight µ are the Yamanouchi words of weight µ. As dictated by Kashiwara's theory, the crystal graph B(µ) of the irreducible submodule V_µ is isomorphic to a connected subgraph of B(1)^n which contains a Yamanouchi word of weight µ. The crystal graph B(m) is isomorphic to the subgraph of elements b ∈ B(1)^m with no descents, since b = (1, . . . , 1) is the only element in Y(B(1)^m, (m)). Therefore, for any γ ⊨ n of length ℓ, the tensor product crystal B(γ_1) ⊗ · · · ⊗ B(γ_ℓ) has highest weight elements given by Yamanouchi words which are non-decreasing in the first γ_1 positions, in the next γ_2 positions, and so forth.
Singly graded Garsia-Haiman modules
A crystal structure on circloids leads us to a characterization for the singly graded decomposition of Garsia-Haiman modules which preserves the spirit of the Garsia-Procesi module decomposition given by (9).
We first refine the decomposition of B = B(1) ⊗ · · · ⊗ B(1). For any D ⊂ {1, . . . , n − 1}, define the induced subposet B(D) of B by restriction to the vertex set {b ∈ B : Des(b) = D}. More generally, the right hand side of (19) reflects the graph decomposition of the crystal B(1)^n into the B(D), graded by maj_{λ′}(D). On the other hand, Macdonald polynomials at q = 1 are presented in (11) as weight generating functions of λ-shaped fillings graded by maj. Each filling f ∈ F(λ, ·) can be uniquely identified with a vertex b ∈ B(1)^n by reading the columns of f from top to bottom (choosing any fixed column order). Consider the filling f_b identified with vertex b. Since the computation of maj(f_b) relies only on descents in columns of f_b, precisely the subset of descents involved in the computation of maj_{λ′}(Des(b)), we have that maj_{λ′}(Des(b)) = maj(f_b). By definition, maj_{λ′}(Des(b)) is constant over all elements b ∈ B(D) and therefore maj(f) is constant on all fillings f associated to b ∈ B(D).
Remark 29. Although each vertex b ∈ B(1) n could be uniquely identified with the filling f of shape λ ⊢ n whose reading word is b, maj is not constant on all fillings in the same connected component under this correspondence.
The interaction of crystals with the cocharge statistic comes out of a directed, colored graph B(γ) whose vertices are circloids of weight γ. An i-colored edge between circloids is imposed by operators ẽ_i and f̃_i which move an entry from sector i + 1 to sector i, or vice versa, using a method of pairing colored letters.
Pairing is a process which iterates over each entry in a given sector. Entries are considered from smallest to largest with respect to the co-prismatic order, defined on colored letters by x_y >′ u_v when y > v, or y = v and x > u. Pairing is done by writing the entries from sectors i and i + 1 in co-prismatic decreasing order, assigning every entry from sector i + 1 a left parenthesis and every entry from sector i a right parenthesis. Entries are then paired as per the Lascoux–Schützenberger rule for parentheses.
Definition 30. For a composition α and i ∈ {1, . . . , ℓ(α) − 1}, the operator ẽ_i acts on C ∈ C(·, α) by moving the largest unpaired entry in sector i + 1 to the unique position of sector i which preserves the prismatic decreasing condition on circloids. In contrast, f̃_i acts on C by moving the smallest unpaired entry in sector i to sector i + 1.
When γ is a partition, the connected components of B(γ) are constant on cocharge.
Proof. Given γ ⊨ n, let φ_i act on F(γ, ·) by the induced action φ_i = f ∘ ẽ_i ∘ f⁻¹. When an entry x_y in a circloid C is paired with u_v <′ x_y by the action of ẽ_i, the corresponding i + 1 in cell (x, y) of f(C) is paired with an i in cell (u, v) where either y > v, or y = v and x > u; this is equivalent to x_y >′ u_v. Consider the graph on F(γ, ·) where an i-colored edge connects f and f̂ when f̂ = φ_i(f); when each filling is replaced by its reading word b, this is the crystal graph B(1)^n. In particular, a crystal morphism Φ : B(γ) → B(1)^n is given by Φ(C) = b, where b is the reading word obtained by reading down the columns of the filling f(C) from right to left. The weight function on B(γ) maps a circloid C to its shape by (13).
A highest weight C ∈ B(γ) satisfies ẽ_i(C) = 0 for all i if and only if each entry x_y in row i + 1 of T = ι(C) ∈ CT(µ, ·) pairs with an entry u_v <′ x_y in row i. Rearranging the rows of T so that they are co-prismatically non-decreasing results in co-prismatically increasing columns. Note that µ must be a partition, for if a sector i + 1 of C has more entries than sector i, then C has an unpaired entry and is not annihilated.
The circloid crystal captures a formula for the q = 1 Macdonald polynomials which is perfectly aligned with the long-standing formula for q = 0 given by Lascoux–Schützenberger. Proof. We have seen that each b ∈ B(D) corresponds to a filling f_b with maj_{λ′}(D) = maj(f_b). Theorem 28 then gives the expansion of H̃_λ(x; 1, t) over the components B(D). The claim follows by recalling that maj(f_b) = cocharge(f⁻¹(f_b)) and that Φ is a morphism of crystals.
From this, it is not difficult to rederive Macdonald's formula taken over standard tableaux.
where T_i is the subtableau of T restricted to the letters in {µ′_1 + · · · + µ′_{i−1} + 1, . . . , µ′_1 + · · · + µ′_i}.
Proof. Given a prismatic column increasing circloid C ∈ C(·, µ), replace each entry i_c of C with the letter i + Σ_{j<c} µ′_j. The condition on C that x < u, or x = u and y < v, for any x_y above u_v implies that letters are strictly increasing in the columns of the tabloid T. Since the computation of cocharge on a circloid independently calculates cocharge on the standard subwords of a given color, n(µ) − cocharge(T) = cocharge(C).
Double crystal structure
Characterization of the doubly graded irreducible decomposition of Garsia–Haiman modules presents major obstacles. Although the identification of fillings with elements of B(1)^n given by column reading yields connected components constant on the maj-statistic, it is incompatible with the inversion triples. Even the subset of vertices with zero inversion triples is not a connected component. The crystal cannot be applied to (3), even when q = 0, to gain insight on the bi-graded decomposition of R_µ(x, y) into its irreducible components. However, a double crystal structure using dual Knuth relations (jeu-de-taquin) and ẽ_i, f̃_i operators on colored tabloids can be applied to the Garsia–Procesi modules. Double crystal structures on B(µ_1) ⊗ · · · ⊗ B(µ_ℓ) have been studied in various contexts [vL01, Shi, Las03], but without regard to graded modules.
For any composition γ of length ℓ, we consider a crystal B†(γ) on vertices CT(·, γ) which is dual to B(γ). An i-colored edge is prescribed by a sliding operation defined on an inflation of rows i and i + 1 in a colored tabloid. The i-inflation of a vertex b ∈ B†(γ) is defined by spacing out the colored letters in row i of b while preserving their relative order as follows: entries e are taken from west to east from row i of b and placed in the leftmost empty cell of row i without an entry e′ < e directly above it. The operator e†_i on b ∈ B† is then defined by a jeu-de-taquin sliding action whereby the largest entry of row i + 1 in the i-inflation of b which lies immediately above an empty cell is swapped with this empty cell, after which all empty cells are removed. When no empty cell lies in row i, e†_i(b) = 0. It is convenient to define the inflation of a vertex b ∈ B†(γ) as the punctured colored tabloid obtained by inflating the rows of b in succession from top to bottom. Note that the inflation of b has entries increasing (in prismatic order) up columns.
Definition 36. For any γ ⊨ n, let B†(γ) be the graph on vertices CT(·, γ) with a directed, i-colored edge from e†_i(b) to b whenever e†_i(b) ≠ 0.

We will establish that B†(γ) is a crystal graph doubly related to B(1)^n = B(1) ⊗ · · · ⊗ B(1) by the companion map. For each γ ⊨ n, define the map c_γ on B(1)^n by c_γ(b) = c(f_b), where f_b is the unique filling of shape γ whose reading word is b.

Theorem 37. For any γ ⊨ n, the map c_γ is an isomorphism from the crystal graph B(1)^n to B†(γ).

Proof. Since the image of the companion bijection c on F(γ, ·) is the set of vertices in B†(γ), the map c_γ is a bijection between B(1) ⊗ · · · ⊗ B(1) and the vertices of B†(γ). To check that edges match, we need to prove that c_γ(ẽ_i(b)) = e†_i(c_γ(b)) for all b ∈ B(1) ⊗ · · · ⊗ B(1). We will show that an entry x_y in row i + 1 of the i-inflation of c_γ(b) has an empty cell under it if and only if the corresponding i + 1 in cell (x, y) of b is unpaired. The proof then follows because sliding the rightmost such x_y down to row i is the equivalent of changing the leftmost unpaired i + 1 to an i in b.
Suppose that x_y lies above an empty cell in the i-inflation of c_γ(b). Then each u_v < x_y in row i is paired with an entry of row i + 1 lying after it. The entry x_y corresponds to an i + 1 in cell (x, y) of b, and for every i appearing afterward, there is a distinct i + 1 appearing between it and cell (x, y). Therefore the i + 1 in cell (x, y) is unpaired.
Conversely, suppose that an i + 1 in cell (x, y) of b is unpaired. Then every i appearing afterward is paired with an i + 1 that appears between it and cell (x, y). Therefore each such i contributes an entry of row i of the inflation lying after x_y, and we are guaranteed that there is an empty cell under x_y when we i-inflate c_γ(b).
The highest weights of B(1)^n are defined by the Yamanouchi property and thus their companions are prismatic column increasing by Proposition 17. Alternatively, e†_i annihilates a colored tabloid T when there is no entry in row i + 1 of the inflation of T above an empty square in row i. In particular, T is its own inflation and thus has prismatic increasing columns.
Garsia-Procesi modules
As a first application, we show how the graded irreducible decomposition of a Garsia–Procesi module is readily apparent in the crystal B†. For this, we need only the induced subposet B†_0(γ) on the restricted set of reverse-colored vertices in the crystal graph B†(γ).
Proposition 38. For a composition γ of length ℓ, the companion map is a crystal isomorphism B(γ_1) ⊗ · · · ⊗ B(γ_ℓ) ≅ B†_0(γ). The highest weights of B†_0(γ) are (reverse-colored) semi-standard tableaux of weight γ.
Proof. Suppose e†_i acts nontrivially on a reverse-colored vertex b; then there is a (largest) unpaired x_y in row i + 1 of the inflation of b above an empty cell. The definition of inflation thus implies x_{y−1} cannot lie in row i. Therefore, the image of a reverse colored tabloid under the crystal action remains reverse colored, since the action merely slides x_y into row i. That is, each connected component in B†_0 is a connected component of B†. We then note that the set of vertices of B†_0 is in bijection with B(γ_1) × · · · × B(γ_ℓ), since reverse-colored tabloids of weight γ are the companion images of tabloids with shape γ by Proposition 19. In turn, B(γ_1) × · · · × B(γ_ℓ) is defined as an induced subposet of B(1)^n, allowing us to apply Theorem 37 to establish the isomorphism. The highest weights of B†(γ) are the colored tabloids with prismatically increasing columns by Theorem 37. Thus, a highest weight element b has entries x_y above x_{y′} in the same column only when y < y′. However, if b is also reverse colored, then y > y′, and therefore its letters increase up columns.
Define the faithful recoloring of b ∈ B † to be the colored tabloid b 0 obtained by stripping b of its colors and then faithfully coloring its letters.
Lemma 39. For µ ⊢ n and b ∈ B†(µ) with the property that letters increase up columns of the inflation of b,

cocharge(b_0) = Σ_{w ∈ Z(b)} maj(w),

where Z(b) is a set of λ_1 words extracted from the letters of b = (b_1, . . . , b_ℓ). The first word w = w_{λ′_1} · · · w_1 is constructed by selecting w_1 to be the smallest entry in b_1. Iteratively, w_i is selected to be the smallest entry larger than w_{i−1} in b_i (breaking ties by taking the easternmost). If there is no larger entry available, the smallest entry in b_i is selected instead. The first word w is fully constructed after a letter has been selected from b_ℓ. The remaining words of Z(b) are constructed by the same process, ignoring previously selected letters of b.
Proof. For b ∈ B(λ_1) ⊗ · · · ⊗ B(λ_ℓ), consider first the case that the tabloid f_b contains an x in row i and a letter z > x in row i + 1. For the colored tabloid T = c(f_b), there is an i in row x of T, and z is the lowest row above x with an i + 1. If f_b does not have any z > x in row i + 1, instead take z ≤ x to be the minimal entry in row i + 1 and note that z is the lowest row in T containing an i + 1. More generally, for the i-th word w = w_{λ′_i} · · · w_1 in Z(b), w_j records the row containing j_i in T. Since each w_{j+1} > w_j contributes λ′_i − j to the maj, and each j + 1 higher than j contributes the same to cocharge(T), the claim follows.
Theorem 44. For any partition λ,

H̃_λ(x; 0, t) = Σ_b t^{zmaj(b)} s_{wt(b)},

where the sum is over the highest weight elements b of B(λ_1) ⊗ · · · ⊗ B(λ_ℓ).

Proof. The expansion (21) of H̃_λ(x; 0, t) over elements of B† can be converted to one involving B(λ_1) ⊗ · · · ⊗ B(λ_ℓ) by Proposition 38. As reviewed in (17), the highest weights of B = B(λ_1) ⊗ · · · ⊗ B(λ_ℓ) are characterized by the Yamanouchi condition. Corollary 43 implies that zmaj is constant on connected components of B, which are in correspondence with Schur functions indexed by the highest weights.
Zero inversion map on Macdonald fillings
The faithful recoloring of vertices in B†_0 not only gives a formula for the energy function, it exposes an identification between inversionless fillings and tabloids used in [HHL05a]. Define the map s on a filling F simply by rearranging the entries in each row into non-decreasing order from west to east. The inverse of s is defined in [HHL05a] (proof of Proposition 7.1) by uniquely constructing an inversionless filling from a collection of multisets, m = {m_1, m_2, . . . , m_k}. The unique placement of entries from m_i into row i of f so that inv(f) = 0 requires first that the entries of m_1 are put into the bottom row in non-decreasing order, from west to east. Proceeding to the next row r = 2, the letters of m_r are placed in columns c from west to east as follows: an empty cell (r, c) is filled with the smallest value that is larger than the entry in (r − 1, c). If there are no values remaining that are larger than that in (r − 1, c), the smallest available value is chosen. The filling f arises from iteration on rows and, by construction, f has no inversion triples.
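The construction of s⁻¹ from [HHL05a] described above is algorithmic; here is a Python sketch under our conventions (the multisets m_1, m_2, . . . listed bottom row first, with the shape assumed to be a partition so that each cell sits above a cell of the row below).

```python
def inversionless_filling(multisets):
    """Unique inversionless filling with prescribed row multisets.

    The bottom row is sorted weakly increasing; each higher row is
    filled column by column, west to east, with the smallest remaining
    value exceeding the entry directly below, falling back to the
    smallest remaining value when nothing larger is left.
    """
    rows = [sorted(multisets[0])]
    for r in range(1, len(multisets)):
        avail = sorted(multisets[r])
        row = []
        for _ in range(len(sorted(multisets[r]))):
            below = rows[r - 1][len(row)]          # entry in cell (r-1, c)
            bigger = [v for v in avail if v > below]
            pick = bigger[0] if bigger else avail[0]
            avail.remove(pick)
            row.append(pick)
        rows.append(row)
    return rows
```

For example, with m_1 = {1, 1, 2} and m_2 = {1, 2}, the second row receives 2 over the first 1 and then, no larger value remaining, a 1 over the second 1.
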
In fact, s −1 ( f ) is none other than the companion preimage of the faithful recoloring of f 's companion. That is, the companions of f and s( f ) are both manifestations of the same tabloid, one is reverse colored and the other is faithful.
Proposition 45. For any tabloid f ∈ T (γ, ·), Proof. For any tabloid f , s −1 ( f ) and f differ only by the rearrangement of entries within rows. Thus, by definition of companions, T ′ = c(s −1 ( f )) and T = c( f ) differ only by their colorings. Since f is a tabloid, T is reverse colored by Proposition 19 and since s −1 ( f ) is inversionless, T ′ is faithfully colored by Proposition 23. Each of these colorings is uniquely defined on the manifest tabloid and the claim follows.
K-theoretic implications
To give a flavor of how circloid crystals fit into K-theoretic Schubert calculus, consider tabloids with the property that their conjugate is also a tabloid. Such a filling is called a reverse plane partition. If the weight of a reverse plane partition is defined to be the vector α where α i records the number of columns containing an i, the weight generating functions are representatives for K-homology classes of the Grassmannian: for skew partition ν/λ, g ν/λ (x) = Σ r∈RPP(ν/λ,·) x weight(r) .
In this respect, repeated entries in a column of the reverse plane partition r are superfluous, motivating us to instead identify r with the tabloid f obtained by deleting any letter that is not the topmost in its column and then left-justifying the letters in each row. The inflated shape of a tabloid f is defined to be the shape of its inflation. This recovers the shape of the reverse plane partition r from which f came.
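The collapse from a reverse plane partition to its tabloid can be sketched as follows. This is a toy Python version under assumed conventions (straight shape, rows listed top to bottom, entries weakly increasing down each column); the function name rpp_to_tabloid is ours:

```python
def rpp_to_tabloid(rpp):
    """Keep only the topmost copy of each letter in every column of a
    reverse plane partition, then left-justify each row.  Since column
    entries weakly increase downward, an entry is the topmost copy in
    its column exactly when it differs from the cell directly above."""
    tabloid = []
    for r, row in enumerate(rpp):
        kept = []
        for c, x in enumerate(row):
            above = rpp[r - 1][c] if r > 0 and c < len(rpp[r - 1]) else None
            if x != above:           # x is the topmost copy in its column
                kept.append(x)
        tabloid.append(kept)         # left-justified by construction
    return tabloid
```

For instance, rpp_to_tabloid([[1, 1], [1, 2]]) drops the repeated 1 in the first column, giving [[1, 1], [2]].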
Proposition 46. For skew partition ν/λ, Proof. For any composition γ, since B = B(γ 1 ) ⊗ · · · ⊗ B(γ ℓ ) is a crystal graph under ẽ i , it suffices to show that the inflated shape of f ∈ T (γ, ·) is the same as the inflated shape of ẽ i ( f ). From this, the induced subposet of B on vertices with fixed inflated shape is also a crystal and the Schur expansion comes from the highest weights.
Consider a filling f and f ′ = ẽ i ( f ) differing by only one letter i + 1 changed to an i in some row r. The shape of the inflation of f can differ from that of f ′ only if there is an i + 1 in row r + 1 lying above an empty cell. However, since the leftmost unpaired i + 1 in f lies in row r, the process of pairing ensures that every i + 1 in row r + 1 is paired with an i in row r. Therefore, there are more i's in row r of f than there are i + 1's in row r + 1 and the inflation of f must have an entry smaller than i + 1 below the rightmost i + 1 in row r + 1.
An expression for the Schur expansion of g ν/λ over a sum of semi-standard tableaux arises as a corollary of Proposition 46 by applying the companion map to (24) and using Corollary 21. Such an expression opens up the study of problems in K-theoretic Schubert calculus to the classical theory of tableaux. For example, a simple bijective proof of the K-theoretic Littlewood-Richardson rule was given in [LMS17] using this approach.
Quasi-symmetric expansion
It is not difficult to use dual equivalence graphs instead of crystals to deduce our previous results. For this, we formulate Macdonald polynomials using colored words, without mention of shape, in terms of Gessel's fundamental quasisymmetric function. Defined for any S ⊆ [n], let Betrayal and cocharge are defined on a colored word w by computing these statistics on the circloid C obtained by writing the entries of w counterclockwise on a circle. Since the shape of a circloid has no bearing on the statistics, it suffices to write each entry in its own sector so that C has shape (1 ℓ(w) ). Proof. Given a fixed colored word w of weight µ, consider the set of weak compositions β for which shape(w) = β. For each such β, Des(w) ⊆ set(β). Therefore, a unique circloid C ∈ C(β, µ) is obtained by counterclockwise inscribing the entries of w on a circle, separated into sectors of sizes β ℓ , . . . , β 1 . Since the computation of cocharge and betrayal of the circloid C does not involve its shape, every C arising in this way satisfies cocharge(C) = cocharge(w) and betrayal(C) = betrayal(w). Theorem 12 can thus be rewritten as The claim follows by noting that Σ β:sh(w)=β x β = Σ γ|=ℓ(w), n−Des(w)⊆set(γ) Σ i 1 <···<i ℓ(γ) x γ 1 i 1 · · · x γ ℓ(γ) i ℓ(γ) .
Mycoplasma hominis brain abscess following uterus curettage: a case report
Introduction Mycoplasma hominis is mostly known for causing urogenital infections. However, it has rarely been described as an agent of brain abscess. Case presentation We describe a case of M. hominis brain abscess in a 41-year-old Caucasian woman following uterus curettage. The diagnosis was obtained by 16S rDNA amplification, cloning and sequencing from the abscess pus, and confirmed by a specifically designed real-time polymerase chain reaction assay. Conclusions Findings from our patient's case suggest that M. hominis should be considered as a potential agent of brain abscess, especially following uterine manipulation.
Introduction
Brain abscess is a life-threatening condition resulting from the invasion of brain tissue by microorganisms. Current microbiological documentation, mostly based on direct examination and culture of pus specimens, may underestimate the role of fastidious microorganisms in brain abscess [1]. Among these, Mycoplasma hominis has rarely been reported [2][3][4][5][6][7]. M. hominis is a fastidious, slow-growing bacterium that is a commensal of the genitourinary tract of healthy adults. It mostly causes urogenital infections but may also cause extra-genital infections [8,9]. Infections caused by Mycoplasma sp. require specific antibiotic treatment. Because these organisms lack a cell wall and do not synthesize folic acid, they are resistant to antibiotics that target either the cell wall or folic acid synthesis [10]. In particular, they are naturally resistant to β-lactams, which in combination with metronidazole have been recommended as empirical treatment of bacterial brain abscesses [11]. In contrast, M. hominis is sensitive to antibiotics that inhibit protein synthesis, including tetracyclines [12]. In addition, this bacterium cannot be Gram stained and requires specific culture media. However, molecular methods have been successfully used to detect M. hominis in human samples [13].
Case presentation
In 2006, a previously healthy, 41-year-old Caucasian pregnant woman was admitted to our hospital with vertigo, severe headache, and left hemiparesis. She had no relevant medical history except two previous normal pregnancies and deliveries. A computed tomography (CT) scan and MRI scan of the brain identified a right fronto-parietal hematoma. The hematoma was surgically drained. Ten days later, at 22 weeks of gestation, our patient underwent an early spontaneous miscarriage that required uterine curettage, complicated by significant metrorrhagia. Three days after the miscarriage, our patient developed obnubilation, and subsequently coma. New cerebral CT and MRI scans revealed a fronto-parietal brain abscess. The abscess was surgically removed, and purulent material was sent to our laboratory. As a nosocomial infection was suspected, empirical intravenous treatment combining vancomycin (2 g/day) and meropenem (6 g/day) was started. Gram staining of the abscess specimen showed numerous polymorphonuclear leukocytes but no microorganism. The specimen was then plated onto 5% sheep blood agar and chocolate agar (BioMérieux, Marcy L'Etoile, France) and incubated at 37°C under aerobic, anaerobic, and microaerophilic conditions for 10 days. Plates were examined daily but no growth was observed. For molecular detection, DNA was extracted from the pus sample using the MagNA Pure LC DNA isolation kit II and the MagNA Pure LC instrument as recommended by the manufacturer (Roche, Meylan, France). Amplification and sequencing of the 16S rDNA gene were performed using broad-range primers as previously described [14]. By comparison with sequences from GenBank, the sequence obtained from the polymerase chain reaction (PCR) product (1,475 bp) was 100% identical to that of M. hominis (GenBank accession number AF443616). As a consequence, the antibiotic treatment was changed to doxycycline, 200 mg/day for 12 weeks. Our patient recovered rapidly.
On follow-up, she remained asymptomatic six months after the discontinuation of antibiotics. In order to determine whether the infection was monomicrobial or polymicrobial, the PCR amplicon was subsequently cloned into Escherichia coli using the pGEM-T Easy Vector System (Promega, Charbonnières, France). A total of 100 clones were analyzed by sequencing. Only 16S rDNA from M. hominis was detected in the 100 clones. The identification of M. hominis in our patient and the previously published cases motivated the development of a specific real-time PCR (RT-PCR) assay for this bacterium. 16S rDNA was selected as the target. Using the Primer Express software (Applied Biosystems), specific primers and probes were designed as follows: MHMGB16Sd , and 4 μL of water. DNA was amplified using the following cycling parameters: heating at 50°C for 2 minutes, and then at 95°C for 15 minutes, followed by 50 cycles of a two-stage temperature profile of 95°C for one second and 60°C for 45 seconds. The specificity of the primers and probes was tested using BLAST http://blast.ncbi.nlm.nih.gov/ and by attempting to amplify DNA from 24 distinct Mycoplasma species. The system was found to be specific to M. hominis, as no amplification was obtained from any other mycoplasmal or human DNA. For our patient, positive amplification was obtained after 22 PCR cycles. Negative controls remained negative.
Discussion
M. hominis frequently colonizes the lower genitourinary tract of women [15]. Host predisposing factors such as immunosuppression, malignancy, trauma, and manipulation or surgery of the genitourinary tract are considered risk factors for extra-genital infections. It has notably been demonstrated that hematogenous spread of mycoplasmas may follow urinary tract catheterization or lithiasis [16]. To the best of our knowledge, M. hominis has previously been reported in only six patients as a cause of brain abscess [2][3][4][5][6][7] (Table 1). In the three female patients, M. hominis infection complicated a traumatic or spontaneous brain hematoma in the context of a normal vaginal or cesarean delivery [2,3,7]. In the two male adult patients, the M. hominis infection complicated a head trauma in the context of urinary tract catheterization [4,5]. In female patients, the most likely source of M. hominis was the genital tract, whereas it was the urinary tract in men. The most recent patient, a three-week-old baby, most likely acquired the M. hominis infection from passage through the maternal birth canal [6]. In our patient, we assume that the source of infection was the genital tract, as our patient underwent uterine curettage. It should be noted that in most cases, M. hominis superinfected a brain hematoma. By searching the literature for other cases of M. hominis infection of hematomas, we found six articles describing patients who had developed infection of abdominal, peri-nephric, thigh or retroperitoneal hematomas following genitourinary invasive procedures [17][18][19][20][21][22] (Table 2). In an additional patient, infection complicated a peri-hepatic hematoma, but the origin of infection was not identified [23]. Therefore, M. hominis appears to have a particular propensity for superinfecting hematomas, in particular following invasive procedures of the genitourinary tract.
In addition, as previously reported [4], bacterial culture and Gram staining results remained negative. M. hominis was only detected by PCR. Furthermore, in an effort to reduce the diagnostic delay, we developed a specific RT-PCR for M. hominis. This test provides a rapid alternative not only to culture but also to broad-range 16S rRNA PCR and sequencing, and may enable rapid adaptation of antibiotic treatment.
Consent
Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
Fetomaternal outcome in pregnancies with reproductive tract anomalies
Background: Congenital reproductive tract anomalies result from abnormal formation, fusion or resorption of the mullerian ducts during fetal life. Pregnancies with reproductive tract anomalies are known to have a higher incidence of spontaneous abortions, fetal malpresentations, preterm labour, preterm premature rupture of membranes and an increased cesarean section rate. The present study was conducted to describe the fetal and maternal outcomes among pregnant women with uncorrected reproductive tract anomalies in a tertiary care centre in Manipur, India. Methods: A hospital-based cross-sectional study was conducted among pregnant women with uncorrected reproductive tract anomalies at the Regional Institute of Medical Sciences, Imphal, Manipur, India, from September 2018 to August 2020. Results: A total of 62 pregnant women with uterine anomalies were included in the study. Bicornuate uterus was the most common uterine anomaly (45.2%), followed by arcuate uterus (19.3%). Cesarean section was conducted in 72.6% of the pregnant women, and its major indication was fetal malpresentation (breech). Maternal complications were present in 56.5% of the pregnancies and fetal complications in 27.4% of the newborns. Conclusions: The current study has shown a significant association between uterine anomalies and maternal and fetal complications, including premature rupture of membranes, fetal malpresentation and an increased caesarean section rate. Further studies involving a bigger sample size will help in understanding the problem better and hence in the prevention of complications in the future.
INTRODUCTION
Normal development of the female reproductive tract involves a series of complex processes characterized by the differentiation, migration, fusion, and subsequent canalization of the mullerian system. 1 Congenital reproductive tract anomalies result from abnormal formation, fusion or resorption of the mullerian ducts during fetal life. 2 These abnormalities are often caused by errors in organogenesis, but other etiologies including deficiencies in steroidogenesis, receptor defects and genetic abnormalities are also involved. 3 The overall incidence of uterine or mullerian anomalies is estimated to be 4% of all women, while in the Indian population this incidence is around 0.36%. 4,5 In the general fertile population, the frequency of mullerian anomalies is 5%, and in the infertile population it is 3%. 4 Recurrent miscarriages occur in 5-10% of cases. 6 The prevalence of congenital uterine malformations is approximately 5-25% in women with adverse pregnancy outcomes and up to 25% in women with late first- or second-trimester pregnancy loss or preterm delivery. 7,8 The wide range of difference in the prevalence rate is presumably because of the use of different classification systems and non-uniform diagnostic tests. Uterine anomalies are associated with diminished cavity size, insufficient musculature, impaired ability to distend, abnormal myometrial and cervical function, inadequate vascularity and abnormal endometrial development. 11 In many patients, these reproductive tract anomalies have been associated with primary or secondary infertility, spontaneous abortions, recurrent pregnancy loss, prematurity, ectopic pregnancies, malpresentations, intrauterine growth retardation and intrapartum uterine rupture, which increase fetal morbidity and mortality. Authors who have found an association between uterine anomalies and preterm birth opine that diminished muscle mass, particularly in a unicornuate uterus, plays an important role in the mechanism of preterm delivery.
A combination of two-dimensional (2D) ultrasound, hysteroscopy and/or laparoscopy is the most widely used method for the traditional diagnosis of müllerian anomalies. 13 Three-dimensional (3D) ultrasound has been recognized recently as another standard for the diagnosis of müllerian anomalies. 14,15 With the advent of better diagnostic and treatment modalities like transvaginal sonography, hysterosalpingography and laparoscopy, the reproductive outcomes have improved in cases of congenital uterine anomalies. However, mullerian anomalies remain an incidental diagnosis in the majority of cases in India. This may be attributed to the limited-resource setup in India and the lack of health-seeking attitude among infertile and reproductively challenged couples. Hence this study was undertaken to determine the perinatal outcomes in pregnant women with uncorrected reproductive tract anomalies in a tertiary care centre in Manipur.
Study design, population and duration
A hospital-based cross-sectional analytical study was conducted among pregnant women with uncorrected reproductive tract anomalies at the Regional Institute of Medical Sciences (RIMS), Imphal, Manipur. All pregnant women with uncorrected uterine anomalies diagnosed by transvaginal ultrasound and/or hysterosalpingography, admitted on an emergency or OPD basis, were included. The study was conducted over a period of two years, from September 2018 to August 2020.
Inclusion criteria
The inclusion criterion for the current study was pregnant women with uncorrected uterine anomalies and singleton pregnancies.
Exclusion criteria
Exclusion criteria for the current study were patients with previously corrected uterine anomalies, multiple pregnancies, known congenital and/or chromosomal fetal anomalies, and those not willing to participate in the study.
Study procedure
After obtaining permission from the institutional ethics committee and informed consent from the participants, the patients were subjected to detailed history-taking and clinical examination. Detailed history included age, menstrual history, parity, history of previous pregnancies (recurrent abortions, preterm delivery, etc.), family history, gestational age, uterine scar from caesarean section, etc. Ultrasonography and/or hysterosalpingography findings were recorded (ectopic pregnancy, abnormal placentation, malpresentation, etc.). The patients were then followed up till delivery to record the final outcome: abortion, preterm delivery, PROM, obstructed labour, vaginal delivery or caesarean section, malpresentation, IUGR, etc.
Working definitions
Reproductive tract anomalies: Abnormal formation, fusion or resorption of the mullerian ducts during fetal life. 2

Abortion: termination of pregnancy before 20 weeks gestation or with a fetus weighing less than 500 g. 16

Recurrent abortions: occurrence of three or more consecutive spontaneous abortions before 28 weeks of gestation. 16

Preterm labour: onset of labour prior to the completion of 37 weeks of gestation and after the attainment of the period of viability. 17

PROM (premature rupture of membranes): spontaneous rupture of membranes before the onset of regular uterine contractions at or after 37 weeks of gestation. 17

IUGR (intrauterine growth restriction): failure of a fetus to reach its genetic growth potential in utero, putting it at risk of perinatal mortality and morbidity. 17
RESULTS
A total of 62 pregnant women with uterine anomalies were included in the study. The mean age of the participants was 27.2 (5.1) years, with a minimum of 19 years and a maximum of 40 years. 38.7% of the pregnant women were primigravida and 9.7% were grand multipara (Table 1). The period of gestation was ≤37 weeks in 40.3% of the pregnant women. Recurrent abortions were noted in 9.7% of the study participants. Among the 62 pregnant women with uterine anomalies, there were 28 cases of bicornuate uterus (45.2%), 12 cases of arcuate uterus (19.3%), 9 cases of septate uterus (14.5%), 8 cases of unicornuate uterus (12.9%) and 5 cases of uterus didelphys (8.1%) (Figure 2). 48.4% of the fetuses presented in breech while 38.7% were in cephalic presentation (Figure 3). Transverse lie was noted in 12.9% of the cases. Nearly three-fourths (72.6%) of the cases were delivered by caesarean section (Figure 4).
Maternal complications were present in 56.5% (95% CI: 43.3%-68.8%) of the pregnant women (Table 2). Fetal complications were present in 27.4% (95% CI: 17.2%-40.4%) of the newborns (Table 3). PROM was the most common maternal complication, noted in 22.6% of the participants, followed by gestational hypertension, recurrent abortions and preterm labour, which were found in 11.3%, 9.7% and 9.7%, respectively. Among fetal complications, birth asphyxia and prematurity were present in 9.7% each among the newborns. Neonatal sepsis and meconium aspiration were present in 4.8% each. There was no maternal or neonatal death (Figure 5). *Others include anaemia and hypothyroidism. #Multiple complications possible.
Figure 5: Fetal and maternal complications among the study participants (n=62).
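As a side note, the confidence intervals reported above can be approximately reproduced from the implied counts (35/62 women with maternal complications, 17/62 newborns with fetal complications). The sketch below uses the Wilson score interval; the paper does not state which interval method was used, so this reconstruction is an assumption and matches the reported bounds only approximately:

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Maternal complications, 35 of 62 women: roughly 44%-68%,
# close to the reported 95% CI of 43.3%-68.8%.
lo, hi = wilson_ci(35, 62)
```

The slight mismatch with the published bounds suggests the authors may have used an exact (Clopper-Pearson) interval, which is somewhat wider than the Wilson interval at this sample size.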
DISCUSSION

Bicornuate uterus was the most common uterine anomaly (45.2%), followed by arcuate uterus (19.3%), while septate uterus was present in 14.5% of the pregnant women. Surprisingly, these results are in contrast to the available literature, where septate uterus followed by bicornuate uterus was believed to be the most common uterine anomaly. 18,[22][23][24] In contrast, septate uterus is the anomaly most commonly associated with obstetrical complications. 25 Some other studies have reported bicornuate uterus followed by septate uterus to be the most common uterine anomaly. 26,27 The reason behind this difference is debatable and poorly understood. One possible reason could be the small sample size in most of the studies. 24 Similarly, miscarriages were the most common complication in a study conducted by Chan et al. 25 A meta-analysis documented a 2.89-fold increase in the relative risk of first-trimester abortions in mullerian anomalies, which is in line with our study findings. 25 The current study reported a very high caesarean section rate of 72.6%, which is comparable to the study by Raj et al, where the caesarean rate was 63.3%, as compared with 34.7% in Hua et al. 21,28 This high rate could be because, in most patients, the anomalies are diagnosed incidentally during pregnancy.
Various authors have put forward explanations for the mechanism of reproductive failure. Disorganization of the uterine stroma, along with high intrauterine pressure caused by an enlarging fetus, could lead to cervical incompetence and insufficient uterine expansion. Additionally, poor vascular arrangement in the anomalous uterine fundus will in turn fail to provide the necessary support to the growing fetus. These conditions could lead to pregnancy loss in the late first trimester and the second trimester. Accordingly, recurrent abortions were noted in 9.7% of our study participants. Birth asphyxia and prematurity were present in 9.7% each among the newborns, and neonatal sepsis and meconium aspiration were present in 4.8% each. 27.4% of the newborn babies were admitted to the NICU, while there was no neonatal mortality. The mean age was significantly higher among pregnant women with maternal complications than among those without (p=0.013). Pregnant women who were unbooked were found to have a significantly higher chance of maternal as well as fetal complications (p=0.001). Preterm mothers were found to be associated with more maternal complications. Similarly, low birth weight and low APGAR score were significantly associated with fetal complications (p<0.005). However, there was no association of maternal age, parity or gestational age with fetal complications (p>0.05) in our study.
CONCLUSION
A large number of uterine anomalies are detected routinely in reproductive medicine as practiced in current times. This increase is attributed more to the availability of better imaging techniques for the uterus than to an increase in the prevalence of such anomalies in the general female population. Reproductive tract anomalies remain an incidental diagnosis in the majority of cases in India, mainly due to the lack of health-care-seeking attitude among females with infertility coupled with the limited-resource setting, which in turn has resulted in adverse outcomes both for the mother and the newborn baby.
Subtle mullerian anomalies are difficult to diagnose. HSG gives a view of the endometrial cavity but does not visualize the fundus and the uterine contour, and it is invasive. 3D ultrasonography gives a fair idea about the external contour of the uterus but might fail to visualize some lateral fusion defects. MRI is the gold-standard diagnostic imaging modality, but it is not widely available in resource-constrained countries like India. One of the important limitations of our study is the small sample size owing to the time constraint; hence the results are generalizable only to similar settings. The cross-sectional nature of the study means that a cause-effect relationship cannot be ascertained. However, considering the scarcity of evidence on this subject in this part of the country, the present study is novel of its kind and could serve as a base for further studies. Further studies at a multicentre level with a longitudinal component will help in understanding the problem better and hence in the prevention of complications in the future.
Al-Ṭahṭāwī in Paris : Western Influence on Format and Style in Early Modern Arabic Travel Literature
This study investigates format and style in the first modern Arab travel source, Takhliṣ al-Ibriz fī Talkhiṣ Paris, written by Sheikh al-Ṭahṭāwī in the 19th century. During this century, the connection between the Eastern Self and the Western Other became closer and more immediate, culturally and politically, which undeniably impacted literature on both thematic and artistic levels. This paper addresses the extent to which al-Ṭahṭāwī's format and style were influenced by the Other and determines how these artistic aspects changed and became distinct from those of medieval travel literature.
INTRODUCTION
The nineteenth century is a significant era in which modern travelogue literature developed from the medieval model on both thematic and artistic levels, due to changes in colonial conditions and closer and stronger connections with the West. Accordingly, the prose of the travelogue Takhliṣ al-Ibriz fī Talkhiṣ Paris [The extraction of pure gold in the abridgement of Paris] will be the primary source for exploring this Western influence.
Takhliṣ includes an introduction, which contains four parts, and six essays, each of which includes various sections. The third essay, which contains thirteen sections and is titled "In the Description of Paris and Its Civilization", is the core of the book; indeed, al-Ṭahṭāwī (1801-1873) mentions in the introduction that this essay is the major purpose of his writing. Therefore, this essay contains voluminous and considerably detailed information, despite which, as he acknowledges, he cannot do justice to the whole scope and variety of this great city (Newman, 130). For this reason, this paper concentrates mainly on the thirteen sections of this third essay and partly on the fourth, fifth, and sixth essays in all their sections for analytical and critical investigation of the Western influence on format and style.
Before al-Ṭahṭāwī traveled to the West, he was educated by many of the prestigious scholars of al-Azhar al-Sharīf, a traditional school in Egypt which concentrates solely on Islamic and Arabic language studies. The educational system in such a school usually required a classical Arabic writing style because it relied on rote memorization, in which all students follow their 'Ālim Imam in learning. This method of learning is reflected in almost all sources published in the Arabic or Islamic domains before and during his lifetime. Sheikh Ḥassan al-'Aṭṭar (1766-1835) was one of the most significant scholars who taught al-Ṭahṭāwī the traditional forms of knowledge, as well as other kinds of knowledge that may not have been taught in al-Azhar at that particular time for religious reasons, such as history, geography, literature, etc. (Ḥasan, 22, 26). After traveling in the West, al-Ṭahṭāwī was greatly influenced by Western scientific and cultural ideas, which was reflected in his travelogue text. Such Western influence, along with his cultural background, produced an overlap in his writing style, language and content, in which we can grasp his fascination with and influence by the Other. The writing style and format are in fact closer to modern standard Arabic than to classical Arabic.
METHODOLOGY
The methodology on which this study is based is simultaneously analytical and descriptive. These approaches are useful for the critical reading of this voluminous source in deducing the Western impacts on style and format. As the source was translated by D. Newman in 2004, the discussion can be based on this translation.
STYLE AND FORMAT
The first Eastern-Western friction for al-Ṭahṭāwī was with the French language, which he started learning by rote. In his introduction, al-Ṭahṭāwī acknowledges that educational attainment in French is quite easy compared to Arabic because French, unlike Arabic, does not use homographs, synonyms, and complicated grammar. This explicit acknowledgement indicates the Western impact on his literary style, as he completely avoids the adoption of superfluous rhyme and homographs and utilizes facilitative language, as he mentions in his introduction (Newman, 100 & Lūqā, 158). Al-Ṭahṭāwī's tendency to use a classical Arabic writing style at the beginning of the Takhliṣ and modern standard Arabic during the middle and end characterizes his travelogue and distinguishes it from the previous travelogues available in the 19th century. At the beginning of the Takhliṣ, we can observe that al-Ṭahṭāwī utilizes the writing style of classical Arabic sources, where he starts with the traditional introduction that includes the statement of purpose of his travel, the mention of fellow travelers, and the narrative of the reasons for his travels. The Takhliṣ, like other sources before and during his time, starts with classical rhymed prose, religious textual quotations and poems. His utilization of the Basmala [an Arabic phrase meaning "in the name of God"], the phrase I'lam [know that], and the long titles at the beginning of the prefatory and introductory sections, along with the aforementioned notes, are all indicative of the traditional and cultural influences that were prevalent at this time (Lūqā, 90, 151). Furthermore, al-Ṭahṭāwī adopted rhyme in some contexts only in order to make them interesting to the implied reader, as he strictly avoided it in the context of discussing scientific themes and issues and in comparisons between the Self and the Other. In this way he dramatically moved his writing style in the Takhliṣ from the weak Arabic literary writing style of previous years, which involved
much embellishment with unfruitful purpose, into a more highly evolved stage, which represents the return of artistic merit in Arabic writing style ('Amārah, 119).
However, despite these signs of evolution in writing style, four different markers indicate that some features of the classical Arabic writing style remain present in the Takhliṣ. First, the Takhliṣ in its initial draft contained a great many linguistic and syntactic errors, and a number of colloquial terms and expressions that were poorly worded. Al-Ṭahṭāwī also included many derogatory words and terms for Islamic customs as well as others for Western people, like the term "disbelievers" instead of Christians. Such observations were highlighted by his mentor Jomard, who advised al-Ṭahṭāwī to make changes, which he did after he returned to his native land (Newman, 90; Lūqā, 141). Fair criticism of these mistakes can be tempered by two justifications. The first is that the primary reason for al-Ṭahṭāwī to accompany the Egyptian scientific mission to France was to serve as a Muslim religious leader (Imam) for the students (Newman, 38). Thus, he was a student and a religious leader; hence, he held two primary positions on the mission, and a secondary one, which was to record his observations on his travels in the West. Given the little time left to him by the two primary positions, he rapidly noted every new observation and comment for the secondary one; hence, it was natural for such issues to arise in his writing. The other justification is that al-Ṭahṭāwī aimed to follow a path of simplicity in the usage of language in the Takhliṣ, as he mentioned in the introduction, in order to enable people with varying levels of education to read his travelogue. However, his extreme attention to this led him to lapse into the vernacular alongside modern standard Arabic (Lūqā, 141; Newman, 100).
The second marker is that the Takhliṣ includes voluminous detail about the Other, which is entirely natural given the traveler's desire to be as comprehensive as he claimed to be in his introduction. However, several areas of the book contain a great deal of unhelpful digression and circumlocution. His excessive linguistic explanation and analysis of several French terms, for instance, disrupts the flow of the text. The reason behind this probably lies in the Arabic scholarly culture in which he was trained, where scholars of Arabic and Islamic studies often engage in a great deal of linguistic interpretation and analysis of new terms.
The third marker can be observed in al-Ṭahṭāwī's massive number of Arabic poem citations, which indicate his firmly held relationship with his culture and ancient heritage; however, these citations become redundant and tedious, because hardly a section or even a page of the Takhliṣ exists without some cited poems. Some justifications have been presented by Arab researchers such as 'Amārah, in Rifā'ah Al-Ṭahṭāwī Rā'id al-tanwīr fī al-'aṣr al-Ḥadīth [Rifā'ah Al-Ṭahṭāwī: The Pioneer of the Enlightenment in the Modern Era], which addressed al-Ṭahṭāwī's repeated usage of cited poems. As 'Amārah reminds us, one of al-Ṭahṭāwī's goals during his time in France was to be a religious leader; therefore, he felt he had to include these citations in his travelogue with his future students in mind. Although it is true that al-Ṭahṭāwī was a religious leader for the Egyptian students, a number of these citations were added by al-Ṭahṭāwī after he returned to Egypt. Moreover, any connection between the cited poems and the context in which they were used is tenuous (Newman, 91 & Lūqā); hence, reconsideration of such previous views is extremely significant. The last marker is seen in the sheer volume of repetition in the Takhliṣ. Al-Ṭahṭāwī aimed to describe the West with Eastern audiences in mind, so he demarcated his description of the Other in the Takhliṣ into several domains, as seen in the third essay of the source. However, due to his overambitious explanations in some areas, he fell into the trap of redundant repetition. The description of the Other's scientific and cultural knowledge, for instance, which should, according to his organization, be confined to the specified section, number thirteen of the third essay, also appears in the second section of the introduction, the second and thirteenth sections of the third essay, the fifth section of the fourth essay, and the whole of the sixth essay. Al-Ṭahṭāwī was conscious of the issue of repetition, as
he mentioned at the beginning of the fifth section of the fourth essay. However, instead of editing the material in its specified place, section thirteen, and adding the new information there, he simply listed the new information in addition (Ḥijāzī, 292, 332).
CONCLUSION
Al-Ṭahṭāwī's cultural background, along with his adoption of a Western literary writing style, generated a fusion of Arab and French literary styles in the Takhliṣ, which sets his writing apart from others before and during his time. Despite the similarity of the format at the beginning of the Takhliṣ to that of the medieval Arabic era, as he progressed, al-Ṭahṭāwī remarkably replaced his traditional literary writing style with a new Western one. He dispensed with the full use of classical Arabic language and the exaggerated rhyme apparent in the title of the Takhliṣ and in a few places in its opening sections, and departed from using long titles at the beginning of each section, bringing us gradually to more facilitative language, explanation, and description in a new framework. This, in itself, in terms of writing style, is a significant development that distinguishes the Takhliṣ from the travelogues previously available during his time.
|
2018-12-29T20:28:24.743Z
|
2018-02-01T00:00:00.000
|
{
"year": 2018,
"sha1": "2b0d0e5e052035eee048c5da1e260fa423ed16d6",
"oa_license": "CCBY",
"oa_url": "https://journals.aiac.org.au/index.php/alls/article/download/4099/3250",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2b0d0e5e052035eee048c5da1e260fa423ed16d6",
"s2fieldsofstudy": [
"History",
"Linguistics"
],
"extfieldsofstudy": [
"History"
]
}
|
30910732
|
pes2o/s2orc
|
v3-fos-license
|
Primary angiosarcoma of chest wall inside out: A rare presentation
1Department of Dermatology, Venereology & Leprology, SMGS Hospital, Government Medical College, University of Jammu, Jammu-180001 Jammu & Kashmir, India, 2Department of Pathology, SMGS Hospital, Government Medical College, University of Jammu, Jammu-180001 Jammu & Kashmir, India, 3Department of Radio-Diagnosis, SMGS Hospital, Government Medical College, University of Jammu, Jammu-180001 Jammu & Kashmir, India
INTRODUCTION
Angiosarcomas are a subtype of soft-tissue sarcomas and are aggressive, malignant endothelial tumors of vascular or lymphatic origin [1]. The clinical presentation of angiosarcoma varies depending upon the anatomic site involved. Angiosarcomas may occur in any region of the body but are more frequent in skin and soft tissue. They can also originate in the liver, breast, spleen, bone, or heart [2]. Angiosarcomas have a high propensity to metastasize or infiltrate other sites, with a substantial mortality rate [3]. Herein, we present the case of a 50-year-old man with primary angiosarcoma of the chest wall (PACW) infiltrating and colonising the skin. The interesting nature and paucity of documentation of this rare presentation in the world literature encouraged us to report the case.
CASE REPORT
A 50-year-old man presented with a 3-month history of painless vascular lesions over the left anterior chest wall. There was a preceding history of a skin-coloured, deep-seated nodular swelling in the mammary area, which over a period of 1 month developed an overlying erythematous exuberant growth. This was followed by the appearance of a similar lesion in the vicinity, along with small satellite lesions. The patient gave a history of ulceration and bleeding from both the larger lesions, with no history of any penetrating injury, breast malignancy or irradiation in the past. The rest of the medical history was not significant.
On examination, an indurated, erythemato-violaceous fungating growth with ulceration was present over the left anterior chest wall, deforming the nipple. A similar, smaller lesion was present over the supero-medial aspect of the main lesion. The surrounding normal skin showed multiple small discrete as well as coalescing papular lesions (Fig. 1). The lesion was non-tender, bled on gentle manipulation and was fixed to underlying structures. Regional lymph nodes were not enlarged.
General physical and systemic examination was unremarkable. All routine investigations were within normal limits. For histological characterisation, an incisional biopsy was obtained with differential diagnoses of cutaneous angiosarcoma, Kaposi's sarcoma, dermatofibrosarcoma protuberans and cutaneous lymphoma. Histopathology demonstrated a highly vascular lesion in the dermis composed of many variable-sized vascular spaces containing papillary configurations with RBCs, lined by pleomorphic cells with vesicular nuclei and prominent nucleoli. Abnormal mitotic figures were also seen (Fig. 2). The histological features were consistent with a malignant vascular tumor suggestive of angiosarcoma. Immunohistochemistry (IHC) could not be done due to limitation of resources. Chest X-ray showed an area of haziness in the left lower lung zone (Fig. 3), which on lateral view appeared to be an extrapulmonic chest wall mass. Ultrasound of the abdomen & pelvis was significant for pleural effusion. Contrast-enhanced computerized tomography (CECT) of the chest revealed a heterogeneous enhancing lesion measuring 5.1×5×2.5 cm with a non-enhancing central area of necrosis in the left anterior chest wall with muscular invasion. The lesion was also seen to infiltrate the overlying skin. An ill-defined enhancing nodule was seen medial to the main lesion in the parasternal area. Bilateral pleural-based soft tissue densities along with secondaries in both lung fields were also visualized (Fig. 4). CT was also significant for bilateral pleural effusion and subcentimetric mediastinal lymph nodes. Based on histopathology and imaging, a diagnosis of primary angiosarcoma of the chest wall with pulmonary metastasis was reached. The patient was referred to the surgery department for further management. As the patient presented at a very advanced stage (stage IV), surgery was not an option. He was started on chemotherapy in the oncology department but died within 3 months of starting treatment.
DISCUSSION
Angiosarcoma is a very rare mesenchymal tumor accounting for 1-2% of all soft-tissue sarcomas [4]. It can be primary, i.e., arising de novo, or secondary to irradiation, trauma or lymphedema [5]. The common sites affected are skin, soft tissue, liver, spleen, heart and breast [6]. Primary angiosarcoma of the chest wall can originate from bone, soft tissue or cartilage of the chest wall and accounts for 5% of all thoracic neoplasms [7]. Chronic lymphedema, irradiation and exposure to vinyl chloride, thorotrast, arsenic, anabolic steroids or foreign bodies are considered the main culprits [8].
None of these factors was present in our patient, who was diagnosed as a de novo case of primary angiosarcoma of the chest wall. The clinical presentation of PACW can vary from chest pain, dyspnoea, hemothorax or a chest wall mass to being totally asymptomatic [9] (as in our case).
In our patient, the first impression was of primary cutaneous angiosarcoma (PCA), but as the chest is not a common site for PCA, we considered other possibilities and further evaluated the patient. Histopathology was consistent with angiosarcoma. To rule out any metastasis to internal organs, chest X-ray, CECT and ultrasound of the abdomen and pelvis were requested. CECT revealed a heterogeneous enhancing mass in the left anterior chest wall with invasion into subcutaneous fat planes and infiltration of the overlying skin. Therefore, the diagnosis of chest wall angiosarcoma was reached by correlating clinical, radiological and pathological findings.
We did not come across any such presentation in the dermatological literature. Such rare clinical presentations may often confront dermatologists and dermatosurgeons. This emphasizes their role in keeping a high index of suspicion in such cases, and also the importance of referring the patient to other specialities relevant to the condition to expedite the diagnosis. This would lead to timely intervention and improved survival, as these tumors have a poor prognosis owing to their aggressive nature and associated mortality.
Gold Standard and Advances in the Treatment of Angiosarcomas
Angiosarcomas are very aggressive tumors, so treatment should be started early once the diagnosis is established. As regards therapy, it is important to underline that angiosarcoma treatment has to be planned by a multidisciplinary team of medical experts drawn from all the involved disciplines [10].
Surgical Intervention
In operable patients with localised disease, surgical resection (en bloc) with the aim of obtaining negative surgical margins is the primary mode of treatment [11].
Chemotherapy
Cytotoxic chemotherapy forms the cornerstone of treatment for locally advanced inoperable or metastatic angiosarcoma. The therapeutic goals are to achieve control over the disease, to stop or postpone disease progression and to achieve or maintain symptom control for prolonged periods of time [12].
Doxorubicin
The first-line chemotherapy for advanced, metastatic or non-resectable soft tissue sarcoma is based on anthracyclines, and the most frequently used compound is doxorubicin [13]. The response rate to doxorubicin as a single agent or in combination is reported to range between 40% and 65% [14,15]. The major adverse effect associated with doxorubicin is cardiomyopathy [12].
Ifosfamide
Ifosfamide, a cytotoxic alkylating agent, is used as a second-line drug when doxorubicin has failed or is contraindicated. Ifosfamide is usually given at a dose of 8-12 g/m² per cycle, equally fractionated as single doses over 3-5 days. It achieves results comparable to doxorubicin but is associated with a number of severe adverse effects, including leukopenia, neutropenia, renal toxicity and encephalopathy [12].
Paclitaxel
Paclitaxel has specific, exquisite efficacy in angiosarcoma [12]. The activity of paclitaxel in angiosarcoma has been confirmed by a phase II trial assessing the efficacy of a weekly paclitaxel regimen (80 mg/m² on days 1, 8 and 15 of a 21-day cycle) [16].
Gemcitabine
Only a few anecdotal responses to gemcitabine monotherapy have been reported in angiosarcoma previously treated with anthracyclines and paclitaxel.Tolerability of gemcitabine plus docetaxel is fair, with less cardiac toxicity compared with anthracyclines [17,18].
Pazopanib
Pazopanib, a tyrosine kinase inhibitor, is the first non-chemotherapeutic anticancer agent approved by regulatory authorities for soft tissue sarcoma. Pazopanib acts by interfering with the vascular endothelial growth factor and platelet-derived growth factor pathways. The approval of this oral anti-angiogenic agent is based on the EORTC trial 62072 (PALETTE) [23,24].
Radiation Therapy
Preoperative radiation followed by resection is advised in borderline resectable cases. The dose recommendations include 45 to 50 Gy for undissected subclinical disease, 60 to 65 Gy for a postoperative tumor bed with positive microscopic margins, and 70 to 75 Gy for gross disease [27]. Adjuvant radiation with or without chemotherapy is indicated for patients with high-grade STS [stage II-III; American Joint Committee on Cancer, Cancer Staging Manual, Seventh Edition (2010)]. Alternatively, these modalities may be delivered preoperatively to reduce tumor size or improve resectability, particularly in potentially resectable cases or when there are concerns about adverse functional outcomes after surgery [28].
International Therapeutics Guidelines
A number of academic organisations have published guidelines for the treatment of inoperable, advanced, metastatic sarcoma.
European society for medical oncology (ESMO)
ESMO guidelines recommend as first-line treatment anthracyclines as a single agent or in combination with ifosfamide, or single-agent ifosfamide if there are specific contraindications. Second-line treatments include ifosfamide at standard doses if patients have not previously been treated with this agent during first-line treatment. A high-dose ifosfamide schedule is recommended by ESMO if the drug had previously been used at a lower dose [29].
British sarcoma group (BSG)
BSG recommends single-agent doxorubicin or ifosfamide, or doxorubicin and ifosfamide, in the first-line setting. It recommends second-line treatment with either ifosfamide, trabectedin, gemcitabine and docetaxel, or the older drug dacarbazine [13].
National comprehensive cancer network (NCCN)
NCCN guidelines recommend paclitaxel and bevacizumab as treatment options for patients with angiosarcoma [28].
There have been some promising developments in the areas of immunotherapy, vaccine therapy, adoptive immunotherapy, immune synapse blockade and antibody therapy in soft tissue sarcomas, but these mostly remain experimental. Current clinical experience with these agents/regimens is too limited to draw any conclusions [30].
The major drawback is the paucity of randomised trials, with only a few retrospective case series or case reports, all suggesting that, among soft tissue sarcomas, angiosarcoma appears to be more sensitive to cytotoxic chemotherapy. The rarity of angiosarcoma represents a major limitation to conducting randomised trials [1,10].
Despite all therapeutic efforts, the patient's prognosis is still unfavourable [31,32]. As our patient presented at a very advanced stage, surgery was not an option. He was started on chemotherapy with cytotoxic drugs but unfortunately succumbed to the disease within three months of starting the treatment.
CONCLUSION
Primary angiosarcoma of the chest wall remains a rare medical condition, and its invasion of the skin is an exceptionally rare phenomenon. Our case illustrates the insidious nature and cutaneous invasive potential of angiosarcoma, with diagnosis only in late stages, which harbours the worst prognosis, with death within a few months. Medical practitioners in general and dermatologists in particular are likely to encounter such presentations of internal diseases manifesting in the skin, which therefore serves as a window to systemic diseases.
Figure 3: Chest X-ray showing area of haziness in left lower lung zone (arrow).
Figure 4: CECT chest showing: (a) A heterogeneous enhancing lesion with non-enhancing central area of necrosis in left chest wall with muscular invasion, infiltration into the overlying skin & two small well-defined enhancing nodules medial to the main lesion (arrows); (b) Bilateral pleural-based soft tissue densities along with multiple secondaries in both lung fields & bilateral pleural effusion (arrows).
|
2017-10-27T13:59:56.243Z
|
2017-07-03T00:00:00.000
|
{
"year": 2017,
"sha1": "1e60c9d0163e7364f83ff22620489e8d6f29014e",
"oa_license": "CCBY",
"oa_url": "http://www.odermatol.com/odermatology/20173/19.Primary-MushtaqS.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1e60c9d0163e7364f83ff22620489e8d6f29014e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
257377582
|
pes2o/s2orc
|
v3-fos-license
|
Predicting healthcare professionals’ acceptance towards electronic personal health record systems in a resource-limited setting: using modified technology acceptance model
Objectives Personal health record systems allow users to manage their health information in a confidential manner. However, there is little evidence about healthcare providers’ intentions to use such technologies in resource-limited settings. Therefore, this study aimed to assess predicting healthcare providers’ acceptance of electronic personal health record systems. Methods An institutional-based cross-sectional study was conducted from 19 July to 23 August 2022 at teaching hospitals in the Amhara regional state of Ethiopia. A total of 638 health professionals participated in the study. Simple random sampling techniques were used to select the study participants. Structural equation modelling analysis was employed using AMOS V.26 software. Result Perceived ease of use had a significant effect on the intention to use electronic personal health records (β=0.377, p<0.01), perceived usefulness (β=0.104, p<0.05) and attitude (β=0.204, p<0.01); perceived ease of use and information technology experience had a significant effect on perceived usefulness (β=0.077, p<0.05); and digital literacy (β=0.087, p<0.05) and attitude (β=0.361, p<0.01) also had a strong effect on intention to use electronic personal health records. The relationship between perceived ease of use and the intention to use was mediated by attitude (β=0.076, p<0.01). Conclusion Perceived ease of use, attitude and digital literacy had a significant effect on the intention to use electronic personal health records. The perceived ease of use had a greater influence on the intention to use electronic personal health record systems. Thus, capacity building and technical support could enhance health providers’ acceptance of using electronic personal health records in Ethiopia.
INTRODUCTION
Over the past decades, a variety of eHealth technologies have been accessible as nations have implemented eHealth efforts to support the objectives for health education and person-centred care. 1 Adoption of personal health records (PHRs) has been linked to numerous advantages, including improved patient-provider relationships,
WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ Adopting a sustainable electronic personal health record (ePHR) system in Ethiopia is challenged by a lack of top-level commitment and a physician-led aversion to using the system.
⇒ A personal health record system is a crucial intervention for various health management purposes.
⇒ For effective implementation of a personal health record system, considering acceptance to use the system is crucial.
WHAT THIS STUDY ADDS
⇒ This study introduces a modified technology acceptance model.
⇒ This study assessed users' acceptance in Ethiopia, which aided in the development of a locally relevant automated record system for Ethiopia's healthcare system improvement.
⇒ The results of this study were used as input to design and test the effectiveness of a locally developed record system in Ethiopia.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ The findings may alleviate any concerns about the acceptance of personal health records, and because there is limited evidence on the acceptance of personal health records, it serves as a baseline for researchers in a resource-limited setting. Practically, this study offers insights for policymakers, developers, managers and decision-makers in the healthcare industry to improve the use and acceptability of the ePHRs.
patient engagement improvements, better medication adherence, good health outcomes (such as blood pressure and glycaemic management) and higher organisational efficiencies. 2 Even though PHRs are intended to be consumer-oriented tools, simply understanding the consumer's perspective is not enough. Although these issues have gotten less attention, consumer PHR use has significant consequences for healthcare providers and delivery systems as well. 3 The value that consumers obtain from using a PHR will probably be directly influenced by the acceptance and behaviours of healthcare professionals and team members within the context of the clinical setting, despite the fact that PHRs have received a lot of attention as tools to help consumers. 4 Although electronic PHRs (ePHRs) have a great deal of potential to enhance healthcare, there are obstacles to their widespread implementation. 5 Despite general agreement on the advantages of ePHRs, healthcare professionals have not been made aware of or receptive to this technology. 6 According to preliminary research findings in the literature, patient adoption of a PHR may be influenced by provider endorsement, and continuing physician involvement in patient PHR use may be necessary to achieve and maintain predicted good health outcomes. 7 Providing proper control for patient information disclosure and finding out how to process potentially enormous amounts of self-reported data within the constrained time allotted for the clinical visit are healthcare providers' tasks. 8 In Ethiopia, eHealth has been developing slowly. Technology problems, a lack of government support and budget over-runs are a few of the reasons for this slow progress. 9 Given this, ePHRs are not widely used, and accessible electronic records are likewise slow to catch on.
The most significant factors influencing health providers' support for a national patient portal were expected positive influences on their work, the usability of the portal and benefits for the patients, according to a study conducted in Finland with a wide range of health providers (including nurses, pharmacists, health officers, doctors, physical therapists and psychologists). 10 In Ethiopia, there is little evidence about healthcare professionals' acceptance of using PHR systems to change the current healthcare system through eHealth technologies. The study may have effects on practice, policy and future research. Accordingly, this study investigates, introduces and empirically tests a modified theoretical model based on the technology acceptance model (TAM) to identify the main factors influencing healthcare professionals' acceptance of using ePHRs.
Theoretical background and hypothesis
Several models have been used to predict factors associated with the acceptance of health information system technologies. 11 12 The TAM is primarily applied at the individual level (but can also be applied in organisational settings), whereas the Unified Theory of Acceptance and Use of Technology 2 is primarily applied at the organisational level. The TAM is one of the most used models and focuses on factors influencing end users' behavioural intentions to use new technologies. 13 14 Perceived usefulness (PU) and perceived ease of use (PEU) are considered to be the main factors that either directly or indirectly determine behavioural intentions to use or embrace new technology in the TAM. 13 15 In this study, we included information technology and digital literacy components to measure the behavioural intention of health professionals to use ePHRs in low-resource settings. Since the actual use of ePHRs in the setting was unclear, the construct 'actual use' was not used (figure 1).
The following parts provide an explanation of the research question hypotheses that were produced for this study's examination based on our model, which we adopted.
Perceived usefulness
PU describes how much users believe the new technology will help them in their jobs, and studies showed that PU influences acceptance of using ePHRs. 15 16 Based on those findings, the following hypotheses were tested.
H1: PU has a positive influence on the user's attitudes towards ePHRs.
H2: PU has a positive influence on intention to use ePHRs.
H3: PU mediates the relationship between PEU and attitude towards ePHRs.
H4: PU mediates the relationship between information technology experience (ITE) and attitude towards ePHRs.
Perceived ease of use
PEU is the degree to which a person believes that using technology will be simple and easy, and studies showed that PEU influences acceptance of using ePHRs. 13 15 16 The following hypotheses were tested.
H5: PEU has a positive influence on the perceived usefulness of ePHRs.
H6: PEU has a positive influence on the user's attitudes towards ePHRs.
H7: PEU has a positive influence on intention to use ePHRs.
Attitude
Attitude exhibits how individuals' thoughts towards a new technology affect their feelings and behaviour, and studies showed that attitude influences acceptance of using ePHRs. 13 16 This study tests the following hypotheses:
H8: attitude towards eHealth positively influences intention to use ePHRs.
H9: attitude mediates the relationship between PU and intention to use ePHRs.
H10: attitude mediates the relationship between PEU and intention to use ePHRs.
H11: attitude mediates the relationship between ITE and intention to use ePHRs.
H12: attitude mediates the relationship between digital literacy and intention to use ePHRs.
Information technology experience
ITE focuses on the information technology expertise of healthcare professionals, their exposure to technology and their comprehension of its fundamental advantages, and studies showed that ITE influences acceptance of using ePHRs. 13 17 This study tests the following hypotheses:
H13: healthcare providers' ITE has a positive influence on users' PU of ePHRs.
H14: healthcare providers' ITE has a positive influence on attitude towards ePHRs.
H15: healthcare providers' ITE has a positive influence on intention to use ePHRs.
Digital literacy
Digital literacy describes a person's capacity to seek, evaluate, and communicate information using writing and other media across a range of digital platforms, and influences acceptance of using ePHRs. 18 The following hypotheses were examined in this study:
H16: healthcare providers' digital literacy has a positive influence on attitudes toward ePHRs.
H17: healthcare providers' digital literacy has a positive influence on intention to use ePHRs.
Study design and setting
An institution-based cross-sectional study was employed to determine health professionals' acceptance of using ePHRs and its predictors at the University of Gondar and Tibebe-Ghion Specialized Teaching Hospitals in Amhara regional state, Ethiopia, from 19 July to 23 August 2022.
Study participants and sample size determination
All healthcare professionals who worked in Amhara regional state teaching hospitals were the source population, whereas healthcare professionals who worked in Amhara regional state teaching hospitals during the study period were the study population. A 1:10 ratio of respondents to free parameters to be estimated was suggested for the estimation of sample size based on the number of free parameters in the hypothetical model. 19 As a result, taking a participants-to-free-parameters ratio of 10, a non-response rate of 10% and the 58 parameters estimated from the hypothesised model, a final sample size of 638 was calculated.
Table 1: Mediating effects of attitude (AT) and perceived usefulness (PU), and predicting health professionals' acceptance of using electronic personal health record systems in a resource-limited setting, 2022
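As a rough illustration, the sample-size arithmetic above (58 free parameters, 10 respondents per parameter, inflated by 10% for non-response) can be sketched in Python; the function name and the rounding-up choice are our own, not from the study:

```python
import math

def sem_sample_size(free_parameters, ratio=10, non_response_rate=0.10):
    """N:q rule of thumb for SEM: 'ratio' respondents per free parameter,
    then inflate for the anticipated non-response and round up."""
    base = free_parameters * ratio                 # 58 * 10 = 580
    adjusted = base + base * non_response_rate     # 580 + 58 = 638.0
    return math.ceil(adjusted)

print(sem_sample_size(58))  # 638, matching the study's calculated sample size
```

With 58 free parameters this reproduces the 638 participants reported in the text.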
Sampling procedure
Participants in the study were selected from the University of Gondar and Tibebe-Ghion Specialized Teaching Hospital, located in the northwestern part of the Amhara regional state of Ethiopia, using a simple random sampling method.
Data collection tools, procedures and data quality control
In this study, we applied a standard questionnaire adapted from the original instrument developed in Davis's study and previous studies of the modified TAM. 13 15 16 18 The questionnaire consists of sociodemographics, TAM constructs (PU, PEU, attitude and behavioural intention), and additional elements of ITE and digital literacy. The constructs were measured using a 5-point Likert scale, in which 1 denotes 'strongly disagree' and 5 denotes 'strongly agree'. 20 The survey was a self-administered questionnaire. Two days of training were given to data collectors and supervisors. Pretesting of the questionnaire was conducted among 10% of the total study participants outside the study. After obtaining feedback from the respondents, language experts modified the wording of the questions, and the internal consistency of the items was verified using the Cronbach's alpha coefficient, composite reliability and standardised loadings. The three tests' results showed that all of the items' scores were above the standard, so the main data collection proceeded.
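To make the internal-consistency check concrete, here is a minimal Cronbach's alpha computation in Python; the item scores below are hypothetical and only illustrate the formula, not the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k items scored by n respondents.

    items: list of k lists, each holding one item's scores across respondents.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    using population (n-denominator) variances throughout.
    """
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 5-point Likert answers: 3 items, 4 respondents each
scores = [[4, 5, 3, 4],
          [4, 4, 3, 5],
          [5, 5, 2, 4]]
print(round(cronbach_alpha(scores), 2))  # 0.82, above the 0.70 threshold
```

Values of 0.70 or above, as required in this study, are conventionally taken to indicate acceptable reliability.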
Data processing and analysis
To analyse descriptive data, respondents' data were entered into Epi-info V.7 and exported to SPSS V.25. Model constructs were assessed by the structural equation modelling analysis using Analysis of Moment Structure (AMOS) V.26 software.
Confirmatory factor analysis with standardised values was applied to test the measurement model. To examine goodness of fit, we used the χ²/df ratio (≤5), Tucker-Lewis index (TLI >0.9), comparative fit index (CFI >0.9), goodness-of-fit index (GFI >0.9), adjusted GFI (AGFI >0.8), root mean square error of approximation (RMSEA <0.08) and standardised root mean squared residual (SRMR <0.08). 16 Construct reliability was evaluated using Cronbach's alpha and composite reliability, with each construct in the study reaching the necessary threshold of 0.70. 21 Convergent validity was determined using the average variance extracted (AVE), requiring values greater than 0.5 and item loadings greater than 0.6, and discriminant validity was examined using the Fornell-Larcker criterion: the square root of the AVE for a particular construct must be greater than its correlation with all other constructs. 16 The relationships between exogenous and endogenous variables were assessed using squared multiple correlations (R²), critical ratios and path coefficients to test the structural model. The statistical significance of the predictors was determined using 95% CIs and a p value of <0.05.
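For concreteness, composite reliability and AVE can be computed directly from standardized loadings (the loadings below are hypothetical, not taken from the paper):

```python
def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    # AVE = mean of squared standardized loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

pu_loadings = [0.82, 0.79, 0.85, 0.77]  # hypothetical loadings for one construct
cr = composite_reliability(pu_loadings)
ave = average_variance_extracted(pu_loadings)
print(round(cr, 3), round(ave, 3))  # -> 0.883 0.653, above the 0.70 / 0.50 cut-offs
```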
As indicated in table 1, each of the model's six potential mediation paths was examined for its effect size and significance. When a construct's direct, indirect and total effects are all significant, this is referred to as partial mediation; when the indirect effect is significant but the direct effect is not, this is referred to as full mediation. In general, a significant indirect effect with a p value of <0.05 was taken to confirm mediation.
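The mediation test can be illustrated with a percentile bootstrap of the indirect effect a×b on simulated data, a common alternative to AMOS's built-in bootstrap (variable names and effect sizes here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
peu = rng.normal(size=n)                               # predictor (PEU)
att = 0.5 * peu + rng.normal(size=n)                   # mediator (attitude), a-path = 0.5
intent = 0.4 * att + 0.2 * peu + rng.normal(size=n)    # outcome, b-path = 0.4

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                         # slope of M ~ X
    design = np.column_stack([np.ones_like(x), m, x])  # Y ~ 1 + M + X
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]   # partial slope of M
    return a * b

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)                        # resample cases with replacement
    boot.append(indirect_effect(peu[idx], att[idx], intent[idx]))
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{ci_lo:.3f}, {ci_hi:.3f}]")  # CI excluding 0 -> mediation
```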
RESULTS
Sociodemographic characteristics of healthcare professionals
A total of 638 study subjects were included in the study; 610 of them (response rate: 95.61%) gave their consent and responded to the questions. Of the 610 respondents, 344 (56.4%) were male, and about half (313; 51.3%) had 3 years of work experience or less. Two hundred eighty-seven (47%) of the respondents had a Bachelor of Science degree, around two-thirds (67.5%) had used social media and around three-quarters (455; 74.6%) had taken basic computer training. In addition, the median age of the respondents was 31.5 (IQR: 27-38) years (table 2).
Measurement model
Reliability and validity of the constructs
As shown in table 3, under the Fornell-Larcker criterion the square root of the AVE of each construct (the bolded diagonal values) was higher than the other values in its column and row. As a result, the discriminant validity of the model's constructs was established. Table 4 demonstrates that all of the constructs had Cronbach's alpha and composite reliability scores above 0.70, AVE scores >0.5 and factor loadings >0.6. Therefore, there was substantial convergent validity for all of the constructs.
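The Fornell-Larcker check reduces to comparing each construct's √AVE with its largest inter-construct correlation; a sketch with hypothetical values (not the paper's table 3):

```python
import numpy as np

ave = np.array([0.64, 0.61, 0.58, 0.66])       # hypothetical AVEs: PU, PEU, AT, BI
corr = np.array([                               # hypothetical inter-construct correlations
    [1.00, 0.52, 0.48, 0.55],
    [0.52, 1.00, 0.44, 0.50],
    [0.48, 0.44, 1.00, 0.47],
    [0.55, 0.50, 0.47, 1.00],
])

def fornell_larcker_ok(ave, corr):
    root_ave = np.sqrt(ave)                     # the "bolded diagonal" values
    off_diag = corr - np.diag(np.diag(corr))    # zero out the diagonal
    # each construct's sqrt(AVE) must exceed every correlation in its row/column
    return bool(np.all(root_ave > off_diag.max(axis=0)))

print(fornell_larcker_ok(ave, corr))  # -> True: discriminant validity holds
```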
According to the findings, PU, PEU, ITE, and digital literacy accounted for 55% and 68% of the variance in attitude and intention to use ePHR systems, respectively (figure 2).
Mediation effects
The results shown in table 1 indicate that the relationship between PEU and the intention to use an ePHR was mediated by attitude. Both the regression coefficient between attitude and intention to use ePHRs and that between PEU and attitude were statistically significant. The standardised value of the indirect effect was 0.076, and this indirect effect was statistically significant.
DISCUSSION
This study aimed to introduce a modified TAM and determine the factors influencing health providers' acceptance of ePHR systems in Ethiopia. As a result, H5, H6, H7, H8, H10, H13 and H17 were supported. The results showed that PEU had both direct and indirect effects on the intention to accept an ePHR, with favourable direct influences on PU and attitude. This indicates that when healthcare professionals judged the system simple or easy to use, their perceptions of its usefulness, their attitude and their intention to use it improved greatly. This result is consistent with results from research conducted in various other nations. 13 22 23 This may be because the effort required to operate the system has a significant impact on a person's attitude toward and acceptance of ePHR systems: the easier the system is to use, the more readily people are inclined to adopt it. To achieve long-term acceptance, the system should therefore be easy for healthcare practitioners to use.
Health professionals' attitude favourably influenced their intention to use ePHR systems. This result is consistent with findings from related studies conducted in other settings. 1 24 25 This may be because new systems can frustrate medical professionals unless they hold a settled, favourable opinion of ePHR systems. Actions such as providing computers at work, continuing training and support, and sharing knowledge about eHealth technology might therefore be prioritised in order to improve attitudes.
Digital literacy directly and positively affected the intention to use ePHRs. This shows that if healthcare workers were digitally literate, their intention to adopt the technology would improve. The outcome is consistent with research done in various nations. 5 9 26 A possible explanation is that digital literacy plays an essential, even prerequisite, role in the adoption of digital health technologies.
Users' perceptions of the usefulness of ePHRs were positively influenced by healthcare practitioners' ITE. This demonstrates that healthcare professionals' perceptions of the significance of new technology improve when they acquire information technology skills and experience in real working environments. This result is consistent with research from other settings. 13 17 The reason may be that experienced users are confident in their technical understanding, can perform the tasks required of them and readily see how the technology can improve care. Consequently, for end users in resource-limited environments to accept new technology and recognise its value, they need extensive hands-on experience with it.
Limitations and future research
As a cross-sectional survey, the study may be subject to social desirability bias. The quantitative findings were not triangulated with qualitative data, and the study included only teaching hospitals, omitting private, primary and general hospitals. Future studies could therefore employ a mixed-methods approach that combines qualitative and quantitative techniques, include a broader range of facilities and add external variables that may influence the acceptance of ePHRs, to gain a deeper understanding and allow more precise generalisation of the findings.
CONCLUSION
This study examined the factors affecting healthcare professionals' acceptance of ePHRs in Ethiopia. The proposed model has strong predictive ability, explaining 68% of the variance in intention. PEU, attitude and digital literacy had significant effects on the intention to use ePHRs, and PEU also significantly affected PU and attitude; healthcare providers' ITE likewise had a significant effect on PU. PEU had the greatest impact on healthcare providers' acceptance of ePHR systems. Thus, capacity building and technical support could enhance health providers' acceptance of ePHRs in Ethiopia.
Surgical management of symptomatic vertebral hemangiomas: A case report and literature review
Background: Vertebral hemangiomas (VHs) are common benign tumors that only rarely become symptomatic. There is a paucity of data regarding their surgical management and outcomes. Here, we reported a case involving an aggressive cervical VH, discussed its surgical management and outcomes, and reviewed the literature. Methods: We assessed the clinical, radiological, and surgical outcomes for a patient with an aggressive cervical VH. We also performed a systematic review of the literature according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines to describe surgical outcomes for symptomatic VH. Results: A total of 154 studies including 535 patients with VH were included in the study. The majority of patients were female (62.8%), the average age was 43 years, and the thoracic spine was most commonly involved (80.6%). Utilizing Odom’s criteria, outcomes were excellent in 81.7% (95% CI 73.2–90.2) of cases. For those presenting with myelopathy (P = 0.045) or focal neurological deficits (P = 0.018), outcomes were less likely to be excellent. Preoperative embolization was not associated with excellent outcome (P = 0.328). Conclusion: Surgical outcomes for VH are predominantly favorable, but aggressive VHs have the potential to cause significant residual postoperative neurological morbidity.
INTRODUCTION
Vertebral hemangiomas (VHs) are benign vascular tumors composed of capillaries and venous structures. They have a prevalence of 10-12%, are usually asymptomatic, and rarely require surgery. [7] However, approximately 1% of VHs demonstrate aggressive features including damage to the surrounding bone and soft tissue with subsequent spinal cord and/or nerve root compression. [5,6] Due to their rarity, the optimal surgical management of VHs and predictors of postoperative outcomes are not well defined. Here, we review an unusual case of a multilevel cervical VH and systematically review the literature regarding their management and outcomes.
Illustrative case
Presentation
A 14-year-old male presented with 12 weeks of mechanical neck pain, hand weakness, and distal upper extremity paresthesias. MRI revealed abnormal anterior/posterior bony element enhancement from C4-C6 with subtle epidural enhancement [Figure 1]. CT revealed a severe osteolytic fracture of the C5 body with retropulsion into the canal and osteolysis of the C4 and C6 endplates. After a fall, the patient became quadriplegic with 0/5 strength below C7 and a sensory level at the chest. The second CT demonstrated a Grade 4 retrolisthesis at C5-C6 with severe spinal canal stenosis [Figure 1c].
Operation
The patient underwent an emergent C3-C7 anterior corpectomy with fusion. The C4, C5, and C6 vertebral bodies were removed en bloc and an expandable cage was inserted into the corpectomy defect, followed by a plate spanning C3-C7. The posterior longitudinal ligament was abnormally vascular. An additional C4-C6 laminectomy with posterior C2-T3 fusion was performed in the same setting.
Pathological findings
The histologic evaluation of the lesion [Figure 2] demonstrated marrow space replacement with thin-walled vessels, which surrounded bony trabeculae and focally eroded mature cortical bone, consistent with hemangioma. There was no evidence of malignancy.
Postoperative course
The patient did not improve neurologically and was discharged to an inpatient rehabilitation center. Postoperative CT showed adequate decompression of the spinal canal, reduction of the C5 angulation, and correction of the kyphosis [Figure 3]. The 5-month follow-up X-ray showed a stable construct [Figure 3b], but he failed to regain neurological function.
METHODS
A systematic review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. MEDLINE, EMBASE, and Scopus were searched using the terms "vertebral hemangioma" OR "spine hemangioma." All case reports and series reporting outcomes after open surgery with or without instrumentation for symptomatic VH were included. Outcomes were categorized as excellent, good, fair, or poor according to Odom's criteria. A random effects model was used to calculate the pooled rate of an excellent outcome. Weighted least squares regression was used to identify factors associated with an excellent outcome.
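One way to realize the random-effects pooling of an "excellent outcome" proportion is the DerSimonian-Laird estimator; the sketch below uses invented study counts, not the 154 included studies:

```python
import math

# hypothetical (events, total) per study: excellent outcomes after VH surgery
studies = [(18, 20), (25, 33), (9, 12), (40, 45), (30, 41)]

def dersimonian_laird(studies):
    p = [e / n for e, n in studies]
    # within-study variance of a proportion; small guard for p near 0 or 1
    v = [max(pi * (1 - pi), 1e-6) / n for pi, (e, n) in zip(p, studies)]
    w = [1 / vi for vi in v]
    fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    q = sum(wi * (pi - fixed) ** 2 for wi, pi in zip(w, p))   # heterogeneity statistic
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c)             # between-study variance
    w_re = [1 / (vi + tau2) for vi in v]                      # random-effects weights
    pooled = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

pooled, lo, hi = dersimonian_laird(studies)
print(f"pooled excellent rate: {pooled:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```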
DISCUSSION
Our case is unique for several reasons. Less than 6% of the cases in our review involved three or more levels, as ours did. Only 3% involved the cervical spine. Presentation in a pediatric patient is also rare, with <15 cases reported worldwide. To the best of our knowledge, the only other report of a pediatric patient with an aggressive, symptomatic, multilevel VH involved the thoracic spine. [3] The 535 patients undergoing surgery in this review displayed characteristics similar to those seen in other VH series. We identified a female predominance, which may be due to the growth-stimulating effect of progesterone on the hemangioma. The median age in our analysis was 43 years, which is similar to other cohorts. The most common surgical technique was decompression without fusion. This was likely due to VHs' predilection for the thoracic spine, which permits more aggressive decompression without instrumentation as the ribs provide stability. Although not included in our review, percutaneous vertebroplasty and radiotherapy are reasonable treatment options for patients presenting with only axial pain. [2,10] Both myelopathy and neurologic deficit at presentation were negatively associated with excellent outcomes, suggesting that injury to neural elements is less likely to be reversed by surgery. Thus, early surgery before the development of neurologic compromise may be indicated for symptomatic VHs, especially if they display an epidural component. Further, there are multiple reports describing rapid progression of neurologic deficits. [1,4,8,9,11-13] Although percutaneous or endovascular embolization of symptomatic VH has been proposed as a standalone treatment, we favor surgery to relieve mass effect and increase the chance of a favorable outcome.
CONCLUSION
VHs can cause significant neurologic morbidity, and outcomes after surgical intervention are predominantly favorable. The presence of preoperative myelopathy and neurologic deficit was negatively associated with an excellent outcome, suggesting that a prophylactic approach to symptomatic VHs is warranted.
Declaration of patient consent
Patient's consent not required as the patient's identity is not disclosed or compromised.
Regional differences in antihyperglycemic medication are not explained by individual socioeconomic status, regional deprivation, and regional health care services. Observational results from the German DIAB-CORE consortium
Aims This population-based study sought to extend knowledge on factors explaining regional differences in type 2 diabetes mellitus medication patterns in Germany. Methods Individual baseline and follow-up data from four regional population-based German cohort studies (SHIP [northeast], CARLA [east], HNR [west], KORA [south]) conducted between 1997 and 2010 were pooled and merged with both data on regional deprivation and regional health care services. To analyze regional differences in any or newer anti-hyperglycemic medication, medication prevalence ratios (PRs) were estimated using multivariable Poisson regression models with a robust error variance adjusted gradually for individual and regional variables. Results The study population consisted of 1,437 people aged 45 to 74 years at baseline, (corresponding to 49 to 83 years at follow-up) with self-reported type 2 diabetes. The prevalence of receiving any anti-hyperglycemic medication was 16% higher in KORA (PR 1.16 [1.08–1.25]), 10% higher in CARLA (1.10 [1.01–1.18]), and 7% higher in SHIP (PR 1.07 [1.00–1.15]) than in HNR. The prevalence of receiving newer anti-hyperglycemic medication was 49% higher in KORA (1.49 [1.09–2.05]), 41% higher in CARLA (1.41 [1.02–1.96]) and 1% higher in SHIP (1.01 [0.72–1.41]) than in HNR, respectively. After gradual adjustment for individual variables, regional deprivation and health care services, the effects only changed slightly. Conclusions Neither comprehensive individual factors including socioeconomic status nor regional deprivation or indicators of regional health care services were able to sufficiently explain regional differences in anti-hyperglycemic treatment in Germany. To understand the underlying causes, further research is needed.
Introduction
Diabetes has been proclaimed to be one of the most challenging health problems of the 21st century [1]. In a Germany-wide survey, the prevalence of known type 2 diabetes mellitus was estimated to be 7.2% in 2012 [2]. However, regional prevalence estimates showed a southwest-to-northeast gradient of type 2 diabetes prevalence, with the lowest prevalence in the south (KORA S4; 5.8%) and the highest estimates in the east (CARLA; 12.0%) [3]. As revealed in further analyses, regional differences in type 2 diabetes mellitus prevalence were not solely attributable to individual characteristics: regional deprivation at the municipality and district level as well as the neighborhood unemployment rate turned out to influence type 2 diabetes prevalence independently [4-7].
Regional differences were also observed in terms of type 2 diabetes mellitus therapy and outcomes and regional deprivation turned out to be an additional independent factor of growing importance for health care utilization and outcomes [8]. In a systematic review summarizing the results of 21 studies published between January 2002 and December 2011, Grintsova et al. pointed out that people with low socioeconomic status (SES) tended to receive worse diabetes care (e.g. low frequency of HbA1c measurement) and have poorer intermediate diabetes outcomes [8]. Living in deprived areas was associated with less frequent achievement of glycemic control targets, a trend towards higher blood pressure and worse lipid profile control. These results were confirmed by a recently published population based study from North Karelia Finland [9].
Regional differences in Medicare reimbursement per patient in 2014 were found in the U.S. with almost two-fold higher expenditures in Florida compared to Alaska [10]. In Germany, Schipf et al. analyzed the regional prevalence of anti-hyperglycemic medication [3] among participants from five population-based studies and found variations between 75.4% (HNR baseline study) and 86.3% (KORA S4). To explain these differences, another study investigated associations with participants' individual characteristics including socioeconomic status [11] based on data from two of these regional studies (KORA F4, HNR follow-up study). However, despite considering a wide selection of covariates, among them education, body mass index, blood pressure, comorbidity, health insurance status, family status, and lifestyle measures, regional differences in any and newer antihyperglycemic medication (mainly introduced around the year 2000) could not be explained.
The aim of this study was to extend knowledge on factors explaining regional antihyperglycemic medication patterns. Therefore, individual baseline and follow-up data from four regional population-based studies in Germany were analyzed and complemented by the German Index of Multiple Deprivation [4,5] and indicators of regional health care services.
Research design and methods
The current study is based on the study methods and contents of Tamayo et al. [11] adding further study regions, significantly increasing the study population, and extending the study period.
Data sources and description of variables
2.1.1 Regional studies and study population. Baseline and follow-up data from four regional population-based cohort studies carried out in Germany were included ( Table 1).
All study methods were approved by the Ethics Committee of the Medical Faculty of the Martin-Luther-University Halle-Wittenberg and by the State Data Privacy Commissioner of Saxony-Anhalt (CARLA), by the Medical Ethics Committee of the University of Greifswald (SHIP), by the Ethics Committee of the Bavarian Medical Association (KORA), and by the local institutional ethics committees (baseline: Medical Faculty, University of Essen; follow-up: Medical Faculty, University of Duisburg-Essen) (Heinz Nixdorf Recall (HNR)). Primary study data of interest were pooled and frequencies compared.
To increase comparability, participants' age was limited to 45 to 74 years at baseline, corresponding to 49 to 83 years at follow up. A further inclusion criterion was having type 2 diabetes mellitus at baseline or follow-up examination (defined as self-reported physician's diagnosis of diabetes and age at diabetes onset of at least 30 years) resulting in the final study population of 1,437 participants (Fig 1, Table 1).
Anthropometry, laboratory, comorbidity.
Furthermore, data on body mass index [kg/m²] (calculated from measured weight and height), systolic and diastolic blood pressure [mmHg] (mean of the second and third measurements taken by trained personnel using validated automatic devices), and HbA1c [%, mmol/mol] (included despite different assessment methods to account for its confounding effect in stratified analyses) were included, as well as data on comorbidity (self-reported history of medically confirmed stroke or myocardial infarction) and intake of cardiovascular medication (ATC C).
Lifestyle.
Lifestyle was described using the following components: self-reported smoking status (divided into "current smokers", i.e., ≥1 cigarette/day, vs. "never-smokers" or "ex-smokers", i.e., previously smoking ≥1 cigarette/day but having quit >1 year ago) and alcohol consumption ("high-risk": >20/40 g/day in women/men [18,19], calculated from self-reported weekly consumption of beer, wine, and liquor [19]).
2.1.5 Individual sociodemographic and socioeconomic variables. Family status was approximated using the dichotomous variable "living with a partner" (yes/no). Educational level was defined based on the highest schooling degree achieved ("low": no schooling degree; "intermediate": junior high school attendance or secondary school certificate [corresponding to at least 8 completed years of schooling]; "high": higher secondary schooling graduation [corresponding to at least 12 completed years of schooling]). Furthermore, the highest level of vocational qualification achieved was included ("low": no vocational qualification; "intermediate": apprenticeship or completed vocational, technical or master school; "high": university degree; "other": other vocational qualification). Net household income per month and household size were used to calculate equivalent income (income/household size^0.36), as suggested in the Luxembourg Income Study and used in earlier studies of the DIAB-CORE consortium [5,11].
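The equivalent-income transformation reduces to a one-liner (the exponent 0.36 is the value stated above; the income figure is illustrative):

```python
def equivalent_income(net_income: float, household_size: int, exponent: float = 0.36) -> float:
    """Equivalence-scale income: net household income / household_size**exponent."""
    return net_income / household_size ** exponent

# a four-person household with EUR 3000/month net income
print(round(equivalent_income(3000, 4), 2))
```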
2.1.6 Regional deprivation. Individual participant data were supplemented by the German Index of Multiple Deprivation (GIMD), already used in a number of studies [4,5]. The index, constructed from official statistics (here mostly from 2006), exists at municipality and district level and comprises seven deprivation domains (income, employment, education, municipal/district revenue, social capital, environment, security), with higher values representing more deprived areas. In this study, the index was used at the district level.
2.1.7 Regional health care services. Indicators of regional health care services on district level were collected from various sources between 1997 and 2010 ( Table 2): the number of hospital beds, physicians, internists per 100,000 inhabitants from official statistics [20,21], the number of diabetologists/100,000 inhabitants from the German Diabetes Association (DDG), and the number of diabetes disease management program (DMP) participants/100,000 inhabitants (federal state level) derived from yearly quality reports of the Federal Association of Statutory Health Insurance Physicians (Kassenärztliche Bundesvereinigung) [22].
Statistical analysis
Individual data, regional deprivation data and health care services data were merged based on official district keys. Descriptions were stratified both by examination (baseline, follow-up) and by regional study. Means and standard deviations (SDs) were used for the description of continuous variables. Categorical variables were described by numbers and proportions. Furthermore, proportions of treatment with anti-hyperglycemic pharmaceuticals were determined. The association between study region and anti-hyperglycemic treatment was analyzed for two dependent outcome variables: (I) intake of any anti-hyperglycemic medication in the total sample; (II) intake of newer anti-hyperglycemic medication among participants with any anti-hyperglycemic treatment. Since the prevalence of medication intake was the outcome of interest, both baseline and follow-up data were analyzed cross-sectionally. In accordance with Zou et al. [23], prevalence ratios (PRs) for the intake of any/newer anti-hyperglycemic medication were estimated by multivariable Poisson regression models with a robust error variance using a log link function. This approach was preferred because the high prevalence of all outcomes would lead odds ratios from logistic regression models to overestimate the true effects [24].
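The log-link Poisson model for a binary outcome yields prevalence ratios directly: with a single binary covariate, the MLE reproduces the group means, so exp(β) equals the ratio of prevalences. A simulated sketch (region labels and prevalences are invented, loosely echoing the KORA-vs-HNR contrast):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
region = rng.integers(0, 2, n)            # 0 = reference region, 1 = comparison region
p_true = np.where(region == 1, 0.87, 0.75)
treated = rng.binomial(1, p_true)         # 1 = any anti-hyperglycemic medication

# Saturated Poisson GLM with one binary covariate: fitted means = group means,
# so the exponentiated coefficient is exactly the prevalence ratio.
p1 = treated[region == 1].mean()
p0 = treated[region == 0].mean()
pr = p1 / p0
beta = np.log(pr)

# Robust (delta-method) SE of log(PR) for a binomial outcome -> valid 95% CI
n1, n0 = (region == 1).sum(), (region == 0).sum()
se = np.sqrt((1 - p1) / (n1 * p1) + (1 - p0) / (n0 * p0))
ci = np.exp([beta - 1.96 * se, beta + 1.96 * se])
print(f"PR = {pr:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}")  # expect PR near 0.87/0.75 = 1.16
```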
To account for the variation between baseline and follow-up examinations, a mixed model approach was applied for outcome I (any anti-hyperglycemic medication) using "person" as a random effect [25]. Districts and federal states were not considered as random effects because the numbers of districts (n = 11) and federal states (n = 4) in the study regions were considerably low. Since a Poisson model was the model of choice, mixed effects Poisson models (PROC GLIMMIX) were calculated. Because differences in the study periods affected the availability of newer anti-hyperglycemic medication, the association for outcome II was examined solely among participants of the follow-up examination (N = 894). Hence, a standard Poisson model with robust error variance was calculated.
For both outcomes, six basis models were fitted.
• Model 1: crude model

Because of the high correlation between the regional variables, it was not possible to adjust for all regional health care structure variables in a joint model. Furthermore, the large number of regression models did not allow showing PRs for each variable included. For clarity, only the PRs of the study regions are presented in the resulting tables.
All analyses were performed using SAS statistical software version 9.3 (SAS Institute Inc., Cary, NC, USA).
Study population
Descriptive data of the study population and anti-hyperglycemic treatment, in total as well as stratified by study and examination, are summarized in Tables 3 and 4.

Determinants of any anti-hyperglycemic medication in the total study population

Table 5 summarizes pairwise PRs for any anti-hyperglycemic medication. According to the results of the crude model 1, treatment patterns varied by study region. KORA participants were more likely to receive any anti-hyperglycemic medication than participants from all other studies, while the fewest prescriptions were found in HNR. These differences were independent of all individual variables (models 2-4). The regional differences persisted after adjustment for regional deprivation (model 5). Further adjustment for single indicators of health care structure (model 6) resulted in mostly minor variations of PRs. The statistically significant difference in medication prevalence between KORA and HNR reported in models 1 to 5 remained of the same order of magnitude, with PRs ranging between 1.12 after adjustment for DMP participants/100,000 inhabitants and 1.15 after adjustment for diabetologists/100,000 inhabitants.
Regarding the effects of other independent variables, the prevalence of receiving anti-hyperglycemic medication increased significantly with longer diabetes duration, higher HbA1c and intake of cardiovascular medication in all regression models (all p<0.001; data not shown). Furthermore, higher diastolic blood pressure was predominantly associated with a lower prevalence of anti-hyperglycemic medication.
Determinants of newer anti-hyperglycemic medication among people with any anti-hyperglycemic treatment
As shown in Table 6, regional differences in the prevalence of receiving newer anti-hyperglycemic medication were more pronounced than those for any anti-hyperglycemic medication. According to the crude model 1, regional differences ranged between 1% and 49%, with significantly higher proportions in KORA and CARLA compared with HNR. After adjustment for individual basis variables (model 2), the difference between KORA and HNR increased while the difference between CARLA and HNR decreased and was no longer statistically significant. Adjustments for further individual variables (models 3-4) changed PRs only marginally. Additional adjustment for regional deprivation (model 5) increased the regional difference in the prevalence of receiving newer anti-hyperglycemic medication between KORA and HNR, while the difference between CARLA and HNR decreased (not statistically significant). After further inclusion of single indicators of health care structure, significant differences in the regional prevalence of newer anti-hyperglycemic medication between KORA and HNR mainly persisted, while the direction of the change in PRs for CARLA or SHIP vs. HNR was inconsistent and depended on the included indicator of health care structure.
Regarding the effect of other independent variables, the prevalence of receiving newer antihyperglycemic medication was higher with increasing diabetes duration and HbA1c in most regression models, but lower with increasing age (data not shown).
Key results
Analyses of medication data from four longitudinal, population-based German studies showed in part considerable regional differences in anti-hyperglycemic medication, with differences in the prevalence of newer medication of up to 49%. Regarding any anti-hyperglycemic medication, neither adjustment for individual variables, regional deprivation nor indicators of health care services could completely explain the regional differences. Compared with any anti-hyperglycemic medication, regional differences in the prevalence of receiving newer anti-hyperglycemic medication were more pronounced. The prevalence of receiving anti-hyperglycemic and newer medication was highest in the south and lowest in the west. After adjustment for individual variables, statistically significant regional differences persisted only for KORA (south) vs. HNR (west). Despite extensive adjustment, regional differences in the prevalence of any as well as newer anti-hyperglycemic medication mainly remained, indicating associations with further influencing factors not captured in the present analyses.
Comparison with other studies
Although interest in health care differences and their underlying causes is high and still growing, the corresponding literature remains scarce. In particular, individual and regional associations have rarely been analyzed simultaneously. Compared with the previous results by Tamayo et al. adjusted for individual variables [11], regional differences in any anti-hyperglycemic medication between KORA (south) and HNR (west) were confirmed, and further differences between CARLA (east) and HNR (west) appeared. However, there was no consistent difference between the studies from western Germany (KORA, HNR) and those from eastern Germany (SHIP, CARLA), in contrast to some other previously reported health outcomes [26]. The current findings support the results by Tamayo et al. suggesting that individual variables explain regional differences inadequately. Although regional deprivation could partly explain differences in type 2 diabetes mellitus prevalence [4-6], it does not seem to be of significant importance in explaining differences in anti-hyperglycemic medication in this study either. Comparisons with studies from other countries are limited because of differences in regional structures and health care systems (especially regarding the reimbursement of anti-hyperglycemic medication) [27]. The association between regional deprivation and worse diabetes outcomes reported in the review by Grintsova et al. [8] was not found for anti-hyperglycemic medication in this study. However, it is unknown whether a higher proportion of medication use implies worse or better quality of treatment, or a mixture of both, potentially masking real differences in diabetes care. In Belgium, Wens et al. demonstrated regional differences in the first utilization of anti-hyperglycemic medication (sulphonylureas vs. biguanides) independent of body mass index, HbA1c, serum cholesterol and triglycerides [28].
In Germany, the biguanide metformin is still recommended as the first-choice medication in the national diabetes guidelines (Bundesärztekammer (BÄK), Kassenärztliche Bundesvereinigung (KBV), Arbeitsgemeinschaft der Wissenschaftlichen Medizinischen Fachgesellschaften (AWMF), Arzneimittelkommission der deutschen Ärzteschaft (AkdÄ), Deutsche Diabetes Gesellschaft (DDG), Deutsche Gesellschaft für Allgemeinmedizin und Familienmedizin (DEGAM), Deutsche Gesellschaft für Innere Medizin (DGIM), Verband der Diabetesberatungs- und Schulungsberufe Deutschland) [29]. Correspondingly, metformin utilization at follow-up was consistently higher in all current studies than utilization of sulfonylureas. Moreover, analyses of the variability of prescribing for diabetes and secondary preventive therapies revealed large differences between eight Irish health board regions that were not explainable by differences in the distribution of age and gender [30]. However, socioeconomic differences and differences in health care services between the regions were not considered in that study.
Implications
Summarizing the current results, neither individual variables, including individual socioeconomic status, nor regional variables describing regional deprivation and health care services were able to sufficiently explain regional differences in any and newer anti-hyperglycemic treatment. To understand the underlying causes, future studies may also consider the influence of individual health behavior (e.g., dietary behavior, diabetes knowledge, health literacy, attitudes, wishes, and compliance regarding diabetes therapy), physicians' attitudes (e.g., regarding continuing medical education and prescriptions), physician-patient interactions and reimbursement, the quality and utilization of regional health care services, and the prevalence of chronic stress and mental health problems. Furthermore, possible differences in the detection of diabetes have to be taken into account. In addition, analyses of regional differences based on small-area data, e.g., at the municipal level, might be more informative. Another point of interest is the association between regional differences in anti-hyperglycemic treatment and long-term diabetes outcomes. Interestingly, regional differences as reflected by PRs seemed to be more pronounced for the prevalence of newer anti-hyperglycemic medication than for any anti-hyperglycemic medication (in line with the study by Tamayo et al. [11]). The underlying causes are not known to date. Possibly, differences in the budgets of the treating physicians, which depend on the regionally organized Associations of Statutory Health Insurance Physicians (Kassenärztliche Vereinigungen), and differences in the health insurance membership of the treated patients (selective contracts, statutory vs. private health insurance) are of importance in this context.
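The regional comparisons above rest on prevalence ratios (PRs). As a minimal illustration of this measure, the sketch below computes a crude PR and its large-sample 95% confidence interval from hypothetical counts; the numbers are not from the study, which used adjusted regression models, and serve only to show the calculation.

```python
import math

# Hypothetical counts: participants on medication / total, per region.
a, n1 = 120, 400   # region A
b, n2 = 90, 450    # region B (reference)

p1, p2 = a / n1, b / n2
pr = p1 / p2                                  # crude prevalence ratio

# Delta-method standard error of log(PR) and 95% CI on the log scale
se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
lo = math.exp(math.log(pr) - 1.96 * se)
hi = math.exp(math.log(pr) + 1.96 * se)

print(f"PR = {pr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # PR = 1.50, 95% CI (1.18, 1.90)
```

A CI that spans 1.0 would correspond to the "not statistically significant despite considerable differences" situation described in the limitations, which is exactly what low statistical power produces with small regional samples.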
Strengths and limitations
The strengths of our study are the utilization of data from four German regional population-based studies with high comparability regarding sampling procedure, study design, and assessment tools. For the first time in Germany and elsewhere, individual socioeconomic status, lifestyle factors, regional deprivation, and differences in health care services were considered together in an attempt to explain differences in anti-hyperglycemic medication, both overall and stratified by type of medication, in a large sample of participants.
However, this study is limited by differences in the periods of data collection between the studies. Although response rates were similar (baseline 56%-69%, follow-up 80%-90%), differences in nonresponse may have biased the regional analyses. The use of districts as the regional reference may have distorted the results because of heterogeneous geographical sizes and numbers of inhabitants. Another limitation is that individual data on health insurance status (statutorily or privately insured) were not available for all studies. Furthermore, the results of model 6 should be interpreted with caution, given the increased uncertainty of estimates and standard errors due to the high collinearity of regional variables with study region. To ensure a uniform procedure, it was decided to limit the analysis of newer anti-hyperglycemic medication to follow-up data, although the baseline examination of some studies overlapped in time with the follow-up period of others. In addition, PRs regarding newer anti-hyperglycemic medication were often not statistically significant despite partly considerable regional differences, owing to low statistical power.
In conclusion, for the first time, regional differences in any and newer anti-hyperglycemic treatment have been demonstrated based on data from four regional population-based studies in Germany. Because neither comprehensive individual variables nor regional deprivation and health care services were able to sufficiently explain regional differences, further research is needed to understand the underlying causes, assess implications for type 2 diabetes mellitus outcomes, and plan interventions for deprived target groups.
|
2018-04-03T02:19:30.351Z
|
2018-01-25T00:00:00.000
|
{
"year": 2018,
"sha1": "da5d56ecfc95d7bed8718185b282ca89338b13c2",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0191559&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2a3c4766637425ca52785931cc0efbc69fb980a2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
214246043
|
pes2o/s2orc
|
v3-fos-license
|
Improvement on Thermal Stability of Nano-Domains in Lithium Niobate Thin Films
We present a simple and effective way to improve the thermal stability of nano-domains written with an atomic force microscope (AFM) tip voltage in a lithium niobate film on insulator (LNOI). We show that nano-domains in LNOI (whether stripe domains or dot domains) degraded, or even disappeared, after a post-poling thermal annealing treatment at a temperature on the order of ∼100 °C. We experimentally confirmed that the thermal stability of nano-domains in LNOI is greatly improved if a pre-heat treatment is carried out on the LNOI before the nano-domains are written. This improvement is mainly attributed to the generation, during the pre-heat treatment, of a compensating space charge field parallel to the spontaneous polarization of the written nano-domains.
By applying a polarization reversal voltage via an atomic force microscope (AFM) tip, domain reversal and domain patterning can be realized in lithium niobate thin films. Based on this technique, Gainutdinov et al. [13] reported that the size and shape of domain patterns can be precisely controlled, enabling the realization of periodically poled lithium niobate (PPLN) with periods of hundreds of nanometers. PPLN films can be used for quasi-phase-matching (QPM) devices, such as PPLN microcavities [14] and PPLN waveguides [15], to achieve frequency conversion. The stability of written domains is thus very important for LNOI-based applications such as PPLN microcavities, PPLN waveguides, and nonvolatile ferroelectric domain memories [16][17][18][19].
Several groups have studied the thermal stability of domains in various ferroelectric materials, such as Rb-doped KTiOPO4 [20], LiTaO3 [21], Pb(Zr0.4Ti0.6)O3 [22], and LiNbO3 [23,24]. They reported that ferroelectric domains degrade, or even disappear, after heat treatment. Moreover, Shao et al. [25] reported that domain structures fabricated on LNOI are unstable even at room temperature. Such instability would prevent ferroelectric domains from being used in applications where the device temperature rises due to light absorption or a high-temperature environment.
In this paper, we propose a simple and effective method to improve the thermal stability of nano-domains in lithium niobate thin films. We confirmed that nano-domains written in LNOI by applying an AFM-tip voltage were unstable at high temperatures on the order of ∼100 °C. However, we found that the domain stability can be significantly improved if the LNOI sample undergoes a pre-heat treatment before the nano-domain fabrication process. The underlying mechanism is also discussed.
Materials and Methods
The schematic experimental setup is shown in Figure 1, which also shows the structure of the LNOI sample used in our experiments. The LNOI sample was composed of a 300-nm thick +Z-cut ion-sliced LiNbO3 thin film, a 100-nm thick Cr thin film, a 2-µm thick SiO2 layer, and a 500-µm thick LiNbO3 substrate, layered or bonded to one another in sequence. The 100-nm thick Cr layer served as a bottom electrode when an AFM-tip voltage was applied to the top 300-nm thick LiNbO3 film. Different metals may be used as the bottom electrode, and different metal-lithium-niobate interfaces may affect the domain poling process, but this is not the main topic of the current paper and will not be explored here.
In the experiments, the top LiNbO3 film was poled directly by applying a DC voltage through a conductive AFM probe tip in contact with the film's top surface, with the Cr layer grounded. The dot domains were written under the AFM-tip voltage step by step, and the stripe domain patterns were written using a raster lithography method with graphic templates. The reversed domain structures were characterized using piezoresponse force microscopy (PFM), a versatile and powerful method for imaging domain structures with nano-size features. The tip radius, R, and the resonance frequency, f_R, of the Pt-coated Si probe tip used in the experiments were R = 20 nm and f_R = 100 kHz, respectively. All AFM and PFM experiments were carried out with an MFP-3D Infinity atomic force microscope (Asylum Research, Goleta, CA, USA). The thermal heat treatments, including the post-poling annealing treatment after the domain writing process and the pre-heat treatment of the virgin LNOI sample without domain structures, were carried out in an electric drying oven. The sample was heated in air from room temperature to a temperature ranging from 90 °C to 210 °C at a heating rate of 5 °C/min, and then maintained at the high temperature for a certain time. After that, the sample was removed from the drying oven and cooled naturally to room temperature in air at a cooling rate of ∼20 °C/min. Note that no oxidation or reduction effect was observed in the lithium niobate thin films during the thermal annealing treatment at temperatures on the order of 100 °C.
Thermal Stability of Nano-Domains in Lithium Niobate Thin Films
To begin with, we explore the thermal stability of nano-domains in lithium niobate thin films without any pre-heat treatment. As QPM devices and ferroelectric domain memory are two important potential applications for domain structures, both stripe domains and dot domains were fabricated and studied. Here, the stripe domains were fabricated using a raster lithography method with an AFM-tip voltage of 35 V. The lithography rate was fixed at f = 2 Hz. Periodic stripe domains with a fixed period of 1 µm and an average stripe length of ∼4 µm, but with different stripe widths of w = 396 nm, 205 nm, and 156 nm, were fabricated. The PFM images of these as-written stripe domains were measured, and the results are shown in Figure 2a-c, respectively.
We confirmed experimentally that these as-written stripe domains were stable at room temperature, and no degradation was observed even after several days. Then, the stripe domains were thermally annealed at a high temperature T = 120 °C for t = 1 h and then cooled down naturally to room temperature in air again. For comparison, the PFM images of the stripe domains after the thermal annealing treatment are shown in Figure 2d-f, respectively. The stripe domains were significantly degraded in both width and length after the thermal annealing treatment. In addition, dot domains with different diameters were also fabricated by applying different AFM-tip voltages for a fixed time t_w = 1 s. Each dot domain was separated from its neighbors by 1 µm in both the horizontal and vertical directions. Figure 3a shows the PFM images of the fabricated dot domains, in which the four dot domains in each row were fabricated with the same tip voltage. These voltages were, from the bottom up, 40 V, 45 V, 50 V, and 55 V. The averaged diameter of the as-written dot domains in each row was measured to be 215 nm, 255 nm, 294 nm, and 333 nm, respectively. Here, the diameter D of a dot domain was estimated by equating the area of the dot domain to that of a circle with diameter D.
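The effective-diameter convention just described (equating the measured domain area to a circle of the same area) can be sketched in a few lines; the round-trip below uses one of the reported diameters (215 nm) purely as a consistency check, not as measured PFM data.

```python
import math

def effective_diameter(area_nm2: float) -> float:
    """Effective diameter D of a domain of area S, from S = pi * (D/2)**2."""
    return 2.0 * math.sqrt(area_nm2 / math.pi)

# Round trip: the area of a 215 nm circle maps back to D = 215 nm.
S = math.pi * (215 / 2) ** 2          # nm^2
print(round(effective_diameter(S)))   # -> 215
```

In practice the input area would come from thresholding the PFM phase image, so this definition makes dot domains of irregular shape directly comparable.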
These as-written dot domains were also stable at room temperature. After that, the dot domains were annealed thermally at a high temperature T = 120 °C for one hour and then cooled down to room temperature in air. Again, the PFM images of the dot domains after the thermal annealing treatment were measured, and the results are shown in Figure 3b. It is evident that the dot domains are significantly degraded, and the small dot domains even disappear. This observed thermal instability is likely detrimental for practical applications such as QPM devices and ferroelectric domain memory devices.
Improvement on the Thermal Stability of Nano-Domains in Pre-Heated Lithium Niobate Thin Films
Here, we introduce a simple but effective way to improve the thermal stability of nano-domains in lithium niobate thin films. First, a virgin single-domain sample without any domain structures was placed into the electric drying oven to undergo a pre-heat treatment at T_p = 150 °C for 2 h. Then, nano-domains were written with the same tip voltages as those in Figure 2 for stripe domains and in Figure 3 for dot domains. In the experiments, the period of the stripe domains was set to 1 µm, and the widths of the stripe domains were set to 333 nm, 215 nm, and 137 nm, respectively. The PFM images of these as-written stripe domains were measured and are shown in Figure 4a-c, respectively.
After that, the sample with the stripe domains was thermally annealed at T = 120 °C for 1 h and then cooled down naturally to room temperature in air. The PFM images of the stripe domains were measured again for comparison after the thermal annealing treatment, and the results are shown in Figure 4d-f, respectively. As shown in Figure 4, although the stripe domains with a pre-heat treatment also degrade after the post-poling thermal annealing treatment, the degradation is significantly suppressed compared to the case without the pre-heat treatment. The thermal stability of the dot domains in the pre-heat-treated samples was also studied. In the experiments, the dot domains were written in the pre-heat-treated sample under the same tip voltage and writing time t_w as those in Figure 3. Similarly, the separation distance between nearest-neighboring dot domains was set to 1 µm in both the horizontal and vertical dimensions, and dot domains with different averaged diameters of 215 nm, 255 nm, 294 nm, and 333 nm were prepared. Again, the diameters of the dot domains were averaged over the four dot domains fabricated under the same tip voltage and writing time t_w. Then, the sample with the dot domains underwent the same thermal annealing process as that in Figure 3.
The PFM images of the dot domains before and after the post-poling thermal annealing treatment were measured for comparison, and the results are shown in Figure 5. Compared to the case without pre-heat treatment in Figure 3, the thermal stability of the dot domains in the pre-heat treated samples is significantly improved.
Discussions
To quantify the improvement in the thermal stability of nano-domains in the pre-heat-treated samples, we introduced a thermal stability parameter P, defined as P = S_remain/S_initial, where S_initial and S_remain are the areas of the nano-domains before and after the post-poling thermal annealing treatment. The domain is more stable for a larger P. Table 1 lists the values of the thermal stability parameter P for both stripe domains and dot domains, as shown in Figures 2-5.
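The definition of P is straightforward to evaluate; the sketch below applies it to a hypothetical stripe whose area is approximated as width × length. The before/after dimensions are invented for illustration and do not correspond to any entry of Table 1.

```python
def stability(s_initial: float, s_remain: float) -> float:
    """Thermal stability parameter P = S_remain / S_initial (larger P = more stable)."""
    return s_remain / s_initial

# Hypothetical stripe: 215 nm x 4000 nm before annealing,
# shrinking to 180 nm x 3700 nm afterwards (illustrative numbers only).
P = stability(215 * 4000, 180 * 3700)
print(f"P = {P:.2f}")   # -> P = 0.77
```

Because P is a ratio of areas, a stripe that shrinks in both width and length is penalized multiplicatively, which is why the width and length shrinkages reported below both matter.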
In general, as compared to the case without pre-heat treatment, the thermal stability parameter P is much larger for nano-domains in the pre-heat-treated samples, indicating that the thermal stability of nano-domains in samples with pre-heat treatment is significantly improved. Note that the length of the stripe domains also shrinks: the length shrinkages were measured to be 0.294 µm, 0.235 µm, and 0.588 µm in the case without pre-heat treatment, while in the case with pre-heat treatment they were reduced to 0.125 µm, 0.121 µm, and 0.093 µm, for stripe domains with widths of 333 nm, 215 nm, and 137 nm, respectively. The dependence of the thermal stability parameter P on the post-poling annealing temperature T was studied for both stripe and dot domains without pre-heat treatment, and the results are shown in Figure 6. Here, stripe domains with widths of 372 nm, 196 nm, and 155 nm and dot domains with diameters of 333 nm, 255 nm, and 215 nm were prepared. The length of all stripe domains was set to ∼4 µm. In all cases, the post-poling thermal annealing time t was set to one hour. For both stripe domains and dot domains, the thermal stability parameter P decreases with increasing post-poling annealing temperature T, and the domain degradation at 120 °C is representative of the results within the studied temperature range, which is practically reachable in nano-size photonic structures such as PPLN microcavities and PPLN ridge waveguides. It has been reported that domain structures in bulk lithium niobate crystals are stable at temperatures on the order of 100 °C but decay at much higher temperatures above 600 °C [26,27], indicating that the domain structure in bulk crystals is much more thermally stable than that in lithium niobate thin films.
Furthermore, we studied the dependence of the thermal stability of nano-domains on the pre-heat treatment conditions. In the experiments, pre-heat treatment of virgin single-domain samples was carried out at different temperatures T_p for different time periods t_p, and then stripe or dot domains of different sizes were fabricated by applying appropriate tip voltages. After that, the nano-domains were thermally annealed at T = 120 °C for 1 h. The PFM images of all nano-domains were measured, and the thermal stability parameter P was characterized for each nano-domain. Figure 7a,b shows the dependence of the thermal stability parameter P on the pre-heat temperature T_p, with t_p = 2 h, for stripe domains and dot domains of various sizes. P increases with increasing pre-heat temperature T_p in both the stripe-domain and dot-domain cases, indicating that the nano-domains are more thermally stable for higher T_p. Figure 7c,d depicts the dependence of the thermal stability parameter P on the pre-heat time t_p for stripe and dot domains of various sizes. Here, the pre-heat treatment temperature was set to 150 °C in both cases. The thermal stability parameter P is larger for longer pre-heat treatment times t_p. In addition, nano-domains of larger size are more stable in both cases, as shown in Figure 7.
From the above results, we see that domain degradation, or even domain back-switching, may occur in lithium niobate thin films during a thermal annealing process at temperatures on the order of a hundred degrees Celsius. Fortunately, such domain degradation or back-switching can be greatly suppressed through a simple pre-heat treatment of the virgin single-domain lithium niobate thin film. It is well known that the domain kinetics in ferroelectric lithium niobate are related to the local field distribution within the crystal. At room temperature, the depolarization field E_d is fully compensated by the screening field E_screen due to surface or bulk charges in the crystal. When the crystal temperature increases, the spontaneous polarization P_s, and therefore the depolarization field E_d, decreases. This breaks the balance between the depolarization field E_d and the screening field E_screen. Therefore, thermally activated bulk charges, such as protons in lithium niobate, may drift in the bulk, or surface charges may accumulate on the surface, to compensate for this field imbalance [28,29]. This results in a space charge field E_sc in lithium niobate, directed antiparallel to the spontaneous polarization P_s. It is this space charge field that causes the degradation or back-switching of the nano-domains in lithium niobate thin films. Note that the component of the space charge field induced by the thermally activated charges is frozen in after the crystal cools down to room temperature. This component is also formed during the pre-heat treatment, and its direction is antiparallel to the spontaneous polarization of the virgin single-domain crystal but parallel to the reversed spontaneous polarization of the subsequently written stripe or dot domains, which therefore greatly suppresses the degradation of the nano-domains.
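The sign bookkeeping in this argument can be made explicit with a toy sketch. The convention below (+1 = parallel to the virgin spontaneous polarization, -1 = antiparallel) is a qualitative illustration of the mechanism described in the text, not a quantitative field model.

```python
# Sign convention: +1 = parallel to the virgin spontaneous polarization P_s.
P_s_virgin = +1
E_d = -P_s_virgin        # depolarization field opposes P_s
E_screen = -E_d          # at room-temperature equilibrium, screening cancels E_d

# Pre-heat treatment: thermally activated charges redistribute and, after
# cooling, leave a frozen-in space charge field antiparallel to the virgin P_s.
E_sc = -P_s_virgin

# A nano-domain written afterwards has reversed polarization:
P_s_written = -P_s_virgin

# E_sc is parallel to the written domain's polarization, so it stabilizes it.
print(E_sc == P_s_written)   # -> True
```

The same bookkeeping without the pre-heat step places the field generated during post-poling annealing antiparallel to the written domain, which is the destabilizing case observed in Figures 2 and 3.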
Comprehensive domain kinetics in lithium niobate thin films are an interesting but complicated topic that deserves a full-length study beyond the scope of this paper; for more details, please refer to Ref. [30].
Conclusions
In conclusion, we demonstrated a simple yet effective way to improve the thermal stability of nano-domains fabricated in lithium niobate thin films. We confirmed that nano-domains in lithium niobate thin films are thermally unstable even at temperatures on the order of ∼100 °C, which can easily be reached locally in nano-size photonic structures due to light absorption. Such thermal instability of nano-domains could therefore be very detrimental to practical applications such as PPLN microcavities, PPLN ridge waveguides, and ferroelectric domain memories. We demonstrated that the thermal stability of nano-domains can be greatly improved when the lithium niobate thin film undergoes a pre-heat treatment before nano-domain fabrication. This improvement is attributed to the generation, during the pre-heat treatment, of a space charge field parallel to the spontaneous polarization of the nano-domains. Our results should be useful for nano-domain-based photonic devices such as PPLN microcavities, PPLN ridge waveguides, and ferroelectric domain memories.
Author Contributions: Guoquan Zhang conceived the idea of the work. Yuejian Jiao designed and performed the experiments. Zhen Shao and Sanbing Li participated in the experiments. Yuejian Jiao and Guoquan Zhang wrote the paper. All authors participated in the data analysis and paper preparation. All authors have read and agreed to the published version of the manuscript.
|
2020-02-06T09:04:03.957Z
|
2020-01-30T00:00:00.000
|
{
"year": 2020,
"sha1": "0d8d63fc9ca99cdaa9a6847d2de75908d4d9a852",
"oa_license": "CCBY",
"oa_url": "https://res.mdpi.com/d_attachment/crystals/crystals-10-00074/article_deploy/crystals-10-00074-v2.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3a5782e115b3d755f47760f6f65e600a98d2ea23",
"s2fieldsofstudy": [
"Physics",
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
269136452
|
pes2o/s2orc
|
v3-fos-license
|
Oral pathology and oral medicine in Latin American countries: current stage
Background Oral Pathology (OP) and Oral Medicine (OM) are specialties in dentistry whose main objective is the diagnosis and treatment of oral and maxillofacial diseases, and aspects related to the academic training of professionals and fields of practice are distinct and heterogeneous around the world. This study aimed to evaluate professional training and areas of activity in OP and OM in Latin American countries. Material and Methods A questionnaire was sent to 11 countries, with a professional in each country responsible for answering it. The questionnaire had 21 questions related to the process of professional training, areas of practice, the existence of scientific events in each country, and also collected demographic and population information. Results OP and OM are practiced in all the countries studied, but the specialty is not recognized in all of them. Brazil was the first to recognize both as a specialty. Postgraduate programs designed to train specialists are available in various countries. Two countries offer residency programs, 6 countries provide specialization courses, 6 offer master's programs, and 3 have doctoral programs. Brazil boasts the highest number of undergraduate courses (n=412), while Uruguay has the lowest (n=2). Professional societies representing the specialty exist in ten countries. Brazil has the highest number of OP and OM specialists (n=422 and 1,072), while Paraguay has the smallest number (n=1 and 3). Conclusions Although both specialties are widely practiced around the globe, professional training, the number of dentists trained and the fields of professional practice are very different between the countries studied. Key words: Oral pathology, oral medicine, dentistry, education.
Introduction
Oral and Maxillofacial Pathology (OP) and Oral Medicine (OM) are closely related areas of specialization and are considered vital to a comprehensive healthcare system (1). In Southern Europe and Ibero-American countries, such as Brazil, the term "Stomatology" is used to define the specialty of OM (2). The National Commission on Recognition of Dental Specialties and Certifying Boards of the United States defines OP as the dental specialty that deals with the nature, identification, and management of diseases affecting the oral and maxillofacial regions (https://ncrdscb.ada.org/). OM is considered a young specialty of dentistry recognized across the world (2). It aims to diagnose and to provide (mostly non-surgical) treatment for primary diseases of the oral mucosa and the jawbones, as well as salivary gland disorders, orofacial pain, and maxillofacial manifestations of systemic diseases and their treatment, such as cancer, infectious diseases, autoimmune disorders, and others. Furthermore, OM specialists provide comprehensive dental care for patients within a range of complex medical scenarios that impact oral health, including radiation therapy, chemotherapy, bone marrow transplantation, molecular targeted therapy in oncology, bone-modifying agents and antiresorptive drugs, cardiopathies, solid organ transplantation, AIDS, and COVID-19 (3,4). During the COVID-19 pandemic, both OP and OM saw notable developments despite the sanitary limitations in place. Discussions on teaching methodologies (5), forms of continuing education in OP and OM (6), the adequacy of the functioning of oral pathology laboratories (7), descriptions of oral lesions (8), evaluation of remote teaching in master's programs (9), and implications for oral oncology practice (10) were some of the topics studied. Some studies around the world have analyzed the state of the art and professional perspectives in OP and OM (2,3,11,12). However, no previous study has specifically evaluated the current stage of OP and OM in the context of South America and Mexico. Therefore, this study evaluated different dimensions of the two specialties, OP and OM, in the aforementioned countries.
Material and Methods
A cross-sectional, observational study with convenience sampling was conducted. A questionnaire was sent to ten countries in South America (Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador, Paraguay, Peru, Uruguay and Venezuela) and to Mexico. In each country, a professional responsible for collecting the requested data was chosen.
After inviting the OP and/or OM specialists, we sent a structured questionnaire by personal e-mail, with 21 questions on the specialties and the characteristics of each country. The questionnaire included details of country demographics and the current status of the two specialties mentioned. Demographic data were drawn from population-based sources provided by official government bodies in the respective countries, while data related to the specialty and dental training were collected from official government sources and the professional councils in each participating country. Among the questions related to the specialties of OP and OM were: official recognition of OP and OM as a specialty in each country and the year of this event; the existence of a national scientific society; the existence of scientific events in the country; the existence of postgraduate programs, such as specialization, master's and PhD degrees; the number of specialists working in the country; and professional practice (how professionals carry out their activities in their area of expertise). When the questionnaires were returned, a database was constructed with the data from the participating countries. SPSS (Statistical Package for the Social Sciences for Windows, Inc., USA) version 22.0 for Windows® was used to perform the statistical analysis. The results are presented descriptively.
Results
The questionnaires were returned from all 11 countries participating in the study. Table 1 shows the data on recognition of OP and OM, academic training, and areas of practice by country. With the exception of Uruguay, all other countries have a recognized OP specialty.
Brazil and Mexico were the first countries among those studied to recognize OP as a specialty, in 1971 and 1975, respectively. On the other hand, Paraguay and Peru were the most recent to recognize OP as a specialty. In the case of OM, among the countries studied, Uruguay and Bolivia do not recognize the specialty. In Chile, OP and OM are recognized as a single specialty. Colombia and Brazil were the first countries to recognize OM as a specialty, in 1986 and 1992, respectively. The specialty is recognized by the professional regulatory bodies that oversee the practice of the profession in each country, in this case the dental councils or boards. To obtain the title of specialist, a dentist needs to have completed a specialization or residency course recognized by the government, with the minimum required workload, which varies from country to country. Regarding the existence and modalities of postgraduate programs, the most common was the Master's degree, offered in six countries (Argentina, Brazil, Mexico, Peru, Uruguay and Venezuela). On the other hand, the PhD level is offered by three countries: Argentina, Brazil and Mexico. Residency programs in OP and OM are offered only in Brazil and Peru. The main fields of action for the OP and OM specialties were university teaching and private clinics. Seven (63.63%) countries reported professionals working in hospitals. In Brazil, there are two additional important spaces for the work of OP and OM specialists: the public health system and the military forces. Ten countries have at least one professional society representing the specialties; only Ecuador does not have a scientific society for the specialties studied. Table 2 shows population data, the number of dentists, the ratio of population to dentists by country, and the number of OP and OM specialists in each country.
Discussion
This was the first study to specifically evaluate the current state of OP and OM in the context of Latin America. Data from this study provide important information on the two specialties in the countries analyzed.
Our study evaluated countries with very different populations and economic characteristics. An international multicenter study on specialized training and education in OP showed that training varies across the world. However, the authors felt there was sufficient commonality for the development of an agreed indicative framework on education and training in Oral and Maxillofacial Pathology (11). Although our study did not aim to directly evaluate the training received by OP and OM specialists, it is clear that better dialogue and common actions among these Latin American countries would be very important.
A favorable factor is the presence of professional societies for both specialties (OP and OM) in all 11 countries except Ecuador. Scully et al. (2016) (2), evaluating OM across the globe (its birth, growth and future), highlighted the importance of multinational, national and multistate professional societies and their activities. A second proposition of this study is to stimulate greater interaction between these professional societies through joint actions, as is sometimes observed in scientific events and multicenter studies, in order to understand and reduce the differences in OP and OM between the countries studied. Recently, we described the first 50 years of the history of the Brazilian Society of Oral Medicine and Oral Pathology (SOBEP); since its foundation in the 1970s, SOBEP has expanded to over 300 active members.
Annually, approximately 1,000 attendees meet in a national itinerant conference, which is held every July (3).
Another important result concerns the employment outcomes of OP and OM specialists. University teaching and private clinics were the most common work environments in all countries. Insertion in the hospital environment has also grown, and this interaction with medicine in general is essential; recently, we conducted a study on employment outcomes with OP and OM specialists (13). Some limitations of this study were related to the impossibility of deepening some of the dimensions analyzed, for example, the reality of postgraduate programs and the distribution of OP and OM professionals in each country.
In summary, our study proposes a permanent collaboration between the countries of Latin America, sharing successful experiences in the fields of OP and OM and developing joint actions to address the limitations highlighted across the countries, such as continuing education in postgraduate programs (curricular structure), collaborative research projects, and the number of professionals being trained in OP and OM.
Table 1 :
General characteristics of Oral Pathology (OP) and Oral Medicine (OM) specialties in Latin America.
The largest populations among the countries studied were those of Brazil and Mexico, while Uruguay and Paraguay had the smallest. The ratio of dentists to the general population varied widely, from the lowest indicator observed in Brazil (1:531.81) to the highest (1:3,255.61) in Ecuador. In addition to the OP and OM numbers in each country studied, we analyzed the ratios between the number of OP and OM specialists and the population of each country, and in relation to the overall number of dentists. It can be seen that the numbers vary considerably. Paraguay has only one OP working in the country. Countries such as Peru and Venezuela have one OP for more than 4.5 million inhabitants. When looking at the number of OP in relation to dentists, the numbers also vary: countries such as Paraguay and Peru have one OP for more than 9,000 dentists, while Mexico and Chile have one for approximately 440 and 269 dentists, respectively. In the case of OM, the numbers also vary greatly. Brazil,
Table 2 :
Population data, number of dentists and specialists in Oral Pathology (OP) and Oral Medicine (OM) in Latin America.
Brazilian postgraduates in these fields have more opportunities for employment in private settings. Teaching/research was the most prevalent employment activity, despite accounting for less than half of the sample (13). With respect to continuing education programs at the postgraduate level, of the eleven countries participating in the study, two do not have any type of postgraduate program (Ecuador and Paraguay). Bolivia had a single specialization class in Oral and Maxillofacial Pathology in 2013, which lasted two years; a new specialization course, also lasting two years, is currently underway. Three countries offer postgraduate degrees at the PhD level (Argentina, Brazil and Mexico). Master's and specialization levels were the most common postgraduate offerings among the countries studied. In the present study, it was not possible to analyze the curricular structures of the postgraduate programs offered in each country, or the legal requirements of the regulatory agencies of each one. In Brazil, for example, there is a regulatory agency that controls the offering of master's and PhD degrees (https://sucupira.capes.gov.br/sucupira/); the legal requirements are mainly related to the body of researchers and to scientific production. Master's degrees have a duration of two years and PhDs of four years. Currently, in the country, there is only one institution (FOP-UNICAMP) offering both levels of training in the areas of OP and OM (https://www.fop.unicamp.br/cpg/index.php/home-estomatopatologiabr). The primary objective of master's and PhD courses is to prepare individuals as educators and researchers in the field; the majority of graduates are equipped to practice, teach, and conduct research in OP and OM. Another important dimension of the present study was the number of OP and OM specialists in relation to the population of each country and in relation to the number of
practicing dentists. The results were quite heterogeneous. In general, the number of OP specialists was much lower than that of OM specialists. Countries such as Paraguay, Argentina, Peru and Venezuela have a very limited number of OP specialists relative to the general population; Paraguay, in particular, has a single professional. For OM the numbers also vary greatly, both in relation to the population of each country and in relation to dentists. Countries such as Brazil, Mexico, Paraguay and Peru have a very small number of OM specialists for the general population. Although it is a limitation of the present study that the main reasons for these numbers are not known, it is known that other dental specialties arouse greater interest among graduates, and that there are limitations on insertion in the work environments.
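The population-to-dentist ratios reported above (e.g., 1:531.81 for Brazil, 1:3,255.61 for Ecuador) are simple quotients of population by the number of registered dentists. A minimal sketch of that computation, using illustrative numbers rather than the paper's raw data:

```python
def dentist_ratio(population: int, dentists: int) -> float:
    """Inhabitants per dentist, i.e. the 'x' in a ratio written as 1:x."""
    if dentists <= 0:
        raise ValueError("dentist count must be positive")
    return population / dentists

# Illustrative numbers only (not the paper's raw data): a population of
# 10,000,000 served by 4,000 dentists gives one dentist per 2,500 people.
assert dentist_ratio(10_000_000, 4_000) == 2500.0
```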
|
2024-04-15T06:17:11.616Z
|
2024-04-14T00:00:00.000
|
{
"year": 2024,
"sha1": "148338547f1a9ca32c7ae95f4b71e40b73960ac2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4317/medoral.26500",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "197cee5da21f808111620b1f5629593a0f7c1d3b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
243751442
|
pes2o/s2orc
|
v3-fos-license
|
Improving Postural Stability among Amputees by Tactile Sensory Substitution
Background For lower-limb amputees, wearing a prosthetic limb helps restore their motor abilities for daily activities. However, the prosthesis's potential benefits are hindered by limited somatosensory feedback from the affected limb and its prosthesis. Previous studies have examined various sensory substitution systems to alleviate this problem; the prominent approach is to convert foot-ground interaction to tactile stimulations. However, positive outcomes for improving amputees' postural stability are still rare. We hypothesize that the intuitive design of tactile signals based on psychophysics shall enhance the feasibility and utility of real-time sensory substitution for lower-limb amputees. Participants were tested with a classical postural stability task in which visual disturbances perturbed their quiet standing. With a brief familiarization of the system, the participants exhibited better postural stability against visual disturbances with sensory substitution switched on than without. The body sway was substantially reduced, as shown in head movements and excursions of the center of pressure. The improvement was present for both amputees and able-bodied controls and was particularly pronounced in more challenging conditions with larger visual disturbances.
The lower-limb amputee lacks direct foot contact with the ground and the feedback from foot mechanoreceptors that is critical for balance control (11). With a broken sensorimotor loop, amputees often show poor balance and gait function, with fear of falling and a high prevalence of falls (12, 13). When an amputee wears a prosthesis, the residual limb physically interacts with the prosthetic socket and provides limited haptic feedback that indirectly reflects foot-ground interaction. Augmenting this essential feedback for prosthesis wearers has the potential to close the sensorimotor control loop and subsequently improve their gait control and postural stability (14, 15). Sensory substitution encodes the missing sensory information and routes it to the nervous system via an alternative, intact sensory channel. For example, auditory and haptic feedback has been used to surrogate visual feedback for the blind to explore the surroundings (16).
For upper-limb amputees, sensory substitution has been shown to provide effective sensory feedback for controlling robotic arms (17). Previous researchers have also explored the coding of movement-related information via visual, auditory, or tactile channels for lower-limb amputees. For example, Zambarbieri, Schmid (18) used a pressure-sensing insole to estimate the center of pressure (CoP) underneath the foot and visually present the estimate to the participant. This method is apparently impractical, since the processing of the surrogated visual information is cognitively demanding and thus limits the benefit of sensory substitution for gait and postural control, which are typically controlled with minimal cognitive load.
Other researchers have also used auditory feedback to deliver gait balance information and demonstrated a positive effect on gait asymmetry (19, 20). Another line of work used balloon actuators to press on the amputated leg with a force magnitude linearly scaled by the pressure measurements from the insole of the prosthesis. They found that, based on the data from a single transtibial amputee, the intensity and the order of pressing forces applied by the balloon actuators could be estimated with decent accuracy (24, 25). However, they did not assess the efficacy of the system in any motor task with prosthesis use. Furthermore, the large size of the balloon actuators might prevent its wide use in the amputee population. Plauché, Villarreal (29) and
In the present study, we designed an intuitive tactile stimulation system to provide real-time feedback on plantar pressure. We tested its efficacy in improving postural stability among amputees and the non-disabled. We measured plantar pressure at four insole locations and mapped it nonlinearly to tactile intensity. Critically, to make the learning of sensory substitution easy and intuitive, our system only encodes CoP excursions in the anteroposterior direction, a more critical direction of instability among amputees than other directions (38).
To reduce the ambiguity of vibrotactile signals, we only activated one vibrator at a time: when the BI was larger than the EP, the vibrator placed in the front would vibrate to signal a forward lean, and vice versa. The intensity of vibration for each vibrator was determined by the absolute difference in BI between the current state and the equilibrium state at EP, where BI_EP is the average BI estimated at EP when no visual perturbation was applied, and BI_max is the maximum BI in the forward or the backward direction estimated from the trials when the participants first encountered visual perturbation on day 1 (sensory substitution was off; see below). The relation between the vibration intensity and the BI followed a logarithmic function (Figure 1B). When the BI slightly oscillated around the equilibrium point as participants maintained a relatively neutral position, the vibrotactile feedback was weak. As the BI deviated more from EP, the intensity would increase, approaching the maximum.
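The exact intensity formula did not survive extraction above, so the following is only a qualitative sketch of the scheme as described: one vibrator active at a time, with intensity rising logarithmically from zero at the equilibrium point (EP) toward a maximum at the calibrated extreme BI_max. The specific log curve and the function/variable names are assumptions, not the paper's implementation.

```python
import math

def vibration_command(bi, bi_ep, bi_max, i_max=1.0):
    """One-vibrator-at-a-time feedback: returns which vibrator fires and
    how strongly. The log-shaped ramp is an assumption standing in for
    the paper's (elided) formula: weak near EP, saturating at BI_max."""
    dev = bi - bi_ep
    if dev == 0:
        return "none", 0.0                      # at equilibrium: no vibration
    which = "front" if dev > 0 else "back"      # forward lean -> front vibrator
    span = abs(bi_max - bi_ep) or 1.0           # calibrated deviation range
    x = min(abs(dev) / span, 1.0)               # normalised deviation in [0, 1]
    return which, i_max * math.log1p(9 * x) / math.log(10.0)

assert vibration_command(0.5, 0.5, 1.0) == ("none", 0.0)
w, i = vibration_command(1.0, 0.5, 1.0)
assert w == "front" and abs(i - 1.0) < 1e-12    # full deviation -> max intensity
w, i = vibration_command(0.2, 0.5, 1.0)
assert w == "back" and 0.0 < i < 1.0            # backward lean, partial intensity
```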
The oscillation frequency of visual disturbance showed an inconsistent effect on body sway.
For example, control participants tended to increase the power of their CoP displacement and head movement with increasing stimulus frequency, but amputee participants showed the opposite tendency (Figures 5 and 7). When visual stimuli moved with a lower frequency (e.g., 0.1 Hz), the body swayed periodically in synchrony with the driving visual stimuli. When visual stimuli moved with a high frequency (e.g., 0.5 Hz), it became hard for the body sway to keep up with the stimuli, resulting in a smaller power (42, 56). This saturation effect appears to be more evident for amputees than for non-disabled participants. We also computed the performance difference before and after sensory substitution to compare the effect size of sensory substitution across conditions. Three of the four measures (i.e., the power of CoP displacement, and the range and power of head movement) showed a larger effect size in conditions with larger visual-stimulus amplitudes. The range of CoP displacement, the last measure, did not increase with visual amplitude, but it did increase with visual frequency. Thus, the sensory substitution system benefited both groups of participants more when they faced more challenging visual disturbances.
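The "power of CoP displacement" at the driving frequency can be obtained by projecting the sway signal onto the visual stimulus frequency. A minimal stand-in for such a spectral measure (the paper's exact estimator and sampling rate are not given in the text; the 50 Hz rate here is an assumption):

```python
import math

def power_at_frequency(signal, fs, f0):
    """Amplitude-normalised power of `signal` at frequency f0 (Hz) via a
    direct DFT projection; the power of a pure sinusoid of amplitude A
    (over an integer number of cycles) is A^2 / 2."""
    n = len(signal)
    w = 2 * math.pi * f0 / fs
    re = sum(x * math.cos(w * k) for k, x in enumerate(signal))
    im = sum(x * math.sin(w * k) for k, x in enumerate(signal))
    return (re * re + im * im) * (2.0 / n) ** 2 / 2.0

fs = 50.0                                  # assumed sampling rate (Hz)
n = int(140 * fs)                          # one 140-s trial, as in the study
sway = [0.8 * math.sin(2 * math.pi * 0.1 * k / fs) for k in range(n)]
# Pure 0.1 Hz sway of amplitude 0.8: power 0.8^2/2 at 0.1 Hz, ~0 at 0.5 Hz.
assert abs(power_at_frequency(sway, fs, 0.1) - 0.8 ** 2 / 2) < 1e-6
assert power_at_frequency(sway, fs, 0.5) < 1e-6
```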
We found that sensory substitution stabilized the head and the CoP with similar effect sizes. For the CoP range, the effect size of sensory substitution was 0.60 in partial η², which is equivalent to a 35.3% reduction after sensory substitution. In comparison, for the head movement range, the effect size was 0.48, with a 24.8% reduction. The same pattern was
Furthermore, if we assume that the standing body resembles an inverted pendulum, as in typical postural models (58), the head movement should decrease more when the CoP decreases. Thus, theoretically, we would expect a larger stabilizing effect of sensory substitution for the head than for the CoP. The lack of difference between the head and the CoP, or even a slightly larger effect for the CoP, does not fit this theoretical prediction. We postulate that this might be attributed to the specificity of the surrogated sensory information delivered by our sensory substitution system: the vibrotactile feedback reflects plantar pressure changes directly related to CoP excursion, not to head movement. Thus, when the nervous system integrates this surrogate sensory information, it readily responds to CoP displacement induced by visual disturbances. Our findings therefore appear to suggest that sensory substitution exerts its influence on motor control in a stimulus-specific way, at least for the situation investigated here, where sensory substitution is adopted for a short period of time. Future studies could test this hypothesis by comparing the responses to substituted stimuli that encode different body motion signals, e.g., head motion instead of CoP displacement.
Interestingly, no group difference in postural stability between amputees and controls reached significance for any of the performance measures investigated. We had expected that amputees would be perturbed more by the visual disturbances, since previous studies have shown that amputees are more dependent on visual inputs (39-41). However, we recognize that these studies used paradigms that reduced visual sensory feedback for the participants.
Understandably, it was harder for amputees than for the non-disabled to accommodate visual deprivation, owing to the loss of somatosensory feedback associated with amputation. In the present study, however, we used a visual perturbation paradigm rather than visual deprivation.
According to multisensory integration theory in postural control (45-47), both amputees and the non-disabled can adjust the weights of different sensory channels when a sensory input (here, the visual input) becomes inaccurate. Furthermore, previous studies that reported worse standing balance among amputees typically used short trials, e.g., 20 s per trial (59). Our experiment instead used trials as long as 140 s; thus, both groups had ample time to adjust the weights of their sensory channels and adapt to the visual stimuli. Another factor is that most of our participants had worn artificial limbs for more than ten years. After prolonged use of a prosthesis, performance in simple motor tasks such as quiet standing becomes indistinguishable from that of the non-disabled. In sum, the lack of a group difference suggests that lower-limb amputees can effectively accommodate continuous visual disturbances.
The development of robotic artificial limbs has made dramatic progress in fusing signals from various sensors to sense the environment and the internal state of the prosthesis, but the research focus is more on intelligent control of prostheses (60). It is equally essential to route real-time sensory feedback to the agent, i.e., the human controller, to reduce the fear of falling, enhance the sense of embodiment of the prosthesis, and enable better motor control. This sensory augmentation for the agent can be achieved by invasive methods, such as electrical peripheral nerve stimulation of the sciatic nerve (61), or by noninvasive methods such as sensory substitution. As we pointed out in the introduction, substituting the missing feedback of foot-ground interaction is probably most important for lower-limb amputees. Still, previous endeavors have been hampered by high cognitive load, unintuitive design, and inconsistent behavioral benefits. Our study has shown that these shortcomings of noninvasive sensory substitution can be overcome. It paves the way for integrating this method with robotic lower limbs. As most actuated lower-limb prostheses still lack afferent feedback to the user, it would be interesting to examine the outcome when our sensory substitution system is integrated with these systems to achieve better human-centered closed-loop control.
|
2021-08-20T18:54:22.825Z
|
2021-04-06T00:00:00.000
|
{
"year": 2021,
"sha1": "4b12dc777fb70f8065ddd65badabd6ac888fe694",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-364267/v1.pdf?c=1631876137000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "594d3448fc82c161d64b30dece2475d7ac587b54",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": []
}
|
260091858
|
pes2o/s2orc
|
v3-fos-license
|
On the double covers of a line graph
Let $L(X)$ be the line graph of graph $X$. Let $X^{\prime\prime}$ be the Kronecker product of $X$ by $K_2$. In this paper, we see that $L(X^{\prime\prime})$ is a double cover of $L(X)$. We define the symmetric edge graph of $X$, denoted as $\gamma(X)$ which is also a double cover of $L(X)$. We study various properties of $\gamma(X)$ in relation to $X$ and the relationship amongst the three double covers of $L(X)$ that are $L(X^{\prime\prime}),\gamma(X)$ and $L(X)^{\prime\prime}$. With the help of these double covers, we show that for any integer $k\geq 5$, there exist two equienergetic graphs of order $2k$ that are not cospectral.
Introduction
In this paper, we restrict ourselves to finite graphs with no self-loops or multiple edges. We denote the cycle graph, the path graph, the complete graph and the star graph on n vertices by C n , P n , K n and K 1,n−1 respectively. A graph Y is a covering graph of a graph X if there is a map f from the vertex set of Y to the vertex set of X such that the neighbourhood of a vertex v in Y is mapped bijectively onto the neighbourhood of f (v) in X. If each vertex of X has exactly two preimages in Y, then we say that Y is a double cover of X. One of the easy ways to construct a double cover of a graph X is to take the Kronecker product of X by K 2 , denoted by X ′′ . The Kronecker product X 1 × X 2 of graphs X 1 and X 2 is the graph whose vertex set is V (X 1 ) × V (X 2 ), in which vertices (x 1 , x 2 ) and (x ′ 1 , x ′ 2 ) are adjacent if and only if x 1 is adjacent to x ′ 1 in X 1 and x 2 is adjacent to x ′ 2 in X 2 . In Section 3, we show that L(X ′′ ) is a double cover of L(X). The double cover of a graph X is not unique (see Example 1.1). Many researchers have used coverings of graphs in the construction of Ramanujan graphs (see [11]) and in the construction of pairs of cospectral but not isomorphic graphs. Additional information on coverings of graphs can be found in [5, 14]. Example 1.1. In this example, we demonstrate two non-isomorphic double covers of K 4 . It is interesting to see that M + M T , where M T denotes the transpose of M, is a symmetric matrix with entries 0 or 1. We call M + M T the symmetric edge adjacency matrix of X, and the graph whose adjacency matrix is M + M T is called the symmetric edge graph of X, denoted by γ(X). We define γ k (X) = γ(γ k−1 (X)), where k ∈ N, with γ 0 (X) = X. Later, we will see that γ(X) is also a double cover of L(X). In Figure 2, for a graph X we have given its line graph and the three non-isomorphic double covers of L(X).
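The construction of γ(X) can be made concrete. The sketch below builds the edge adjacency (Hashimoto) matrix M on the 2|E| oriented edges, whose formal definition is elided in the extracted text above, so the standard convention is assumed: M[e][f] = 1 when e feeds into f without backtracking. As a check, γ(K 1,3) comes out as the 6-cycle C 6 (cf. Example 1.5 below).

```python
def symmetric_edge_graph(edges):
    """Adjacency matrix of gamma(X) = M + M^T, where M is the edge
    adjacency matrix on oriented edges: M[e][f] = 1 iff the terminal
    vertex of e is the initial vertex of f and f is not the inverse of e."""
    darts = list(edges) + [(b, a) for (a, b) in edges]
    m = len(darts)
    M = [[1 if darts[i][1] == darts[j][0] and darts[i][0] != darts[j][1] else 0
          for j in range(m)] for i in range(m)]
    # In a simple graph no pair of darts feeds both ways, so M + M^T is 0/1.
    return [[M[i][j] + M[j][i] for j in range(m)] for i in range(m)]

def is_connected(A):
    n, seen, stack = len(A), {0}, [0]
    while stack:
        u = stack.pop()
        for v in range(n):
            if A[u][v] and v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

# gamma(K_{1,3}): star with centre 0 and leaves 1, 2, 3.
A = symmetric_edge_graph([(0, 1), (0, 2), (0, 3)])
assert len(A) == 6                       # 2|E| vertices
assert all(sum(row) == 2 for row in A)   # 2-regular ...
assert is_connected(A)                   # ... and connected: the cycle C_6
```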
In the literature, a lot of work has been done on the properties of L(X) in relation to X (see Chapter 8 of [6]). In Section 2, we study various properties of γ(X) with respect to X. We provide a decomposition of γ(X) in terms of crown graphs. With these three double covers of L(X) in hand, which are L(X) ′′ , L(X ′′ ) and γ(X), we study the relations amongst them in Section 3. In Theorem 3.3, we characterize all graphs X so that γ(X) = L(X ′′ ), γ(X) = L(X) ′′ and L(X) ′′ = L(X ′′ ). In the rest of this section, we discuss why the matrix M is important for the Ihara zeta function of a graph, the properties of the matrix M, and symmetric edge graphs. A path P = e 1 e 2 ⋯e t , where each e i is an oriented edge, is said to backtrack if e k+1 = e −1 k for some k ∈ {1, 2, 3, . . . , t − 1}, i.e. it crosses the same edge twice in a row. A path P is said to have a tail if e t = e −1 1 , i.e. the last edge of P is the reverse of the first edge. A closed path C = e 1 e 2 ⋯e t is said to be prime or primitive if it has no backtrack or tail and C ≠ D^f for any closed path D and integer f > 1. The Ihara zeta function of a graph X is defined to be ζ_X(u) = ∏_{[C]} (1 − u^{ℓ(C)})^{−1}, where the product is over the primes [C] of X and ℓ(C) is the length of the cycle C. The fundamental group π 1 (X, v) of a connected graph X is the free group consisting of all closed walks starting and ending at the vertex v together with the operation which concatenates walks. The rank r of π 1 (X, v) is the number of elements in a minimal generating set of π 1 (X, v), which is also the number of edges left out of a spanning tree of X. The computation of the Ihara zeta function from the definition is difficult except for the cycle graph. The following two results, by Bass [2] and Hashimoto [7], simplify the evaluation of the Ihara zeta function for graphs with minimum degree at least 2.
Let A(X) or A be the adjacency matrix of X and Q(X) or Q be the diagonal matrix whose j-th diagonal entry is q j , where q j + 1 is the degree of the j-th vertex of X. Suppose that r is the rank of the fundamental group of X, so that r − 1 = |E| − |V|. Then ζ_X(u)^{−1} = (1 − u^2)^{r−1} det(I − Au + Qu^2). The main purpose of introducing the matrix M can be seen in the following result.
Let M be the edge adjacency matrix of a graph X. Then ζ_X(u)^{−1} = det(I − uM). Now we state a few properties of the matrix M. Many of these have been discussed in the thesis of Horton [8], and one can also find them in the book by Terras [14].
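The two results referred to above are, in their standard form, ζ_X(u)^{−1} = (1 − u^2)^{r−1} det(I − Au + Qu^2) (Bass) and ζ_X(u)^{−1} = det(I − uM) (Hashimoto), and they can be checked against each other numerically. A sketch for X = K 4 (3-regular, so Q = 2I and r − 1 = |E| − |V| = 2), evaluated at an arbitrary u with a pure-Python determinant:

```python
from itertools import combinations

def det(mat):
    """Determinant via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in mat]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-14:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

n, u = 4, 0.3
edges = list(combinations(range(n), 2))          # K_4
darts = edges + [(b, a) for (a, b) in edges]
m2 = len(darts)
M = [[1 if darts[i][1] == darts[j][0] and darts[i][0] != darts[j][1] else 0
      for j in range(m2)] for i in range(m2)]
# Two-term formula: det(I - uM)
lhs = det([[(1.0 if i == j else 0.0) - u * M[i][j] for j in range(m2)]
           for i in range(m2)])
# Three-term formula: (1 - u^2)^(r-1) det(I - Au + Qu^2), A = J - I, Q = 2I
three = [[(1.0 + 2 * u * u if i == j else 0.0) - u * (1 if i != j else 0)
          for j in range(n)] for i in range(n)]
rhs = (1 - u * u) ** (len(edges) - n) * det(three)
assert abs(lhs - rhs) < 1e-9
```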
where A, B, C, D are m × m matrices with the following properties: Now we provide two examples of γ(X), from which one can note that the γ function does not preserve connectivity, and that K 1,3 is a tree while γ(K 1,3 ) is a cycle graph. After that, we state Theorem 1.6, which is essential for further discussion. (1) If X = C n , then γ(X) = 2C n and γ k (X) = 2 k C n . (2) If X = K 1,3 , then γ(X) = C 6 and γ k (X) = 2 k−1 C 6 .
From Theorem 1.6, we can see that the spectrum of A(L(X)) is contained in the spectrum of A(γ(X)). The following are a few immediate observations about the graph γ(X).
(1) The number of vertices of γ(X) is twice the number of edges of X.
(2) We have, where e denotes the column vector with all entries one and J(M (4) It is easy to see that if X is Eulerian, then γ(X) is Eulerian provided γ(X) is connected; this follows from the fact that if X is Eulerian then L(X) is Eulerian (see Harary [6]). But if γ(X) is Eulerian, then X need not be Eulerian, which is clear from Part 2 of Example 1.5. (5) It is well known that if X is regular, then L(X) is regular. This shows that the map γ maps regular graphs to regular graphs. Conversely, if γ(X) is regular, then X is either a regular graph or a semi-regular bipartite graph. This can be seen from Lemma 6.2 in [13].
For further information on the matrix M and the Ihara zeta function, one can refer to [14]. For other results and proofs related to graph theory, we refer to [6,12]. We recall once again that L(X) and X ′′ denote the line graph and Kronecker double cover of X, respectively.
Properties of γ(X)
We begin this section by stating the famous Whitney theorem and then we present the analogous result for the γ function.
Theorem 2.1. [15] Let X and Y be connected graphs with isomorphic line graphs. Then X and Y are isomorphic, unless one is K 3 and the other is K 1,3 . Proof. Suppose that γ(X) is isomorphic to γ(Y ); then by Property 6 of M we note that L(X) is isomorphic to L(Y ). By Theorem 2.1 and Part 2 of Example 1.5, the result follows.
Next, we prove that the γ function is additive with respect to the disjoint union.
Proof. We give the proof for k = 2 and the general case follows by induction on k. Let X 1 , X 2 be graphs with m 1 , m 2 edges, respectively. Then A(γ(X 1 ⊍ X 2 )) and A(γ(X 1 )⊍γ(X 2 )) have the following block structures, respectively.
It is easy to see that the two block structures coincide. We will see a few examples to observe the pattern of graphs under the γ function; for more examples, one can refer to Table 3.
(1) If X = P n , then γ n−1 (X) is a null graph. Table 1 shows the effect of repeated application of the γ function on the path graph. (2) If X = K 1,n , then γ(X) is a crown graph on 2n vertices. In particular, if X = K 1,4 , then γ(X) is the cube. Recall that a crown graph on 2n vertices is a graph with two sets of vertices {u 1 , . . . , u n } and {w 1 , . . . , w n }, in which u i is adjacent to w j if and only if i ≠ j. The following results describe how the γ function preserves connectedness and bipartiteness. Unless specified otherwise, we assume that A(γ(X)) = M + M T , with M written in the block form above. Proposition 2.5.
(1) Let X be a connected graph. Then γ(X) is connected if and only if X is not a cycle graph or a path graph. Moreover, γ(X) cannot be a cycle graph unless X = K 1,3 .
(2) Let γ(X) be a connected graph. Then γ(X) has a cut edge if and only if X contains a pendant vertex which is adjacent to a vertex of degree two. (3) γ(X) is bipartite if and only if X is bipartite.
Proof. Proof of Part 1. Let us suppose that γ(X) is not a connected graph.
Then B + C = 0 and hence B, C = 0. Thus we conclude that the degree of each vertex in X is at most 2. Since X is a connected graph, X is either a cycle graph or a path graph. From Part 1 of Examples 1.5 and 2.4, one can see that the converse also holds.
For the second part of the Proposition, let γ(X) be a cycle graph on 2k (k ≠ 3) vertices. From the structure of the adjacency matrix of a cycle graph, we see that when we add the four blocks of A(C 2k ), we obtain 2A(C k ). On adding all the blocks of A(γ(X)), we get 2A(L(X)). We deduce that L(X) is a cycle graph on k vertices. However, we know from [6] that a connected graph is isomorphic to its line graph if and only if it is a cycle graph. This implies that X is a cycle graph on k vertices, which is a contradiction to Part 1 of Example 1.5. If X = K 1,3 , then from Part 2 of Example 1.5 we have already seen that γ(X) is C 6 .
Proof of Part 2. Let γ(X) have a cut edge and no pendant vertex. From the structure of A(γ(X)), it can be observed that γ(X) has two copies of a graph each of whose adjacency matrix is A 0 . Since γ(X) is connected, the edges corresponding to the matrix B 0 connects the two copies of the graph given by A 0 . As B 0 is symmetric, no edge given by the matrix B 0 can be a cut edge. Also, note that no edge in the two copies given by A 0 in A(γ(X)) can be a cut edge. Therefore γ(X) has a pendant vertex which implies that X has a pendant vertex that is adjacent to a vertex of degree 2. The converse is easy to see as well.
Proof of Part 3. Suppose that X is bipartite with vertex partitions {v 1 , . . . , v n } and {v ′ 1 , . . . , v ′ k }. Choose an orientation in such a way that the e i 's are the directed edges from v i to v ′ j for all 1 ≤ i ≤ n, 1 ≤ j ≤ k. Then the resulting block form of A(γ(X)) is that of a bipartite adjacency matrix; therefore, γ(X) is bipartite. The converse is easy to see.
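The spectral consequence noted after this proof (for bipartite X, the spectrum of γ(X) is the union of the spectra of A(L(X)) and −A(L(X))) can be verified through moment identities Tr(A^k). For the bipartite X = K 1,3 we have γ(X) = C 6 and L(X) = K 3, so Tr(A(C 6)^k) should equal (1 + (−1)^k) Tr(A(K 3)^k) for every k:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace_powers(A, kmax):
    """[Tr(A^1), ..., Tr(A^kmax)]."""
    out, P = [], [row[:] for row in A]
    for _ in range(kmax):
        out.append(sum(P[i][i] for i in range(len(A))))
        P = matmul(P, A)
    return out

# X = K_{1,3} is bipartite: gamma(X) = C_6 and L(X) = K_3.
C6 = [[1 if (i - j) % 6 in (1, 5) else 0 for j in range(6)] for i in range(6)]
K3 = [[0 if i == j else 1 for j in range(3)] for i in range(3)]
tg, tl = trace_powers(C6, 6), trace_powers(K3, 6)
for k in range(1, 7):
    # spec(gamma(X)) = spec(L(X)) u -spec(L(X))  <=>  all moments match
    assert tg[k - 1] == (1 + (-1) ** k) * tl[k - 1]
```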
From the proof of Part 3 of Proposition 2.5, one can see that if X is bipartite, the spectrum of γ(X) is given by the union of the spectra of A(L(X)) and −A(L(X)). Once the number of triangles in X is known, the number of triangles in γ k (X) can be obtained from the following result. Proposition 2.6. Let t i be the number of triangles in γ i−1 (X), where i ≥ 1. Then t i = 2^{i−1} t 1 .
Proof. We shall prove the result by induction on i. We begin by proving it for i = 2. It is easy to see that if each of M ij , M jk and M ik is nonzero, then e k = e −1 k . Consequently, X has multiple edges, which is a contradiction. Similarly, one can show that (M (M 2 ) T ) ii = 0. Thus, 3t 2 = Tr(M 3 ). From Property 7 of M , we have another identity, t 2 = N 3 /3. Hence, the result follows from the fact that t 1 = N 3 /6, as each vertex of a triangle can serve as an initial vertex, with two possible directions. Assume that the result is true for all i ≤ k − 1. Clearly, t k = 2t k−1 . By the induction hypothesis, the proof is complete.
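The identities 3t 2 = Tr(M^3), t 1 = N 3/6 and t 2 = N 3/3 can be checked on a small example. For X = K 4 (t 1 = 4 triangles), every closed non-backtracking walk of length 3 is a directed triangle with a chosen starting edge, so N 3 = Tr(M^3) = 4 · 3 · 2 = 24:

```python
from itertools import combinations

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

edges = list(combinations(range(4), 2))                   # K_4
darts = edges + [(b, a) for (a, b) in edges]
m = len(darts)
M = [[1 if darts[i][1] == darts[j][0] and darts[i][0] != darts[j][1] else 0
      for j in range(m)] for i in range(m)]
M3 = matmul(matmul(M, M), M)
N3 = sum(M3[i][i] for i in range(m))                      # Tr(M^3)
assert N3 == 24
assert N3 // 6 == 4          # t_1: the four triangles of K_4
# t_2 = Tr(M^3)/3 = 8 triangles in gamma(K_4); cross-check via A(gamma) = M + M^T
A = [[M[i][j] + M[j][i] for j in range(m)] for i in range(m)]
A3 = matmul(matmul(A, A), A)
assert sum(A3[i][i] for i in range(m)) // 6 == N3 // 3    # both equal t_2 = 8
```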
Next, we present a characterization of symmetric edge graphs analogous to that of line graphs given by Krausz in [10]. By the star graph at the vertex u in a graph X, denoted St(u), we mean the subgraph of X with V (St(u)) = {w : w is adjacent to u} ∪ {u} and E(St(u)) = {e : u is incident with e}. The approach used in the proof of Theorem 2.8 is motivated by the proof of Theorem 8.4 in [6]. Proof. Let Y be the symmetric edge graph of X. Without loss of generality, X is connected. Let v be any vertex of X; then by Part 2 of Example 2.4 we see that St(v) induces a crown subgraph of Y. The edges of Y lie each in exactly one of these subgraphs. For any e ∈ E(X), there exist exactly two vertices a, b ∈ V (X) such that e ∈ St(a) ∩ St(b), which shows that no vertex of Y is in more than two of the subgraphs.
Let H 1 , H 2 , . . . , H n be the partition of the graph Y satisfying the hypothesis. We explain the construction of X from Y, where Y = γ(X). Let H = {H 1 , H 2 , . . . , H n }, U be the set of vertices of Y which lies in only one of the partitions H i . Also, note that e i ∈ U if and only if e −1 i ∈ U. Let U 1 ⊂ U such that U 1 contains half of the elements of U and either e i or e −1 i ∈ U 1 . The vertices of X are given by H ∪ U 1 . Two vertices of X are adjacent if they have a nonempty intersection. Corollary 2.9. Let X be a connected graph. Then γ(X) is unicyclic if and only if X is a tree with ∆(X) = 3, where ∆(X) denotes the maximum degree of X and there is exactly one vertex of degree three.
Proof. Suppose that $\gamma(X)$ is unicyclic, which implies that $X$ does not contain a cycle. By Theorem 2.8, it is clear that there does not exist a vertex in $X$ of degree greater than or equal to 4. If there exists more than one vertex of degree 3, then we get a contradiction to the hypothesis. The converse follows easily from Theorem 2.8.
Double covers of line graph
Let X be a connected graph with n vertices and m edges.
We illustrate this labelling in Example 3.1. Recall that the adjacency matrix of a bipartite graph can be written as $\begin{pmatrix} 0 & B \\ B^T & 0 \end{pmatrix}$, where $B$ is called the bi-adjacency matrix.
Example 3.1. In this example, we illustrate the labelling of X ′′ , where X is given in Figure 2. We label the edges of X ′′ in the following manner:
Figure 6
The rows and columns of $A(L(X''))$ are indexed by $E(X'')$. It is easy to see that $A(L(X''))$ has the structure $\begin{pmatrix} P & Q \\ Q^T & R \end{pmatrix}$, where $P, Q, R$ are $m \times m$ matrices with the following properties: (1) $P = R$. Since $P_{ij} = 1$ implies that $e_i$ is adjacent to $e_j$, the labelling defined above shows that $e_{m+i}$ is adjacent to $e_{m+j}$.
(2) $Q = Q^T$. Since $Q_{ij} = 1$ implies $e_i$ is adjacent to $e_{m+j}$, the labelling defined above shows that $e_j$ is adjacent to $e_{m+i}$. (3) $P + Q = A(L(X))$. Note that if $P_{ij} = 1$ then $Q_{ij} = 0$, and vice versa. If $(P + Q)_{ij} = P_{ij} + Q_{ij} = 1$, then from the definition of covering graph we have $A(L(X))_{ij} = 1$. We obtain that $L(X'')$ is a double cover of $L(X)$. Also, from Point 3 above and Theorem 1.6, we see that the spectrum of $A(L(X))$ is contained in the spectrum of $A(L(X''))$. To proceed with the proof of Theorem 3.3, we need to define claw-free graphs. Recall that a claw is another name for the complete bipartite graph $K_{1,3}$; a claw-free graph is a graph in which no induced subgraph is a claw. It was proved by Beineke in [3] that the line graph of any graph is claw-free.
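The containment of the spectrum of $A(L(X))$ in that of $A(L(X''))$ can be checked numerically. Below is a minimal sketch (plain Python with NumPy; the helper names are ours, not from the paper) that builds the bipartite double cover directly from an edge list via the lift $\{u, v\} \mapsto \{u, v+n\}, \{v, u+n\}$ and compares line-graph spectra:

```python
import numpy as np
from itertools import combinations

def line_graph_adj(edges):
    """Adjacency matrix of the line graph: two edges are adjacent
    exactly when they share an endpoint."""
    m = len(edges)
    A = np.zeros((m, m))
    for i, j in combinations(range(m), 2):
        if set(edges[i]) & set(edges[j]):
            A[i, j] = A[j, i] = 1.0
    return A

def double_cover_edges(edges, n):
    """Edges of the bipartite (Kronecker) double cover X'':
    each edge {u, v} of X lifts to {u, v+n} and {v, u+n}."""
    out = []
    for u, v in edges:
        out.append((u, v + n))
        out.append((v, u + n))
    return out

def spectrum_contained(small, big, tol=1e-8):
    """Greedy multiset check: every eigenvalue in `small` matches
    one in `big` (up to numerical tolerance, with multiplicity)."""
    big = sorted(big)
    for lam in sorted(small):
        j = min(range(len(big)), key=lambda k: abs(big[k] - lam))
        if abs(big[j] - lam) > tol:
            return False
        big.pop(j)
    return True

# X = C5 (non-bipartite), so X'' = C10 and L(X), L(X'') genuinely differ
edges = [(i, (i + 1) % 5) for i in range(5)]
ev_LX = np.linalg.eigvalsh(line_graph_adj(edges))
ev_LXpp = np.linalg.eigvalsh(line_graph_adj(double_cover_edges(edges, 5)))
print(spectrum_contained(ev_LX, ev_LXpp))  # True
```

For $X = C_5$ the double cover is $C_{10}$, and the check confirms that every eigenvalue of $A(L(C_5))$ occurs, with multiplicity, among those of $A(L(C_{10}))$.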
Proposition 3.2. Let X be a connected graph. Then
(1) L(X ′′ ) is disconnected if and only if X is bipartite.
Proof. Part 1 follows easily from the result proved in [9], which shows that a Kronecker double cover of a graph X is connected if and only if X is connected and non-bipartite.
Proof of Part 2. We know from the definition of a line graph that $t' = t_1 + \sum_i \binom{d_i}{3}$. From Proposition 2.6, we know that $2t_1 = t_2$. Since $X''$ is bipartite, we have $t_1(X'') = 0$. We are now interested in the relationship among $\gamma(X)$, $L(X'')$ and $L(X)''$ for a connected graph $X$. We begin with an example. From Figure 7, we see that $\gamma(X) = L(X)''$ and $\gamma(L(X)) = L(L(X)'')$, but this is not true in general; one can check with $X = C_3$. In the next theorem we characterize all those graphs which satisfy this property. (1) $\gamma(X)$ is isomorphic to $L(X)''$ if and only if $X$ is bipartite.
(2) $\gamma(X)$ is isomorphic to $L(X'')$ if and only if one of the following is true: • X is a path graph.
• X is a cycle graph on an even number of vertices. • $X = K_4$, $K_4 - \{e\}$, or a triangle with a pendant vertex (see Table 2). (3) $L(X'')$ is isomorphic to $L(X)''$ if and only if X is either a cycle graph or a path graph.
Proof. Proof of Part 1. If $X$ is bipartite, then by Part 3 of Proposition 2.5, $\gamma(X)$ is bipartite, which yields the result. Proof of Part 2. In order to prove this, we first show that $\gamma(X)$ is a line graph of some graph if and only if $|V(X)| \leq 4$ or $X$ is either a cycle graph or a path graph.
Suppose that $\gamma(X)$ is a line graph of some graph. Clearly $\Delta(X) \leq 3$, since if any vertex $v$ in $X$ has degree greater than or equal to 4, then by Theorem 2.8, $v$ induces a crown graph on at least 8 vertices. Hence, $\gamma(X)$ cannot be a claw-free graph.
Case 1: If $\Delta(X) \leq 2$, then $X$ is either a cycle graph or a path graph. From Part 1 of Examples 1.5 and 2.4, it is clear that $\gamma(X)$ is the line graph of $2X$. Case 2: Let $\Delta(X) = 3$ and $|V(X)| > 4$. Let $v$ be a vertex of degree 3, and let the vertices adjacent to $v$ be $x, y, z$. Since $|V(X)| > 4$, if we add a pendant edge on any of the vertices $x, y, z$, then the graph $\gamma(X)$ is not a claw-free graph, which is clear from Figure 7. Conversely, if $X = C_n$ (or $P_n$), then $\gamma(X)$ is the line graph of two copies of $C_n$ (or $P_n$). If $X = K_{1,3}$, then by Part 2 of Example 1.5, $\gamma(X)$ is $C_6$, which is the line graph of $C_6$. The graphs $\gamma(X)$ for the other non-isomorphic graphs with $|V(X)| = 4$ are described in Table 2 and Figure 7.
Suppose that $\gamma(X) = L(X'')$. Then by the above statement, it can be noted that $|V(X)| \leq 4$, or $X = C_n$ or $P_n$. If $X = C_n$ and $n$ is odd, then $L(X'') = C_{2n} \neq 2C_n = \gamma(X)$. If $X = C_n$ ($n$ even) or $P_n$, then $L(X'') = \gamma(X)$. For $X = K_4$, $K_4 - \{e\}$, or a triangle with a pendant vertex, we can see from Table 2 and Figure 7 that $\gamma(X) = L(X'')$. If $X = K_{1,3}$, it can be seen that $L(X'') \neq \gamma(X)$. The converse is easy to follow. Proof of Part 3. Assume that $L(X)'' = L(X'')$. From here it is clear that the degree of each vertex of $X$ is less than or equal to two. Hence, $X$ is either a cycle graph or a path graph. Conversely, if $X = C_k$ and $k$ is odd, then $X'' = C_{2k}$ and $L(X'') = L(X)'' = C_{2k}$. If $X = C_k$ ($k$ even) or $P_k$, then $X'' = 2X$, and the result follows.
We conclude from Theorem 3.3 that if $X$ ($\neq K_4$, $K_4 - e$, $C_n$, or a triangle with a pendant vertex) is non-bipartite, then $\gamma(X)$, $L(X)''$ and $L(X'')$ are three non-isomorphic double covers of $L(X)$. We have already seen that for a graph $X$, the spectrum of $A(L(X))$ is contained in the spectrum of $A(\gamma(X))$ and of $A(L(X''))$. An immediate question arises about the remaining eigenvalues, that is, the eigenvalues given by $A_0 - B_0$ and $P - Q$. If $X$ is bipartite, then $A_0 - B_0 = -A(L(X))$ and $P - Q = A(L(X))$. If $X$ is non-bipartite, we have Theorem 3.5. We shall discuss an example for further clarity.
Example 3.4. Let $X$ be the graph given in Figure 2. For the graph $X''$ we continue to use the labelling defined in Example 3.1. The adjacency matrix corresponding to $L(X'')$ can be written out explicitly, and it is easy to see that $P + Q = A(L(X))$. Now, we use the upper-diagonal entries of the matrix $P - Q$ to assign an orientation to the graph $X$ such that $A_0 - B_0 = -(P - Q)$. For example, $(P - Q)_{14} = -1$. From Example 3.1, we see that $e_1$ is an edge between $1'$ and $(2+6)'$, and $e_4$ is an edge between $2'$ and $(3+6)'$. Hence in $X$, we put $e_1$ from 1 to 2 and $e_4$ from 2 to 3. Similarly, we repeat the same process for all of the remaining upper-diagonal entries in $P - Q$ and obtain the oriented graph given in Figure 8. It is easy to check that the required identity holds for the graph in Figure 8. Proof. If $X$ is bipartite, then we are done. Suppose that $X$ is non-bipartite. It is clear that $A_0 - B_0$ and $P - Q$ have zero entries at the same positions. Suppose that $(P - Q)_{ij} = -1$. This implies that edge $e_i$ is adjacent to $e_{m+j}$ in $X''$. Let $e_i$ be an edge between vertices $v'_a$ and $v'_{n+b}$, and $e_{m+j}$ be an edge between vertices $v'_a$ and $v'_{n+c}$. In the graph $X$, we label the edge from vertex $v_b$ to $v_a$ as $e_i$ and the edge from $v_a$ to $v_c$ as $e_j$. This shows that $(A_0 - B_0)_{ij} = 1$.
Recall that two graphs of the same order are called equienergetic (resp., cospectral) if they have the same energy (resp., spectrum). In [1], Balakrishnan showed that for any integer $k \geq 3$, there exist two equienergetic graphs of order $4k$ that are not cospectral. Let $X$ be a graph on $m$ edges, where $m \geq 5$. Then from Theorem 3.5, we see that $\gamma(X)$ and $L(X'')$ are equienergetic graphs of order $2m$ that are not cospectral. We exclude the graphs given in Part 2 of Theorem 3.3. Using Theorem 3.5, we provide a relation between the zeta functions of $\gamma(X)$ and $L(X'')$ in Corollary 3.6. Consequently, we obtain that the zeta function of $L(X)$ divides the zeta functions of $\gamma(X)$, $L(X'')$ and $L(X)''$. The Kronecker product of matrices $A = [a_{ij}]$ and $B$ is defined to be the partitioned matrix $[a_{ij}B]$ and is denoted by $A \otimes B$.
The distribution of ductal carcinoma in situ (DCIS) grade in 4232 women and its impact on overdiagnosis in breast cancer screening
Background The incidence of ductal carcinoma in situ (DCIS) has rapidly increased over time. The malignant potential of DCIS is dependent on its differentiation grade. Methods Our aim is to determine the distribution of different grades of DCIS among women screened in the mass screening programme, and women not screened in the mass screening programme, and to estimate the amount of overdiagnosis by grade of DCIS. We retrospectively included a population-based sample of 4232 women with a diagnosis of DCIS in the years 2007–2009 from the Nationwide network and registry of histopathology and cytopathology in the Netherlands. Excluded were women with concurrent invasive breast cancer, lobular carcinoma in situ and no DCIS, women recently treated for invasive breast cancer, no grade mentioned in the record, inconclusive record on invasion, and prevalent DCIS. The screening status was obtained via the screening organisations. The distribution of grades was incorporated in the well-established and validated microsimulation model MISCAN. Results Overall, 17.7 % of DCIS were low grade, 31.4 % intermediate grade, and 50.9 % high grade. This distribution did not differ by screening status, but did vary by age. Older women were more likely to have low-grade DCIS than younger women. Overdiagnosis as a proportion of all cancers in women of the screening age was 61 % for low-grade, 57 % for intermediate-grade, 45 % for high-grade DCIS. For women age 50–60 years with a high-grade DCIS this overdiagnosis rate was 21–29 %, compared to 50–66 % in women age 60–75 years with high-grade DCIS. Conclusions Amongst the rapidly increasing numbers of DCIS diagnosed each year is a significant number of overdiagnosed cases. Tailoring treatment to the probability of progression is the next step to preventing overtreatment. The basis of this tailoring could be DCIS grade and age.
Background
Ductal carcinoma in situ (DCIS) is a "neoplastic proliferation of cells within the ductal-lobular structures of the breast that has not penetrated the myoepithelial-basement membrane interface" [1]. Before the introduction of mammography screening, DCIS was rarely diagnosed. In 1989, 366 women in the Netherlands were diagnosed with DCIS. In 2003, more than 10 years after the introduction of mass screening, 1171 women had a DCIS diagnosed. With the introduction of digital screening this figure rose to 2046 women in 2011, and most recently to 2406 in 2014 [2].
The extent to which DCIS represents overdiagnosis has been extensively debated in relation to organised screening programmes [3][4][5][6]. Overdiagnosis is defined as a lesion diagnosed by screening in an asymptomatic woman that would not have been detected during the woman's lifetime in the absence of screening [4]. To predict the probability of a DCIS to progress to invasive carcinoma, six different grading systems were proposed, based on morphology or molecular profile [7]. All of these classify DCIS into three categories of malignant potential: low (I), intermediate (II), or high (III). The grade of DCIS is correlated with the risk of progression, as well as with the grade of concurrent invasive carcinoma [8][9][10][11][12][13]. The transition from low-grade DCIS to high-grade DCIS or to high-grade invasive carcinoma is deemed unlikely [8][9][10]12].
The grade distribution of DCIS has been studied in mostly small series [6,[14][15][16][17][18], or in studies that only included screen-detected cases (Table 1) [19]. More insight into this distribution, based on larger numbers in both screened and non-screened populations, is of paramount importance and may improve our estimates of overdiagnosis.
The aim of this study was to establish the distribution of different grades of DCIS in different subgroups based on mass screening status and age group, and to estimate the overdiagnosis rate for each grade and age group specifically.
Patient selection
We obtained 17,744 excerpts from 12,301 women with DCIS from the years 2007, 2008 and 2009 from the 'Nationwide network and registry of histopathology and cytopathology in the Netherlands' (PALGA). PALGA is a national database containing the excerpts and coded diagnoses of all pathological and cytological examinations performed in the Netherlands [20]. The mass screening status of these women was established by linking the database to the databases of the screening organisations by an independent third party, with the permission of the screening organisations. Our database contained anonymised records of mass screening status (positive, negative, year of last mass screening and number of mass screening examinations), age, year of diagnosis, and a short summary of the conclusion of the original pathology report.
From the 12,301 women, we excluded those who also had a concurrent invasive breast cancer (ipsilateral or contralateral, N = 7089), those who had a lobular carcinoma in situ and no DCIS (N = 6), those who turned out after excision biopsy or ablation not to have any malignancy (N = 131), those who had recently been treated for invasive breast cancer (N = 247), those who had no grade mentioned in the excerpt (N = 17), those who had an inconclusive excerpt on invasion or otherwise (N = 242), and women who had a prevalent DCIS, rather than a new diagnosis in the study period (N = 354). We excluded contralateral disease because our model does not include bilateral disease.
Grading of DCIS
In line with the Dutch guidelines, the classification by Holland et al. is almost exclusively used [21]. At the start of the mass screening programme in the early 1990s, pathologists were instructed on how to uniformly classify each DCIS.
DCIS grade was determined using the information in the short summary of the pathology report by description, i.e. high, moderate, or low differentiation; low, intermediate, or high malignancy potential; or grade I, II, or III. If the summary contained more than one grade, this case was graded according to the highest grade mentioned. If there was a discrepancy between grades in different specimens of the same patient, the grade was based on the most representative specimen, i.e. resection is more representative than biopsy, but biopsy is more representative than cytology.
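The two tie-breaking rules just described (highest grade within a report; most representative specimen across discrepant reports, with resection ranked above biopsy and biopsy above cytology) can be sketched as a small helper. The function and field names are illustrative assumptions, not the registry's actual code:

```python
# Hypothetical ranking of specimen representativeness, per the text
SPECIMEN_RANK = {"resection": 3, "biopsy": 2, "cytology": 1}

def assign_dcis_grade(specimens):
    """Patient-level DCIS grade from a list of (specimen_type, grades)
    tuples, with grades in {1, 2, 3}.  Keep only the most representative
    specimen type present, then take the highest grade mentioned."""
    best_rank = max(SPECIMEN_RANK[t] for t, _ in specimens)
    candidates = [g for t, grades in specimens
                  if SPECIMEN_RANK[t] == best_rank
                  for g in grades]
    return max(candidates)
```

For example, a biopsy graded III alongside a resection graded II yields grade II (resection wins), while a single biopsy mentioning grades I and III yields grade III (highest grade wins).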
Statistical analysis
Proportions of DCIS grades were calculated by year, age group, and screening status. We compared these proportions between screening groups using the Pearson chi-square test. Multivariate analyses on age groups were performed with a logistic regression model. The statistically significant parameters were identified by the introduction of variables in a stepwise manner. All calculations were performed using IBM SPSS version 20.0 (IBM Corp., Armonk, NY, USA).
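The study ran its chi-square tests in SPSS, but the grade-by-screening-status comparison can be sketched by hand for a 2 × 3 table; with (2−1)(3−1) = 2 degrees of freedom the p-value has the closed form exp(−χ²/2). The counts below are made up for illustration and are not the study's data:

```python
import math

def chi2_grade_by_screening(table):
    """Pearson chi-square for a 2 x 3 contingency table
    (rows: screening status, columns: DCIS grade low/intermediate/high).
    Returns (chi2, p); p uses the 2-degrees-of-freedom closed form."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    chi2 = sum((table[i][j] - row[i] * col[j] / total) ** 2
               / (row[i] * col[j] / total)
               for i in range(len(row)) for j in range(len(col)))
    return chi2, math.exp(-chi2 / 2)

# Illustrative counts only -- roughly mirroring the reported proportions
chi2, p = chi2_grade_by_screening([[230, 420, 710],
                                   [190, 350, 600]])
```

With these made-up counts the distributions are nearly identical across rows, so the test is far from significant, consistent with the study's finding of no difference by screening status.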
Modelling approach
The MISCAN model is a microsimulation model that simulates the individual life histories of women [22]. The probability of each woman to have an onset of breast cancer is determined by calibrating the model to the incidence rate in 1989 (the year before screening was introduced), adjusted with an annual percentage change of 1.4 % to account for the rising background breast cancer incidence [23]. The natural history of breast cancer is modelled as a Markov-like progression through the successive preclinical stages of the disease. Details of the model have been described previously [4]. For this analysis we added the three DCIS grades to the model, using the age-dependent grade distribution found in this study ( Fig. 1).
Following onset, breast cancer in a preclinical stage can progress to the next preclinical stage (dependent on the duration of the previous state), or become clinically detected. In addition, the DCIS stages may also regress to normal [24,25]. Screening is superimposed on this life history.
The transition probabilities, duration of tumour stages, and test sensitivities were calibrated using data from the Dutch population and Dutch breast cancer screening from 1975 to 2010 on breast cancer incidence by stage, age, and detection mode. The Dutch nationwide breast cancer screening programme has invited all women aged 50-69 since 1990 and women aged 50-75 since 1998 biennially for a mammographic screening examination, free of charge. The attendance rate is approximately 80 % [26].
We chose to look at model outcomes for the years 2000-2009 because there was a steady state situation in these years, more than 10 years after the start of the screening programme. We evaluated the following output: incidence rate by detection mode (screen detected or clinically detected), age, and year of diagnosis. The model compares women in the situation with screening, to the same women in the situation without screening; if a woman has a screen-detected cancer, but would not have had a diagnosis in the situation without screening, this case is regarded as overdiagnosed (Fig. 2).
The estimates and definitions of overdiagnosis vary widely among international publications [4]. To minimise confusion, we used the definitions of overdiagnosis which were deemed most useful by an independent review panel in the UK; from a population perspective: the proportion of all cancers ever diagnosed in women of the screening age and over (50-100 years) that are overdiagnosed; and from an individual perspective: the proportion of all cancers ever diagnosed in women of the screening age (50-75 years) that are overdiagnosed [27].

Fig. 2: Screening affecting three women differently. The first box is the life history of a woman who has an onset of breast cancer, is diagnosed clinically, and dies of breast cancer. The second box is the life history of a woman who also has an onset of breast cancer, but who dies of other causes before this would be detected. The third box is the life history of a woman who has an onset of breast cancer, but also a spontaneous regression; this woman would not have been diagnosed without screening. The fourth box indicates the situation for these three women had screening been introduced. The woman in the first box no longer dies from breast cancer; the other two women do not benefit from screening, they have been overdiagnosed.
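Under MISCAN's paired-scenario design described above, the overdiagnosis count reduces to comparing each simulated woman's diagnosis status with and without screening. A toy sketch (the field names are hypothetical; the model's internals are not given in this paper):

```python
def overdiagnosis_proportion(women):
    """Proportion of diagnoses in the screening scenario that are
    overdiagnosed: diagnosed with screening but not in the paired
    no-screening simulation of the same woman."""
    over = sum(1 for w in women
               if w["dx_with_screening"] and not w["dx_without_screening"])
    total = sum(1 for w in women if w["dx_with_screening"])
    return over / total if total else 0.0

# The three women of Fig. 2: one diagnosed in both scenarios,
# two diagnosed only because screening was introduced
women = [
    {"dx_with_screening": True, "dx_without_screening": True},
    {"dx_with_screening": True, "dx_without_screening": False},
    {"dx_with_screening": True, "dx_without_screening": False},
]
rate = overdiagnosis_proportion(women)  # 2/3
```

The population-level and individual-level estimates in the paper differ only in which age range of diagnoses enters the denominator.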
Assumptions on natural behaviour of DCIS
In the original model a 2 % regression rate, an 11 % progression rate, and a 5 % clinical detection rate were assumed for all DCIS, resulting in a proper fit of incidence [28]. Little is known about the natural history of DCIS without treatment. Small studies have been published, indicating a progression rate of one in two to one in three for low-grade DCIS, one in three for intermediate-grade DCIS, and two in three for high-grade DCIS [29,30]. Progression rate may differ from the rate assumed in the original model. In the new model we assumed that intermediate-grade DCIS has the same transition probabilities as all DCIS had in the original model. We lowered the regression rate to 1 % for high-grade DCIS, and increased the regression rate to 4 % for low-grade DCIS, based on the findings of Sanders et al. [30]. The probability for a DCIS to be clinically detected was assumed independent of grade. The probability of progression (16 % for low-grade DCIS, 31 % for intermediate-grade DCIS, and 53 % for high-grade DCIS) was estimated by correcting the probabilities of low-grade DCIS and high-grade DCIS by the progression found in the literature [29,30]. Adjusting the progression rate, and therefore the duration of the state, influences all successive states, because the progression of each successive state is dependent on the duration of the previous state. High-grade invasive breast cancer follows high-grade DCIS and low-grade invasive breast cancer follows low-grade DCIS. We calibrated the DCIS incidence rate to observed data for the period 1990-2010.
Patients/distribution of DCIS grade
Patient characteristics are summarised in Table 2. There was no significant difference in the distribution of grades between the DCIS detected by mass screening and the DCIS not detected by mass screening (from the interval group); 16.4-18.8 % were low grade, 27.2-31.6 % were intermediate grade, and 52.0-54.0 % were high grade ( Table 3).
Univariate analysis of the group, not detected by mass screening, showed that DCIS grade has an inverse linear association with 5-year age group (P value = 0.015), and with age as a linear variable (P value = 0.018). Year of diagnosis did not contribute in this group. Overall the year of diagnosis was a significant independent variable (P value = 0.02) ( Table 4).
Estimating overdiagnosis
The distribution of DCIS grade was included in the model and the new model was calibrated estimating dwell times and probabilities of transition on incidence data from the Cancer Registry and grade distribution from our study (Fig. 3).
Discussion
This is the largest study on the distribution of DCIS grade and the first modelling study to estimate overdiagnosis rate by DCIS grade. The distribution of grades in DCIS is dependent on age, but not on mass screening status. This is in accordance with earlier studies on grade distribution. The overall distribution is also consistent with these studies (Table 4) [6, 14-16, 18, 19, 31]. The incidence rate of DCIS has increased rapidly over recent years. DCIS is unequivocally associated with mammography screening. Approximately one third of the cases in the database were detected by mass screening, which corresponds to the proportion of all breast cancers (both in situ and invasive) in the Dutch population detected by mass screening, and to the findings of Shin et al. [32]. However, in our study, when linking Dutch pathology reports to the records of the screening organisations, most DCIS were not known to the mass screening organisations. This can partly be explained by the fact that one of the nine organisations that were responsible for screening at the time did not deliver data to be linked to the PALGA database. This organisation represents approximately 15 % of all screened women annually. Second, we do not know how the diagnoses not detected by mass screening were established. Given the age distribution and the fact that DCIS is generally not palpable, we assume that the majority of these cases are diagnosed through screening outside the mass screening programme. As expected, and in line with previous studies, we found more low-grade DCIS in older women [33]. In general, more aggressive cancers are diagnosed earlier in life. Those that remain for detection at an older age are more likely to be less aggressive [34].
In the Netherlands, a transition to screening with digital mammography was made between 2005 and 2010. In 2010, the detection rate of DCIS in mass screening increased substantially, probably as a result of the introduction of digital mammography screening. Currently, it is not yet clear whether this is a prevalence effect or a lasting effect. We studied the years 2007, 2008 and 2009; thus, an increasing proportion of the DCIS we considered has been found with digital screening. We have no knowledge of which DCIS were detected by digital mammography or film-screen mammography. Also, the DCIS detected outside the mass screening programme are equally likely to have been detected with digital mammography. We did not find a difference in grade distribution in screen-detected DCIS over this period; therefore it seems unlikely that digital screening will have significantly altered the grade distribution, which is also in accordance with the findings of Bluekens et al. [19].
We have found that grade distribution for DCIS in the years 2007, 2008 and 2009, was inversely related to age, but we have no information on historical development of this distribution. For our study, we assumed the distribution to be stable over time.
Considerable controversy exists on whether DCIS is the ideal stage of the disease for early detection, or whether the detection of DCIS represents overdiagnosis, and, consequently, overtreatment. However, agreement exists that it is essential to determine which individual diagnosis is overdiagnosis and which is not. Central to this discussion is the natural behaviour of DCIS. Now that we have specified grades of DCIS in the microsimulation model, we can estimate overdiagnosis more accurately. Only 16.4 % of DCIS detected by mass screening are low grade, of which 60 % and 61 %, respectively, are overdiagnosed, depending on the definition of overdiagnosis. We found that 50.9 % of all DCIS detected by mass screening are high grade, and therefore have a high risk of progression. In these cases we are bound to find aggressive cancer earlier and to prevent fast-growing invasive cancer, but even so, 45 % of these cases are overdiagnosed, independent of the definition of overdiagnosis. For younger women (age 50-60) with a high-grade DCIS, however, overdiagnosis estimates vary between 21 % and 29 % from an individual perspective; therefore for these women screening is most protective.
We found an increasing amount of overdiagnosis in older women with high-grade DCIS; this is the result of a longer dwell time in the model in high-grade DCIS in women over 60. This dwell time was calibrated by the model. A disease with a longer dwell time is more likely to be detected by screening. The longer dwell time of high-grade DCIS in older women correlates to the findings of Weigel et al., who found a higher detection rate of high-grade DCIS in older women [33].
Our overdiagnosis estimates make a general decision on treatment from a population-based approach a very difficult one for women with DCIS. We estimate that 60 % of these women would be overtreated if they undergo treatment for this disease, of which they would never have been aware in the absence of screening. On the other hand, they are diagnosed with an entity that carries a specific risk for progression to an invasive and potentially lethal disease and will therefore lean towards treatment, rather than active surveillance. If this entity would be named differently this might be perceived differently [35]. DCIS can also be regarded as a risk factor like lobular carcinoma in situ (LCIS). One can question whether the increased risk in DCIS, as compared to LCIS, justifies the current practice of invasive treatments.
Specific estimates for overdiagnosis rate by grade will become increasingly important. These estimates may change when the treatment for DCIS can be even more customised according to grade [36]. To our knowledge, a trial to compare treatment of DCIS to active surveillance is planned [37].
Limitations of the study
We did not review grading or examine inter-observer variation between pathologists, because this was beyond the scope of our study. PALGA and the Dutch association of pathologists will be conducting a study to evaluate the inter-observer variation in the near future. We believe our study to be a proper representation of the current Dutch situation. There is no reason to suspect that DCIS not detected by mass screening represents a different patient group than DCIS detected by mass screening, and for that reason, for both groups the same dilemma with regard to a possible inter-observer variation exists.
Assumptions on the behaviour of DCIS were based on older studies. Advances have been made in the evaluation of biopsies. Currently more sampling is done and pathologists are more aware of the possible findings in DCIS; this could influence the assumptions on the behaviour of DCIS if the studies on which they are based were repeated now.
Conclusions
DCIS grade is almost equally distributed across the screened population in the breast cancer screening programme and the population not subjected to/ participating in mass screening. DCIS has been divided into three grades, each constituting a unique entity with its own natural history. We found that the distribution of these grades is not dependent on mass screening status, but is dependent on age. When taking the different grades into account, overdiagnosis rates of breast cancer in mass screening are 60 % for low-grade DCIS and 45 % for high-grade DCIS from a population perspective, and 61 % and 45 % respectively from an individual perspective. When taking the younger ages and high grade into account overdiagnosis rate from an individual perspective is 21-29 %.
These figures underline the necessity of large randomised trials for watchful waiting in low-grade DCIS, whether these are detected in a mass screening programme or not.
Ethics statement
Since the research was retrospectively performed on data, and did not involve subjecting patients to certain acts or appointing them behavioural changes, consent from the medical ethics commission was not required according to Dutch law. We only ever received fully anonymised data.
Consent statement
By participating in the programme, women automatically consent to the use of their data to evaluate and improve the programme. Information about the use of data is provided with a flyer accompanying the invitation letter. If a woman does not want the screening organisation to use her data for this purpose, she can return the signed corresponding form to the screening organisation. Only a minor fraction (0.01 %) used this possibility.
Antiretroviral adherence and virological outcomes in HIV-positive patients in Ugu district, KwaZulu-Natal province
Adherence to antiretroviral therapy is crucial to ensure viral suppression. In the scientific community it is widely accepted that an adherence level of at least 90% is necessary to achieve viral suppression. This study uses pharmacy refill records to describe antiretroviral adherence in HIV-positive patients in Ugu District, KwaZulu-Natal, South Africa, and to assess pharmacy refill records as a reliable monitoring method of antiretroviral therapy. In total, 61 patients' records were reviewed. Overall, 50 (82%) of the patients achieved an optimum adherence level of at least 90%, whereas 19 (38%) of these patients did not show any related viral suppression. A statistically significant relationship between adherence and viral suppression was not demonstrated. Therefore, pharmacy refill records cannot be recommended as an alternative method of monitoring response to antiretroviral therapy; laboratory tests, including CD4 cell count and/or viral load, must be combined with the pharmacy refill method for monitoring of antiretroviral therapy in HIV-positive patients.
Introduction
Worldwide, the number of people newly infected with human immunodeficiency virus (HIV) continues to decline (United Nations Programme on HIV/AIDS (UNAIDS) 2012). There were 2.3 (1.9-2.7) million new HIV infections globally in 2012, showing a 33% decline in the number of new infections from 3.4 (3.1-3.7) million in 2001 (UNAIDS, 2013). Also, the annual number of people dying from AIDS-related causes declined by at least 50% from 2005 to 2011 because of scaled-up antiretroviral therapy and the steady decline in HIV incidence since the peak in 1997. As programmatic scale-up has continued, health gains have accelerated, and the number of life-years saved by antiretroviral therapy in sub-Saharan Africa quadrupled in the last four years (UNAIDS, 2012). In addition to the effects on acquired immune deficiency syndrome (AIDS) mortality and overall HIV prevalence, it is believed that improved treatment access could help to lower HIV incidence by reducing the viral load at the individual and community level (Centres for Disease Control and Prevention (CDC), 2013). Antiretroviral therapy (ART) aims to reduce and sustain plasma viral load levels to below the detectable limit of the assay. The sustained inhibition of viral replication results in partial reconstitution of the immune system in most patients, substantially reducing the risk of clinical disease progression and death (Adler, Edwards, Miller, Sethi, & Williamset, 2012). Adherence to antiretroviral therapy is crucial to ensure viral suppression and to decrease the risk of drug resistance. The best biological marker of adherence is an undetectable viral load in patients on ART (Meintjes et al., 2012). In the scientific community, it is widely accepted that an adherence level of at least 90% is necessary to suppress the virus sufficiently to avoid the risk of mutation and to prevent the development of drug-resistant strains and drug failure (Van Dyk, 2013).
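A pharmacy-refill adherence measure of the kind used in this study is typically computed as days of medication supplied divided by days elapsed in the review window. The exact formula used in the district is not given in the text, so the following is an illustrative sketch only, with hypothetical dates:

```python
from datetime import date

def refill_adherence(refills, review_end):
    """Pharmacy-refill adherence (%): days of medication supplied
    divided by days elapsed from first dispensing to the end of the
    review period.  `refills` is a list of (dispense_date, days_supplied)
    tuples; supply is capped at the elapsed time."""
    refills = sorted(refills)
    start = refills[0][0]
    days_elapsed = (review_end - start).days
    days_supplied = sum(d for _, d in refills)
    return 100.0 * min(days_supplied, days_elapsed) / days_elapsed

# Example: three 28-day supplies collected over a 90-day window
r = [(date(2014, 1, 1), 28),
     (date(2014, 1, 30), 28),
     (date(2014, 3, 1), 28)]
adh = refill_adherence(r, date(2014, 4, 1))  # 84/90 days -> about 93 %
```

On this measure the example patient clears the 90% threshold discussed above, yet, as the study's results show, crossing that threshold did not reliably predict viral suppression.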
Improving the ability of providers to assess adherence is essential for the routine care of HIV-infected patients, especially in settings where viral load monitoring is limited.
Statement of the research problem
The South African antiretroviral treatment guidelines recommend monitoring viral load at six months after starting ART, at one year, and then annually, to identify treatment failure and problems with adherence (Statistics South Africa, 2013). The researchers identified that, despite a standardised and supportive policy, plasma viral load measurements are not promptly done for HIV-positive patients on ART, which could lead to the emergence of drug resistance and result in therapeutic failure. Thus, an alternative method for monitoring the response to antiretroviral therapy in HIV-positive patients is imperative. Based on the above, the Provincial Department of Health in KwaZulu-Natal introduced the pharmacy refill system as a way to monitor ART adherence. This study evaluated the efficacy of this new system of monitoring adherence to ART (Statistics South Africa, 2013).
Research purpose
The purpose of this study was to describe antiretroviral adherence in HIV-positive patients using pharmacy refill records and to assess pharmacy refill records as an alternative method of monitoring response to antiretroviral therapy.
Research objectives
The objectives for this study are to:
• describe adherence to antiretroviral therapy by HIV-positive patients in Ugu District
• establish whether pharmacy refill records are a reliable method of monitoring HIV-positive patients on antiretroviral therapy.
Research questions
The study sought to answer the following questions:
• To what extent do HIV-positive patients in Ugu District adhere to ART?
• Can pharmacy refill records be used as a reliable method of monitoring patients' adherence to antiretroviral therapy?
Significance of the study
The findings will help clinical practice specifically to achieve the following:
• To assess the response to antiretroviral therapy using a very simple measure of adherence, namely pharmacy refill, for clinics that do not have CD4 count or viral load monitoring capabilities.
• For clinics that are able to perform viral load assessment in all patients routinely, adherence monitoring using pharmacy refill could guide decision-making on the timing of these tests.
• For clinics that are unable to perform routine viral load measurement, the findings of this study will help to recommend the use of pharmacy refill as a practical monitoring tool for early identification of patients at high risk of virological failure.
Research design and method
The research design is the overall plan for obtaining answers to the questions being studied and for handling some of the difficulties encountered during the research process. A design is the blueprint for conducting a study that maximises control over factors that could interfere with the validity of the findings (Polit & Beck, 2008; Burns & Grove, 2009). The researchers used a cohort study, which was quantitative, retrospective and descriptive in nature. Quantitative research is a formal, objective, systematic process in which numerical data are used to obtain information about the world (Burns & Grove, 2009). Descriptive research refers to research that has as its main objective the accurate portrayal of the characteristics of persons, situations, or groups, and/or the frequency with which certain phenomena occur (Polit & Beck, 2008). Most importantly, the purpose of descriptive research is to explore and describe phenomena in real-life situations. In addition, this approach is used to generate new knowledge about concepts or topics about which limited or no research has been conducted (Burns & Grove, 2009). A retrospective design involves collecting data on an outcome occurring in the present, and then linking it retrospectively to antecedents or determinants occurring in the past (Polit & Beck, 2008). Baseline demographic data, medical records on file, HIV viral load measurements, as well as pharmacy refill records of HIV-positive patients who were initiated on ART between January 2011 and December 2012 were retrieved from the hospital's records. These data were used to measure adherence to antiretroviral therapy by HIV-positive patients.
This study systematically and objectively reviewed viral load measurements of HIV-positive patients who have been on ART and their pharmacy refill records in order to describe antiretroviral adherence in HIV-positive patients and to determine the ability of pharmacy refill adherence to detect virological outcomes.
Research setting
The researchers conducted the study at one of the district hospitals in Ugu Health District, on the lower south coast of the province of KwaZulu-Natal in South Africa. The district provides health services to the population using the primary health care approach through the district health system, and this is done at all levels of care. The Ugu district has three district hospitals, one regional hospital, one specialised hospital, two community health centres, 56 fixed clinics (including three gateway clinics) and 15 mobile clinics. The hospital where the study was conducted had about 1 987 patients who were still attending the institution for routine check-ups and collection of antiretroviral drugs (ARVs) at the end of December 2012.
Research population and sampling
The population is all the elements (individuals, objects, or substances) that meet certain criteria for inclusion in a given universe (Burns & Grove, 2009). The population for this study was the records of HIV-positive patients who attended the designated district hospital for antiretroviral therapy between January 2011 and December 2012 and who met the eligibility criteria.
Sampling is a process of selecting subjects, events, behaviours, or elements for participation in a study (Burns & Grove, 2009). A sample is a subset of the population that is selected for a particular study, and sampling defines the process for selecting a group of people, events, behaviours, or other elements with which to conduct a study (Burns & Grove, 2009). The sampling plan specifies in advance how the sample will be selected and recruited, and how many subjects there will be (Polit & Beck, 2008).
A probability sampling design using a systematic sampling technique was used to select every 10th patient's record that met the following criteria: 18 years and older, had completed at least 12 months of treatment, and had at least two viral load measurements recorded after initiation of antiretroviral therapy (Polit & Beck, 2008). The medical records on file, HIV viral load measurements, as well as pharmacy refill records were utilised as data sources for the study.
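The sampling procedure above can be sketched in code. This is an illustrative sketch only, not the study's actual procedure; the record fields (`age`, `months_on_art`, `viral_loads`) are hypothetical names chosen for demonstration.

```python
# Illustrative sketch of the study's systematic sampling: keep records that
# meet the inclusion criteria, then take every 10th record from that pool.
# Field names are hypothetical.

def eligible(record):
    """Inclusion criteria: aged 18+, >= 12 months on ART, >= 2 viral loads."""
    return (record["age"] >= 18
            and record["months_on_art"] >= 12
            and len(record["viral_loads"]) >= 2)

def systematic_sample(records, interval=10):
    """Select every `interval`-th record from the eligible pool."""
    pool = [r for r in records if eligible(r)]
    return pool[::interval]

# 100 eligible dummy records -> a sample of 10
records = [{"age": 20 + (i % 40), "months_on_art": 14, "viral_loads": [400, 30]}
           for i in range(100)]
sample = systematic_sample(records)
print(len(sample))  # 10
```

Filtering before sampling matters: taking every 10th record first and filtering afterwards would yield an unpredictable sample size.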
Data collection instrument
Quantitative researchers typically develop a detailed data collection plan; researchers often use formal data collection instruments (Polit & Beck, 2008). The gathering of information to address a research problem was done by using a checklist as data collection instrument. It was developed by the researchers for recording the variables related to patient demographic information, medical information, and pharmacy refill records. The instrument used was not adapted from any previously published literature.
Ethical considerations for this study
Permission to conduct this study was granted by the Higher Degrees Committee of the University of South Africa (UNISA). Further permission was granted by the Provincial Department of Health KwaZulu-Natal. The hospital where the records accessed are kept also gave permission for this study to be conducted.
Data collection
In quantitative research, data collection involves obtaining numerical data to address the research objectives, questions, or hypotheses (Burns & Grove, 2009). Data were collected by the researchers and the fieldworkers using the checklist for recording of patient demographic data, clinical data and pharmacy drug information retrieved from patients' records.
Data analysis
Data analysis is defined as the systematic organisation and synthesis of research data (Polit & Beck, 2008). Analysis of the data was carried out by using the Statistical Package for Social Sciences (SPSS) for Windows (Version 17), and a statistician assisted the researcher in analysing and interpreting collected data.
Descriptive statistics were used to describe key research variables and summarise sample characteristics in terms of frequency distribution, measures of central tendency and measures of variability. Once these features were known, the researchers used bivariate descriptive statistics to describe the relationship between antiretroviral adherence and virological outcomes.
Findings
A total of 61 (30.8%) records of HIV-positive patients who met the inclusion criteria were reviewed for this study. Of the 198 (100%) records of patients selected, 137 (69.2%) were not included because they did not meet the above criteria. Of the 61 (100%) patients' records reviewed, 41 (67.2%) were initiated on antiretroviral therapy in 2011, while 20 (32.8%) were initiated in 2012. Table 1 shows that there were more females (33 or 54.1%) than males (28 or 45.9%) living with HIV. This confirms the inequalities between men and women that are created and reinforced by gender roles, typically leaving women especially vulnerable to HIV infection. According to Van Dyk (2013), women are more likely than men to become infected with HIV during unprotected vaginal intercourse. There are various biological, cultural, and social reasons which make women more susceptible to HIV infection than men. As the recipients of semen, they are exposed to semen for a longer time. They also have a larger surface area of mucosa (the thin lining of the vagina and cervix) exposed to the partner's secretions during sexual intercourse. Apart from their biological vulnerability, women become more vulnerable in societies in which they are seen as having lower status than men, which makes them dangerously vulnerable in sexual relationships (Van Dyk, 2013).
Gender distribution (n = 61)
The findings of this study are supported by previous studies which found that in South Africa just over 51% (27.08 million) of the population are female, and that the ratio of new female infections to male infections for those aged 15 to 49 was 1.5 by 2013 (El-Khatib et al., 2011; Statistics SA, 2013).
Age in years (patients)
Most records of patients reviewed showed that 22 (36.1%) were aged between 30 and 34 years. The key age group of adults aged between 20 and 49 years represented the majority of patients, at about 57 (93.4%); among these, 31 (54.4%) were females and 26 (45.6%) were males. This shows that there are higher HIV infection rates among young and working-class women, especially those aged between 20 and 49 years, compared to young men.
These findings correlate with evidence from South African studies showing that some gender norms related to masculinity encourage men to have more sexual partners and older men to have sexual relations with much younger women. This contributes to higher HIV infection rates among young and working-class women, especially those aged between 15 and 49 years, compared to young men (Mutinta, Gow, George, Kunda & Ojteg, 2011). Table 2 shows that about 37 (60.7%) patients were single while 24 (39.3%) were married. This suggests that, in relation to marital status and HIV infection, being single amplifies the risk of becoming infected with HIV because these individuals are more likely to engage in risk-taking behaviours, including casual sex, multiple and concurrent sexual partnerships, and failure to use condoms during sex. This finding agrees with a study done in Zimbabwe which found that being single was associated with HIV infection (Ministry of Health and Child Care, 2014). In addition, Shisana et al. (2014) found that in South Africa HIV infection varies considerably by marital status: those who are married are less likely to be HIV positive compared to any other reported marital status. Figure 1 depicts that about 72% (44) of all patients whose records were reviewed were unemployed, with about 25 (56.8%) being females and 19 (43.2%) being males. This finding suggests that the socio-demographic context in which people live highly influences the individual's vulnerability to HIV, in line with Levinsohn et al. (2011), who found that being HIV positive is associated with an increase in the likelihood of being unemployed.
Employment status
McLaren (2011) found that in South Africa, individuals with HIV tend to be unemployed, and unemployed people are more likely to be HIV positive. Furthermore, Levinsohn et al. (2011) found that being HIV-positive is associated with a 6 to 7 percentage point increase in the likelihood of being unemployed. Table 3 shows that about 56 (92%) of patients had disclosed their HIV status to someone and only 4 (7%) had not disclosed their HIV status. Disclosure of one's status to a confidant could improve adherence, as patients will be reminded by their confidants. Sendagala (2010) noted that, having disclosed their status, people living with HIV would probably adhere to the treatment as they will get both physical and psychological support. In addition, disclosure of HIV status and support by a treatment partner or peer counsellor have been shown to have a great impact on adherence (Meintjes et al., 2012).
Adherence level
Of the 61 records reviewed for this study, overall 50 (81.9%) of the patients achieved an optimum adherence level of 90% and above, while 6 (9.9%) reached an adherence level between 80 and 89%, and 5 (8.2%) achieved an adherence level of between 70 and 79%. The mean adherence level was 94.8%, ranging from 71% to 100% (Figure 2).
In this study, adherence is measured as the consistent collection of antiretroviral medications from the pharmacy at prescribed intervals. It is assumed that if patients collect their medication, then they are likely to take the medication. The adherence level is expressed as the percentage of scheduled collections that were actually made over a period of 12 months or more.
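The adherence measure described above reduces to a simple calculation. The function below is a hedged sketch of that arithmetic, assuming one scheduled pharmacy collection per month; the function name and figures are illustrative, not taken from the study.

```python
# Hedged sketch of the refill-based adherence measure: the percentage of
# scheduled pharmacy collections actually made over the observation period.

def refill_adherence(collections_made, collections_expected):
    """Adherence (%) = collections made / collections expected x 100."""
    if collections_expected <= 0:
        raise ValueError("expected collections must be positive")
    return 100.0 * collections_made / collections_expected

# A patient on monthly refills over 12 months who missed one collection:
adherence = refill_adherence(11, 12)
print(round(adherence, 1))  # 91.7 -> falls in the >= 90% (optimum) band
```

Under this scheme, missing a single monthly collection in a year still leaves a patient in the study's optimum (at least 90%) adherence band.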
Adherence to antiretroviral therapy results in suppression of the plasma viral load combined with an increase in CD4 count. Therefore, adherence to antiretroviral therapy should be monitored by measuring the plasma viral load and CD4 counts.
In this study, the mean CD4 T-cell count at antiretroviral therapy initiation was 250.67 cells/mm3. About half (31 or 50%) of patients had severe immune suppression with recorded CD4 T-cell counts lower than 200 cells/mm3, while 19 (31%) had CD4 T-cell counts of 200-349 cells/mm3. After 12 months of antiretroviral therapy, the mean CD4 T-cell count was 347.56 cells/mm3, with 19 (31%) of patients recording CD4 T-cell counts lower than 200 cells/mm3 and 15 (25%) patients achieving a CD4 T-cell count of more than 500 cells/mm3. Thirty-seven (60%) achieved a sustained viral load of less than 50 copies/ml 12 months after commencing antiretroviral therapy.
The findings in this study show that although overall 50 (82%) of patients had adherence levels of 90% or above, only 15 (25%) patients achieved immunological recovery with CD4 T-cell counts of more than 500 cells/mm3, while 19 (31%) still had severe immune suppression, with CD4 T-cell counts of less than 200 cells/mm3. Only 37 (60.7%) patients achieved a sustained viral load of less than 50 copies/ml after 12 months of antiretroviral therapy. However, the mean CD4 T-cell count increased from 250.67 cells/mm3 to 347.56 cells/mm3.
These findings suggest that patients may not take all collected medications, may take them in amounts or on a schedule other than prescribed, or may fail to take them with food as directed. In addition, patients may share or sell their own medications and may hoard medications to avoid discrimination and stigma in the community or family. These findings are supported by Zaragoza-Macias et al. (2010), who found that the relationship between refills and actual ingestion of medications is not clear, and that it is therefore difficult to measure adherence in the outpatient setting accurately.
According to the literature, antiretroviral therapy reduces the HIV viral load as much as possible, preferably to undetectable levels for as long as possible. By doing so, the CD4 T-cell lymphocyte count usually increases progressively. Typically, the CD4 count increases rapidly by approximately 50 to 100 cells/mm3/year (Van Dyk, 2013; Meintjes et al., 2012). In addition, CD4 responses are highly variable and may fail to increase despite virological suppression, and a small proportion of patients who start antiretroviral therapy with a very high viral load may not be fully suppressed despite being adherent to the treatment (Meintjes et al., 2012; Wilson et al., 2010).
Relationship of adherence level to virological suppression
The findings of the study showed that about 50 (82%) of patients achieved at least 90% of treatment collection adherence. However, 19 (38%) of these patients did not show any related viral suppression.
From the results, an examination of the relationship between adherence and virological suppression shows that, of the 61 records of patients reviewed, 37 (60.7%) achieved virological suppression and 24 (39.3%) did not. These findings show that most patients achieved a viral load measurement of less than 50 copies/ml. Most patients (50 or 82%) had an adherence level of at least 90%, while 11 (18%) patients did not achieve an adherence level of 90% or more. Among those who achieved adherence levels of at least 90%, only 31 (62%) patients achieved a viral load measurement of less than 50 copies/ml, and 19 (38%) did not achieve viral load measurements of less than 50 copies/ml within 12 months of commencing antiretroviral therapy. However, six patients who did not have an adherence level of at least 90% also achieved a viral load of less than 50 copies/ml within 12 months of treatment.
These findings are supported by a study done by Zaragoza-Macias et al. (2010) and a study by Henderson, Hindman, Johnson, Valuck, & Kiser (2011, p. 221), which showed that virological suppression was associated with medication pick-up adherence of more than 90%. In addition, Nachega et al. (2007) identified a statistically significant dose-response relationship between viral load suppression and pharmacy claim adherence across all adherence strata. They found that every 10% increase in adherence beyond 50% was associated with a mean absolute increase of 0.10 in the proportion of patients with sustained virologic suppression (p < 0.001). However, Sayles et al. (2012) found that despite high antiretroviral therapy (ART) coverage rates, a substantial portion of people living with HIV taking ART were not achieving HIV viral load suppression, which leads to suboptimal treatment outcomes. In contrast, several studies using pharmacy-based adherence measures with stratified adherence estimates failed to detect a threshold to achieve virological suppression (McMahon et al., 2011).
The relationship between adherence level and virological suppression was investigated using bivariate statistical analysis, and Pearson's correlation coefficient (r) was calculated. A statistically significant association between adherence and viral suppression was not demonstrated (r = 0.094, p > 0.05). Thus, no relationship between adherence and virological suppression could be established in this sample. This finding agrees with a study done by Sayles et al. (2012), which found no association between taking antiretroviral medication and achieving HIV viral load suppression. However, some studies have shown a relationship between adherence level and virological suppression.
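The bivariate analysis described above rests on Pearson's correlation coefficient. The sketch below computes r from first principles on made-up data (binary suppression outcome against adherence percentage); it illustrates the statistic only and does not reproduce the study's data.

```python
# Illustrative Pearson's r between adherence (%) and viral suppression
# (1 = suppressed, 0 = not). Data points are made up for demonstration;
# the study itself reported r = 0.094, p > 0.05 on its 61 records.
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

adherence  = [95, 98, 72, 88, 100, 91, 76, 94]  # hypothetical adherence %
suppressed = [1, 1, 0, 1, 0, 1, 1, 0]           # hypothetical outcomes
r = pearson_r(adherence, suppressed)
print(round(r, 3))  # a weak correlation, echoing the study's null finding
```

With one binary variable, this is the point-biserial correlation, a special case of Pearson's r; values near zero indicate that adherence level tells us little about suppression status in the data at hand.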
Pharmacy refill records adherence
Antiretroviral medications work only if they are taken regularly, every day, for the rest of the patient's life. Adherence refers to the willingness and ability of patients to follow health-related advice, take medication as prescribed, attend scheduled appointments, and complete recommended investigations (Kalichman, 2013; Moosa & Jeenah, 2012). Conversely, non-adherence to antiretroviral therapy, evidenced by missed doses, is associated with incomplete viral suppression and the development of drug-resistant virus that will eventually limit therapeutic options (Wilson et al., 2010).
In this study, pharmacy refill records were used to measure adherence to antiretroviral therapy by HIV-positive patients in Ugu Health District. Using pharmacy refill records to estimate adherence assumes that the amount of medication collected reflects the amount actually ingested by the patients. As discussed above, the relationship between collection of medications and actual ingestion of medications is difficult to establish. Saberi, Caswell, Amodio-Groton, and Alpert (2008) state that among the advantages of utilising pharmacy refill records are that these data can easily be collected, they do not depend on patients' self-reports and accurate recall, they are inexpensive to acquire, they allow for retrospective assessment, and they are readily obtainable from computerised records.
In addition, pharmacy-based adherence measures are ideally suited to monitoring adherence because they are objective and can be easily derived from data routinely collected for other purposes, such as clinical care, medication billing, fulfilment of legal requirements, or drug supply management (McMahon et al., 2011). In contrast, pharmacy refill records may overestimate actual pill-taking if individuals discard or share pills. Pharmacy refill records therefore estimate only the maximum possible adherence, which threatens the internal validity of this measurement methodology (Sattler, Lee, & Perri, 2013; McMahon et al., 2011).
In settings where frequent routine viral load monitoring is not available, pharmacy refill records can play an important role in monitoring individual and population level adherence to ART (McMahon et al., 2011). Ndubuka and Ehlers (2011) concluded that if single available measures such as pharmacy refill records could be correlated with laboratory tests, results for improved CD4 counts (indicating immunological recovery) and decreased viral load (indicating virological recovery) could be used as preliminary measures of adherence.
In this study, 50 (82%) patients achieved an adherence level of 90% or above, with a mean adherence level of 94.8%. In addition, the mean baseline CD4 count at initiation of ART increased from 250.67 cells/mm3 to 347.56 cells/mm3 after 12 months of antiretroviral therapy, while only about 37 (60%) achieved an undetectable viral load (less than 50 copies/ml) within 12 months of treatment. This suggests that not all claimed medications were ingested. Therefore, pharmacy refill records should be implemented together with laboratory tests for monitoring of patients' ART adherence, because it is difficult to predict who will not take claimed medications as directed. In addition, a directly observed therapy (DOT) strategy should also be implemented for patients who have problems taking medications correctly.
Conclusion
In this study, the relationship between adherence to antiretroviral therapy, measured using pharmacy refill records, and virological suppression could not be determined. Multiple and varied measures are more likely to identify patients with adherence problems than a single method. Therefore, pharmacy refill records cannot be recommended as a stand-alone alternative method of monitoring response to antiretroviral therapy. However, pharmacy refill records, as a simple, readily available measure of adherence, must be combined with laboratory test results, including CD4 cell count and/or viral load measurement, to monitor response to antiretroviral therapy and for early identification of patients at high risk of virological failure.
Good adherence to ART and correspondingly high rates of sustained virological suppression can be achieved in a resource-limited area, with improvement in the ability of health care providers to assess adherence in the routine care of HIV-infected persons.
Limitations
There are several limitations to this study. Firstly, it is retrospective cohort in design, and therefore reliance on record keeping was a major concern: some records were incomplete, and others did not have patients' CD4 count results. In some records it was also found that viral load measurements after six months of antiretroviral therapy initiation had not been done. This led to the use of a sample of only 61 records out of a possible cohort of 198. In addition, all participants were from a single population living in a rural area. As a result, patients' outcomes in this rural area may not be representative of those in urban areas. Moreover, the results of this study do not necessarily reflect practices in other settings, and this may limit the generalisation of these findings.
Secondly, this study used pharmacy refill records, which do not reflect the dynamic nature of adherence. In addition, pharmacy refill records do not record the actual ingestion of medication, or the pattern of non-adherence (for example, frequency, duration). Therefore, there is an overestimation of the actual adherence because patients may not take all collected medication.
Thirdly, this study used hospital records as a source of information. Obtaining informed consent from participants for whom the records were reviewed was impracticable as most of the participants stayed far from the institution. Some of the participants did not have a proper physical address.
Fourthly, the research results are only applicable to one public hospital where the data had been collected. Consequently, these results might not be generalised to other ART services in the province.
The relationship between talent management practices and retention of generation ‘Y’ employees: mediating role of competency development
Abstract In a competitive marketplace, retention of talented and younger employees is a challenge for organizations. Thus, it becomes important for organizations to execute employee development strategies to retain Generation Y employees. The goal of this research is to analyze the effect of talent management (TM) practices, i.e. mentoring, strategic leadership, social media, and knowledge sharing, on the intention to stay of Generation Y employees, and to examine this relationship further by investigating the mediating role of competency development. A total of 372 Generation Y employees participated in the study. The data were analyzed through a PLS-SEM model using smartPLS-3 software. Findings reveal that the TM practices of mentoring, strategic leadership, and social media positively affect the intention to stay of Generation Y employees, while knowledge sharing has no significant effect on the intention to stay. Competency development mediates the relationship between strategic leadership, social media, knowledge sharing and intention to stay. However, competency development does not significantly mediate the relationship between mentoring and intention to stay.
Introduction
Talent Management (TM) is a fast-growing concern for organizations in the competitive business environment. Today's organizations compete with one another and want to attract and retain talented workers to enhance their operational and workplace productivity. TM is characterized as the attraction, identification, development, and maintenance of the talent and ability of an organization to address business issues (Thunnissen & Buttiens, 2017). According to Meyers et al. (2013), talent is defined as an individual's natural capacity to do good things without acquired learning. These talented people have the high potential needed for long-term organizational success. TM has become a major organizational tool that contributes to competitive edge and sustainable organizational performance. The literature indicates that TM and business strategy collectively help to achieve business success (Bayyoud & Sayyad, 2015). Scholars also recommend that organizations align TM practices with business strategies to enhance organizational performance and motivate individuals (Bethke-Langenegger et al., 2011). According to Govaerts et al. (2011), retention refers to the organizational ability to retain its desirable employees in the workforce. If organizations want to retain their employees, they need to invest in TM to enhance their retention rate.
Generation Y, also referred to as the echo boomers, the digital generation, and millennials (born during the 1980s and early 2000s), are joining workplaces as baby boomers retire (Cennamo & Gardner, 2008). According to Dulin (2008), there are 81 million members of Generation Y, and over 29 million members have already entered the workplace. As per the Pakistan Bureau of Statistics (2018), the employment-to-population ratio of young employees is 36.9%. Generation Y employees are energetic, technologically savvy, and groomed from a multicultural perspective. As children of boomers, their working styles, behaviors, and expectations differ from those of other generations. Generation Y are passionate and place more emphasis on learning and career success. If members of Generation Y find no opportunities, they prefer to switch jobs and seek new career opportunities (Weyland, 2011). Generation Y employees place more emphasis on flexible work environments and prefer challenging work tasks to enhance their own abilities in the workplace. They also prefer clear directions or immediate feedback on their performance (Winter & Jackson, 2016). Retention of young and talented employees is important because they are characterized by distinctive attributes, such as skills, knowledge, and the capacity to learn within the organization (Festing & Schäfer, 2014).
In a strategic context, TM practices focus on hiring the right persons for the right job positions at the right time, when they are needed (Bethke-Langenegger et al., 2011). Employers need to design and implement TM processes and practices to improve and retain young talented employees. TM practices are implemented to align the talent pool with business objectives. Scholars suggest TM strategies such as training, career development, mentoring, international assignments, team projects, and networking to enhance the opportunities, motivation, knowledge, and retention of talented employees (Glaister et al., 2018). Based on social exchange theory (SET), Naim and Lenka (2018) proposed four TM practices, namely mentoring, social media, strategic leadership, and knowledge sharing, for the retention of Generation Y employees. According to Emerson (1976), SET is defined in terms of social relationships between two parties that are established, changed or maintained on the basis of the exchange of mutual benefits. Hence, social exchange theory provides the theoretical support for explaining how these four TM practices help to improve employees' competencies and enhance the chances of retaining Generation Y employees.
Mentoring is an effective career development tool for employees. It is defined as a relationship between mentor and mentee in which the mentor helps the mentee and facilitates his/her personal and career development (Mullen, 1994). Strategic leadership refers to a leader's ability to create a vision and influence others by making decisions that sustain the organization (Rowe, 2001). Social media refers to online communication channels, such as Facebook, WeChat, and WhatsApp, that facilitate employees to connect, communicate and share knowledge (Bolton et al., 2013). Scholars have suggested that the use of social media in organizations contributes to knowledge sharing, social interaction, and the promotion of organizational performance (Tajudeen et al., 2018). According to Hsiu-Fen Lin (2007), knowledge sharing denotes an exchange of knowledge, skills, information, and experience through social interaction among employees within and outside the organization. Mentoring, social media, strategic leadership, and knowledge sharing, as TM practices, help in the competency development of Generation Y workers (Naim & Lenka, 2018). Competency development is an organizational activity to maintain and enhance employees' careers, knowledge, and skills, and to align employees with the strategic goals of the organization (De Vos et al., 2011). According to Leonard (2008), competency development as a learning opportunity enables Generation Y employees to identify areas for improvement and supports their retention at the workplace.
The present study has two objectives. First, it empirically investigates the association between TM practices (mentoring, social media, strategic leadership, and knowledge sharing) and the intention to stay of Generation Y employees. Second, it explores the mediating role of competency development in the relationship between TM practices and intention to stay. To pursue these objectives, the study poses the following research question: What is the impact of TM practices on the intention to stay of Generation Y employees, and how does competency development mediate this relationship? The empirical investigation is conducted in high-profile software houses of Punjab, Pakistan, a sector with a strong need for talented employees.
Talent management
The term talent management (TM) emerged as a concept in the 1990s following McKinsey's 'War for Talent' study (Michaels et al., 2001) and has since attracted considerable scholarly attention. Scholars describe TM as a systematic procedure whereby organizations identify vacant positions and hire valuable employees to sustain a competitive edge (Hughes & Rog, 2008). After hiring these employees, HR managers design and execute learning practices to improve their skills and competencies and strengthen their commitment to the organization (Collings & Mellahi, 2009). HR managers identify, develop, and engage employees to evaluate their productivity at the workplace (Meyers et al., 2013). From a global perspective, organizations develop strategies for the proactive selection and deployment of high-quality talent pools on a worldwide scale (Farndale et al., 2010). Lewis and Heckman (2006) described TM as commonly associated with human resource management (HRM), encompassing HRM practices focused on the attraction, identification, advancement, and turnover intention of people considered to be talented. Talented individuals are characterized by attributes such as skills, competencies, experience, knowledge, and the ability to learn and grow within the organization (Thunnissen & Buttiens, 2017). These talented employees are considered an important organizational resource because they positively affect the organization's performance (Michaels et al., 2001). According to Bethke-Langenegger et al. (2011), TM practices ensure that organizations focus on the right person, in the right place, at the right time to meet business demand.
Talent development and retention have become increasingly important. TM practices refer to talent attraction, talent development, talent engagement, and talent retention, and they are implemented through HR managers to control talent shortages and meet the future needs of the organization (D'Amato & Herzfeldt, 2008). Talent development is considered an important source of competitive advantage through the career development, performance enhancement, and succession planning of talented employees. Organizations can implement TM practices such as mentoring, training, and coaching to improve employees' career development, engagement, and retention (Armstrong, 2006). Social media tools are not only useful for the development of individuals but are also used for the engagement and retention of talent (Zhang et al., 2018). These TM practices are important components of talent development and retention; they also ensure that employees are motivated and develop the competencies they need to remain committed to the organization (Naim & Lenka, 2017b). The talent retention war starts at the hiring stage, when organizations recruit employees and fit them to organizational requirements (Ross, 2005). Implementing effective TM practices can improve an organization's TM process, enhance employee engagement, and improve the retention of skilled and talented employees (Hughes & Rog, 2008).
Generation Y and the workplace
Today's organizations have become more dynamic than traditional ones owing to the entrance of Generation Y employees into the workplace, so it is necessary for organizations to understand the needs, expectations, and work preferences of multiple generations. De Hauw and De Vos (2010) define a generational cohort as a group of individuals born in the same period who share similar work values, expectations, and attitudes and differ from members of other age groups. The literature highlights three generations at the workplace: Boomers (1946-1960), Generation X (1961-1980), and Millennials (1981-2000) (Cennamo & Gardner, 2008). Generation Y is the fastest growing workforce, also known as Gen Y, Millennials, the digital generation, and Echo Boomers (Dulin, 2008). Generation Y employees are passionate, self-reliant, and independent, and they prefer to work in teams (Shih & Allen, 2007). Generation Y has grown up in a technology-intensive environment and has comparatively more adaptable technological skills than other generations (Cennamo & Gardner, 2008).
Generation Y employees have their own expectations regarding the job, such as job characteristics, choice of employer, and achievements for their future careers. They have high expectations for career development and are ambitious in seeking career opportunities within organizations; they want to enhance their skills and knowledge to remain competitive in the talent market. In addition, Generation Y employees prefer challenging work because they want to build their own skills and abilities (Naim & Lenka, 2018). When career opportunities are lacking, Generation Y employees prefer to leave the organization and seek new ones (Cennamo & Gardner, 2008). Generation Y also holds strong expectations for work-life balance, mentoring, and career development. By offering these career development opportunities and providing immediate feedback on performance, organizations can develop, maintain, and retain Generation Y employees (De Hauw & De Vos, 2010).
Mentoring and intention to stay
Mentoring is defined as an association between a more skilled or experienced person (the mentor) and a less skilled or experienced person (the mentee). It is a formative relationship in which the mentor facilitates the mentee's personal and professional growth (Mullen, 1994). According to Roberts (2000), mentoring is a structured program in which a more capable or knowledgeable individual provides guidance, learning, and encouragement to a less knowledgeable or less experienced individual. A mentor is a skilled or more experienced member of the organization who facilitates career and advancement opportunities by serving as a role model. The mentoring relationship provides career and psychological development to young employees (Mullen, 1994). Mentors support the career development of younger employees by providing sponsorship, challenging tasks, coaching, exposure and visibility, and protection (Noe, 1988). Psychological functions such as role modeling, counseling, confirmation, and friendship enhance young employees' sense of competence, efficacy, and confidence (Kram, 1983). Mentoring is a developmental approach that fulfills Generation Y's expectations to gain knowledge and skills and to develop personally and professionally; as a developmental opportunity, it also enhances Generation Y engagement and improves retention (Meister & Willyerd, 2007). Allen et al. (2004) found that mentoring programs are positively associated with career outcomes such as career satisfaction, commitment, expectations for advancement, and intention to stay. Hence, this study proposes: H1. The mentoring program positively affects the intention to stay of Generation Y employees.
Strategic leadership and intention to stay
Strategic leadership refers to an individual's ability to express a strategic vision, think strategically, and motivate others in a way that creates a viable future for the organization (Ireland & Hitt, 1999). The concept is based on upper echelon theory, which argues that executives' experiences, traits, preferences, and cognitive styles provide direction to the organization and influence strategic decisions (Hambrick & Mason, 1984). In the 21st century, companies need to adopt effective activities to sustain strategic competitiveness. The effective activities of strategic leadership are: (1) communicating a vision, (2) developing and sustaining core competencies, (3) developing human resources, (4) emphasizing ethical practices, (5) cultivating an effective corporate culture, and (6) establishing strategic control (Ireland & Hitt, 1999).
Strategic leadership also concerns the relationship between leaders and subordinates within an organization (Vera & Crossan, 2004). As Generation Y workers enter the workplace, they bring high expectations, high potential, and new ideas, and they expect clear direction from their leaders. A strategic leader needs to adopt the right leadership strategies to develop and retain employees; otherwise, Millennials will not prefer to stay long at the workplace (Graybill, 2014). Leadership development is considered an important source of competitive advantage: leadership development strategies enhance Generation Y employees' abilities and career development and align them with the organization's strategies. These strategies include coaching, international assignments, and immediate feedback (Azbari et al., 2015). Research has found that leaders' career support fosters employee learning and enhances abilities, which in turn reduces turnover intention (de Oliveira et al., 2019). As per social exchange theory, employees prefer to stay longer when organizations value their needs and expectations. Investments in TM and leadership development strategies enhance the organization's competitiveness and the retention of Generation Y workers (Carter et al., 2019; Chami-Malaeb & Garavan, 2013). Thus, it is hypothesized: H2. Strategic leadership positively affects the intention to stay of Generation Y employees.
Social media and intention to stay
In the last three decades, the usage of social media has attracted the attention of academics and practitioners (Zoonen et al., 2014). Social media are increasingly used as a source of information and have become an integral part of today's business. Social media refers to online communication technologies through which individuals and organizations develop and share information (Bolton et al., 2013). Scholars define social media as the websites and applications designed for users to create content and share thoughts, ideas, and information with other users (Tajudeen et al., 2018). According to Zhang et al. (2018), employees use social media for two purposes: work-related and social-related. Work-related usage includes meeting with employees on projects, sharing information about organizational policies and procedures, and promoting the brand; social-related usage includes arranging social events with colleagues after working hours and finding friends within the firm. Social media developed alongside the birth of Generation Y, so its members are known as digital natives or the net generation. Generation Y workers are a technologically savvy generation; they grew up in an information- and technology-rich environment, and it is an important part of their lives (Maton et al., 2008). Generation Y uses various social media tools, including communities and social networking websites such as Facebook, LinkedIn, Twitter, and blogs.
Generation Y prefers to use social media at the workplace and knows how to use technology and social media to gain work-related material and knowledge. Organizations use work-related social media to meet the expectations of Millennials, which enhances their job engagement and organizational engagement (Gonzalez et al., 2013; Zhang et al., 2018). Social media as a TM practice fosters the internal communication and knowledge of Generation Y workers, which may positively affect their intention to stay (Naim & Lenka, 2017b). Based on these arguments, this study proposes: H3. The usage of social media positively affects the intention to stay of Generation Y employees.
Knowledge sharing and intention to stay
Knowledge is considered an important strategic resource for the organization. The term knowledge refers to the facts, information, and skills gained through experience or learning (Ipe, 2003). Knowledge sharing is defined as the process in which individuals share their information, experience, skills, and thinking with others during their interactions (Yang, 2004). Knowledge sharing connects individuals and organizations and enables a company to keep its workers empowered and engaged (J. Yang, 2007). Knowledge sharing takes two forms: knowledge donating, the individual's willingness to transfer information and knowledge to others, and knowledge collecting, the ability to absorb knowledge from others (Van den Hooff & de Leeuw van Weenen, 2004). Knowledge itself is either explicit or tacit. Explicit knowledge is objective, technical, and rational knowledge that can easily be codified, verbalized, and transmitted to others (Hislop, 2003). Tacit knowledge, by contrast, is subjective information and experiential learning; it comprises employees' skills, experience, and expertise that cannot be codified and articulated to others (Gupta et al., 2000).
Knowledge sharing among different generations has become an important concern for organizations. Multiple generations of employees work side by side, and the Generation Y cohort is new to the workforce with less experience (Haynes, 2011). Through social interaction, Generation Y employees can gain valuable knowledge and experience from peers, which enhances their knowledge and learning abilities (Jacobs & Roodt, 2011). Employers also need to help Millennials acquire valuable knowledge from Baby Boomers. To remain competitive, employers need to engage older and Generation Y employees in a two-way learning process; knowledge transfer from Baby Boomers to Millennials can thus play a crucial role in a company's success (Sanaei et al., 2013). Generation Y employees seek learning and demand learning opportunities; otherwise, they prefer to leave. Knowledge sharing activities support Generation Y employees' learning and enhance their intention to stay with the organization (Jacobs & Roodt, 2011). Therefore, it is proposed: H4. Knowledge sharing positively affects the intention to stay of Generation Y employees.
Competency development as a mediator
According to SET (Blau, 1964), workers reciprocate when their employers provide development opportunities that meet their needs and expectations. SET is used to understand the workplace behavior of employees as they interact and share content with one another (Biron & Boon, 2013). Competency development comprises the organizational activities that enhance and maintain individuals' learning and competencies (Forrier et al., 2009). According to De Vos et al. (2011), companies identify individuals' areas for improvement and execute development strategies that enhance their knowledge and competencies and engage them with the organization. The extant literature suggests that the mentoring relationship affects employees' learning and development, and mentoring as a TM practice has the potential to foster competency development: a mentor, as a role model, contributes to the employee's personal learning and builds competencies by providing feedback on ideas and performance (Lankau & Scandura, 2002). Mentoring includes formal and informal learning programs; formal programs are usually structured, such as training or support, whereas informal programs include broader career opportunities and on-the-job learning (Mullen, 1994). By participating in mentoring programs, Generation Y employees can learn and effectively meet their own goals as well as organizational goals. Mentoring also motivates mentors to share their experience, skills, and expertise with younger employees, which helps in the retention of Generation Y workers (Ambrosius, 2018).
A strategic leader facilitates employees in developing their competencies and encourages them to generate their own ideas to achieve the intended goals (Ireland & Hitt, 1999). Scholars argue that effective top management is positively associated with organizational learning and innovative behavior (Hambrick & Mason, 1984). A strategic leader acts as a learning agent, enhancing employees' learning abilities and embracing transactional and transformational leadership behaviors to reinforce feedback in the learning process. A continuous learning culture and the development of knowledge sharing within an organization therefore enhance the competency development of Generation Y employees (Vera & Crossan, 2004). According to Chami-Malaeb and Garavan (2013), a strategic leader communicates the vision and creates alignment between Generation Y objectives and organizational strategy, which helps employers retain Generation Y employees. On this basis, the competency development opportunities offered by the organization have an effect on the intention to stay of Millennials.
Millennials are recognized as a techno-savvy generation; they grew up in a technological environment, which calls for innovative ways of learning (Chelliah & Clarke, 2011). Organizations have started to use social media systems to help the Generation Y cohort learn about job openings, coworkers, and employers. Social media enables Generation Y employees to build relationships with colleagues, enhances communication, and allows employees to share work-related knowledge. Moreover, learning through social media enhances employees' knowledge, communication skills, and productivity (Bennett et al., 2010). Social media tools such as wikis and discussion boards have been used to access relevant information and exchange ideas about the job, fostering learning capabilities at the workplace (Vance et al., 2009). Generation Y uses social media to acquire valuable knowledge and expertise for work-related tasks and to pursue information about the organization. Social media provided by organizations meets the expectations of Millennials and enhances their commitment (Gonzalez et al., 2013). Therefore, the usage of social media expands learning opportunities, which fosters competency development and increases the felt obligation to stay within an organization (Zhang et al., 2018).
Knowledge sharing among an organization's older and younger employees is important for competitive advantage. Knowledge sharing refers to employees' willingness to exchange their thoughts, skills, and experience reciprocally (Hislop, 2003). The literature acknowledges that knowledge sharing has a significant effect on organizational performance by developing the skills and competencies of talented employees (J. Yang, 2007). A study of twenty Chinese HR experts' learning events and career growth revealed that knowledge sharing has a positive impact on employees' learning and development (Wang-Cowham, 2011). Jacobs and Roodt (2011) suggested that knowledge sharing fosters organizational effectiveness and individual learning by promoting social interaction among older and Generation Y employees. Employees within the organization develop social relationships and exchange knowledge, experience, and ideas, which leads to competency development (Hsiu-Fen Lin, 2007). Knowledge sharing therefore fosters the competency development that leads to the intention to stay of the Generation Y cohort (Naim & Lenka, 2016). Accordingly, this study proposes:
H5 (a). Competency development mediates the relationship between mentoring and intention to stay.
H5 (b). Competency development mediates the relationship between strategic leadership and intention to stay.
H5 (c). Competency development mediates the relationship between social media usage and intention to stay.
H5 (d). Competency development mediates the relationship between knowledge sharing and intention to stay.
Sampling procedure and data collection
The respondents of this study are Generation Y employees of software houses in Punjab, Pakistan. The study covered four large cities of Punjab (Faisalabad, Lahore, Gujranwala, and Rawalpindi), where most of the software firms are situated. There were 1367 registered software firms in these cities (PSEB, 2019), from which we selected firms with a minimum of twenty employees; only 55 software houses met this criterion. After obtaining approval from the respective firms' managers, employees were approached through personal visits and emails at their offices. A structured questionnaire (in English) was distributed to Generation Y workers, who were invited to participate with an assurance of data confidentiality.
To reduce respondent bias, data were collected in three waves. At Time 1, the first survey assessed the TM practices of mentoring, strategic leadership, social media, and knowledge sharing. After a 4-week gap, at Time 2, the same respondents completed the questionnaire on competency development. After a further 4-week gap, at Time 3, the intention to stay questionnaire was distributed to the same respondents. The total data collection period was 4 months. A total of 520 questionnaires were distributed at Time 1, and at Time 3 we received 372 valid and complete questionnaires, for a response rate of 72%.
Measures
This study measured all variables through a questionnaire adapted from the literature and slightly modified according to the research context and objectives. The questionnaire consists of two sections: the first covers the demographics of the participants, and the second comprises 36 items measuring six variables. All constructs were measured on a five-point Likert scale ranging from '1 = Strongly Agree' to '5 = Strongly Disagree'. Appendix 1 presents the complete instrument.
Mentoring
Mentoring is defined as a relationship between a skilled mentor and a less experienced mentee, in which the mentor assigns challenging tasks to the mentee to encourage personal and career development. The mentoring scale consists of 8 items adapted from Dreher and Ash (1990). A sample item is 'My mentor shares the history of his/her career with me'. Cronbach's alpha is 0.893.
Strategic leadership
Strategic leadership refers to a leader's ability to create a vision and to align and motivate employees to accomplish the strategic goals of the organization. The strategic leadership scale consists of 4 items adapted from Duursema (2013). A sample item is 'My leader plans in detail how to accomplish an important part'. Cronbach's alpha is 0.866.
Social media
Social media refers to online communication channels that allow employees to use the organizational social media system to communicate and share knowledge within the organization. The social media scale consists of 5 items adapted from Zhang et al. (2018) and further validated by Moqbel and Nah (2017). A sample item is 'I am allowed to use the organizational social media system to post-take updates on work projects'. Cronbach's alpha is 0.844.
Knowledge sharing
Knowledge sharing is defined as a social exchange relationship between employees. The knowledge sharing scale consists of 6 items adapted from Van den Hooff and de Leeuw van Weenen (2004). A sample item is 'Colleagues within my department tell me what their skills are when I ask them about it'. Cronbach's alpha is 0.844.
Competency development
Competency development refers to an organizational activity that improves employees' skills and knowledge and aligns employees with the strategic goals of the organization. The competency development scale consists of 7 items adopted from De Vos et al. (2011). A sample item is 'In my organization, training sessions are organized to get knowledge'. Cronbach's alpha is 0.865.
Intention to stay
Intention to stay refers to an individual's commitment and willingness to stay with the organization. The 7-item scale for this construct is adapted from Mayfield and Mayfield (2007). A sample item is 'I prefer to stay at this organization'. Cronbach's alpha is 0.856.
Statistical model
This study used partial least squares (PLS) based structural equation modeling (SEM) to test the proposed model. PLS-SEM is a multivariate analysis technique used to test structural relationships between constructs (Hair et al., 2013). According to F. Hair Jr et al. (2014), PLS-SEM is a highly useful method for exploratory and mixed studies. In addition, PLS-SEM is used to maximize the explained variance of endogenous variables (Hair Jr, Hult, Ringle, & Sarstedt, 2016), and the PLS technique can handle complex models even with small sample sizes (Khan et al., 2019). This study employed SmartPLS 3 (Hair Jr et al., 2016).
Pilot study
Before data collection, a pilot test was used to check the reliability and validity of the instruments. For the pilot study, feedback was collected from 30 respondents and analyzed in SmartPLS 3. Items with outer loadings below 0.70 were removed from their constructs (Hair Jr et al., 2016): one item was excluded from the strategic leadership scale and one from the intention to stay scale, because the outer loading values of these items were significantly below 0.70.
Analysis and results
This study completed the data analysis in two stages: measurement model assessment and structural model assessment. All constructs were specified as reflective rather than formative measurement models because each construct's items are compatible and associated (F. Hair Jr et al., 2014). Table 1 presents the demographic profile of the respondents: 79 percent were male and 21 percent female. Of the respondents, 5.10% were born between 1980 and 1985, 7.80% between 1986 and 1990, 55.65% between 1991 and 1995, and 31.45% between 1996 and 2000. Unmarried respondents made up 78 percent and married respondents 22 percent. Of the 372 respondents, 68.55% held a Bachelor's degree, 27.15% a Master's degree, and 1.08% a Ph.D., while 3.26% held IT-related diplomas. Most respondents had 1 to 5 years of experience. Table 2 shows that the 36 items of the 6 variables have reliability values (outer loadings) near or above the recommended value of 0.70 (Hair Jr et al., 2016).
Reliability, convergent validity
The reliability of the model is determined through composite reliability (CR) and Cronbach's alpha. According to F. Hair Jr et al. (2014), Cronbach's alpha measures the internal consistency reliability of the constructs, with a minimum value of 0.7 considered acceptable. CR also measures the internal reliability of the model's constructs; CR values between 0.6 and 0.7 are observed as acceptable (Hai-Fen Lin et al., 2016). This study analyzed the average variance extracted (AVE) of all variables to evaluate convergent validity; an AVE above 0.50 for each construct is acceptable (Hair Jr et al., 2016). As Table 2 shows, all AVE values are above 0.50, meeting this requirement (Liu et al., 2018). All variables and measurements are reported in Table 2.
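The reliability and convergent validity criteria above follow standard formulas that can be computed directly from item scores and standardized outer loadings. A minimal sketch in Python (the loading values below are hypothetical, for illustration only, and are not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency from an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings) -> float:
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())

def ave(loadings) -> float:
    """Average variance extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())

# Hypothetical outer loadings for a 4-item construct
loadings = [0.78, 0.81, 0.74, 0.80]
print(round(composite_reliability(loadings), 3))  # clears the 0.7 benchmark
print(round(ave(loadings), 3))                    # clears the 0.50 benchmark
```

Items whose loadings push AVE below 0.50 or CR below its benchmark are candidates for deletion, which is the logic behind the item removals reported for the pilot study.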
Discriminant validity
Hai-Fen Lin et al. (2016) explained that discriminant validity represents the degree to which constructs empirically differ from one another. Discriminant validity is measured using two approaches. First, Fornell and Larcker (1981) recommended that the square root of each latent variable's AVE should be higher than its correlations with the other latent variables. Second, Henseler et al. (2015) recommend that Heterotrait-Monotrait ratio (HTMT) values should be less than 0.90. Table 3 shows that discriminant validity between the constructs has been established.
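The HTMT criterion can be computed from the item correlation matrix: it is the mean heterotrait (cross-construct) item correlation divided by the geometric mean of the two mean monotrait (within-construct) correlations. An illustrative sketch, not tied to the study's data:

```python
import numpy as np

def htmt(X_i: np.ndarray, X_j: np.ndarray) -> float:
    """Heterotrait-Monotrait ratio for two item blocks (n, k_i) and (n, k_j)."""
    ki = X_i.shape[1]
    kj = X_j.shape[1]
    R = np.corrcoef(np.hstack([X_i, X_j]), rowvar=False)
    hetero = R[:ki, ki:].mean()                          # cross-construct corrs
    mono_i = R[:ki, :ki][np.triu_indices(ki, 1)].mean()  # within construct i
    mono_j = R[ki:, ki:][np.triu_indices(kj, 1)].mean()  # within construct j
    return hetero / np.sqrt(mono_i * mono_j)
```

Values below 0.90 (Henseler et al., 2015) indicate that the two constructs are empirically distinct; the Fornell-Larcker check instead compares the square root of each construct's AVE against its correlations with the other constructs.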
To verify model fit, the SRMR value is calculated; scholars recommend that SRMR should be less than 0.082 (Henseler et al., 2016). The SRMR of the present study is 0.073, which confirms the fit of the model (Henseler et al., 2016). Variance inflation factor (VIF) values are used to examine collinearity issues. In this study, the VIF values of the endogenous variables range from 1.257 to 2.75, which indicates that the data have no collinearity issues, as all values are below 5 (Hair et al., 2013).
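The VIF for predictor j is 1/(1 - R²_j), where R²_j comes from regressing that predictor on all the others. A small sketch of this diagnostic with synthetic data (not the study's):

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor per column of a predictor matrix X (n, p)."""
    X = np.asarray(X, dtype=float)
    factors = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(X.shape[0]), others])  # add intercept
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        factors.append(1.0 / (1.0 - r2))
    return np.array(factors)

# Orthogonal predictors yield VIF near 1; near-duplicates inflate it
x1 = np.tile([1.0, -1.0], 10)
x2 = np.tile([1.0, 1.0, -1.0, -1.0], 5)
print(vif(np.column_stack([x1, x2])))  # both close to 1
```

VIF values under the common cutoff of 5, as reported for this study (1.257 to 2.75), indicate that the predictor constructs are not collinear.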
Coefficient of determination and cross-validated redundancy
R² values are used to assess the structural model and its predictive accuracy; the R² of each endogenous variable reflects the degree of variance explained (Hair Jr et al., 2016). In the present study, the Q² values of the dependent variables (CD = 0.307 and IS = 0.324) support the model's predictive relevance.
Hypotheses verification (direct effect)
To assess the study model, the bootstrapping technique is applied, using 5000 samples randomly drawn with replacement at the 95% confidence level (Liu et al., 2018). Table 4 depicts the results and clarifies the direct associations between the independent and dependent constructs. The three TM practices of mentoring, strategic leadership, and social media have a significant effect on the intention to stay; thus, H1, H2, and H3 are accepted. Knowledge sharing has no significant effect on the intention to stay; therefore, H4 is rejected.
Hypotheses verification (mediation)
In this study, the direct association between mentoring and intention to stay is positive, but the indirect relationship is not significant, indicating that competency development does not mediate between mentoring and intention to stay; thus, H5 (a) is not accepted. Table 5 shows that the relationship between strategic leadership and intention to stay is significantly mediated by competency development, so H5 (b) is accepted. The relationship between social media and intention to stay is also significantly mediated by competency development, so H5 (c) is accepted. Knowledge sharing had a partial mediation effect on intention to stay through competency development; therefore, H5 (d) is also accepted. Figure 2 presents the key results of the study model.
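The mediation tests above rest on bootstrapping the indirect effect (the product of the a path, predictor to mediator, and the b path, mediator to outcome) and checking whether the percentile confidence interval excludes zero. A simplified regression-based sketch of that logic (the study itself uses PLS path coefficients; the data below are simulated, and the variable roles are only illustrative):

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=5000, seed=1):
    """Percentile bootstrap CI for the indirect effect a*b in x -> m -> y."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                    # resample with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                   # a path: x -> m
        Z = np.column_stack([np.ones(n), ms, xs])
        b = np.linalg.lstsq(Z, ys, rcond=None)[0][1]   # b path: m -> y, controlling x
        estimates.append(a * b)
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return lo, hi  # mediation is supported if the interval excludes zero

rng = np.random.default_rng(7)
x = rng.standard_normal(300)                  # e.g. a TM practice score
m = 0.5 * x + 0.5 * rng.standard_normal(300)  # competency development
y = 0.5 * m + 0.5 * rng.standard_normal(300)  # intention to stay
lo, hi = bootstrap_indirect(x, m, y, n_boot=1000)
```

With 5000 resamples, as used in the study, a 95% percentile interval that excludes zero corresponds to a significant indirect path.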
Discussion
The objective of the present study was to empirically evaluate the impact of TM practices (mentoring, social media, strategic leadership, and knowledge sharing) on the retention of Generation Y workers, and to evaluate the mediating role of competency development between TM practices and intention to stay. The empirical investigation was performed in the software houses of Punjab, Pakistan. The results confirm a significant association between the TM practices of mentoring, social media, and strategic leadership and the intention to stay of the Generation Y cohort, but not for knowledge sharing. These results are in line with the literature (Allen et al., 2004; Chami-Malaeb & Garavan, 2013; Naim & Lenka, 2017a; Zhang et al., 2018). The literature offers several arguments for why knowledge sharing shows no significant association with Generation Y employees' intention to stay. According to Glass (2007), Generation Y is also known as the 'digital native' generation: its members have grown up with new communication technologies and approaches. As a source of knowledge acquisition, Generation Y employees prefer digital and electronic channels (instant messaging, email, and text messaging), whereas senior generations usually prefer face-to-face communication. These differing communication modalities create difficulties among generations, so Generation Y employees often consider sources of knowledge other than that of their seniors. It is therefore necessary for organizations to devise innovative ways of learning to inspire Generation Y employees.
Secondly, the results of the present study confirm that competency development mediates the relationship between the TM practices of social media, strategic leadership, and knowledge sharing and the intention to stay of Generation Y employees. However, competency development does not significantly mediate the association between participation in a mentoring program and the intention to stay of Generation Y employees. Scholars recommend that management adopt reverse mentoring for Generation Y employees (Marcinkus Murphy, 2012).
Reverse mentoring is an approach in which young employees, as mentors, assist seniors (mentees) with issues such as how to use modern communication technology and social media. Reverse mentoring boosts Generation Y employees' confidence and offers development opportunities, for instance by letting them demonstrate individual or professional competencies in front of senior management, which enhances their desire to stay with the organization in the long run (Marcinkus Murphy, 2012). Hence, the results confirm that if organizations provide competency development opportunities to their Generation Y employees in particular, and to others in general, through TM practices, the chances of retaining Generation Y employees increase.
Theoretical implications
This research offers significant theoretical contributions. First, it contributes to the existing literature by clarifying the relationship between TM practices such as mentoring, strategic leadership, social media, and knowledge sharing and the retention of millennial employees; these TM practices are directly linked to the retention of the Generation Y cohort. Second, the study shows that mentoring, strategic leadership, use of social media, and knowledge sharing are enablers of Generation Y competency development, and that competency development mediates between the TM practices and intention to stay of Generation Y employees. The paper thereby contributes to social exchange theory by clarifying the relationship between TM practices, competency development, and the intention to stay of Generation Y employees. To our knowledge, this is the first study to empirically investigate the association between these TM practices and retention by focusing on the needs and expectations of Generation Y.
Practical implications
The results of this paper also provide practical insights for human resource development managers and useful information about Generation Y needs and expectations. First, this study helps software house employees and their HR managers understand the psychological profile of their young and talented employees. Second, the findings give HR managers strategies for implementing TM practices to retain their Generation Y workers. In software houses, HR managers should integrate the TM practices of mentoring, social media, strategic leadership, and knowledge sharing. As learning and development opportunities, these TM practices enhance Generation Y competencies, which in turn affect the attraction, development, and retention of Generation Y workers. The continuous development of young employees through the acquisition of learning, aptitude, and skills is important for organizations to compete and survive in the current business landscape. The study thus offers managers a strategy for developing and retaining young and talented Generation Y employees by focusing on these TM practices.
Limitations and future directions
This research has several limitations that should be addressed. First, the present study has a relatively small sample size, and its results are based only on software houses in four cities in Pakistan; the same constructs could be applied in other industries of Pakistan and in other cultures and countries. Second, a quantitative research design was used with Generation Y employees; future studies could employ qualitative techniques such as open-ended questions and focus interviews with Generation Y employees. This study examined the impact of mentoring on intention to stay, so future studies could analyze other forms of mentoring, such as reverse mentoring, from the Generation Y perspective. Third, in the present study mentoring, social media, strategic leadership, and knowledge sharing are the enablers of competency development; future studies could add other enablers such as organizational innovation and organizational culture. Fourth, future research could compare the needs and expectations of Generations X, Y, and Z and investigate whether these TM practices also affect Generations X and Z. Fifth, it would be worthwhile to investigate the impact of other determinants of retention, even in aggregate form, such as salaries, workplace benefits, and overall job satisfaction.
Disclosure statement
No potential conflict of interest was reported by the author(s).
The general double-dust solution
The gravitational field of two identical rotating and counter-moving dust beams is found in full generality. The solution depends on an arbitrary function and a parameter. Some of its properties are studied. Previous particular solutions are derived as subcases.
I. INTRODUCTION
The study of the gravitational field of light beams (null dust) has a long history. In the linear approximation to general relativity this was done 71 years ago [1,2]. Later, Bonnor found exact solutions which belong to the class of pp-waves (algebraically special solutions of Petrov type N and no expansion) [3,4]. He showed that beams shining in the same direction do not interact. The gravitational field of two counter-moving light beams is very complicated and particular solutions were found only recently [5,6]. One of them was generalized to the case of two colliding non-null dusts and named the double-dust solution [7]. It is based on a one-parameter relation between two metric components.
In this paper we find the general double-dust solution and study its properties. It depends on one almost arbitrary function and one free parameter. All previous solutions are derived as special cases.
In section 2 the field equations are written and their general solution is given in three different ways. The condition of elementary flatness is imposed, which reduces the number of free parameters to one. The restrictions on the arbitrary function, the positivity of the energy density and the regular character of the metric are studied. In section 3 some particular examples are given such as the solutions of Lanczos [8], Lewis [9] and the two Kramer solutions (for null and non-null dust). In section 4 the properties of the 4-velocities of the two dust beams are studied. It is proved that the general solution satisfies the dominant energy condition. The appearance of closed timelike curves is discussed in the stationary version of the Kramer double-dust solution. In section 5 the general interior solution is matched smoothly to the vacuum Lewis solution at any distance from the axis. Section 6 contains a short discussion.
II. FIELD EQUATIONS AND THE GENERAL SOLUTION
We shall work in the stationary formalism when the cylindrically symmetric metric is given by [4] and there is only radial x-dependence. The static metric may be obtained by complex The energy-momentum tensor of the two dust beams reads where µ i are the density profiles of the rotating and flowing beams. The only components To write the Einstein equations we use the combinations of Ricci tensor components utilized in Refs. [10][11][12][13]. We shall work, however, with curved mixed components R µ ν . The only non-trivial ones are the diagonal components and R t ϕ , R ϕ t . The Einstein equations for the corresponding T µ ν components give after some rearrangement Eq. (4) and Here ′ is a x-derivative, 2µ = µ 1 + µ 2 and a 0 is an integration constant resulting from Eq. (7). Units are used with G = c = 1. The equations for the rest T µ ν components are satisfied either identically or when (µ 1 − µ 2 ) Z = 0. Therefore, we accept in the following the relation In the case of a single beam, we must put instead Z = 0, which halts its motion along the axis.
There are 6 equations (4-9) for 7 quantities; u, k, W, A, µ, Z, V . Let us take u (x) to be an arbitrary function. The equations decouple and starting with Eq. (5) and finishing with Eq. (4) the unknowns are found in the above-mentioned order. The function k is obtained by integrating Eq. (5). Thus we have and k u may be expressed either as a function of x or u if u ′ (x) or u ′ (u) is used. The other quantities also have double representations Eq. (14) shows that k u > 2. The second representation requires the passage to a new radial variable u or f = e u and correspondingly g xx becomes where u ′ = u ′ (u).
One can pass to u in another way, by defining the arbitrary function g (u) = k ′ . Then Eq. (5) becomes a quadratic equation for u ′ with solutions and k u = g/h. Eqs. (11)-(15) are easily rewritten in terms of g (u). The restriction k u > 2 holds iff where F is an arbitrary positive function, satisfying F λ > a 2 0 /3. A sufficient condition for the radical in Eq. (17) to be real is which encompasses the previous inequality. In must be satisfied at the axis. A should vanish there, which is accounted for by the lower limit of the integral in Eq. (12). W should vanish too. One can see from Eq. (7) that A = o (W ) when x → 0 and can be neglected in Eq. (20), as in many other cases [11][12][13]. Therefore we obtain the static case condition We insert the expression for W from Eq. (11) and take the limit. Eq. (10) gives so that u ′′ (0) < 0 and is finite. Then Eq. (11) shows that W (0) = 0 due to u ′ (0) = 0. Eq. (21) yields Hence, elementary flatness gives a relation between the 3 integration constants without affecting the arbitrary functions. The constant k 0 may be set to zero by a coordinate change. We won't do this, for matching purposes.

Thus the general solution depends on one free parameter and the function u (x), u ′ (u) or g (u). The only restriction for u ′ (u) comes from Both sides vanish on the axis, hence a sufficient condition is An important physical requirement is µ > 0 in some region around the axis, which is enough for an interior solution. According to Eq. (13), µ (0) ≥ 0 always. If k uu < 0, then µ is positive everywhere. Otherwise the energy density may become negative at some distance from the axis.
Eq. (16) indicates that the change of coordinates x → u is always singular at the origin.
This may be avoided by introducing a third radial coordinate r, such that u = u (r 2 ) [7]. This is equivalent to going back to the first version of the solution by choosing u (x). However, some solutions look simpler when u is the radial coordinate.
III. SOME PARTICULAR SOLUTIONS
Let us derive several concrete solutions. For a single beam we must set Z = 0 in order to use metric (1) and the field equations above. Then Eq. (14) gives constant u, which is set to zero. The solution should be found from Eqs. (4)-(9). We obtain This is the cherished Lanczos solution [8] in comoving coordinates. It represents a cylinder of rigidly rotating dust. In the absence of u (x) it depends just on the parameter a 0 .
Another important case is µ = 0, which, at first sight, should lead to vacuum solutions.
Two obvious candidates are k = 2u and k = u. Both of them belong to the one-parameter series k = (a + 1) u. This case was solved [7] in the static formulation of the problem. It is worth doing this in the stationary frame and seeing the differences. We must have a > 1 and k 0 = 0. Eqs. (14), (15) give directly Eqs. (11), (12) yield the expressions Eq. (23) gives W 0 = 4/a 2 0 . It is necessary to express f ′ through f . Eqs. (5-6) yield Combining the last three equations gives the result Positivity requires u < 0, f < 1, k < 0. This equation may be integrated where ξ = f^{−2(2a+1)/(a+1)} and B ξ (p, q) is the incomplete beta function. The formula above cannot be inverted, but this is not necessary, since we have chosen a new radial variable. Inserting Eq. (32) into Eqs. (29), (16) and (13) we find explicit expressions for W, g uu and µ, which simplifies to The gravitational field is determined completely.
In the static formulation 0 ≤ a < 1, f > 1, k > 0. Here we have the opposite. The sign of a−1 is changed correspondingly. The model has two parameters, a (remnant of the arbitrary function) and a 0 . Kramer puts a 0 = 1. Then the central density is bounded, 8πµ (0) ≤ 1/2.
In fact, it can be made as large as we want, as seen from Eq. (34). In principle, one can choose an arbitrary density profile, but g (u) cannot be extracted from it explicitly.
In Ref. [7] it is asserted that f (x) has no analytical expression. This is true, however, Eq. The case a = 0 is a true vacuum case. The static metric becomes simply flat spacetime.
Here we have with c 0 being an integration constant. We should compare this metric to the general Lewis solution [9,14,15]
IV. PROPERTIES OF THE GENERAL SOLUTION
Let us study first the properties of k µ and l µ . The 4-velocities are geodesic (no acceleration) and have zero expansion and shear. The vorticity (twist) vector w µ has been calculated with the help of GRTensor for the general cylindrical metric (1) and for the general solution.
We have w x = 0 and for k µ For l µ , w ϕ and w t change sign. These expressions coincide in the Kramer case k = (a + 1) u with Eq. (39) from Ref. [7] after the passage to the static metric is done and one takes into account the difference between x and ζ = f 2 .
Another useful quantity is Γ = k µ l µ . It reads for the different metrics The proof of the dominant energy condition in Ref. [7] may be lifted to the general solution.
It depends crucially on the fact that Γ 2 > 1, which is obvious from the above formula.
It is well known that closed timelike curves (CTC) exist when g ϕϕ < 0. This condition is rather intractable for the general solution. In the Kramer case we have The sign is determined basically by the competition between the first and second terms in the square brackets. It becomes negative when This always happens for large enough r. In the static metric formulation g ϕϕ is strictly positive and there are no CTC, as seen from Ref. [7], Eq. (36). In this case a matching can be done at some r 0 to the Levi-Civita static metric and a realistic global solution constructed. In the stationary case the matching should be done to the Lewis solution (36,37), whose Weyl class is essentially static, but the Lewis class contains CTC. The continuity of the metric yields four conditions The continuity of the metric derivatives supplies another four. After some rearrangement, the total system of 8 equations becomes Eqs. (51-53) form a system for f , u ′ and c/l Replacing Eq. (56) into Eq. (57) and using Eq. (48) we obtain The combination of Eqs. (55,58) gives Inserting these formulas in the previous equations, everything is determined in terms of the interior solution and x 0 . For example, we have for the parameter n, which measures the line mass density There are several interesting features of the matching. The gravitational potential f does not depend on any parameters at the junction and has the same value as at the axis. Only the ratio of l and c is determined. The quantity n 2 is not necessarily positive. When n 2 < 0 we enter the Lewis class of the Lewis solution, which possesses CTC [14,15].
VI. DISCUSSION
We have obtained in this paper the general global stationary cylindrically symmetric solution for the gravitational field of two identical, rotating and counter-moving dust beams.
Three representations of the interior solution have been given. They depend on the free parameter a 0 and the arbitrary function u (x), u ′ (u) or g (u), which satisfies the condition (25), k u > 2 or (19), respectively. A particular solution with a 0 = 1 and depending on the arbitrary parameter a has been found by Kramer [7]. Many of its nice properties are also shared by the general solution. It satisfies the dominant energy condition. The energy density of the beams is non-negative. The axis is regular and elementary flat. The solution is necessarily non-diagonal. We have studied its stationary alternative. Rotation compensates gravitational attraction and prevents collapse and the appearance of singularities.
It was shown that two non-zero components of the dust four-velocities are enough for a solution with arbitrary density profile. In the case of colliding null-dust three such components are necessary [13].
The interior solution can be matched at any distance x 0 to an exterior vacuum stationary solution, the Lewis solution [9,14,15]. Thus a global solution is formed. An important property is the traditional appearance of CTC in rotating cylindrically symmetric solutions.
This happens both in the interior when x 0 is big enough and in the exterior, when the Lewis class is induced. The Weyl class is locally equivalent to the Levi-Civita solution [14]. It is causal and serves as an exterior for the general static solution.
Poor humoral and T-cell response to two-dose SARS-CoV-2 messenger RNA vaccine BNT162b2 in cardiothoracic transplant recipients
Aims Immunocompromised patients have been excluded from studies of SARS-CoV-2 messenger RNA vaccines. The immune response to vaccines against other infectious agents has been shown to be blunted in such patients. We aimed to analyse the humoral and cellular response to prime-boost vaccination with the BNT162b2 vaccine (Pfizer-BioNTech) in cardiothoracic transplant recipients. Methods and results A total of 50 transplant patients [1–3 years post heart (42), lung (7), or heart–lung (1) transplant, mean age 55 ± 10 years] and a control group of 50 healthy staff members were included. Blood samples were analysed 21 days after the prime and the boosting dose, respectively, to quantify anti-SARS-CoV-2 spike protein (S) immunoglobulin titres (tested by Abbott, Euroimmun and RocheElecsys Immunoassays, each) and the functional inhibitory capacity of neutralizing antibodies (Genscript). To test for a specific T-cell response, heparinized whole blood was stimulated with SARS-CoV-2 specific peptides, covering domains of the viral spike, nucleocapsid and membrane protein, and the interferon-γ release was measured (QuantiFERON Monitor ELISA, Qiagen). The vast majority of transplant patients (90%) showed neither a detectable humoral nor a T-cell response three weeks after the completed two-dose BNT162b2 vaccination; these results are in sharp contrast to the robust immunogenicity seen in the control group: 98% exhibited seroconversion after the prime dose already, with a further significant increase of IgG titres after the booster dose (average > tenfold increase), a more than 90% inhibition capability of neutralizing antibodies as well as evidence of a T-cell responsiveness. Conclusions The findings of poor immune responses to a two-dose BNT162b2 vaccination in cardiothoracic transplant patients have a significant impact for organ transplant recipients specifically and possibly for immunocompromised patients in general. 
This urges a review of future vaccine strategies in these patients.
Introduction
The Covid-19 pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has a widespread impact on health, including substantial mortality among older adults and patients with pre-existing health conditions [1]. Solid organ transplant recipients are considered a group at increased risk: although not associated with a higher infection rate, possibly owing to high adherence to self-care measures, preliminary data suggest an increased risk of severe disease and death in case of infection [2][3][4][5].
Vaccination has emerged as a key tool for controlling the pandemic health crisis by preventing severe disease and mortality and by increasing population immunity.
Four vaccines have been approved by the European Medicines Agency (EMA) on the basis of phase 3 clinical efficacy studies showing good safety and immunogenicity [6][7][8][9]. However, immunocompromised patients have been excluded from these studies.
In spite of lacking data on the novel concept of mRNA vaccines in organ transplant recipients, national and international transplant societies have recommended the earliest possible vaccination for all recipients > 3-6 months post-transplant (unless recently treated with lymphocyte-depleting agents), and national vaccination strategies have suggested prioritized treatment for this potentially vulnerable group [10][11][12].
The immune response to other types of vaccines has been shown to be blunted in immunosuppressed patients [13,14].
To gain more insights in the immunogenicity of mRNA vaccines under immunosuppressive therapy, we analysed the antibody as well as the T-cell response after the first and second dose of the BNT162b2 vaccination in cardiothoracic organ transplant recipients.
Study participants and data collection
Transplant recipients (Tx) who had been offered vaccination with the BNT162b2 vaccine (Pfizer-BioNTech) were recruited through their German transplant centres to participate in this prospective cohort study; those who received an offer for SARS-CoV-2 vaccination (independently of the study, according to the German priority guideline) were included. The study was approved by the local ethics committee of the Heart and Diabetes Centre Nordrhein-Westfalen (HDZ) in Bad Oeynhausen, Germany (Reg.-No 2021-742), and participants provided written informed consent.
Healthy members of the medical staff of the HDZ who were offered the vaccination with BNT162b2 in-hospital served as controls. Samples were collected in accordance with the German Act on Medical Devices for the collection of human residual material. All staff members gave written informed consent. The study was registered in the German Clinical Trials Register (DRKS00024199).
Blood samples were captured: pre-vaccination (Tx group), 21 days after the first vaccine dose and 21 days after the second vaccine dose (Tx and control group), respectively.
Determination of anti-SARS-CoV-2 IgG antibodies (Abbott)
The commercial SARS-CoV-2 IgG II Quant assay (Abbott, Lake Forest, IL, USA) is a chemiluminescent microparticle immunoassay (CMIA) which was used for the quantitative measurement of IgG antibodies against the spike receptor-binding domain (RBD) of SARS-CoV-2 in human serum on the Alinity i system. Data were expressed in WHO standardized units, BAU (binding antibody units) per ml. According to the manufacturer's recommendation, values below 7.1 BAU/ml were regarded as negative, whereas values equal to or above 7.1 BAU/ml were interpreted as positive for IgG antibodies against SARS-CoV-2.
Determination of anti-SARS-CoV-2 IgG antibodies (Euroimmun)
Two commercial ELISAs (Euroimmun, Lübeck, Germany) were used to test for antibodies to the S1 domain of the SARS-CoV-2 spike protein (IgG). For quantitative determination of IgG, data were expressed in relative Units per ml (RU/ml). Values below 10 RU/ml were regarded as negative whereas values above 10 RU/ml were interpreted as positive as stated by the manufacturer.
Determination of anti-SARS-CoV-2 IgG antibodies (Roche Elecsys)
The Elecsys Anti-SARS-CoV-2 S assay (Roche, Penzberg, Germany) is a commercially available immunoassay using a recombinant RBD of the S-Antigen representing protein for the quantitative determination of high-affinity antibodies to SARS-CoV-2 on a Roche cobas e411 platform. For quantitative determination of IgG, data were expressed in Units per ml (U/ml). Values smaller than 0.8 U/ml were interpreted as negative for anti-SARS-CoV-2 antibodies and positive otherwise following the manufacturers' instructions.
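The three quantitative assays described above report in different units with different positivity cut-offs (Abbott: ≥ 7.1 BAU/ml; Euroimmun: > 10 RU/ml; Roche Elecsys: ≥ 0.8 U/ml). A small helper makes these decision rules explicit; this is a sketch of the rules as stated in the text, and the dictionary keys are invented names, not vendor terminology.

```python
# Manufacturer cut-offs as stated in the text (keys are illustrative names)
CUTOFFS = {
    "abbott_bau_per_ml": 7.1,     # SARS-CoV-2 IgG II Quant, BAU/ml, positive if >=
    "euroimmun_ru_per_ml": 10.0,  # Anti-S1 IgG ELISA, RU/ml, positive if strictly >
    "roche_u_per_ml": 0.8,        # Elecsys Anti-SARS-CoV-2 S, U/ml, positive if >=
}

def classify(assay: str, value: float) -> str:
    """Return 'positive' or 'negative' for a quantitative assay result.

    Euroimmun is stated as 'below 10 negative, above 10 positive', so a
    strict comparison is used there; the other two include the cut-off.
    """
    cutoff = CUTOFFS[assay]
    if assay == "euroimmun_ru_per_ml":
        return "positive" if value > cutoff else "negative"
    return "positive" if value >= cutoff else "negative"
```

Keeping the comparison direction per assay matters at the borderline, e.g. a value exactly at 7.1 BAU/ml counts as positive on the Abbott assay.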
Determination of neutralizing antibodies against SARS-CoV-2
The presence of neutralizing antibodies against SARS-CoV-2 was determined using the cPass™ SARS-CoV-2 Neutralization Antibody Detection KIT (GenScript, Piscataway Township, USA) and performed according to the manufacturer's instructions. The inhibition capability was calculated as follows: According to the manufacturer, values greater than or equal to 20% were considered positive concerning neutralizing antibodies.
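The text introduces the inhibition calculation but does not reproduce the formula. As a sketch, assuming the standard cPass surrogate-neutralization formula, inhibition = (1 − OD450 of sample / OD450 of negative control) × 100, with the ≥ 20% positivity threshold stated above (the formula itself is an assumption, since it is omitted from the text):

```python
def percent_inhibition(od_sample: float, od_negative_control: float) -> float:
    """Percent inhibition of RBD-ACE2 binding in a surrogate
    neutralization assay, from raw OD450 readings.

    Assumed formula: (1 - OD_sample / OD_negative_control) * 100.
    A strongly neutralizing sample blocks binding, giving a low OD
    and hence a high percent inhibition.
    """
    return (1.0 - od_sample / od_negative_control) * 100.0

def is_neutralizing(od_sample: float, od_negative_control: float,
                    cutoff: float = 20.0) -> bool:
    # >= 20 % inhibition is counted as positive for neutralizing antibodies
    return percent_inhibition(od_sample, od_negative_control) >= cutoff
```

With this convention, the median 95% inhibition seen in boosted controls corresponds to sample ODs around one twentieth of the negative control.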
Stimulation of immune cells using SARS-CoV-2 peptides
To test for a cellular immune response, immune cells from heparinized whole blood were stimulated with SARS-CoV-2 specific peptides (Miltenyi Biotec, Bergisch-Gladbach, Germany), covering domains of the viral spike, nucleocapsid, and membrane protein (final concentration of each peptide: 1 µg/ml). Treatment of whole blood with water served as negative controls.
Determination of interferon-γ in plasma
Interferon-γ (IFN-γ) release was evaluated using a commercial ELISA (QuantiFERON Monitor ELISA, Qiagen, Hilden, Germany), modified as previously described to allow for rapid and reliable analysis with a standard microplate reader, not requiring manual plate coating [15,16]. IFN-γ values of unstimulated controls were subtracted from the stimulated samples.
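The background-subtraction step described above, together with the 0.16 IU/ml scoring cut-off applied later in the Results, can be sketched as follows. Flooring negative differences at zero is an added assumption, not stated in the text.

```python
def specific_ifn_gamma(stimulated: float, unstimulated: float) -> float:
    """Antigen-specific IFN-γ release in IU/ml: stimulated value minus
    the unstimulated (water-treated) background, floored at zero
    (flooring is an assumption)."""
    return max(stimulated - unstimulated, 0.0)

def t_cell_response(stimulated: float, unstimulated: float,
                    cutoff: float = 0.16) -> bool:
    # 0.16 IU/ml cut-off as suggested by Petrone et al. (cited in Results)
    return specific_ifn_gamma(stimulated, unstimulated) > cutoff
```

Under this scoring, a transplant patient with 0.5 IU/ml after peptide stimulation and 0.1 IU/ml background would count as showing a specific T-cell response.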
Statistical analysis
Results are presented as mean ± standard deviation for continuous variables with normal distribution, median [interquartile range (IQR), 25th to 75th percentiles] for continuous variables without normal distribution, and number (percentage) for categorical data. Student's t test was used to compare normally distributed continuous variables between two groups. The Mann-Whitney U test was used to analyse non-normally distributed data. Statistical analyses were performed in Python using the SciPy package. Figures were created in Python using the seaborn and matplotlib libraries. Statistical tests are two-sided, and p values < 0.05 were considered to be statistically significant.
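The two-sample strategy described above (t test for normally distributed data, Mann-Whitney U otherwise) can be sketched with SciPy. The text does not name the normality check used to route between the two tests; the Shapiro-Wilk test here is an illustrative assumption.

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Two-sided comparison of two independent samples.

    Uses Student's t test when both samples pass a Shapiro-Wilk
    normality check (an assumed routing rule), and the Mann-Whitney U
    test otherwise. Returns the test used and its p value.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        result = stats.ttest_ind(a, b)
        return "t", result.pvalue
    result = stats.mannwhitneyu(a, b, alternative="two-sided")
    return "mannwhitney", result.pvalue
```

For the heavily skewed titre distributions seen here (most Tx values at or below the detection threshold, controls spread over orders of magnitude), the non-parametric branch is the one that applies.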
Patient characteristics
Fifty transplant recipients (Tx) and 50 healthy staff members serving as the control group were included in the study. The Tx group had a higher percentage of male patients than the control group (64% vs. 34%, p < 0.0001) and a higher average age (55 ± 10 vs 47 ± 10 years, p < 0.0001). The Tx group was homogeneous with respect to time since transplant, all patients having been transplanted between 1 and 3 years before study inclusion [median 689 (501; 859) days].
Most Tx patients (92%) were on an immunosuppressive regimen with a calcineurin inhibitor, combined with mycophenolic acid or mycophenolate mofetil (Table 1).
Previous SARS-CoV-2 infection
None of the Tx patients had detectable anti-SARS-CoV-2 IgG titres (Abbott) prior to the first vaccination dose, and none had previously tested positive for SARS-CoV-2.
All individuals in the control group of healthy staff members had undergone weekly pooled PCR analyses of nasopharyngeal swabs, and none had tested positive prior to the first vaccination (nor during the 6 weeks following the prime-boost vaccination).
Anti-SARS-CoV-2 IgG titres
Anti-SARS-CoV-2 IgG titres above the cut-off value of 7.1 BAU/ml (Abbott assay) were detected in all but one control subject 21 days after the prime dose. These findings are in drastic contrast to the results in the Tx group: 21 days after the prime dose, 48 out of 50 patients (96%) showed no anti-SARS-CoV-2 IgG titres above the thresholds of the three tests used; for 45 of these patients, the results did not change 3 weeks following the boosting dose (Fig. 1a-c). One patient (male heart transplant recipient, 29 years old, 482 days post Tx, immunosuppression with tacrolimus and mycophenolate) had IgG antibody titres comparable to the control group after the boost dose; the other four patients showed a weak antibody response, with titres above the cut-off values but markedly lower than the lowest response among the control group. Results were consistent for all three tests (Abbott, Roche, Euroimmun) used.

Neutralizing antibodies against SARS-CoV-2 (Fig. 2)

The analysis of the functional inhibitory capacity of neutralizing anti-SARS-CoV-2 antibodies demonstrated a positive immunization effect (cut-off ≥ 20% inhibition) in 82% of control individuals after the prime dose (with a large scatter of response) and in all controls after the second dose [median 95% (93; 96) boost vs 46% (23; 62) prime, p < 0.0001]. In contrast, no Tx patient showed a positive inhibitory capacity after the prime dose, with no significant increase after the boost dose [median 4% (1; 7) after boost, p < 0.0001 vs control], consistent with the findings for anti-SARS-CoV-2 IgG titres.

Eight Tx patients with no detectable antibodies after the boost dose did show an IFN-γ release of > 0.16 IU/ml (suggested as a cut-off for scoring by Petrone et al. [16]). In 80% of controls, IFN-γ release was > 0.16 IU/ml.
There was no significant difference in the relatively low IFN-γ production of unstimulated whole blood samples between the groups.
Discussion
This study demonstrates a lack of immunogenicity of the completed prime-boost vaccination with the mRNA SARS-CoV-2 vaccine BNT162b2 in cardiothoracic transplant recipients even 3 weeks after the second dose, strongly suggesting that immunosuppressed cardiothoracic organ transplant recipients are left immunologically unprotected against COVID-19 infection.
Reduced immune responses to conventional vaccination concepts following organ transplantation [13,14], or in general in patients under immunosuppressive therapy [17], have been reported before. However, the extent of the missing humoral and cellular immune response following vaccination is unexpected.
First insights into the immunogenicity of the BNT162b2 vaccine in an immunocompromised patient population have been reported as interim results from the SOAP-trial on cancer patients: the immune response following the prime dose was low in solid cancer patients (< 40%) and very low in haematological cancer patients (< 15%). However, in their population efficacy was greatly increased by boosting after 21 days [18].
There have been recent reports of poor anti-spike (S) antibody responses to mRNA vaccines in renal [19] and liver [20] transplant patients as well as in a mixed cohort of single organ transplant recipients [21], all of which included patients over a wide range of years post-transplant, with semiquantitative serologic testing only. We present more detailed data on B-cell as well as specific T-cell responses in a uniform group of thoracic organ transplant recipients, all in their 2nd-3rd year post-transplant.
In our study, all participants completed a full two-dose vaccination regimen, with the doses exactly 21 days apart. It demonstrates no seroconversion following the completed two-dose vaccination strategy in the vast majority (90%) of tested cardiothoracic organ recipients. These results contrast with the robust immunogenicity in the control group, which already exhibited 98% seroconversion following the prime dose (although with a wide scatter of antibody titres), followed uniformly by a significant, on average more than tenfold, increase of IgG as well as neutralizing antibodies after the boosting.
In contrast to the healthy control group, evidence for a specific T-cell response (as determined by IFN-γ release of whole blood stimulated by SARS-CoV-2 peptides) was also lacking in the majority of transplant recipients. However, in a subgroup of transplant recipients with no detectable humoral response, a small IFN-γ release could be observed. Although cross-reactivity with a former coronavirus infection cannot be ruled out as a possible explanation [16], it might give evidence of a weak specific T-cell response in this subgroup of patients. The detection of specific T-cell responses in individuals lacking detectable circulating antibodies has also been described in convalescents after asymptomatic to mild COVID-19 infections [22]. The authors conclude that seroprevalence as an indicator may underestimate the extent of adaptive immune responses against SARS-CoV-2. The importance of combining analyses of B- and T-cell immunity has been emphasized elsewhere [23,24]. In spite of growing insights into the persistence and decay of antibody responses both following infection [25] and vaccination [9,26], we do not yet know the exact correlates of immunity, neither regarding the levels of required antibody titres nor whether suboptimal B-cell responses combined with T-cell responses might still protect from severe COVID-19.
Limitations of our study include the small number of patients enrolled. Larger populations are needed to answer additional questions: given the time-dependent and distinct immunosuppressive regimens after single-organ transplantation, it seems obvious that the dose and composition of different immunosuppressive strategies may affect immunogenicity after mRNA vaccination against SARS-CoV-2.
Our findings focussed on cardiothoracic patients in their first three years post-transplant, most of them on triple immunosuppressive therapy including a calcineurin inhibitor, mycophenolate mofetil and corticosteroids. This relatively high maintenance immunosuppression might explain why the poor humoral response we observed was even more pronounced than in recent reports by others: in a small group of 23 renal patients, the five with (low) detectable antibodies were on average 18 years post-transplant [19]; in a cohort of 80 liver transplant recipients (median of 5 years post-transplant, 47% with low-titre detectable antibodies), maintenance immunosuppression was lower than in our study group, with anti-metabolite agents included in only 50% of patients and only 21% of patients on triple immunosuppressive therapy [20]. In a larger mixed cohort of solid organ transplant recipients, poor humoral response was associated with older age, a cardiothoracic transplant organ, the first years post-transplant, and a maintenance immunosuppressive regimen including anti-metabolites [21]; all of these factors hold true for our study population.
Larger-scale analyses will have to elucidate whether long-term thoracic organ transplant recipients on lowered maintenance immunosuppression achieve better vaccination responses. Future studies will also have to focus on age per se; in fact, we observed mild antibody responses to BNT162b2 in younger transplant recipients.
Our sobering results on the poor response of transplant recipients to the mRNA BNT162b2 vaccine prompt further questions on vaccine dosing. Preliminary data by Boyarsky et al. suggest that the mRNA-1273 SARS-CoV-2 vaccine by Moderna, with a higher antigen concentration per dose, may confer immune responses in a larger percentage of transplant recipients [21], but this certainly needs deeper investigation.
To gain adequate protection against other potentially threatening infections, augmented vaccination strategies, such as higher doses per vial or additional boosting, have been suggested for transplant recipients before [13,27,28]. Considering the favourable data on the safety and on local and systemic adverse events of the BNT162b2 vaccine in immunocompromised cancer [18] and transplant [29] patients, additional booster dose(s) could be considered, at least in those transplant patients showing at least some detectable B- or T-cell response to the first two doses. Of course, additional information on the effectiveness of other COVID-19 vaccines, e.g. vector-based vaccines, is needed.
In summary, given the globally poor antibody and T-cell responses of our transplant patients to a completed two-dose regimen of the mRNA BNT162b2 vaccine, our findings mandate an urgent review of vaccination strategies for organ transplant recipients. As there may be relevant differences in immune responses among immunosuppressed patients depending on age, time since transplant, immunosuppressive regimen and other factors, post-vaccination testing for both B- and T-cell responses is advisable for best medical care.
As long as transplant recipients are left unprotected, adherence to all public health measures in place, such as social distancing and shielding, remains mandatory even after vaccination. Creating herd immunity around these patients using a strategy of "ring vaccination" should be an additional safety measure.
|
2021-07-09T13:47:29.782Z
|
2021-07-09T00:00:00.000
|
{
"year": 2021,
"sha1": "8c171083d01579ebb89be883620eca332317912c",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00392-021-01880-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d8f580b5957f17fd465309c63cbc654a45f5bee1",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
259880852
|
pes2o/s2orc
|
v3-fos-license
|
Surgical Management of Diffuse Plexiform Neurofibroma in Von Recklinghausen's Disease: A Case Report from Mopti Hospital
Abstract
INTRODUCTION
Plexiform neurofibromas are considered a rare but disfiguring and devastating complication of neurofibromatosis type 1, and diffuse plexiform neurofibroma is a characteristic lesion of Von Recklinghausen disease [1]. The treatment of plexiform neurofibromas (PNFs) is a real challenge. Medical treatment of PNFs has been frustrating, with little evidence of effectiveness. Standard chemotherapy has not been shown to be beneficial and is associated with the risk of treatment-induced secondary malignant neoplasms [2]. Currently, surgery is the only effective treatment, although it is made difficult by the hemorrhagic nature and infiltrating aspect of the lesions. In these situations, although temporary cessation of bleeding is usually possible with packing and pressure, permanent control of bleeding has been achieved either by ligation of the external carotid artery (ECA) or by selective embolization [1,6].
We report a case of diffuse plexiform neurofibroma of the face successfully operated on, without hemorrhagic complications, after ligation of the external carotid artery at the Sominé Dolo Hospital in Mopti (Mali).
CLINICAL OBSERVATION
A 39-year-old woman with no notable medical history consulted for a monstrous, irregularly textured swelling of the left hemiface (Figures 1, 2, 3) that had been evolving for 13 years. On clinical examination, there were two swellings mobile relative to the skin; both were painless, soft and ulcero-necrotic, the larger located in the left orbito-malar region and the smaller in the left preauricular region.
The rest of the examination found café-au-lait spots and axillary lentigines.
The diagnosis of Von Recklinghausen's disease was made in the presence of diffuse neurofibroma, café-au-lait spots and axillary lentigines.
DISCUSSION
Neurofibroma is a benign tumor arising from the connective elements of the Schwann sheath through proliferation of the endoneurial matrix. The diagnosis is confirmed by histology, which shows a fibromatous proliferation of slender, luxuriant fibers grouped in loose bundles dotted with regular, fusiform nuclei; these fibers are composed of collagen [2]. Localization in the cervico-facial region is rare. Complete surgical excision is the treatment of choice for neurofibroma because recurrence is possible even if its frequency remains low [4,5]; excision is most often intralesional.
The tumor is often very hemorrhagic and is sometimes responsible for operative mortality.
Since perioperative hemorrhage from plexiform neurofibroma is often fatal, our patient first underwent ligation of the external carotid artery before total excision of the mass. Plexiform neurofibromas of the face pose complex repair problems [2]. To repair the loss of substance at the surgical site, which was slow to heal, a thin skin graft was performed. In the study by Dogra B et al. in India [6], subtotal excision was performed because total excision was impossible; their tumors bled profusely during surgery owing to the friable nature of the new vessels, and partial regrowth was observed 6 months later. They therefore recommended that a sufficient amount of blood be arranged before undertaking surgical excision of a facial PNF, that the tumescent technique be used in all cases and that, after excising the overhanging folds, the skin flap be re-draped with a few anchoring sutures placed between its internal surface and the underlying periosteum to avoid downward traction on soft tissue and vital structures after surgery.
Three factors influence the results of neurofibroma surgery: extent of resection, tumor location and patient age. Subtotal resection has a recurrence rate of less than 40%, while total resection reduces the recurrence rate to less than 20%. In addition, patients with cervico-facial locations (compared with the trunk and extremities) and subjects under 10 years of age have a higher risk of recurrence [6,7]. Needle et al. [8] demonstrated that the greatest risk of recurrence of operated plexiform neurofibromas was in lesions involving the head and neck region, with recurrence of up to 54% over a period of 10 years.
It is now certain that the destruction of neurofibromas does not entail any risk of accelerated growth or cancerization of the remaining neurofibromas [9]. The postoperative course was simple in our patient. However, a study by Kerrary S. et al. [10] notes that the poor prognosis in Recklinghausen disease is related to the location of the tumors (more often trunk or proximal), larger size and grade, and the fact that some patients develop multiple sarcomas simultaneously. Local recurrences are common, and metastases (lungs, liver, skin and bone) usually appear within two years of diagnosis. Despite this risk of malignant transformation, benign schwannomas retain a good prognosis if surgical excision is complete, and recurrences are exceptional. In contrast, malignant schwannoma is a tumor with a poor prognosis, with an overall survival rate of 20 to 25% in Recklinghausen's disease and 50% for an isolated tumor [11].
CONCLUSION
Despite advances in surgical and imaging techniques, it is clear that surgery for facial neurofibromatosis remains a challenge for the surgeon. We must constantly take a critical look at our surgical techniques and indications.
In particular, surgery should be planned after puberty, with ligation of the external carotid artery and reconstructive cosmetic surgery.
|
2023-07-15T15:26:51.376Z
|
2023-06-23T00:00:00.000
|
{
"year": 2023,
"sha1": "a55b75965f6c2647b876f808ee8203a4991e2007",
"oa_license": null,
"oa_url": "https://doi.org/10.36347/sasjs.2023.v09i06.020",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "552e36ce71973ad5bcbee2e0f04746358e2a2a18",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
13661044
|
pes2o/s2orc
|
v3-fos-license
|
Self-reported prevalence of clinical features of allergy to nuts and seeds, and seafood in university students
Background In developing countries, there is a lack of epidemiological information related to food hypersensitivity, including nuts and seafood. Objective The aim was to determine the prevalence of allergic reactions and clinical manifestations associated with the consumption of nuts and seeds or seafood in university students. Methods We designed an observational cross-sectional study. A structured questionnaire was applied to Mexican university students to identify allergic reactions associated with the consumption of nuts and seeds, and seafood. Results A sample of 1,200 students was included; mean age of 19.7 ± 1.7 years. Prevalence of symptoms associated with the consumption of nuts and seeds, and seafood were 2.8% (33 of 1,200) and 3.5% (42 of 1,200) respectively. The main clinical manifestations were abdominal pain (63.6% in nuts and seeds), flushing (50% in seafood), and pharyngeal oppression (19% in seafood). Prevalence of perceived, probable and systemic allergy to nuts and seeds was 2.8% (95% confidence interval [CI], 2.5%–3.0%), 0.8% (95% CI, 0.3%–1.3%) and 0.2% (95% CI, 0%–0.4%) respectively. On the other hand, the prevalence (perceived, probable, and systemic) associated with seafood consumption was 3.5% (95% CI, 2.5%–4.5%), 1.8% (95% CI, 1.0%–2.5%), and 0.5% (95% CI, 0.1%–0.9%). Walnut and shrimp were the most frequently reported foods. Conclusion For every 100 Mexican university students, approximately 3 or 4 perceived to have allergy attributed to the consumption of some nuts and seeds or seafood, while 1 or 2 students would have a probable reaction to this same type of food. Walnut and shrimp would be causing the higher quantity of food allergic reactions.
INTRODUCTION
The increase in the prevalence of food allergy observed in some regions of the world has been considered a second wave of the allergic disease epidemic [1]. This phenomenon has been observed in both pediatric and adult populations [2,3]. The prevalence of self-reported food allergy shows variations ranging from 0.4% to 6.0% [4]. Because of their potential to cause severe or fatal allergic reactions, nuts and seeds such as peanuts, walnuts and sesame seeds, and seafood such as shrimp, fish and crab, have received the most attention. Thus, the prevalence of nut or peanut allergy in the adult population has been estimated at 1.3% [5] and that of seafood allergy at 2.3% [6]. However, seeds such as almond, hazelnut, chestnut, pistachio and sunflower seed, and seafood such as octopus, oyster and clams, have scarcely been studied.
In developing countries, there is a lack of epidemiological information related to food hypersensitivity, including to nuts and seafood [7,8]. The possible variation in the prevalence of allergic reactions to food due to habits and customs, as well as to the geographically dependent availability of foods, is unknown. For these reasons, the objectives of this study were to determine the prevalence of allergic reactions associated with the consumption of nuts and seeds or seafood, to identify the types of foods involved, and to describe the clinical manifestations most frequently related to allergic reactions in a sample of university students.
Design and subjects
The methods of this study have been previously described [9]. In summary, from a universe of 25,269 students enrolled in the Autonomous University of the State of Mexico, 1,200 students between 18 and 25 years of age, born in the State, were selected and analyzed transversely from February to May 2014.
Questionnaire
Hypersensitivity to nuts and seeds, and seafood was determined through a structured questionnaire [9] answered by each participant. The instrument investigated demographic variables and the personal or family history of allergic diseases diagnosed by a physician. Subjects who reported any adverse reaction related to the consumption of any food were then questioned about discomfort associated with the consumption of nuts and seeds, or seafood.
Definitions
For this study, a perceived allergic reaction was defined if the subject answered positively to the question "Do you have any discomfort, reaction or symptoms after eating some food or drink?" and attributed the adverse reaction to nuts and seeds or seafood. A probable allergic reaction was considered when the participant answered affirmatively to the previous question, the symptoms were typical of allergic reactions (skin: urticaria and angioedema; respiratory: shortness of breath, wheezing and throat tightness; gastrointestinal: vomiting and diarrhea), and the discomfort originated within 2 hours of ingestion of the food [5,10]. A systemic reaction was defined when, in addition to the above, 2 or more organs or systems were affected.
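The three-tier classification above can be sketched as a small decision function. This is an illustrative sketch only; the field names and the exact symptom lists are assumptions drawn from the definitions in the text, not from the study's actual questionnaire coding.

```python
# Hedged sketch of the study's perceived/probable/systemic classification.
# The symptom lists mirror the examples given in the definitions above.
TYPICAL_SYMPTOMS = {
    "skin": {"urticaria", "angioedema"},
    "respiratory": {"shortness of breath", "wheezing", "throat tightness"},
    "gastrointestinal": {"vomiting", "diarrhea"},
}

def classify_reaction(self_reported, symptoms, onset_hours):
    """Return 'none', 'perceived', 'probable' or 'systemic'.

    self_reported: did the subject attribute a reaction to the food group?
    symptoms: set of reported symptoms; onset_hours: time from ingestion.
    """
    if not self_reported:
        return "none"
    # organ systems in which at least one typical allergic symptom was reported
    systems = {organ for organ, typical in TYPICAL_SYMPTOMS.items()
               if symptoms & typical}
    if systems and onset_hours <= 2:
        # systemic = probable reaction affecting 2 or more organ systems
        return "systemic" if len(systems) >= 2 else "probable"
    return "perceived"
```

For example, urticaria plus wheezing within half an hour would count as systemic, while isolated vomiting within 2 hours would count as probable.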
Ethical consideration
The Ethics and Research Committee of the Center for Research in Medical Sciences of the University of the State of Mexico approved this study (approval number: 2014/05). Each student signed an informed written consent to participate.
Characteristics of the population
The sample of students analyzed consisted of 501 men and 699 women; the average age was 19.7 years. Medical students (43.1%) made up the largest share of the sample. The most common atopic comorbidity was allergic rhinitis, followed by hypersensitivity to medications. The most common atopic disease in the mother and the father was allergic rhinitis (5.6% and 3.2%, respectively) ( Table 1).
Characteristics of allergic reactions
In subjects allergic to nuts and seeds, the main symptoms were gastrointestinal, chiefly abdominal pain, followed by abdominal distension and flatulence. In the skin, the most frequent complaints were rash and flushing. Pharyngeal oppression was the main respiratory complaint. On the other hand, in students with seafood allergy, the most frequent intestinal complaints were again abdominal pain and abdominal distension. Half of all students with this allergy showed flushing and approximately 40% had body pruritus. Among respiratory symptoms, the sensation of pharyngeal oppression and sneezing were the most frequent clinical manifestations. In both cases, episodes of anaphylaxis were not documented.
Estimated prevalence
The prevalence of perceived allergy to nuts and seeds was close to 3.0%; the most prevalent foods were walnut (1.0%), Indian nut (0.9%) and peanut (0.8%), and the least frequent were chestnut (0.2%) and date (0.1%). Probable allergy to nuts and seeds occurred in 0.8%, with walnut (0.5%) and Indian nut (0.4%) the most frequent. A systemic reaction was documented in 0.2% of the students studied; again, walnut was the most prevalent ( Table 3).
DISCUSSION
Our study found that 3% of university students had perceived allergy to some type of nuts and seeds, mainly walnuts and rarely peanut. The prevalence of self-reported allergy to seafood was 3.5%, with shrimp the most frequently implicated food. Intestinal discomfort was the main symptom among students allergic to nuts and seeds, whereas cutaneous symptoms predominated among those allergic to seafood.
In Mexico and elsewhere in Latin America, studies estimating the prevalence of food allergy are scarce [7,8], even more so for nuts and seeds, and seafood, which are more likely to cause severe illness. In addition, previously published studies analyzing the prevalence of food allergy usually focus on products such as peanut, sesame seed or shrimp [5,11,12]. Our study provides valuable information on other foods: here, 13 nuts and seeds and 8 different seafoods were identified as allergenic. Among the nuts and seeds, the different types of nuts stood out, with almond and peanut less frequent.
In this study, the frequency for nuts varied from 0.3% to 1.0% depending on the nut species. In both Europe [4] and Canada [11], the overall prevalence of nut allergy was consistent with our data, although the type of nut was not described. Only one study, conducted in the United States, reported the prevalence of allergy to different types of nuts, ranging from 0.2% to 0.8% according to species [5]. Thus, nut allergy in our country appears to be a problem similar to that observed in developed countries.
The prevalence of peanut allergy in our population was 0.8%, similar to that observed in North American countries [5,11,13] and in another city of Mexico [14], but differing substantially from findings in Europe [4]. Sesame seed has emerged as an important allergen: up to 0.3% of students reported a perceived allergy to it, which differs markedly from various regions of the world where the prevalence barely exceeds 0.1% [5,11-13]. Another emerging allergen gaining notoriety worldwide is the sunflower seed; however, reports are scarce [15]. It showed a frequency similar to that observed for sesame seeds, pistachios or Brazil nuts (0.3% each); more studies will reveal the true dimension of this problem.
On the other hand, the perception of seafood allergy showed a frequency higher than that observed in other regions of the world, such as Canada [11], United States [6], or Europe [4].
Historically, the introduction of seafood to the usual diet of Mexicans did not take place until the time of Spanish colonization; therefore, there was a lower degree of exposure to this type of food, reflected by a low per capita consumption of seafood in our population [12].
The low seafood consumption could be a condition that favors immunological intolerance, thus facilitating hypersensitivity responses to seafood. Specifically, shrimp is the seafood that caused the most allergic reactions in students, and this is consistent with the findings of other regions of our country [12,14]. Comparatively the prevalence of perceived and probable reactions to fish was higher in our population than in the United States [6] or Canada [11], but consistent with different European countries [4].
Symptoms of adverse reactions to food are diverse, which usually depend on: the amount of food ingested, the food preparation, other foods concomitant consumption, the age of the patient, the rate of food absorption, among others [16]. Symptoms that accompany allergic reactions to food are urticaria, angioedema, pruritus, cough, abdominal pain, and tachycardia.
In this study, intestinal symptoms were predominant in subjects allergic to nuts and seeds; in contrast, cutaneous manifestations were the main symptoms in subjects allergic to seafood. Perhaps seafood-allergic subjects are more likely to develop systemic reactions compared with those allergic to nuts and seeds [17].
In Germany, a population-based study showed that the most frequent symptoms were gastrointestinal, followed by cutaneous [18]. On the other hand, a study carried out in Colombia, showed that pruritus, rash and reddish skin were the most frequent symptoms [19]. However, neither of them determined the prevalence of symptoms according to food group. It is recommended that allergic symptoms should be categorized according to the food group that triggers discomfort, in order to establish patterns, such as that observed in oral allergy syndrome after consumption of Rosaceae family foods, such as peach, apple, plum, etc. [20].
In this study, taking into account our definition of a systemic reaction (which may well correspond to anaphylaxis), we observed a prevalence of systemic reactions to nuts and seeds, and to seafood, of 2 of 1,200 (0.17%) and 6 of 1,200 (0.5%), respectively. Extrapolating these results to the entire student population (25,269), an estimated 43 and 126 students would have anaphylactic reactions triggered by nuts and seeds, and by seafood, respectively. This is especially relevant for public health considering that nearly four million upper-level students were enrolled in the 2014-2015 school year in Mexico [21]; there would thus be about 6,800 cases of anaphylactic reactions related to nuts and seeds and 20,000 related to seafood.
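The extrapolation arithmetic above can be checked directly. Note that the paper's slightly higher nut figures (43 students; about 6,800 cases nationally) follow from using the rounded percentage 0.17% rather than the exact fraction 2/1,200.

```python
# Reproducing the extrapolation in the paragraph above from the raw counts.
n = 1_200
p_nuts = 2 / n        # systemic reactions to nuts and seeds (~0.17%)
p_seafood = 6 / n     # systemic reactions to seafood (0.5%)

university = 25_269   # students enrolled at the study university
national = 4_000_000  # approx. upper-level students in Mexico, 2014-2015

print(round(p_nuts * university), round(p_seafood * university))   # 42 126
print(round(p_nuts * national), round(p_seafood * national))       # 6667 20000
```

Using 0.17% exactly gives 0.0017 x 25,269 = 43 and 0.0017 x 4,000,000 = 6,800, matching the paper's rounded figures.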
In the United States, the prevalence of anaphylaxis in a randomized adult population was 1.6%, food was responsible for 1 in each 3 episodes [17]. In Mexico, it was shown that 1.3% of a randomized adult population had food-related anaphylaxis. Shrimp, fish, and peanuts were the food mostly involved with this type of problem [14]. In another study, 5% of children with food allergies reported anaphylaxis to fruits, seafood and nuts and seeds as the main responsible foods [22].
More recently, a study of schoolchildren showed that 1.2% of the population analyzed had anaphylactic reactions as a manifestation of food allergy, however the food associated with anaphylaxis was not identified [23].
Finally, our study also shows that the food most related to anaphylaxis was nuts, which differs from findings in the United States, where peanut was the food mainly related to this problem [24].
It is important to consider the possible limitations of the study. Our results extrapolate mainly to university students aged 18 to 25; however, they come from a public university that receives students from all economic strata. Although the sample size was adequate to quantify the prevalence of food allergy, it was not possible to carry out strictly probabilistic sampling. This could lead to a skew toward students in particular careers, and possibly toward subjects who had problems with food intake. A further limitation was the inability to confirm the diagnosis of food allergy through oral challenge tests or tests of allergic sensitization. Finally, since we do not have data on the overall prevalence of anaphylaxis in our country, we are unable to determine the accuracy of these estimates.
In summary, this study documents the prevalence of food allergy to certain nuts and seeds, and seafood rarely addressed in previous studies in our country. Particularly it exposes the serious problem of under diagnosis of food-related anaphylaxis in the young adult population, an entity with potential deadly consequences. More studies are necessary to clarify the true dimension of this problem.
|
2018-05-09T00:43:45.813Z
|
2018-02-21T00:00:00.000
|
{
"year": 2018,
"sha1": "c7e9a9b2fd0c871c78c1f01738e3cc691c739acf",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc5931926?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "c7e9a9b2fd0c871c78c1f01738e3cc691c739acf",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
16031288
|
pes2o/s2orc
|
v3-fos-license
|
From KIDSCREEN-10 to CHU9D: creating a unique mapping algorithm for application in economic evaluation
Background The KIDSCREEN-10 index and the Child Health Utility 9D (CHU9D) are two recently developed generic instruments for the measurement of health-related quality of life in children and adolescents. Whilst the CHU9D is a preference based instrument developed specifically for application in cost-utility analyses, the KIDSCREEN-10 is not currently suitable for application in this context. This paper provides an algorithm for mapping the KIDSCREEN-10 index onto the CHU9D utility scores. Methods A sample of 590 Australian adolescents (aged 11–17) completed both the KIDSCREEN-10 and the CHU9D. Several econometric models were estimated, including ordinary least squares estimator, censored least absolute deviations estimator, robust MM-estimator and generalised linear model, using a range of explanatory variables with KIDSCREEN-10 items scores as key predictors. The predictive performance of each model was judged using mean absolute error (MAE) and root mean squared error (RMSE). Results The MM-estimator with stepwise-selected KIDSCREEN-10 items scores as explanatory variables had the best predictive accuracy using MAE, whilst the equivalent ordinary least squares model had the best predictive accuracy using RMSE. Conclusions The preferred mapping algorithm (i.e. the MM-estimate with stepwise selected KIDSCREEN-10 item scores as the predictors) can be used to predict CHU9D utility from KIDSCREEN-10 index with a high degree of accuracy. The algorithm may be usefully applied within cost-utility analyses to generate cost per quality adjusted life year estimates where KIDSCREEN-10 data only are available.
Background
Health-related quality of life (HRQoL) is a multidimensional construct that measures the impact of health or disease on physical and psychosocial functioning [1,2]. The measurement and valuation of HRQoL is a major issue for health services research and has become an essential component for assessing the cost-effectiveness of treatments and interventions in public health and clinical medicine research internationally [3]. HRQoL instruments can be categorised into two groups: health profile measures providing simple summative index summary scores for individual dimensions (items) and/or overall health, and preference based instruments/multi-attribute utility instruments containing preference weights for individual dimensions relative to each other and a preference weighted summary score for each health state defined by the instrument. Multi-attribute utility instruments can be used to generate quality adjusted life years (QALYs) for use in cost-utility analyses. QALYs are the preferred outcome measure for many regulatory bodies including the National Institute for Health and Clinical Excellence in the UK and the Pharmaceutical Benefits Advisory Committee in Australia [3,4].
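The QALY construct introduced above combines a preference-based utility weight with time spent in each health state. A minimal sketch of that calculation (the profile data are purely illustrative):

```python
# Minimal sketch: QALYs accrued over a health profile, as utility x time,
# with utilities anchored on the 0 (dead) to 1 (full health) scale.
def qalys(profile):
    """profile: list of (utility, years) pairs."""
    return sum(u * t for u, t in profile)

# e.g. 2 years at utility 0.85 followed by 1 year at utility 0.6
print(qalys([(0.85, 2), (0.6, 1)]))  # 2.3
```

A cost-utility analysis then compares interventions on cost per QALY gained, which is why a preference-weighted (rather than equally weighted) summary score is required.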
The majority of HRQoL instruments developed specifically for children and adolescent populations are not suitable for application within the framework of cost-utility analysis because they are non-preference based. One of the most prevalent non-preference based instruments, widely used in both public health and clinical medicine across countries, is the KIDSCREEN [5-8]. The KIDSCREEN has a simple summative scoring system in which equal weights are attached to the different dimensions of HRQoL. However, a valid instrument for generating QALYs in cost-utility analyses needs the ability both to 'measure' health status and to 'value' it by incorporating preferences relating to the relative desirability of the dimensions, and of the severity levels within each dimension, included in the instrument.
Mapping or cross-walking techniques may be applied to link profile instruments and preference based instruments, thereby enabling results from non-preference based HRQoL instruments to be utilised within the framework of cost-utility analyses [4,9]. A comprehensive review by Brazier and colleagues [9] identified 30 mapping studies in the literature, all of which had been conducted using instruments designed for measuring HRQoL in adults and applied exclusively in adult populations. To date, only one previous study has conducted a mapping exercise exclusively in a paediatric population: Furber and colleagues mapped Strengths and Difficulties Questionnaire responses onto Child Health Utility 9D (CHU9D) utilities [10].
The main objective of this study was to develop an algorithm for generating CHU9D utility scores from KIDSCREEN-10 index summary scores, facilitating cost-utility analyses within studies where health outcomes are assessed only by the KIDSCREEN-10 index.
Study design
An online survey was developed for administration to a community-based sample of adolescents living in Australia, aged 11-17 years. Following parent and adolescent consent, adolescents were invited to complete a survey which included the CHU9D and KIDSCREEN-10 instruments, socio-demographic variables (gender, age and socio-economic status as measured by the Family Affluence Scale) [11], a self-reported general health question on a five-point scale (excellent, very good, good, fair and poor), and whether they had a long-standing disability, illness or medical condition. This study was approved by the Social and Behavioural Research Ethics Committee, Flinders University (project number 4701).
Instruments
The KIDSCREEN-10 is a generic non-preference based measure of well-being and HRQoL developed internationally for children and adolescents aged 8 to 18 years [5]. It is a short version of the KIDSCREEN-52 and KIDSCREEN-27 instruments and has demonstrated criterion validity, convergent validity and known-groups validity [12,13]. The KIDSCREEN-10 contains 10 items: fit and well (KS_I1), energy (KS_I2), sad (KS_I3), lonely (KS_I4), had enough time for yourself (KS_I5), been able to do the things that you want to do in your free time (KS_I6), parent(s) treated you fairly (KS_I7), had fun with friends (KS_I8), got on well at school (KS_I9) and been able to pay attention (KS_I10), each with a 5-point response scale [13]. The calculation of the KIDSCREEN-10 index involves three steps: first, a raw overall score is computed by summing the item scores with equal weights; second, each possible sum score is converted by assigning it a Rasch person parameter; and last, the person parameters are transformed into values with a mean of approximately 50 and a standard deviation of approximately 10 [12]. A higher score indicates better HRQoL. Both self-reported and parent-proxy versions of the KIDSCREEN instruments are available; the self-reported version was adopted in this study.
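The three scoring steps can be sketched as a small pipeline. The real Rasch person-parameter table comes from the KIDSCREEN manual; the `RASCH_PP` lookup below is a hypothetical, monotone placeholder used purely to illustrate the structure of the calculation.

```python
# Sketch of the three-step KIDSCREEN-10 scoring described above.
# RASCH_PP is an illustrative placeholder keyed by the raw sum score (10-50),
# NOT the published Rasch person-parameter table.
RASCH_PP = {raw: (raw - 30) / 10 for raw in range(10, 51)}

def kidscreen10_index(item_scores, mean=50.0, sd=10.0):
    """item_scores: the ten item responses, each coded 1-5."""
    assert len(item_scores) == 10 and all(1 <= s <= 5 for s in item_scores)
    raw = sum(item_scores)       # step 1: equally weighted raw sum (10-50)
    theta = RASCH_PP[raw]        # step 2: Rasch person parameter for that sum
    return mean + sd * theta     # step 3: linear transform to mean ~50, SD ~10
```

With this placeholder table, the midpoint response pattern maps to an index of 50, and higher raw sums map monotonically to higher index values, mirroring the instrument's "higher score = better HRQoL" direction.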
The CHU9D is a newly developed generic preference based measure of HRQoL that was designed specifically for application with young people [14]. Whilst it was originally developed for use with younger children aged 7 to 11 years, recent studies have also demonstrated the practicality and validity of using the CHU9D in older adolescent populations aged 11-17 years [15][16][17]. The CHU9D consists of 9 dimensions: worried, sad, pain, tired, annoyed, schoolwork/homework, sleep, daily routine, and ability to join in activities, with 5 levels representing increasing severity within each dimension. The original health state valuation algorithm for the CHU9D was generated from the application of the standard gamble method in the UK adult general population [18]. In this study, since Australian adolescent data are used, we applied a recently developed Australian adolescent-specific scoring algorithm for the CHU9D instrument, based upon the best-worst scaling method and anchored on the 1-0 full health to dead scale using the UK standard gamble results [19]. The CHU9D utilities range between 0.33 and 1. The strength of overlap between the KIDSCREEN-10 and the CHU9D has been reported in detail elsewhere [17]. Briefly, Stevens and Ratcliffe found a moderate, significant correlation between CHU9D utility scores and the KIDSCREEN-10 index (r = 0.61), with some differences in the coverage of the items of the respective descriptive systems. The KIDSCREEN-10 is broader in scope than the CHU9D, which focuses on a narrower definition of HRQoL.
Statistical analysis
To develop the mapping algorithm from the KIDSCREEN-10 index to CHU9D utility scores, a dataset containing responses to both instruments from the same individuals is used to estimate a mapping algorithm that can then be applied to other studies. In this study two groups of models were considered. In the first group the CHU9D utility score was regressed upon the KIDSCREEN-10 index, plus a higher-order term of the index if the relationship between the two instruments was found to be nonlinear. In the second group the CHU9D utility score was regressed upon the individual KIDSCREEN-10 item raw response scores. In the event that not all KIDSCREEN-10 item coefficients were statistically significant, stepwise regression with forward selection (with a significance level for entry of 0.05) was used to choose the "best" combination of predictors from the 10 items [20]. In the mapping literature, Model 2 is the most widely used additive model [9]. In addition to individual item and overall summary scores, several previous mapping studies have also included socio-demographic characteristics, in particular age and gender, to improve predictive performance [9]. The significance (or otherwise) of including age and gender was also considered here. To summarise, the following two models were considered.
Model 1: CHU9D = β0 + β1 KS + β2 KS^2 + ε

Model 2: CHU9D = β0 + Σ_{j=1}^{k} βj KS_Ij_sw + ε

where CHU9D is the CHU9D utility score, KS is the KIDSCREEN-10 index, KS^2 is the KIDSCREEN-10 index squared, KS_Ij_sw are the KIDSCREEN-10 items selected for statistical significance using the stepwise regression technique, and k is the number of selected KIDSCREEN-10 items. The significance level is set at 5% in this study.
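A minimal sketch of estimating the Model 1 specification (the utility regressed on the index and its square) by ordinary least squares, using simulated data rather than the study sample; the coefficients and noise level below are illustrative only:

```python
import numpy as np

# Simulated data -- illustrative only, not the study sample
rng = np.random.default_rng(0)
ks = rng.uniform(20.0, 65.0, size=200)             # KIDSCREEN-10 index values
true_utility = 0.3 + 0.015 * ks - 0.0001 * ks**2   # hypothetical nonlinear relationship
chu9d = true_utility + rng.normal(0.0, 0.02, size=200)

# Model 1: regress the CHU9D utility on the index and its square
X = np.column_stack([np.ones_like(ks), ks, ks**2])
beta, *_ = np.linalg.lstsq(X, chu9d, rcond=None)
pred = X @ beta
```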
Several econometric techniques have been adopted in previous studies to estimate mapping models, of which the ordinary least squares estimator has been the most widely adopted [9,21]. The majority of mapping models in the literature have mapped to EQ-5D, and as a result models are used that are appropriate for the distribution of EQ-5D responses which is typically bi-modal or tri-modal with a large proportion of responses at 1 (see Longworth and Rowen [22] for an overview). Figure 1 indicates that for this sample CHU9D responses are left-skewed with a large number of responses at 1. Appropriate estimators include: the Tobit estimator which takes into account bounding issues (e.g. for some multi-attribute utility instruments a high proportion of respondents report full health with a utility of 1), the censored least absolute deviations estimator which further relaxes the distributional assumption of the error term (i.e. not necessarily requiring the error term to be normal and homoscedastic as assumed by Tobit) [23,24], and the generalised linear model which allows for the non-normal distribution of dependent variables (e.g. left/negatively skewed utility scores) [25].
The ordinary least squares estimator is sensitive to potential outliers as it is based on the minimisation of the variance of the residuals. The censored least absolute deviations estimator mentioned above is a special case of robust regressions that does not suffer from this sensitivity and is therefore considered to be more suitable in this context. In this study we propose to include another effective robust estimator, the MM-estimator [26], that has been shown to have both a high breakdown point (i.e. the percentage of incorrect observations an estimator can handle before giving an incorrect result) and a high efficiency [27,28], but has not yet been utilised in mapping exercises. Heteroskedasticity robust standard errors are reported for inference.
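To illustrate the robust-regression idea (not the exact MM-estimator, which combines a high-breakdown S-estimate of scale with an efficient M-step), a simple iteratively reweighted least squares fit with Tukey's biweight shows how large residuals are progressively downweighted:

```python
import numpy as np

def tukey_weights(resid, c=4.685):
    # Tukey biweight: w = (1 - u^2)^2 for |u| < 1, 0 otherwise,
    # where u = r / (c * scale) and scale is the MAD estimate of spread
    scale = np.median(np.abs(resid)) / 0.6745 + 1e-12
    u = resid / (c * scale)
    return np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)

def robust_fit(X, y, iters=50):
    # Start from the OLS solution, then iterate weighted least squares,
    # downweighting observations with large residuals
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        w = tukey_weights(y - X @ beta)
        XtW = X.T * w                       # columns scaled by their weights
        beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta
```

On data with a single gross outlier, the robust slope stays close to the true value while OLS is pulled away, which is the breakdown behaviour the MM-estimator is designed to improve further.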
Previous studies have indicated that the censored least absolute deviations estimator outperforms the Tobit estimator in relation to goodness-of-fit criteria (e.g. mean prediction error) (see for example Sullivan and Ghushchyan [29]). However, since no other definitive evidence is available regarding the superiority of a particular estimator, we chose to utilise four estimators (ordinary least squares, censored least absolute deviations, MM and generalised linear model) in this study. Among the different combinations of family and link function for the generalised linear model, the binomial family with logit link was chosen as the most appropriate since it showed the best performance in predicting a mean utility close to the observed mean. Regression analyses were estimated in Stata version 12.1 (StataCorp LP, College Station, Texas, USA).
Goodness-of-fit was examined using the mean absolute error (MAE) and root mean square error (RMSE), whereby the lower the value, the better the performance. MAE was selected as the key criterion for measuring average model performance, as it has been found to be a more natural and less ambiguous measure of average error than RMSE [30].
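Both criteria are one-liners; lower values indicate better performance:

```python
import numpy as np

def mae(y, yhat):
    # Mean absolute error: average magnitude of the prediction errors
    return np.mean(np.abs(np.asarray(y) - np.asarray(yhat)))

def rmse(y, yhat):
    # Root mean square error: penalises large errors more heavily than MAE
    return np.sqrt(np.mean((np.asarray(y) - np.asarray(yhat)) ** 2))
```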
Since no external validation dataset is currently available, model performance was assessed on the internal dataset in two ways, with the combination of model and estimator achieving the best goodness-of-fit across the two groups of validation analyses chosen as the optimal one for the full sample. In the first set of validation analyses (Validation I), the full sample was divided equally into five groups using computer-generated random numbers. Each time, 80% of the sample (i.e. four random groups) was assigned to the "estimation sample" used to generate the mapping algorithm, while the remaining 20% (the "validation sample") was used to predict CHU9D utilities based on that algorithm. This procedure was repeated five times, so that each of the five random groups was used in both the estimation and validation exercises. Model performance was assessed based on the pooled estimated prediction errors. This validation method is usually referred to as cross-validation in the literature [31,32]. In the second set of validation analyses (Validation II), the mapping algorithms generated from the full sample were tested on three random samples [33], with sample sizes of 100, 300 and 500, generated by random selection from the full sample.
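The Validation I procedure amounts to standard five-fold cross-validation and can be sketched as follows, with hypothetical `fit`/`predict` callables standing in for any of the four estimators:

```python
import numpy as np

def five_fold_mae(X, y, fit, predict, seed=0):
    # Randomly partition the sample into five equal groups; each group serves
    # once as the 20% validation sample while the other 80% estimates the model
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, 5)
    errors = []
    for k in range(5):
        val = folds[k]
        est = np.concatenate([folds[j] for j in range(5) if j != k])
        model = fit(X[est], y[est])
        errors.append(np.abs(predict(model, X[val]) - y[val]))
    # Pool the prediction errors across the five validation samples
    return np.mean(np.concatenate(errors))
```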
Results
Of the 961 adolescents who consented to take part in the survey, 590 (61.4%) completed both the CHU9D and KIDSCREEN-10 instruments and had no missing values on age and gender. The mean (standard deviation) CHU9D utility score was 0.808 (0.155) and the mean (standard deviation) KIDSCREEN-10 index was 43.737 (7.932). Fifty-five percent of respondents were male, the mean (standard deviation) age was 14.5 (2) years, 53% of respondents came from families with high socio-economic status (as defined by the Family Affluence Scale), 92% reported their health status as good, very good or excellent, and 11% had a disability. See Table 1 for details. Figure 1 shows the kernel density of the CHU9D utility scores and the KIDSCREEN-10 index. The CHU9D utility score is non-normally (left-skewed) distributed while the KIDSCREEN-10 index tends towards a normal distribution (although the null hypothesis of normality was rejected by the Shapiro-Wilk test).
Prediction of CHU9D utility scores
The goodness-of-fit results for the different combinations of models and methods in the full sample are reported in Table 2. Based on these results, it is reasonable to conclude that the mapping algorithm using the MM-estimator with the Model 2 specification is preferred according to the MAE criterion. Scattergrams of the relationship between the observed and the KIDSCREEN-10-predicted CHU9D utility scores are shown in Figures 2 and 3.
Validation
Table 3 reports the two groups of validation analysis results for all combinations of models and methods introduced in the statistical analysis section. According to MAE and RMSE, the ordinary least squares and MM-estimates based on the Model 2 specification have the best predictive performance across both validation approaches. Overall, the MM-estimates based on the Model 2 specification are selected as the preferred model, as they perform slightly better on the preferred MAE criterion. The validation results support the conclusion from the full-sample analysis: the MM-estimator based on Model 2 is the optimal choice if MAE is the key criterion, whilst the ordinary least squares estimator based on Model 2 should be chosen if RMSE is the dominant one.
Mapping equations
The detailed regression results using the full sample are reported in Table 4. Gender was consistently insignificant in all scenarios. Age was found to be significant on only one occasion, where the ordinary least squares estimator was applied; for the other three estimators, age was insignificant. Considering these findings, gender and age were not included in the final regression equations. For Model 1, both the original KIDSCREEN-10 index and its squared term were found to be robustly significant (P < 0.05) in three estimates (ordinary least squares, censored least absolute deviations and MM-estimator), indicating a non-linear relationship between the two instruments. The generalised linear model incorporates the nonlinear relationship between dependent and independent variables through the link function; as shown in Model 1, the coefficient of the KIDSCREEN-10 index was statistically significant (P < 0.05) whilst the squared term was insignificant and not included. In Model 2, the stepwise-selected significant KIDSCREEN-10 items are the key predictors. As can be seen, not all of the 10 items were significant, but for all statistically significant items the positive coefficients were consistent with the expectation that a higher item score (better health) is associated with a higher utility. Potential multicollinearity was assessed using the variance inflation factor; the mean/highest variance inflation factors were 1.88/2.01, suggesting that none of the items suffered from multicollinearity and all could be included simultaneously in the regressions. The items that were found to be robustly non-significant across the four estimators were KS_I5 ("had enough time for yourself"), KS_I6 ("been able to do the things that you want to do in your free time"), KS_I7 ("parent(s) treated you fairly") and KS_I8 ("had fun with friends").
This is consistent with the findings from the pairwise correlation analysis, in which all four items exhibited a relatively lower correlation with CHU9D (r < 0. ). See Table 4 for the detailed regression outputs of the four estimators. Based on the MAE results discussed above, the optimal equation used to predict CHU9D utility from the KIDSCREEN-10 items is: CHU9D utility score = 0.222655 + 0.037867*KS_I1 + 0.023085*KS_I2 + 0.037192*KS_I3 + 0.021284*KS_I4 + 0.024877*KS_I9 + 0.022256*KS_I10. As previously highlighted, there are currently two preference based scoring algorithms available for the CHU9D: the original one generated by the standard gamble method with the UK adult general population, and a newly developed one generated by the best-worst scaling method with the Australian adolescent general population and anchored on the 1-0 full health to dead scale using the UK values. The utility scores generated by application of the two scoring algorithms are highly correlated (r = 0.97). The correlations between each item of the KIDSCREEN-10 instrument and each of the two utility scores are almost identical. Owing to word limits, the analyses presented here were based upon the Australian adolescent general population scoring algorithm. The key mapping equations (corresponding to those reported in Table 4) from the KIDSCREEN-10 index to the CHU9D utility scores based upon the UK adult scoring algorithm are also reported in Table 5 for the reader's interest. The goodness-of-fit results likewise suggest that the ordinary least squares and MM-estimates based on the Model 2 specification had the best predictive performance, with the MM-estimates based on the Model 2 specification selected as the preferred model using MAE.
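Applied in code, the published Model 2 equation (with truncation at the CHU9D ceiling of 1, as suggested for out-of-range predictions) looks like this; item scores are assumed to be coded 1-5 with higher values indicating better health, negatively worded items reverse-scored:

```python
def chu9d_from_kidscreen_items(i1, i2, i3, i4, i9, i10):
    # Model 2 mapping equation (MM-estimator, Australian adolescent tariff)
    u = (0.222655 + 0.037867 * i1 + 0.023085 * i2 + 0.037192 * i3
         + 0.021284 * i4 + 0.024877 * i9 + 0.022256 * i10)
    # Predictions above the CHU9D ceiling are truncated to 1
    return min(u, 1.0)
```

Note that the best possible item profile produces a raw prediction above 1 (1.0555), so the truncation step is not merely cosmetic; the paper recommends using the predictions at the aggregated sample/group level only.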
Discussion
The measurement and valuation of the HRQoL of children and adolescents is increasingly being recognised as an important component of economic evaluations of health care treatment and preventive programs targeted at young people. The KIDSCREEN-10 instrument has been validated across several European countries for the measurement of health status and, since its development in 2004, has been widely used internationally. However, a current limitation of the KIDSCREEN-10 is the absence of preference weights, meaning that the measure cannot be used directly to estimate QALYs for use in cost-utility analyses. This study has developed a mapping algorithm that can be used to predict CHU9D utility scores based on the KIDSCREEN-10 index. The utilisation of the algorithm will enable cost-utility analyses to be conducted within studies where health outcomes were assessed using only the KIDSCREEN-10 index. There are two main strengths of this study. Firstly, the target and base measures are both generic HRQoL instruments and as such have a conceptual overlap with each other; this is an important determinant of the success of mapping analysis [9,22,34]. Secondly, multiple estimators that are appropriate for the data have been adopted to explore the optimal mapping algorithms [22]. Specifically, we have used the MM-estimator, an effective robust estimator, to map the KIDSCREEN-10 to the CHU9D. The MM-estimator has not to our knowledge been previously used in mapping; in this dataset it outperforms the censored least absolute deviations and generalised linear model techniques that have been used previously in the mapping literature, and performs similarly to ordinary least squares. As the MM-estimator offers some theoretical advantages over the ordinary least squares estimator while performing similarly, it is our preferred model here.
The model performance as indicated by the MAE (0.0946) of the preferred MM-estimate based on the Model 2 specification is within the range reported by previously published studies (0.0011 to 0.19) [9]. Despite our preference for the MM-estimator, it should be noted that the two estimators perform similarly in terms of predictive ability, as the RMSE value (0.1193) of the optimal ordinary least squares estimate is also within the published range (0.084 to 0.2) [9]. The largely comparable predictive performance of the ordinary least squares and MM-estimator models, despite the MM-estimator overcoming the theoretical limitations of the ordinary least squares estimator for the analysis of CHU9D, is of interest. However, this has also been found in studies mapping onto the EQ-5D using the ordinary least squares estimator alongside models that overcome its theoretical limitations [22].
Although both aggregated sample/group-level and disaggregated individual-level predictions of CHU9D utility scores can be incorporated within economic evaluation, it is recommended that only the aggregated sample/group-level prediction be adopted based on the current algorithm. At the individual level the predicted utility scores are less reliable, as the prediction error can be large, as indicated in Figures 2 and 3. Over-prediction at the lower end of utility values is an issue that is not uncommon in mapping analyses where regression techniques are used [35]. Furthermore, as can be seen from Columns (2) and (3) of Table 2, there is no guarantee that the predicted utility will lie within the observed range if the transformation algorithm is based upon the ordinary least squares, censored least absolute deviations or MM-estimators. Some studies have suggested that in practice, if the predicted utility falls outside the defined range, it should be truncated to the appropriate boundary value (e.g. Sullivan and Ghushchyan [29], Wu et al. [31], Payakachat et al. [36]). Following this suggestion, the predicted utility score should be set to 1 if the prediction is larger than 1. How this modification changes the goodness-of-fit results in our sample is shown in Columns (6) and (7) of Table 2; as can be seen, the adjustment always improves the goodness-of-fit results.
This study has some limitations. Response rates and data quality are two potential issues with online modes of survey administration. Online modes of administration are increasingly familiar, particularly for young people, and have the potential to engage large numbers of community based adolescents who would otherwise be more difficult to reach. It is possible to include checks for data quality in online surveys, and we have taken care to scrutinise the data generated for illogical responses and to check that respondents appeared to understand the task adequately. It is also important to note that other modes of survey administration, including self-completion questionnaires and interviews, may also be affected by low response rates and issues of data quality.
In relation to the modelling approach adopted it is important to highlight that model performance was validated using the internal dataset only. A cross-validation would be ideal once a suitable external dataset becomes available. In addition, the study sample was relatively healthy and as such it is also possible that the best performing model specification and type would have differed if the mapping algorithms had been estimated using a dataset with a larger number of respondents in poorer health. Therefore, an external validation using a patient sample is recommended prior to using these mapping algorithms in a dataset with children in poor health. An alternative mapping method, the linking approach that has not yet been empirically tested could be explored in future studies [37].
Conclusion
When a preference based instrument has not been included in a study to enable QALYs to be estimated for use in cost-utility analyses, the adoption of a mapping approach from a non-preference based instrument to obtain health state utilities serves as a second-best alternative for facilitating cost-utility analyses. This paper has produced a mapping algorithm to generate a CHU9D utility score from KIDSCREEN-10 items. The preferred model is the MM-estimate with stepwise-selected KIDSCREEN-10 item scores as the predictors (i.e. Model 2 in Table 4) according to the MAE. The ordinary least squares estimate with stepwise-selected KIDSCREEN-10 item scores as the predictors also shows good performance based on RMSE.
Open-Ended Learning Strategies for Learning Complex Locomotion Skills
Teaching robots to learn diverse locomotion skills under complex three-dimensional environmental settings via Reinforcement Learning (RL) is still challenging. It has been shown that training agents in simple settings before moving them on to complex settings improves the training process, but so far only in the context of relatively simple locomotion skills. In this work, we adapt the Enhanced Paired Open-Ended Trailblazer (ePOET) approach to train more complex agents to walk efficiently on complex three-dimensional terrains. First, to generate more rugged and diverse three-dimensional training terrains with increasing complexity, we extend the Compositional Pattern Producing Networks - Neuroevolution of Augmenting Topologies (CPPN-NEAT) approach and include randomized shapes. Second, we combine ePOET with Soft Actor-Critic off-policy optimization, yielding ePOET-SAC, to ensure that the agent could learn more diverse skills to solve more challenging tasks. Our experimental results show that the newly generated three-dimensional terrains have sufficient diversity and complexity to guide learning, that ePOET successfully learns complex locomotion skills on these terrains, and that our proposed ePOET-SAC approach slightly improves upon ePOET.
Introduction
In recent years, Reinforcement Learning (RL) and Deep RL (DRL) have achieved remarkable successes in the area of legged robot locomotion, especially in controlling robots or agents to successfully walk on flat terrains [15,16,12]. However, many of those agents usually fail to maintain well-balanced behaviors on rugged terrains [17,9], because locomotion on uneven terrains requires the ability to perceive the environment. Therefore, dynamically generating diverse terrains for the robots to interact with is one of the key challenges in controlling robots on complex terrains. To address this challenge, Wang et al. [25] introduced an approach that automatically generates diverse environments while optimizing the policy, called the Paired Open-Ended Trailblazer (POET). Moreover, an improved version, called Enhanced POET (ePOET) [26], can generate more diverse and complex challenges by leveraging compositional pattern producing networks (CPPNs) [19] as a terrain encoding method, coupled with the Neuroevolution of Augmenting Topologies (NEAT) [20,21] algorithm to evolve increasingly rugged environments. Despite achieving impressive results in training a simple bipedal walker to walk over two-dimensional terrains, the question remains whether this approach also works for complex agents in three-dimensional settings. Modeling gait transitions of legged robots is more complex and much harder compared to biped walkers. Additionally, generating three-dimensional terrains with gradually increasing complexity also requires a more careful and complex design in comparison to two-dimensional worlds. By addressing these issues, we aim to make this work more applicable to real-world cases.
To this end, this work aims to make the following contributions: 1) automatically generating diverse three-dimensional complex terrains while optimizing policies; 2) analyzing the effectiveness of ePOET in training a complex hexapod robot to adapt suitable gaits to these terrains; 3) improving upon ePOET so that the agent learns more diverse skills to solve more challenging tasks. This paper is organized as follows. First, section 2 presents a detailed literature review of previous work on complex locomotion, as well as the POET and the ePOET approach. We then describe our proposed ePOET-SAC approach in section 3. Section 4 describes how we leverage CPPN-NEAT [20,21] to generate complex terrains. Our conducted experiments and experimental results are presented in section 5. Lastly, research directions are proposed for future study in section 6.
Related Work
Complex Locomotion in simulation: Levine et al. [11] applied direct policy search methods to produce motions for a bipedal walker running on uneven terrains. In particular, their approach succeeded in learning a push-recovery behavior. Heess et al. [10] applied stochastic policy gradient methods to learn continuous control policies for locomotion agents, addressing two potential limitations (reliance on planning and restriction to deterministic models) of value gradient methods. A three-dimensional physics-based locomotion controller is presented by Mordatch et al. [14]. Their experiments demonstrated that by optimizing a low-dimensional physical model, their controller can successfully traverse not only flat terrains but also many types of uneven and constrained terrain. However, their controllers failed to consistently handle certain types of motion, such as generating 180° turns when the character is running quickly. Heess et al. [9] used Proximal Policy Optimization (PPO) and distributed PPO to explore complex locomotion behaviours over a wide range of environmental conditions. Furthermore, imitation learning has been used to produce high-quality locomotion by imitating well-defined expert behaviors, such as the work done by Chentanez et al. [3]. Peng et al. [16] proposed a mixture of actor-critic experts (MACE) model to work directly with high-dimensional character and terrain state descriptions without requiring feature engineering. More recently, Luo et al. [13] applied Generative Adversarial Networks (GANs) to enable a high-level gating network to approximate previously learned natural action distributions. Learning locomotion on various terrains can be treated as a problem of learning several skills for a variety of terrains and choosing the right skill for a certain type of terrain. From this point of view, a two-layer recurrent policy combined with PPO was introduced by Azayev et al. [1] to train a hexapod walker to adapt to different terrains. To improve sample efficiency and generalization, a hierarchical RL structure that combines an off-policy Soft Actor-Critic method with a model-based planning approach was proposed by Li et al. [12]. Their approach helped a Daisy robot to reach goals up to 12 meters away from its start point and to follow waypoints defined by a user.
Limitations: Despite the outstanding results that previous studies have achieved, to our knowledge they exhibit two primary limitations. First, their experiments were performed with mostly manually crafted or randomly generated terrains that either lack complexity and diversity or have uneven difficulty levels [1,9,16,11,14,15,12]. With very simple terrains an agent can only acquire some very basic skills, while with overly challenging terrains an agent may fail to learn at the very beginning. Therefore, terrains with well-balanced and gradually increasing difficulty levels are essential for an agent to learn diverse locomotion skills. Second, some algorithms are unstable for training legged robots' locomotion tasks and are highly sensitive to hyper-parameter settings, such as the actor-critic architecture [12,16]. Unstable training can cause the trained agent to behave unnaturally and inefficiently, especially when the agent faces uneven training environments.
Paired Open-Ended Trailblazer (POET), proposed by Wang et al. [25], aims to directly confront open-endedness, i.e., an unbounded invention of learning environments and their solutions. This is done by evolving a set of diverse and increasingly complex environmental challenges while collectively optimizing their solutions. These environmental challenges and solutions together form a class of environment-agent (EA) pairs. When agents prove successful in one environment, they are transferred to another, usually more complex environment. This exploits the opportunity to transfer high-quality solutions from one objective to another. Each EA pair is optimized with Evolution Strategies (ES) [8,18,28,27]. Enhanced POET (ePOET) [26] improves upon POET in several ways. Firstly, a domain-general Environment Characterization (EC), called the Performance of All Transferred Agents EC (PATA-EC), is used to evaluate how all agents perform in each newly generated environment instead of relying on domain-specific information. Secondly, ePOET simplifies the transfer mechanism of the original POET by introducing a more stringent threshold to save computing resources. Finally, another environment encoding mechanism, based on compositional pattern producing networks (CPPNs) [19] is applied to generate environments with increasing complexity and express any possible landscape at any conceivable resolution or size. More details about CPPN are described in section 4.1.
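The ES optimization of each EA pair estimates a gradient from score-weighted Gaussian perturbations of the policy parameters. The sketch below is illustrative of this family of updates, not POET's exact implementation; `fitness` stands in for a hypothetical episode-return evaluation and the hyper-parameter values are placeholders:

```python
import numpy as np

def es_step(theta, fitness, alpha=0.01, sigma=0.1, n=100, rng=None):
    # One evolution-strategies update: perturb theta with Gaussian noise,
    # score each perturbation, and move along the score-weighted noise average
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((n, theta.size))
    scores = np.array([fitness(theta + sigma * e) for e in eps])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # normalise fitness
    grad = eps.T @ scores / (n * sigma)  # estimate of the gradient of expected fitness
    return theta + alpha * grad
```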
Although both POET and ePOET have shown exciting results, they have only been evaluated within a simple 2D bipedal walker environment. In three-dimensional settings, both the observation space and action space could have more dimensions, so training could also be much harder. Hence, it is unclear whether this approach generalizes to more complex settings, such as 3D agents walking over complex three-dimensional terrains. To address this question, we modify the ePOET algorithm to perform experiments in 3D space and extend the CPPN approach to generate more rugged terrains. Moreover, to address the problems of local convergence and mutation-sensitivity when using evolution strategies, we combine ePOET with the Soft Actor-Critic approach to encourage more random behaviors while maximizing returns.
Preliminaries
We define an RL model as M = (S, A, T, R, γ), in which S represents a set of states, A denotes a set of actions, T is a transition probability distribution, R is the reward function, and γ ∈ [0, 1] is a discount factor. Under this definition, at each environment step t, the agent observes a state s_t from its state space S, takes an action a_t from its action space A according to a policy π(a = a_t | s = s_t), and receives a corresponding reward r(s = s_t, a = a_t) calculated from the taken action and the reward function R. Meanwhile, the transition probability P_a(s, s') = Pr(s_{t+1} = s' | s_t = s, a_t = a) is calculated and the corresponding transition T(s_t, a_t, s_{t+1}) is added to the transition probability distribution T. This indicates the probability of moving from the current state s to the next state s' by taking action a at environment step t. The goal of an RL model is to seek an optimal policy π* that maximizes the expected cumulative reward J(π), i.e., π* = arg max_π J(π).
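The discounted cumulative reward that J(π) takes the expectation of can be computed for a single trajectory as follows; this is a generic textbook definition, not code specific to ePOET-SAC:

```python
def discounted_return(rewards, gamma):
    # G = sum_t gamma^t * r_t, accumulated backwards for numerical clarity
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```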
Motivation
The evolution strategies applied in ePOET have good scalability but suffer from local convergence [18,4], while SAC has less stability but encourages more diverse motions through maximizing entropy. Hence, combining ES and SAC promises to produce more diverse motions that enable the agent to overcome more complex challenges while retaining the scalability of ES. This idea is inspired by Suri et al. [22], who introduced the Evolution-based Soft Actor-Critic (ESAC) method. This method adds a SAC actor into a population of actors optimized using evolution to exploit gradient-based knowledge; they demonstrated that it combines high sample efficiency and scalability. Since each agent-environment pair of ePOET is optimized with ES, it can similarly be combined with SAC to encourage more diverse exploration by each agent. However, different from ESAC, which optimizes a single ES optimizer, in our method we add the SAC actor to the ePOET agent pool to form a new ES optimizer, and then perform crossover between the newly added optimizer and all other ES optimizers in the pool.
We name this combination ePOET-SAC. The workflow of this approach is illustrated in Figure 1, and detailed in Algorithm 1. The steps highlighted in red are additional steps in comparison to ePOET. Initially, similar to ePOET, an environment is generated together with a randomly initialized agent (parameter vector), and the agent is to be optimized under the environment via an ES optimizer. Simultaneously, the SAC model is initialized with one policy network and two Q-networks (as two Q-networks can significantly speed up training [7]), as well as a global empty replay buffer for storing old experiences. Meanwhile, an environment is randomly sampled from the environment pool to train the SAC model. If there is only one initial environment in the pool, then the same environment is used for the SAC model. At each iteration, the ePOET and the SAC are trained in parallel, and the SAC networks are updated every environment step as explained in Appendix A.1.
After training for a user-specified number of iterations, the updated SAC Actor is added into the agent pool and crossover is performed with the active agents in the pool. The crossover subroutine is
Algorithm 1 ePOET-SAC Main Loop
Input: initial environment E_init(·), its paired agent's policy parameter vector θ_init, learning rate α, noise standard deviation σ, iterations T, mutation interval N_mutate, transfer interval N_transfer
Initialize: set EA_list empty; initialize SAC and an empty replay buffer R (global variable)
for t = 0 to T − 1 do
    ...
    if Score(SAC_Actor) < Score(θ_top) then
        SAC_Actor = θ_top    # update SAC actor with the best agent in the pool
    end
    if t > 0 and t mod (N_transfer * 4) = N_transfer * 4 − 2 then
        Add SAC_Actor to the agent pool as θ_sac,t
        Crossover(agents_list, θ_sac,t)    # Algorithm 3 (see appendix)
    end
end

illustrated in Algorithm 3 of Appendix A.1. In essence, during crossover we randomly exchange the weights and biases of two agents (the two input parameter vectors θ_m and E_m): for each node of each layer, we flip a coin and, depending on the outcome, replace the weight of θ_m with the weight of E_m at the same position. We then perform the same operation for all biases. Before each single crossover, we evaluate the candidate agent θ_m and save its score; afterwards, we evaluate the agent again. If the evaluation score after crossover is higher than the original score, the crossover result replaces the original agent; otherwise, we keep the original agent and discard the other.
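The per-node coin-flip crossover and score-based acceptance described above can be sketched as follows. This is an illustrative sketch, not the paper's code; in particular, the column-per-node weight layout is an assumption.

```python
import numpy as np

def coin_flip_crossover(theta_m, e_m, rng):
    """For every node of every layer, a fair coin decides whether that node's
    incoming weights and bias are taken from e_m instead of theta_m.
    Both parents are lists of (weight_matrix, bias_vector) per layer."""
    child = []
    for (w_a, b_a), (w_b, b_b) in zip(theta_m, e_m):
        w, b = w_a.copy(), b_a.copy()
        take = rng.random(b.shape[0]) < 0.5   # one coin flip per node
        w[:, take] = w_b[:, take]             # swap that node's incoming weights
        b[take] = b_b[take]                   # and its bias
        child.append((w, b))
    return child

def crossover_step(theta_m, e_m, evaluate, rng):
    """Keep the crossover result only if its score improves on the original."""
    candidate = coin_flip_crossover(theta_m, e_m, rng)
    return candidate if evaluate(candidate) > evaluate(theta_m) else theta_m
```

`evaluate` here stands in for a rollout of the agent in its paired environment.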
Simultaneously, at every environment step, the transitions ⟨s, s′, a, r, d⟩ of all active agents in the pool are pushed into the replay buffer from which SAC samples previous experiences. Hence, the replay buffer contains not only transitions from the SAC model itself but also old experiences from the agents in the pool: instead of reusing only the SAC actor's experiences, learning from more agents in the pool helps improve the SAC actor's performance and speeds up training. Each transition pushed into the replay buffer consists of the current state s, the next state s′, the current action a, the reward r, and the terminating indicator d after taking action a. Furthermore, after every transfer step, the best parameter vector (the best agent in the pool) is transferred to the SAC actor if it outperforms the SAC actor, which also speeds up the training of SAC.
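A minimal sketch of such a shared buffer (class and method names are hypothetical, not from the paper's implementation):

```python
import random
from collections import deque

class SharedReplayBuffer:
    """FIFO buffer holding <s, s', a, r, d> transitions pushed by every
    active agent in the pool, not only by the SAC actor itself."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, s, s_next, a, r, d):
        self.buffer.append((s, s_next, a, r, d))

    def sample(self, batch_size):
        # uniform sampling over experiences from all contributing agents
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

The `deque` with `maxlen` evicts the oldest transitions automatically once the capacity is reached.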
Environment Evolution
This section explains how we combine the Compositional Pattern Producing Networks (CPPNs) encoding with the Neuroevolution of Augmenting Topologies (NEAT) algorithm to produce diverse three-dimensional terrains with gradually increasing complexity.
CPPN-NEAT
Compositional Pattern Producing Networks (CPPNs) are an indirect encoding method that abstracts the process of natural development, not by simulating diffusing chemicals but through a composition of functions [19]. These functions are similar to the activation functions of Artificial Neural Networks (ANNs); however, unlike in ANNs, each CPPN node can select its own activation function, so that the output of each network is patterned, symmetrical, and unique. Due to their structural similarity with ANNs, CPPNs can be evolved through neuroevolution algorithms such as the Neuroevolution of Augmenting Topologies (NEAT) algorithm [20,21]; the combination is called CPPN-NEAT. NEAT is an evolutionary algorithm that evolves increasingly complex networks over generations by adding nodes and connections, or deleting existing connections, in the population. In short, the idea behind CPPN-NEAT is that the CPPNs take geometric coordinates as inputs and output expression patterns that describe phenotypes; by evolving the networks over generations with the NEAT algorithm, increasingly complex phenotype expression patterns can be produced. More specifically, the CPPNs are initialized with random simple structures without hidden nodes. Through evolution with NEAT over generations, extra nodes and connections are added to the networks and new offspring (networks) are generated. Hence, the population of CPPNs becomes more complex as evolution continues and topological mutations are applied.
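A toy sketch of the per-node computation (a subset of activations, with hypothetical weights; real CPPN-NEAT evolves both the weights and the topology):

```python
import math

# Per-node activation choices, as in CPPN-NEAT (a subset for illustration)
ACTIVATIONS = {
    "sin": math.sin,
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
    "tanh": math.tanh,
    "gauss": lambda x: math.exp(-x * x),
    "identity": lambda x: x,
}

def cppn_node(weighted_inputs, bias, response, activation, aggregation=sum):
    """One CPPN node: activation(bias + response * aggregation(inputs)),
    where each input has already been scaled by its connection weight.
    Unlike a conventional ANN, every node may use a different activation."""
    return ACTIVATIONS[activation](bias + response * aggregation(weighted_inputs))

def tiny_cppn(x, y):
    """A two-node composition mapping a coordinate (x, y) to a pattern value."""
    h = cppn_node([1.0 * x, 1.0 * y], bias=0.0, response=1.0, activation="sin")
    return cppn_node([1.0 * h], bias=0.0, response=1.0, activation="gauss")
```

Composing a periodic node with a Gaussian node already yields a repeating, symmetric pattern over the plane, which is the effect CPPNs exploit.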
Generating three-dimensional terrains with CPPN-NEAT
In ePOET [26], the authors used CPPNs to produce a y-coordinate for each given x-coordinate and then rendered the corresponding (x, y) coordinates into a two-dimensional terrain; the CPPN was updated by NEAT every user-specified number of iterations. Similarly, in this paper, our goal is to adapt this idea to generate richer, more complex three-dimensional terrains. Figure 2 illustrates the overall idea of generating three-dimensional terrains with CPPN-NEAT. The terrain is generated from a heightmap, which consists of two parts, namely a random bowl-shaped part and a CPPN-NEAT part. The random bowl shape is created by sampling points from a uniform distribution, applying a cosine function to these sampled points, and then subtracting from the cosine value a gradually increased threshold (the maximum height). An example terrain generated by the random bowl shape method can be found in Figure 4 of Appendix A.2. The second part generates more diverse terrains through the CPPN encoding and evolves both the weights and the architecture of the CPPN through the NEAT algorithm. We define two hidden nodes for the initial network to make it neither overly complex nor overly simple. The (x, y) coordinates in the three-dimensional plane are taken as the inputs of the CPPN. The output of each node is determined by the formula activation(bias + weight * (response * aggregation(inputs))), in which the activation function is chosen from {sin, sigmoid, square, tanh, identity, gauss}, the aggregation function is set to sum, and the bias, weight, and response parameters are updated through the NEAT algorithm. More settings are listed in Table 6 of Appendix A.5. With these settings, the network produces varied height values for each point (x, y) in the plane, and the produced heights, together with the heights generated from the random bowl shape, form different heightmaps. The heightmaps are used to generate terrains in MuJoCo [23].
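A rough sketch of how the two heightmap components might be combined. The sampling range and sign convention of the bowl component are assumptions based on the description above; the 0.3/0.7 blending weights are the ones reported in Appendix A.2.

```python
import numpy as np

def random_bowl_heightmap(size, max_height, rng):
    """One plausible reading of the bowl component: the cosine of uniformly
    sampled points, shifted down by the gradually increased maximum height."""
    u = rng.uniform(-np.pi, np.pi, size=(size, size))
    return np.cos(u) - max_height

def combined_heightmap(bowl_part, cppn_part, w_bowl=0.3, w_cppn=0.7):
    """Blend the two components with the 0.3 / 0.7 weights from Appendix A.2."""
    return w_bowl * bowl_part + w_cppn * cppn_part
```

The CPPN part (`cppn_part`) would be produced by evaluating the evolved network at every (x, y) grid point before blending.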
Importantly, the fitness function and the fitness threshold of the NEAT algorithm are not used in this approach because we applied the PATA-EC method also used in ePOET to guarantee the novelty and complexity of every newly generated terrain. The key points that enable the generated terrains to have sufficient diversity and gradually increasing complexity are illustrated in Appendix A.2, and some generated example terrains can be seen in Figure 5 of Appendix A.2.
Experimental Results
We evaluate our method using a hexapod walker created by Azayev et al. [1] with 18 Degrees-of-Freedom (DoF), which allows it to be flexible on uneven terrains and gives it a suitable training complexity. The agent and its implementation are detailed in Appendix A.3 and A.4, respectively. We trained the hexapod agent with the PPO, SAC, VMPO, ePOET, and ePOET-SAC approaches to allow a direct comparison. All experiments are run in the MuJoCo [23] physics simulator. ePOET and our proposed ePOET-SAC ran on 10 CPU workers (Intel Xeon Broadwell-EP 2683v4 @ 2.1GHz), while PPO, VMPO, and SAC ran on a GPU (ASUS Turbo GeForce GTX 1080 Ti, CUDA 10.2). The hyper-parameter settings of each algorithm are presented in Appendix A.5.
The average training returns of ePOET, PPO, SAC, and VMPO are shown in Figure 3 (a), showing that ePOET performs well compared to the other methods. In this figure, ePOET_gen0 denotes the initial agent of ePOET, ePOET_geni denotes the paired agents of environments reproduced through mutation, and the number i indicates their generation order. Regarding the training terrains: because the CPPN-NEAT encoding method cannot be used with PPO, SAC, and VMPO, their training terrains contain only the random bowl shape with a maximum height of 1. These are actually easier than the terrains generated for ePOET, because CPPN-NEAT generates terrains with larger height variation. In addition, the terrain complexity of agent ePOET_geni increases with its generation order. Furthermore, comparing the ePOET-SAC agents across generations in Figure 3, we see that ePOET-SAC_gen1 (yellow) outperforms ePOET-SAC_gen0 (blue) after some iterations, and ePOET-SAC_gen0 (blue) also improves dramatically by transferring knowledge from ePOET-SAC_gen1 (yellow) or ePOET-SAC_gen2 (green).
To measure how well the trained agents perform when encountering unseen terrains, we evaluate the hexapod walker trained with PPO, SAC, ePOET, and ePOET-SAC on the same 32 environments. These 32 terrains were generated by ePOET from a run different from the one that produced the above ePOET agents, which means that the terrains are new not only for PPO and SAC but also for the ePOET and ePOET-SAC agents. Importantly, in our settings, an eligible parent environment generates at most 8 child environments and only one of them is admitted, which prevents generating overly challenging or overly easy terrains (as explained in section 3.2). Therefore, only 4 environments among these 32 are ensured to be neither very difficult nor very easy, and some environments could be overly challenging. We took all 32 environments to compare the performance not only on easy terrains but also on challenging ones. Furthermore, since the agent's travel route can differ between runs, we run the agent 5 times on each terrain to get a better performance estimate. Thus, in total, each trained agent performs 32 * 5 = 160 evaluation runs. We consider an environment 'solved' if the agent yields a score of 2000 or higher.
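The evaluation protocol can be sketched as below. Whether "solved" is judged on the best or the mean of the 5 runs is our assumption (the sketch uses the best run); `run_episode` is a stand-in for one rollout returning a score.

```python
def count_solved(run_episode, terrains, n_runs=5, solve_threshold=2000):
    """Run the agent n_runs times per terrain and count a terrain as solved
    when at least one run reaches the threshold score."""
    solved = 0
    for terrain in terrains:
        scores = [run_episode(terrain) for _ in range(n_runs)]
        if max(scores) >= solve_threshold:
            solved += 1
    return solved
```

With 32 terrains and 5 runs each, this amounts to the 160 evaluation runs per trained agent described above.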
The comparison results are shown in Table 1. As can be seen, ePOET passed more terrains than PPO and SAC at both the easy and middle difficulty levels. In general, ePOET performed better than PPO and SAC, and none of these approaches can solve the hard terrains. In contrast, ePOET-SAC did solve a few of the hard-level terrains, and it solved more terrains than ePOET in total. Interestingly, the hexapod trained with these approaches behaves very differently in the same environment, as can be seen in the video. 1 The hexapod trained with SAC can solve an environment that the hexapod trained with PPO cannot. However, the locomotion learned with SAC is often less efficient than the one learned with PPO. In comparison to the agent trained with ePOET, the agent trained with ePOET-SAC shows the best-balanced and most efficient walking styles on different terrains. This comparison is shown in a second video. 2 To gain more insight into their behavioral differences, we evaluate the trained agents on 16 easy and medium-difficulty environments. Table 2 summarizes their average evaluation scores over 5 runs on these 16 unknown terrains. As can be seen, ePOET-SAC obtained both the highest average and maximum returns, though only slightly above ePOET. More behavioral analysis is presented in Appendix A.6.
Conclusions & Future Scope
Contributions The contributions of this work center on three key aspects. First, our experimental results show that the ePOET approach can indeed outperform classic RL algorithms such as PPO, SAC, and VMPO in acquiring diverse locomotion skills in complex three-dimensional environments. Second, our adaptation of CPPN-NEAT combined with random bowl shapes can generate diverse three-dimensional terrains with gradually increasing complexity. Third, our proposed ePOET-SAC approach slightly outperforms ePOET, especially on hard terrains: by combining ePOET with SAC, the trained agent learns more diverse locomotion skills and overcomes more challenging terrains.
Limitations There are still some limitations in our work. First, the diversity of the generated terrains could be further improved, for instance by adding small deep gaps, continuous stairs, or obstacles. This might be achieved with a better design or more careful fine-tuning of CPPN-NEAT. Second, due to the hidden-layer shape difference between SAC ([256, 256]) and the ES of ePOET ([40, 40]) (as described in Appendix A.5), we randomly choose parameters to reshape the parameter vectors when adding the SAC actor to the ES agent pool. This random selection could weaken the effect of the SAC actor on the ES agents when performing crossover. In our analysis, we found that the improvement of ePOET-SAC over ePOET mainly comes from adding SAC actors to the pool rather than from performing crossover, because the replacement of existing agents with the new ones generated by crossover happens only when the crossover result outperforms the original agent. To address this problem, we attempted to use an intermediate hidden-layer shape of [128, 128], but this led to degraded performance. Further research is needed to address these challenges.
Future Scope
Future work will focus on addressing the above limitations. First, more careful fine-tuning of the CPPN-NEAT hyper-parameters is desirable, and a method for monitoring the difficulty levels of generated terrains would provide more insight into the generated environments. Second, given that ePOET-SAC does not massively outperform ePOET, tuning the entropy parameter of SAC could help to improve the exploration process. Furthermore, the training algorithm could be improved in two respects. One is to pre-train the agent with the Diversity is All You Need (DIAYN) [6] approach to learn useful skills and then adapt the learned skills to different terrains; because learning locomotion skills is highly sensitive to reward-function design, and human-designed reward signals bring potential limitations, learning without reward functions would address this problem. Another possible direction is to make use of meta-learning approaches: since the agent encounters new terrains every iteration, a meta-learner (running in an outer loop) could first identify the current terrain type (e.g. roughness, steepness, depth, height) and then apply suitable locomotion gaits to the corresponding terrain. Moreover, meta-learning could also be used to learn good initial policy weights or hyper-parameter values over a distribution of environments. Finally, using the CPPN-NEAT approach to evolve the agent itself, so that it can flexibly adapt its body to different terrains, might also be an interesting research direction.
A.1 Algorithms
The algorithm for training SAC for one iteration is shown in Algorithm 2, in which ψ, θ, φ, and ψ̄ represent the parameters of the value network, the Q-networks, the policy network, and the target value networks, respectively. The gradient of the soft value function J_V(ψ) can be estimated with an unbiased estimator:

\hat{\nabla}_\psi J_V(\psi) = \nabla_\psi V_\psi(s_t) \left( V_\psi(s_t) - Q_\theta(s_t, a_t) + \log \pi_\phi(a_t \mid s_t) \right).

The soft Q-values are approximated by two parameterized functions Q_{\theta_1}(s, a) and Q_{\theta_2}(s, a), and they can be optimized with the stochastic gradients

\hat{\nabla}_{\theta_i} J_Q(\theta_i) = \nabla_{\theta_i} Q_{\theta_i}(s_t, a_t) \left( Q_{\theta_i}(s_t, a_t) - r(s_t, a_t) - \gamma V_{\bar{\psi}_i}(s_{t+1}) \right),

where V_{\bar{\psi}_i} represents the corresponding target value networks. The target networks are updated through

\bar{\psi} \leftarrow \tau \psi + (1 - \tau) \bar{\psi}.

One can refer to the original SAC paper [7] for the derivations of the above equations. One difference from that paper is that our replay buffer contains not only the old experiences of SAC itself but also experiences from the agents trained with ES. Additionally, we update the networks every environment step, as the authors [7] set both their target update interval and gradient steps to 1 in their practice.
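The target-network update and the Q regression target above can be sketched in a few lines. This is an illustrative sketch of the standard SAC update rules from [7], not the paper's code.

```python
import numpy as np

def soft_update(target_params, source_params, tau):
    """Polyak averaging of the target value network:
    psi_bar <- tau * psi + (1 - tau) * psi_bar."""
    return [tau * s + (1.0 - tau) * t
            for s, t in zip(source_params, target_params)]

def q_target(reward, v_next, gamma, done):
    """Soft Q regression target r + gamma * V_target(s') for non-terminal steps."""
    return reward + gamma * (1.0 - done) * v_next

def clipped_q(q1, q2):
    """With two Q-networks, the minimum of both estimates is typically used
    to reduce overestimation bias."""
    return np.minimum(q1, q2)
```

A small `tau` keeps the target networks slowly tracking the learned value network, which stabilizes the bootstrapped targets.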
A.2 Generated Environments
Following ePOET, crossover between CPPNs is not performed in our approach of generating three-dimensional terrains. Five key points enable the generated terrains to have sufficient diversity and gradually increasing complexity: • The CPPN-NEAT is set to be updated (to produce new generations) after a user-specified number of iterations (75 in our experiments) and only when the agent has achieved a user-defined score threshold (2000 in our experiments) on the currently encoded terrain. This ensures that the agent has solved the current challenge before a new challenge level is produced, and it prevents the CPPN from being updated too often and generating overly complex terrains.
• The domain-general PATA-EC method, which is described in section ??, is used to measure the novelty of every newly generated terrain, so that newly added terrains are distinguished from existing ones in terms of diversity and complexity.
• The random bowl-shape part is re-generated every iteration, ensuring that the agent is always trained on changed terrains each iteration, even when the CPPN-NEAT is not updated.
• The weights of the two components are set to 0.3 and 0.7, respectively, which strikes a good balance for the generated terrains in our experiments: the major complexity and diversity are contributed by the CPPN-NEAT part, whereas the random bowl-shape part, with its relatively small portion (0.3), introduces small changes when the CPPN-NEAT is not updated.
• The maximum terrain height (elevation_z in MuJoCo [23]) is increased by 0.01 every iteration, which also ensures that the complexity of the generated terrains gradually increases. Figure 4 shows a terrain generated from the random bowl shape method with a maximum height (elevation_z in MuJoCo [23]) of 1.5.
Figure 4: A three-dimensional terrain generated from the random bowl shape method.
Some generated example terrains can be seen in Figure 5. They are sorted into five levels (easy, challenging, solvable, hard, and extremely hard) according to their average height variances. The number of terrains solved by the hexapod walker among all generated terrains is discussed in section 5. Moreover, Figure 6 illustrates that the complexity of the created terrains gradually increases over generations. These environments were generated by ePOET and ePOET-SAC; the number corresponds to the generation order, for instance, Env0 and Env11 correspond to the initial environment and the 11th environment, respectively. To some extent, the complexity of the generated environments gradually increases with their generation order.
A.3 Implementation
Our implementation is entirely in Python, as it is widely used in the machine learning and reinforcement learning (RL) communities. Moreover, Python provides many common packages for neural networks, such as the popular TensorFlow and PyTorch packages, and many RL baselines are implemented in Python, such as the baselines provided by OpenAI [5]. The MuJoCo simulator and the mujoco_py API are used to simulate the agents, together with the OpenAI Gym [2] benchmarks. For the RL baseline algorithms (PPO, SAC, VMPO), we used Yang's PyTorch implementation [29] and made small changes to adapt them to our experiments.
Figure 5: Examples of terrains generated through our approach. The difficulty level is decided by the average height variance.
For the implementation of our approaches,
we used the framework of ePOET, which uses Fiber (a distributed computing library developed by OpenAI) [30] for parallel processing. Fiber is similar to Python multiprocessing but more powerful; for instance, it has better error handling. To generate diverse complex terrains, we applied the neat-python API to evolve the CPPNs. The list below shows some primary packages and corresponding versions used in our implementation. The source code and trained models can be found at https://github.com/ml-tue/ePOET_3D.git.
A.4 The agent
Our chosen agent is a hexapod walker created by Azayev et al. [1], as its 18 Degrees-of-Freedom (DoF) allow it to be flexible on uneven terrains and give it a suitable training complexity. As can be seen in Figure 7 (left), it has 6 legs, and each actuator is defined with a feedback gain of 40. The highest and lowest torso heights are shown in Figure 7 (right-top) and (right-bottom), respectively. The observation and action spaces of this agent have 53 and 18 dimensions, respectively. Since we did not add any sensors to the hexapod, the observations at each time step have to be informative enough for the agent to perform the task. In this sense, the observations consist of the torso positions, joint velocities, contact information, as well as some sampled heights at the agent's current position. Borrowing the formula from Azayev et al. [1], the reward combines three weighted terms: R^v_t represents the velocity reward that motivates the agent to move forward, R^θ_t corrects the heading error, R^c_t consists of a variety of costs, and the w_i are their corresponding weights. The velocity reward R^v_t in Equation 1 depends on ẋ_t and ẏ_t, the velocities in the x- and y-direction, and on v_tar, the maximum target velocity, which prevents a very fast gait and erratic jumping behavior.
Figure 7: A hexapod walker with 18 joints. The right-top image shows the maximum body height that the hexapod can achieve, and the right-bottom image shows its minimum body height.
It is important to
find a suitable v_tar, because a large v_tar could cause erratic jumping behavior while a small v_tar could result in slow movement. It is set to 0.4 in our experiments, a value fine-tuned by Azayev et al. [1]. The correcting-heading-error term R^θ_t in Equation 1 motivates the agent to turn its head toward the target direction before starting to walk; according to Azayev et al. [1], correcting heading errors works significantly better than penalizing heading deviations. The penalty term R^c_t sums up, with different weights, the torso-angle and acceleration penalizations, the y-axis penalization, the velocity in the z-axis, and the control costs (external forces on the body). Finally, the weights w_v and w_θ are set to 6 and 10, respectively.
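The reward terms above can be sketched as follows. The capped-velocity form is an assumption, since the exact expression from Azayev et al. [1] is not reproduced in the text, while v_tar = 0.4, w_v = 6, and w_θ = 10 are the reported values.

```python
def velocity_reward(x_vel, v_tar=0.4):
    """Forward-velocity reward capped at the target velocity v_tar, so that
    speeds beyond v_tar are not rewarded (discouraging erratic jumping)."""
    return min(x_vel, v_tar)

def total_reward(r_vel, r_heading, r_costs, w_v=6.0, w_theta=10.0):
    """Weighted combination of Equation 1's terms: velocity and heading
    rewards minus the summed penalty terms."""
    return w_v * r_vel + w_theta * r_heading - r_costs
```

The cap makes any forward speed above v_tar equally rewarded, so the optimal gait is a steady walk rather than the fastest possible motion.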
Since reward shaping requires a lot of fine-tuning and is not the main focus of our study, we reused the hexapod agent created by Azayev et al. [1] and made two small changes to adapt it to our experiments instead of creating an entirely new agent. First, we add a bonus reward when the agent reaches the target position (the edge of the terrain along the positive x-axis) and assign it a finished signal. This encourages the agent to move toward the positive x-axis, and the finished signal is used to count the number of terrains the agent passed. Second, we add an indicator that continuously tracks the agent's direction along the x-axis: if the agent moves in the negative x-direction for more than 500 consecutive steps, the current iteration is terminated. This saves training time.
A.5 Hyper-parameters
The settings shown in Table 4 are applied to both the SAC experiments and the ePOET-SAC experiments. Note that the hidden-layer shape of SAC ([256, 256], Table 4) differs from the hidden-layer shape of the ES in ePOET ([40, 40], Table 5). This is because SAC performs better with hidden layers of shape [256, 256] than [40, 40], while the ES shows the opposite behavior for the hexapod in our experiments. Thus, we use different hidden-layer sizes for SAC and ES, and randomly select parameters when performing crossover in ePOET-SAC. We also attempted to use the t-SNE [24] dimensionality-reduction technique to preserve the distance relations while reducing the shape from [256, 256] to [40, 40], but the resulting parameter vectors performed worse than the randomly selected ones. Furthermore, to ensure that the SAC actor indeed helps improve the ES agents, we pre-train the SAC for 500 iterations so that the SAC actor reaches a reasonable score before performing crossover with the ES agents.
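The random parameter selection can be sketched as a per-layer subsampling of units; this is an illustrative sketch of one plausible reading, not the paper's exact mapping (in a full network, the next layer's input rows would be subsampled with the same indices).

```python
import numpy as np

def random_subsample_layer(w, b, out_dim, rng):
    """Randomly keep `out_dim` of a layer's units: the matching columns of
    the weight matrix and entries of the bias vector."""
    idx = rng.choice(b.shape[0], size=out_dim, replace=False)
    return w[:, idx], b[idx], idx

# Shrinking a 256-unit hidden layer toward 40 units (input dimension kept):
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(53, 256)), rng.normal(size=256)
w1_small, b1_small, kept = random_subsample_layer(w1, b1, 40, rng)
```

Because the kept units are chosen at random, much of the SAC actor's learned structure is lost, which is consistent with crossover contributing less than the addition of the SAC actor itself.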
It is also important to mention that we set the lower and upper bounds of PATA-EC (explained in section ??) to 500 and 3000, respectively, in our experiments, as can be seen in Table 5.
These bounds prevent newly generated terrains from being overly simple or overly complex. Moreover, the threshold for reproducing new environments is set to 2000, which means that the agent needs to reach a score higher than 2000 on the current terrain in order to reproduce child terrains. According to our experiments, a score of 2000 or higher indicates that the agent can essentially pass the current challenge. In the ePOET and ePOET-SAC experiments, every 75 iterations a parent agent generates a child agent-environment pair if the agent reaches a score of 2000. During each reproduction step, the parent environment produces 8 child environments via CPPN-NEAT, but only one environment is admitted and added to the pool with its paired agent. For the experiments with the RL methods (PPO, SAC, VMPO), since these algorithms do not have the CPPN-NEAT mechanism, their training terrains contain only the random bowl shape. Furthermore, one training episode terminates when 2,000 environment steps have elapsed, when the agent continuously heads in the opposite direction for more than 500 steps, or when the agent arrives at the finish line. An environment is considered solved when the agent reaches the finish line and obtains a score of 2000 or above.
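The episode-termination and solved criteria can be condensed into two small predicates (a sketch with the constants stated in the text):

```python
def episode_done(step, backward_steps, reached_finish,
                 max_steps=2000, backward_limit=500):
    """Terminate after 2000 environment steps, after more than 500
    consecutive backward steps, or when the agent reaches the finish line."""
    return step >= max_steps or backward_steps > backward_limit or reached_finish

def is_solved(reached_finish, score, threshold=2000):
    """An environment counts as solved when the agent reaches the finish
    line with a score of 2000 or above."""
    return reached_finish and score >= threshold
```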
A.6 Behavior Analysis
To better understand the behavioral differences of the hexapod trained with different approaches, we visualize the agent's action representations with a dimensionality-reduction technique known as t-Distributed Stochastic Neighbor Embedding (t-SNE) [24]. Figure 8 shows t-SNE visualizations of the agent's action representations while walking over three terrains (corresponding to three different colors); each subplot corresponds to the hexapod trained with a different approach. A more dispersed scatter plot indicates a larger distance among actions and thus represents more diverse locomotion behaviors. As can be seen in Figure 8, the hexapod trained with SAC (a) has the most erratic behaviors, while the hexapod trained with ePOET (c) has the least behavioral diversity. This is not surprising, as SAC encourages random exploration through a trade-off between maximizing rewards and maximizing entropy. In comparison to ePOET, ePOET-SAC (d) helped the agent to explore slightly more diverse locomotion behaviors, possibly because ePOET-SAC benefits from the randomness of SAC.
Through rendering the interactions between the agent and the environments, which can be found in the video, 3 we found that the hexapod trained with ePOET-SAC shows the best-balanced and most efficient walking styles on different terrains. Behavioral comparisons are shown in Figure 9 (trained with ePOET-SAC) and Figure 10 (trained with ePOET); the behaviors in these two figures are captured on the same terrain. In Figure 9, the agent trained with ePOET-SAC shows well-balanced movements and a wide range of joint angles. In particular, the left-front leg is well balanced with the right-back leg in order to stretch to a maximum horizontal length. Moreover, flexibly using the inner joint of each leg over a wide angle range prevents it from slipping when encountering a steep slope, as shown with the orange markers in Figure 9. By contrast, although the agent trained with ePOET also shows quite natural and well-balanced locomotion behaviors, its weakness is exposed when facing challenging tasks. As shown in Figure 10, because it uses a narrow joint angle of the left-front leg when stretching the right-back leg, its stability (holding power) is weakened, so it easily slips when facing a steep slope. Another reason the agent failed to overcome the slope is that it did not keep the torso angle in a suitable range, which resulted in an excessive body height (an upright standing pose); therefore, it fell from the slope. More performance comparisons can be found in the video. 3
Figure 9: Behavioral visualization of the agent trained with ePOET-SAC. The agent applies a wide angle range at each joint and has a good balance between legs (the left-front leg is paired with the right-back leg, and the right-front leg with the left-back leg, to reach the longest distance per step).
Figure 10: Behavioral visualization of the agent trained with ePOET. The angle range is smaller than that of the above agent, which results in a shorter reach distance per step and an unstable position when facing a steep slope.
Ophthalmia Secondary to Cobra Venom Spitting in the Volta Region, Ghana: a Case Report
Purpose: To report the first case of ophthalmia due to contact with cobra venom in the Volta Region, Ghana. Methods: An ointment containing vitamin A was applied to treat the patient's unilateral defects in the corneal epithelium and the consequent diminished visual acuity. Results: Healing of the corneal epithelium and improvement of visual acuity were observed after only 1 day. Conclusions: This case suggests that consequences of cases of cobra venom spitting in the eyes can be minimal if immediate treatment is provided.
Introduction
Different species of cobras defend themselves by spitting venom towards the face of whomever they perceive as an aggressor [1]. Although rare, this event can lead to ocular envenomation with potentially serious injuries. Reported consequences range from periocular soft-tissue swelling, hyperemia, extensive conjunctivitis, and corneal epithelial erosion to corneal opacity, uveitis, and hypopyon, and even cranial nerve VII involvement [2][3][4][5][6].
Previous reports have highlighted the importance of an emergency therapeutic approach with multiple drugs to avoid severe complications such as necrosis of the eye [3,6]. However, in most cases a strict follow-up with early observation after the trauma is not available. Here, we describe the first case of eye involvement secondary to cobra venom spitting in the Volta Region of Ghana, with close follow-up and early resolution under a simple, affordable therapy.
Case Description
A 65-year-old male farmer presented to the eye clinic of the Comboni Hospital in Sogakope (Volta Region, Ghana) 1 h after having suffered an attack from a cobra that had spat venom in his eyes. The snake had been killed and identified as a Naja nigricollis, while the man's eyes and face had been promptly irrigated with tap water by his coworkers.
Upon examination, the patient's uncorrected visual acuity was 20/40 in the right eye and 20/70 in the left eye. The latter showed diffuse punctate corneal epithelial defects, conjunctival injection, and mild chemosis ( Fig. 1), with no alterations of the anterior chamber and the fundus. The right eye and intraocular pressure on both sides were also normal.
The patient's left eye was treated solely with vitamin A ophthalmic ointment (Kerato VitA; Bruschettini, Genova, Italy) applied underneath a pressure patch. He was then instructed to come back for follow-up visits during the next 7 days.
These follow-up examinations showed that by day 1 after the injury the epithelial defect had healed completely (Fig. 2). In addition, the patient's left uncorrected visual acuity had improved to 20/40, and he no longer complained of any symptoms. During the following week, he did not develop any sequelae. Thus, the therapy was interrupted.
Conclusion
Direct ocular exposure to cobra venom spitting has been reported previously in a few dozen humans [2][3][4][5][6]. Cobras spread the venom over the entire face of the aggressor by rapidly moving their head, therefore enhancing the chance of hitting at least 1 eye [1]. As a consequence, various degrees of ocular and periocular injuries have been reported [2][3][4][5][6].
Cobra venom contains bioactive proteins such as neurotoxins, necrotoxins, and proteolytic enzymes that may damage ocular and periocular tissues and potentially produce a systemic effect [3,6]. This is why prompt and copious irrigation of the involved eyes is considered a mandatory maneuver.
Additionally, in most cases topical antibiotics, mydriatics, and cycloplegics are prescribed, while topical antivenom and heparin are rarely used. Administration and prescription of these types of drugs may be limited due to their poor availability in a rural setting. The therapy of choice should obviously depend on the severity of the presenting case and the possibility of performing a close follow-up.
In this case, 2 main factors were decisive in selecting the treatment. Firstly, our patient only showed mild symptoms such as punctate epithelial erosions and mild conjunctival involvement, in the absence of any periocular swelling and intraocular inflammation. In addition, we had the rare possibility of closely monitoring the patient for a week following the injury.
Based on these observations, it appears that in the absence of major intraocular reactive signs to the venom, a simple re-epithelization therapy may be sufficient to obtain early recovery. Our findings also support the general conviction that irrigation of the involved eyes is of crucial importance. Since such a simple procedure could be sight saving, we believe that a public education campaign should be issued on possible cobra spitting injuries to the eyes.
Statement of Ethics
The subject has given informed consent and the report has been approved by the institute's committee.
Disclosure Statement
None of the authors received any sponsorship or funding arrangements relating to this study and none of the authors have any conflicts of interest.
Job Mobility and Wealth Inequality
The extent to which employees change jobs, known as the job mobility rate, has been steadily declining in the US for decades. This decline is understood to have a negative impact on both productivity and wages, and econometric studies fail to support any single cause brought forward. The decline coincides with decreases in household savings, increases in household debt and wage stagnation. We propose that the decline could be the consequence of a complex interaction between mobility, savings, wages and debt, such that if changing jobs incurs costs which are paid out of savings, or incurs debt in the absence of sufficient savings, a negative feedback loop is generated. People are further restricted in making moves by their debt obligations and inability to save, which in turn depresses wages further. To explore this hypothesis, we developed a stylized model in which agents choose their employment situation based on their opportunities and preferences for work, and where there are costs to changing jobs and the possibility of borrowing to meet those costs. We indeed found evidence of a negative feedback loop involving changes, wages, savings and debt, as well as evidence that this dynamic results in a level of wealth inequality on the same scale as we see today in the US.
Introduction
The US job mobility rate, describing the extent to which employees move between employers, is at an all-time low after declining for decades, and this decline has important consequences. Job changing is understood to improve productivity by matching workers to more suitable employment and by promoting innovation through the inter-firm exchange of experience (Eriksson and Lindgren 2008; Helsley and Strange 1990; Breschi and Lissoni 2009). Changing jobs is also understood to increase wages by providing workers with opportunities to negotiate higher salaries (Gottschalk 2001).
Why has job changing become less frequent over the past several decades? Suggested causes include a need to retain employer-provided health insurance, an aging population, the rise of dual-career households, declining entrepreneurship, a decline in middle-skill jobs, burdensome occupational licensing requirements or skill supply and demand mismatches. Yet econometric studies do not provide strong support for any of these explanations (Hyatt 2015;Molloy et al. 2017).
Another possible explanation is that changing jobs incurs costs on the part of the employee, such as gaps in income, training expenses or relocation costs, and these costs are funded by the employee by spending savings or borrowing through loans [also suggested by (Bhaskar et al. 2002)]. 1 A broad exploration of the employee cost burden in the reallocation process is missing from the literature, perhaps in part because of the difficulty in quantifying such costs.
A further consideration is that this decline could be the result of a complex interaction of several factors. The decrease in job mobility is contemporaneous with decreases in household savings (Guidolin and La Jeunesse 2007), increases in household debt (Getter 1996), and a stagnation of wages (Donovan and Bradley 2018). 2 Could a decrease in savings and an increased debt burden be impacting the ability of workers to take advantage of wage and productivity improving job opportunities, thus further impeding their ability to accrue savings? Some evidence suggests this could be the case. Owing more on a mortgage than the market value of the house has an impact on job mobility, to the extent that people take lesser jobs in order to avoid the costs of moving (Brown and Matsa 2016) 3 . The decline in US savings rates strongly correlates with increased credit availability (Carroll et al. 2019), suggesting that households are substituting debt for savings. Barba and Pivetti claim evidence of the substitution of loans for actual wages (2008), further supported by findings of sharp increases in the use of consumer credit applied to necessitous spending, where households borrow to make regular purchases, which in turn may lead to liquidity traps that make future saving difficult (Pollin 1988;Sullivan et al. 2001;Weller 2007;Eggertsson and Krugman 2012). 4 Could mobility, wages and debt interact to generate a negative feedback loop, which differentially applied across a population, be one of the mechanisms driving wealth inequality?
Informed by the evidence presented above, we propose that if pursuing improving work opportunities requires some amount of financial capital, then individuals without savings either miss out on wage increasing opportunities or resort to borrowing, which impedes their future ability to save, and that this dynamic may be a driver of wealth inequality. 5,6 Thus we wish to explore a complex interaction between savings, lending and wages to explain job mobility and its consequences for wealth. Kirman (2011) defines economic complexity as agent interactions generating phenomena at the macroeconomic level that do not coincide with observations at the microeconomic, so in that spirit we have developed a stylized multi-agent model, the Emergent Firms (EF) model, to explore the emergent effects of individual work choices in the context of job change costs, savings and lending.
We indeed find that if pursuing a job opportunity incurs costs, then having financial capital matters, and without it, and especially in the presence of debt, agents are limited in their ability to fully participate in the stylized economy. The strength of the relationships found in the model may generate testable hypotheses (Griffin 2006) as well as justify efforts to seek techniques and datasets to demonstrate these complex feedback effects more explicitly.
The Emergent Firms Model
The EF model is based on Rob Axtell's Endogenous Dynamics of Multi-Agent Firms Model, where agents choose their employment situation based on their opportunities and preferences for work (1999; 2015; 2018; 2019). The intent of the Axtell model is to describe the overall distribution of firm sizes as the emergent property of numerous individual choices about where to work. Therefore, the Axtell model is also a job mobility model, and as such provides a uniquely suitable starting point for an exploration of the effects of costs, savings and credit on mobility dynamics. 7

Footnote 4: More broadly, Steindl explicitly modeled household saving and debt, and proposed that consumer credit could act as an economic stimulus (1990), whereas Dutt, on the other hand, finds evidence that consumer debt results in economic contraction (2006). If a decline in job mobility results in a decline in productivity, then we would expect an economic contraction. This leads to the question of whether or not there exists a 'right' amount of consumer credit, which is intriguing but out of our current scope.
Footnote 5: These economic patterns describe a pre-COVID-19 world.
Footnote 6: Another way to express this idea is the Matthew principle, whereby the rich get richer and the poor get poorer (Rigney 2010). This is related to the concept of preferential attachment (Barabási and Albert 1999) or preferential opportunity (Bottazzi and Secchi 2006; Arthur 1994).
Footnote 7: Details of the Axtell model and the relationship between the Axtell and EF models can be found in Applegate (2018a).
The EF model is driven by an agent's choice to work with other agents in order to take advantage of the benefits of returns to scale and coordination. Numerous agents explore options for changing firms, becoming self employed as a singleton firm, or remaining in their current position. Agents choose their best option by maximizing a Cobb-Douglas utility function over income and leisure,

U = (O/n)^θ · (ω − e)^(1−θ),   (1)

where O is total firm output and n the number of agents in the firm, such that O/n is the agent's income in the current firm configuration. The agent's preference for income is given by θ, therefore its preference for leisure is 1 − θ. The agent's total time endowment is ω and e is the agent's work effort, thus the agent's leisure is ω − e.

Each firm adopts its founder's values for a, b and β, which characterize the returns to scale in the firm's production function; thus each firm will have differing production capabilities, which could represent differences in production technology or in the managerial ability to appropriately utilize employees' skills. The firm's output is divided evenly between all employee agents, and each agent's portion of the output is its wage. Therefore an agent's wage is not only a function of its own effort, but of the positive returns to scale obtained by combining its efforts with those of other agents, and an agent could obtain very different wages for the same effort depending on the configuration of its employing firm. The firm's output is

O = aE + bE^β,   (2)

where E is the sum of all the firm agents' efforts.
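As a concrete illustration, here is a minimal Python sketch of the utility and production functions (Eqs. 1 and 2) and the free-rider effect they produce; the function names, the grid-search optimizer and all parameter values are our own illustrative choices, not taken from the EF implementation.

```python
def firm_output(E, a, b, beta):
    """Firm output O = a*E + b*E**beta, where E is total effort (Eq. 2)."""
    return a * E + b * E ** beta

def utility(e, others_effort, n, theta, a, b, beta, omega=1.0):
    """Cobb-Douglas utility U = (O/n)**theta * (omega - e)**(1 - theta) (Eq. 1)."""
    O = firm_output(e + others_effort, a, b, beta)
    return (O / n) ** theta * (omega - e) ** (1 - theta)

def best_effort(others_effort, n, theta, a, b, beta, omega=1.0, grid=1000):
    """Grid-search the agent's utility-maximizing effort on [0, omega)."""
    candidates = [omega * i / grid for i in range(grid)]
    return max(candidates,
               key=lambda e: utility(e, others_effort, n, theta, a, b, beta, omega))

# Free-riding: as co-workers' combined effort rises, the agent's
# utility-maximizing effort falls, here all the way to a corner solution.
solo = best_effort(others_effort=0.0, n=1, theta=0.6, a=1.0, b=1.0, beta=2.0)
crowded = best_effort(others_effort=3.0, n=5, theta=0.6, a=1.0, b=1.0, beta=2.0)
```

Because output is shared equally, the 1/n factor scales utility by a constant, so an agent's optimal effort depends only on its co-workers' combined effort; this is what generates the free-rider dynamic described below.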
Agents are characterized both by preferences for income and by savings rates, as well as by the production function parameters that determine firm output levels. Agents are also connected via an underlying social network, modeled as an Erdős–Rényi network, and can choose to join a firm that employs a neighbor in this social network (Montgomery 1991). 8 The founder of a firm determines the production function parameter values for that firm.
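A sketch of how such a social network can constrain job search follows; the hand-rolled Erdős–Rényi generator, the edge probability and the toy firm assignment are hypothetical illustrations, not the EF implementation.

```python
import random

def erdos_renyi(n_agents, p, rng):
    """Build an undirected Erdős–Rényi graph as an adjacency dict."""
    neighbors = {i: set() for i in range(n_agents)}
    for i in range(n_agents):
        for j in range(i + 1, n_agents):
            if rng.random() < p:
                neighbors[i].add(j)
                neighbors[j].add(i)
    return neighbors

def candidate_firms(agent, neighbors, firm_of):
    """Firms an agent may consider joining: those employing a network neighbor."""
    return {firm_of[nb] for nb in neighbors[agent]} - {firm_of[agent]}

rng = random.Random(42)
net = erdos_renyi(10, 0.4, rng)
firm_of = {i: i % 3 for i in range(10)}  # toy assignment of 10 agents to 3 firms
options = candidate_firms(0, net, firm_of)
```

The point of the network restriction is that an agent's choice set is local: it can only move to firms it learns about through neighbors, plus the options of staying put or founding a singleton firm.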
These basic microeconomic principles embodied by a group of utility seeking agents create macroeconomic conditions of "fluctuating effort and sustainable cooperation" (Huberman and Glance 1998), and provide the engine of free movement job mobility. Agents choose an optimal individual effort to maximize their utility, given the combined effort E of all agents in the firm. Therefore, the optimal effort for an agent will decrease as other agents' efforts in the same firm increase, thus creating a free-rider problem. As free-riding increases, the utility for other firm members decreases, such that the firm size that maximizes wages is not the same as the one that maximizes utility. Thus agents will leave firms with high wages if they find higher utility elsewhere, even if their resulting wages are lower. 9

The Axtell model approach differs from other agent-based models exploring wage dynamics, such as those of Dawid and Genkow (2014) or Dosi (2018), in that Axtell does not model a closed-system production-based cycle, where firm output is determined by consumer and producer demand, which in turn determines wages. Agents, in this and the Axtell model, spend the portion of their wage not saved, but the source of the consumption goods agents spend on and the destination of goods produced by firms is not considered relevant in the context of the public goods game. Wages are assumed to rise with increased production, the shared output from the firm is considered a wage rather than a dividend, and wages are not determined through a labor market.
The model dynamics also make no recourse to innovation or investment; rather, firms are founded when an agent's optimal choice is to become self employed. Self employment can lead to a multi-employee firm if the founding agent attracts others to work with it. Although a large portion of self employed persons are not entrepreneurs and don't intend to grow their firms, we continue with the assumption that individuals who choose self employment have the potential to become firm founders with employees (Hurst and Pugsley 2011). Furthermore, we assume no technological limitations on economies of scale.
As explained in Sect. 1, job changes may incur costs and we are seeking to explore what effects the presence of costs has on job mobility. We further assume costs exceed those that could be regarded as general household expenditures, that these costs are not smoothed over a period of time, and are funded via savings or borrowing (Sullivan 2008;Clark and Davies Withers 1999). The ability to make a change will therefore be dependent on an agent's savings and access to credit. Therefore we add two components in a gradual manner to create two new scenarios.
The first addition applies costs to employment changes, with a cash-in-advance constraint: an agent must have available funds to make a change (Lucas and Stokey 1985). Agents save a portion of their wage each time step, the quantity dependent on their individual savings rate, and savings accrue until agents spend all or a portion on making an employment change. We assume living costs are covered by the wage and any residual goes into savings, so the varied savings rates are a proxy for varied levels of consumption. There are two aspects of mobility costs that need to be considered from the modeling perspective: the heterogeneity of costs and the level of costs. We model the costs of changing jobs as specific to each agent to provide heterogeneity, and for simplicity we consider the level of costs as a linear function of an agent's current wage, which explicitly accounts for effort preferences and implicitly for a skill set or network connections. This is the costs scenario.
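The cash-in-advance constraint can be sketched as follows; the wage, savings rate and cost multiplier values are illustrative, not calibrated model settings.

```python
def steps_until_affordable(wage, savings_rate, cost_multiplier):
    """Count the time steps of saving needed before a job change is affordable.

    The move cost is modeled as a multiple of the agent's current wage, and
    each step the agent banks savings_rate * wage (cash-in-advance: no move
    until savings cover the cost).
    """
    cost = cost_multiplier * wage
    savings, steps = 0.0, 0
    while savings < cost:
        savings += savings_rate * wage
        steps += 1
    return steps

# With a 3% savings rate and a move cost of one full wage, the agent must
# save for dozens of steps before it can act; until then it is "thwarted".
delay = steps_until_affordable(wage=0.5, savings_rate=0.03, cost_multiplier=1.0)
```

Note that with cost proportional to wage, the delay depends only on the savings rate and the multiplier, which is consistent with the roughly 30-step delay at a 3% savings rate discussed later in the paper.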
The second addition adds to the costs scenario a universal credit-creating lender who makes funds available to agents with insufficient savings to make a move. The financial economy and the real economy are treated as decoupled systems, such that loanable funds are not sourced from the savings of other agents (Minsky 1992; Mehrling 2010; Werner 2014; Jakab and Kumhof 2015). Loans incur interest compounded each time step at a constant rate, and are paid down with a borrowing agent's full savings each step until repaid. It is not our intent to model loan dynamics intensively, but rather to explore the effects of debt on job mobility. Loans are made based on current income to those agents without existing loans, which mimics lender risk avoidance. The full EF model functionality is illustrated as a flowchart in Fig. 1. Cost and lending functionality can be toggled independently so we can explore three distinct scenarios: free movement, costs, and costs with credit. 10 Experiments with the EF model were made over 30 runs for 600 agents over 500 steps with an activation rate, or churn, of 10%. Therefore an average of 60 agents explore alternative employment options each step, for a total of 30,000 explorations for each of the 30 simulation runs. Sensitivity analyses demonstrate a stability of results for the base model at these values.
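The loan bookkeeping just described, with interest compounded each step and repayment from the borrower's entire savings flow, can be sketched like this; all numbers are illustrative.

```python
def repay_steps(loan, wage, savings_rate, interest_rate, max_steps=10_000):
    """Steps needed to retire a loan paid from the full savings flow each
    step, or None if interest outpaces payments and the debt never clears."""
    payment = savings_rate * wage
    for step in range(1, max_steps + 1):
        loan = loan * (1 + interest_rate) - payment
        if loan <= 0:
            return step
    return None

# A debt of 0.6 on a wage of 0.5: per-step interest at 3% (0.018) exceeds
# the 3%-of-wage payment (0.015), so the loan grows without bound.
never = repay_steps(loan=0.6, wage=0.5, savings_rate=0.03, interest_rate=0.03)
# The same debt at 0% interest is retired in finite time.
eventually = repay_steps(loan=0.5, wage=0.5, savings_rate=0.03, interest_rate=0.0)
```

This simple arithmetic already hints at the trap explored in the results: once per-step interest exceeds the savings flow, an indebted agent's entire savings are consumed by debt service indefinitely.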
Results
For the three scenarios described in Sect. 2 we explored the number of employment changes and missed opportunities for change (described as thwarts), wages, firm productivity, loans and debts, as well as savings and total wealth. The institutional conditions represented by the costs and the costs with credit scenarios implemented at the microeconomic level have statistically significant effects on these macroeconomic measures in their respective emergent economies. Unless otherwise indicated, all simulations were run 30 times with the parameters and settings described in Table 1.
The emergence of regions of steady state firm population stability, even though the composition and size of any given firm is in flux, is common across the scenarios. This equilibrium region emerges after roughly 100 time steps and the number of firms oscillates within a band, as demonstrated in the 30-run spaghetti time series plots for each scenario in Fig. 2. 11 By 500 steps the band for the free movement scenario is level, while the costs scenario demonstrates a slight downward trend (−3%) and the costs with credit scenario a slight upward trend (+3%) in this equilibrium band.
The costs with credit scenario produces the most firms, therefore the smallest firms. The costs scenario produces the fewest, conversely the largest, firms and the number of firms in the free movement scenario falls between the two. Underlying these firm populations dynamics are employment dynamics, and the following sections explore aspects of job mobility in the contexts of free movement, costs, and costs with credit.
Mobility: Changes and Thwarts
An agent's mobility describes its ability to make a desired change, and agents with lower mobility miss out on opportunities more often than agents with higher mobility. Figure 3 shows a generalized additive model fit of the total number of employment changes and thwarts across 30 runs for each scenario. Mean numbers of changes and thwarts for the three scenarios at t = 500 are 45 changes and 0 thwarts for the free movement scenario, 28 changes and 16 thwarts for the costs scenario, and 4 changes and 39 thwarts for the costs with credit scenario, though we notice for this last scenario that the changes are continuing downward and the thwarts upward. There are no restrictions on making changes in the free movement scenario, so any utility improving opportunity can be acted upon; thus the number of thwarts is 0 and this scenario consequently produces the greatest number of changes.

Fig. 2 (caption): 30-run time series for all three scenarios, starting at t = 100, demonstrating the macroeconomic convergence into a steady state equilibrium band. Notice the downward and upward trends in the costs and costs with credit scenarios, and the relative volatilities within the bands.
The number of changes decreases across the scenarios and conversely the number of thwarts increases. Since the costs scenario produces more changes than the costs with credit scenario, we find the counterintuitive result that being able to borrow to make a move results, in aggregate, in fewer moves than overcoming the cost constraint via savings alone. As the level of costs increases, thwarts increase as well. We ran sensitivity analyses with various values for the wage multiplier, as well as modeling costs as a function of expected future wages rather than current wages.
(Results for the changed cost scheme are found in "Appendix A.2.") Basing costs on expected future wages produced results similar to increasing the cost multiplier, and the observed pattern across the three scenarios is the same: fewer changes for the costs scenario compared to the free movement scenario, and even fewer changes for the costs with credit scenario. As the cost level rises, differences in the number of changes across scenarios become more pronounced. Statistical summaries of select macroeconomic variables, including changes and thwarts, across the scenarios are given in Table 2.
Wages and Firm Productivity
Wages are an employee's share of firm output, as defined in Eq. 2, distributed each time step, and a firm's productivity is synonymous with the firm's output. 12 Mean wages for the three scenarios are .56, .59 and .53, respectively, all significantly different across scenarios according to Welch t-tests. Summary statistics are given in Table 2; note that the costs with credit scenario produces not only the lowest wages but also the least variation in wages. Over time, wage values in the free movement and the costs scenarios oscillate, while wages in the costs with credit scenario continuously decline.
The mean firm productivity values for the three scenarios are 2.77, 4.33 and 1.93, with the costs scenario producing the most productive firms and the costs with credit scenario the least productive. This follows from the costs scenario producing larger, and therefore fewer, firms with the highest wages, while the costs with credit scenario produces the most firms and the lowest wages.
One of the effects of agents setting effort according to the utility function described in Eq. 1 is that as wages decrease, so does effort. The resulting decrease in wage due to decreased effort further lowers an agent's effort until it finds a better situation, and this feedback results in a portion of agents with wage values near zero.
Agent Parameter Correlations with Firm Size and Wage
Firm sizes and wages emerge out of the interactions of utility maximizing agents who have differing preferences for income over leisure, given by θ, and who form firms with production characteristics a, b and β determined by the founding agent. For firm sizes greater than 1, meaning all firms that are engaged in co-production, Table 3 shows correlations between founding agent parameters and wage or firm size, for each of the scenarios, wherever at least one scenario correlation value is greater than .25. We have also broken down the relationships for the costs with credit scenario, dividing the population into three categories: those with increasing debt, those with decreasing debt, and those with no debt at t = 500. These costs with credit subpopulation results are shown in Table 4. Wage and size are slightly correlated in the free movement scenario, less correlated in the costs scenario, and uncorrelated in the costs with credit scenario (Table 3). Wage and θ are slightly correlated in the free movement scenario, and less correlated in the costs and the costs with credit scenarios. Size and firm β are lightly correlated in each of the scenarios, with the strongest correlation in the costs scenario and the weakest in the costs with credit scenario. When the costs with credit scenario is divided into subpopulations, the correlations between wage and θ increase, mostly for the population with no debt, and the wage-size correlations disappear (Table 4). The correlation between firm size and firm β strengthens for the subpopulation of agents with increasing debt. Additionally, the decreasing debt subpopulation has the highest mean θ, the debt-free population has the lowest mean θ and the highest mean firm β, and the increasing debt population has the lowest mean firm β.
Savings and Debt
All agents have a non-zero savings rate, so all will save a percentage of their wage. The costs with credit scenario allows agents with insufficient savings to pursue utility improving opportunities by taking out a loan. Debt quickly becomes pronounced, increasing superlinearly for lending rates above 0, as demonstrated in Fig. 4. The superlinear loan behaviour for model simulations with an interest rate of 3% starts around t = 15. In this scenario, all indebted agents' savings go to servicing debt. The aggregate loan amounts are so great they overwhelm the positive wealth values. We therefore consider a variant of the wealth metric, net wealth, which is the difference between the sum of all agent savings and all agent loans. Figure 5 demonstrates the net wealth values, truncated at −5, for all 600 agents in a single run. Agents who do not borrow at all over the 500 time steps are highlighted in red. Note that in this case the highest net wealth value belongs to an agent who did not borrow at any time, but this result varies by run, and it is common for the highest net wealth agent to have borrowed at one or more points.
The model restricts agents who can make loans to those who don't currently have one, which is a simplification of lenders' risk avoidance. We conducted a sensitivity analysis with a variant of the model whereby agents can take out loans up to ten times their current wage, which amounts to agents being able to make roughly ten job changes or being self-employed five times. The model behaviour is the same as the base model in that thwarts exceed changes, though this occurs later than in the base version, and the quantity of loans grows superlinearly. Once the interest rate for loans rises as well, the thwarts exceed changes in the same timeframe as the base model. (Results for these alternative loan schemes are provided in "Appendix A.2.") A model version where an agent can make unlimited loans results in a scenario nearly identical to that of free movement, with the difference being gross quantities of debt.
Loans
We noted in Sect. 3.4 that the total amount of loans in the simulated economies increases superlinearly for interest rates greater than 0. What is driving this superlinear behaviour? Figure 6 demonstrates model results over 30 individual runs for total loan amounts, wages and total savings for interest rates of 0%, 1% and 3%, with every agent having a savings rate of 3%. Cost multipliers for both moves and self-employment are homogenous with a value of 1. We see that with a lending rate of 0% there is no superlinearity in aggregate loan value and a positive net wealth. 13 As seen in Fig. 4, the higher the lending rate the sooner the superlinearity appears. The colored regions indicate whether the difference between savings and loans, or net wealth, is positive (blue) or negative (red). Superlinear behavior in loans in our costs with credit scenario results in total negative net wealth.

Fig. 6 (caption): Loans, wages and savings over time for different lending rates. Plots of loans, wages and savings for lending rates of 0%, 1% and 3%. Savings rates and cost multipliers are homogenous for all agents and types of moves, with values of 3% and 1. Net wealth, or savings minus loans, is indicated by the colored regions between the savings and loan lines; blue indicates positive net wealth and red negative net wealth.
To explore the dynamics underlying this superlinearity in aggregate loan value, we consider L_1 and L_2 to be the quantities of loans at two consecutive time steps. If l is the lending rate, then

L_2 = L_1 + l·L_1 − loan payments + new loans.

Loan payments are a function of the wages and savings rates of borrowers. If s is an agent's savings rate, w its wage, and B the set of agents with outstanding loans, then loan payments are

Σ_{b∈B} s_b·w_b,   (3)

where s_b and w_b are the savings rate and wage for borrower b. New loans are a function of the number of singleton loans, firm move loans, the costs for these two activities and the wages of the borrowers. If c_s and c_m are the wage multipliers used to determine the costs for becoming self employed and changing firms respectively, and S and M the instances of new loans made to facilitate these activities, then the principal quantity of new loans is

c_s·Σ_{i∈S} w_i + c_m·Σ_{j∈M} w_j.   (4)

Assuming a mean wage w̄ represents any given borrower's wage, a mean savings rate s̄ any given borrower's rate, a mean cost multiplier c̄ both singleton and move costs, b̄ the number of borrowers and n̄ the total number of new loans, Eq. 3 becomes s̄·w̄·b̄ and Eq. 4 becomes c̄·w̄·n̄, and the simplified total loan equation is

L_2 = L_1 + l·L_1 − s̄·w̄·b̄ + c̄·w̄·n̄.   (5)

The superlinear behavior is described by an increasing difference between consecutive values. In the further simplified case of interest-free loans, l = 0, and if L_2 − L_1 > 0 then

c̄·w̄·n̄ > s̄·w̄·b̄,   (6)

which, cancelling w̄, gives

n̄/b̄ > s̄/c̄.   (7)

Therefore aggregate loans will increase when the ratio of new loans to existing borrowers exceeds the ratio of the savings rate to the cost multiplier. The plot on the left of Fig. 7 shows the simulation values for the elements in Eq. 7 (costs, borrowers, wages and loans) with a lending rate of 3% and homogenous savings rates and cost multipliers, for both singletons and moves, equal to 3% and 1 respectively.

Footnote 13: What happens in the costs with credit scenario when the interest rate is 0? Changes and thwarts mirror the costs scenario, with almost the same number of changes in the costs scenario as thwarts in the zero interest scenario, and the same number of changes in the zero interest scenario as thwarts in the costs scenario.

Fig. 7 (caption): Loan parameters analysis. Simulation values averaged over 20 runs for the determinants of loan quantity with simulation parameters (left) and average discrete second derivatives of wages and loans (right). The cost multiplier for both startup and employer changes is 1, the savings rate is homogenous at 3%. Note the correspondence of the inflection points.
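The simplified recursion of Eq. 5 and the growth condition of Eqs. 6 and 7 can be checked numerically; the parameter values below are illustrative, not simulation outputs.

```python
def loan_series(L0, rate, s_bar, w_bar, c_bar, b_bar, n_bar, steps):
    """Iterate Eq. 5: L' = L + rate*L - s*w*b + c*w*n."""
    series = [L0]
    for _ in range(steps):
        L = series[-1]
        series.append(L + rate * L - s_bar * w_bar * b_bar + c_bar * w_bar * n_bar)
    return series

# Interest-free case (rate = 0): aggregate loans grow exactly when the
# new-loan inflow c*w*n exceeds the repayment flow s*w*b, i.e. n/b > s/c (Eq. 7).
growing = loan_series(L0=10, rate=0.0, s_bar=0.03, w_bar=0.5,
                      c_bar=1.0, b_bar=100, n_bar=6, steps=15)   # c*n = 6 > s*b = 3
shrinking = loan_series(L0=10, rate=0.0, s_bar=0.03, w_bar=0.5,
                        c_bar=1.0, b_bar=100, n_bar=2, steps=15)  # c*n = 2 < s*b = 3
```

With rate > 0 the l·L term makes the growth compound, which is the source of the superlinearity seen in Figs. 4 and 6.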
The discrete second derivatives of total loan and wage values for the simulation are shown in the right-hand plot of Fig. 7; notice the matching inflection points in both the loan and wage curves around t = 51, which suggests a correlation between decreasing wages and increasing debt.
Wealth Inequality
In the free movement and costs scenarios, all 600 agents have some amount of savings, while in the costs with credit scenario only 156 agents on average have savings greater than 0 at t = 500. The remaining 444 agents have debts, as illustrated in Fig. 5. Thus the model has produced a bimodal net wealth distribution roughly characterized by agents with debt and agents without debt, or agents with positive wealth and those with negative wealth.
Inequalities within a population are canonically represented by Lorenz curves, thus Fig. 8 demonstrates those curves for each of the scenarios for both wages (income) and total savings (wealth). It is interesting to note that while the three scenarios are not so different from the income perspective, they are hugely different from the wealth perspective, as demonstrated by the Gini index values in Table 5. Note that the Gini values for income track with the wage variance for each of the scenarios. 14
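For reference, here is a sketch of the Gini computation behind such a comparison; the sample wealth vectors are invented to illustrate how pushing part of a population into debt inflates the wealth Gini, and note that with negative net wealth the index can exceed 1.

```python
def gini(values):
    """Gini index via the mean absolute difference of all pairs."""
    n = len(values)
    mean = sum(values) / n
    diff_sum = sum(abs(x - y) for x in values for y in values)
    return diff_sum / (2 * n * n * mean)

# Everyone holds some positive wealth: mild inequality.
even_savings = [1.0] * 90 + [2.0] * 10
# Most agents are net debtors (negative net wealth): extreme inequality.
debt_economy = [-0.5] * 74 + [1.0] * 16 + [5.0] * 10
```

The O(n^2) pairwise form is fine at this scale; for the 600-agent populations in the model a sorted cumulative (Lorenz-curve) formulation would be equivalent and faster.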
Discussion
The EF model demonstrates that adding cost constraints to the free movement of workers, along with the ability to borrow to make a move, produces a negative feedback loop characterized by decreasing job mobility, decreasing savings and increasing debt, as hypothesized. As job mobility decreases in this costs with credit scenario, wages and savings fall and debt rises. As debt rises, job mobility and savings continue to fall and debt continues to rise, imitating the observed qualitative behaviours described in Sect. 1. The costs with credit model also produces levels of wealth inequality consistent with those observed in the US. In the free movement scenario, agents are free to join firms until the optimal firm size is formed (Axtell 2018), at which point additional agents produce a free-rider effect that causes wages to fall and prompts agents to find better opportunities elsewhere. In this scenario both wage and size and wage and are most strongly correlated compared to other scenarios. In the costs scenario an agent must accrue enough savings to afford the costs of making a change, which adds a time delay to agent moves, roughly 30 time steps with a savings rate of 3%. By the time an agent can afford to move, their best utility options may have changed due to the movement of other agents and the constant reconfiguration of firms. Since thwarted agents include those who may want to leave a firm, firms in the costs scenario grow larger because agents are unable to leave. This impedance causes statistically significant increases in firm sizes, output and wages. Since agents aren't free to move once a firm surpasses its optimal size and wages fall, the size-wage correlation decreases, but the size-correlation increases because mobile agents are attracted to highly productive firms with immobile high wage-preferenced workers.
Exploring the costs with credit scenario at the subpopulation level is highly informative. The debt-free subpopulation, the most mobile class, has the highest average firm , but the lowest average . Rather than high wage-preferenced agents grouping into highly productive firms, suggestive of the superstar firms discussed by Autor et al. (2020), agents move to more productive firms not because they produce more effort, but because they are mobile. This class also has the highest wage correlation, indicative of the free-movement scenario. Conversely, the least mobile class, the subpopulation with increasing debt, has higher average values than the mobile class, but the lowest average firm value, suggesting that these agents are stuck in low-production free-riding firms. As in the costs scenario, this population also displays the strongest correlation between firm size and firm .
In this model, as in the Axtell model, the firm production parameters a, b and the returns-to-scale exponent are independent random variables and uncorrelated, and income preference is uncorrelated with an agent's production parameters. This means that an above average returns-to-scale value could be paired with a below average effort multiplier value, a, such that the two counteract each other in determining firm production. A future implementation could correlate these values for each agent, particularly the production parameters, to perhaps strengthen the productivity-related correlations and further explore the emergence of agent homophily within firms under different scenarios.
Agent characteristics in this and the Axtell model are not dynamic, and network edges, effort preferences and productivity parameters are fixed. Currently, costs are modeled as a linear function of current wage, with the intent that an agent's current situation is a representation of their network or capacities, but this means that lower wage agents incur lower mobility costs. Just as networks could evolve and become dynamic, the cost function could become nonlinear such that a low-wage agent could incur large mobility costs, representing an improving action such as upskilling, which would in turn update the agent's productivity parameters.
In the costs with credit scenario a subset of agents are further impeded in making changes because they have outstanding loans they must pay off before they can either begin saving for a future move or borrow again. The superlinear growth in total loans exhibited in this scenario has multiple causes, all of which result in an agent being unable to save because they cannot pay off debt, and unable to move to an improved situation where they could earn higher wages in order to pay off debt. Savings rates are heterogeneous in the EF model, so there will be agents who take out a loan and will not be able to repay it because their savings rate is lower than the lending rate. In another case, a perpetually indebted agent may have a savings rate equal to or higher than the lending rate, but may have chosen an opportunity that increased utility but decreased wage, again resulting in insufficient payments. Alternatively, an agent may have chosen a situation with a higher wage, but the decisions made by other agents eventually cause the firm's productivity to decrease and the wage becomes insufficient to repay the loan. In each of these cases, the amount that borrowers owe will continue to grow over time. Unlike the costs scenario, where there are two classes of agents, those with sufficient funds and those with temporarily insufficient funds, three classes of agents emerge in the costs with credit scenario: agents with sufficient savings who move at will, agents with loans who will pay off that loan and either borrow again to make a move or accrue savings before an opportunity arises, and agents who are hopelessly indebted and will never make a move. Changes are rare in the costs with credit scenario, and since fewer agents are able to place themselves in superior situations, both wages and productivity are depressed.
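A single-agent sketch of the perpetual-debt case described above (hypothetical parameters; in the EF model wages are endogenous): interest accrues at the lending rate r while the agent puts its savings flow s·w toward repayment, so the balance shrinks only when s·w exceeds the interest due r·L.

```python
def simulate_loan(L0, wage, s, r, steps=200):
    """Track an agent's outstanding loan balance: interest accrues at
    rate r, and the agent applies its savings flow s*wage toward
    repayment each step. The balance is clipped at 0 once repaid."""
    L, path = L0, []
    for _ in range(steps):
        L = max(0.0, L * (1 + r) - s * wage)
        path.append(L)
    return path
```

With a 5% savings flow against a 3% lending rate the loan is retired; with a 2% savings flow the balance grows without bound, matching the "hopelessly indebted" class.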
The superlinear growth in aggregate loan values is an intriguing result, as the total amount of loans will exceed the total wealth in the model in situations where a significant number of agents are unable to repay loans. The purpose of credit is to allow agents to complete contracts otherwise not obtainable, thus expanding markets, or in this case, permitting an employee to make a beneficial change and thereby increasing wages and productivity. Yet clearly too much available credit has significant negative consequences (Turner 2017). Russo (2016) developed a multi-agent model exploring the effects of household credit and found that using credit can smooth consumption over business cycles, but eventually the debt burden leads to inequality; thus there is a tradeoff. Is it possible to say anything about viable credit regimes? Eq. 7 suggests that for a cost multiplier of 1 and homogeneous savings and lending rates, loans will increase only if the ratio of new loans to existing borrowers exceeds the savings rate. Exploring this double-edged effect of debt would be an intriguing avenue of study, and is relevant to current issues such as student loans.
While the above suggests the possibility of a sustainable credit regime, it is in no way prescriptive. The EF model is highly stylized, and not stock-flow consistent. Credit does not come from another agent nor are costs and interest paid to another agent, so the total wealth in the system is greatest for free movement, where it is equal to the cumulative output over the simulation run, and least for the costs scenario. This current formulation means any growth in output is the result of more efficient combinations of workers. If we were to apply an endogenous rather than exogenous lender then the debts held by agents would show up as credits held by others. In these cases of negative wealth the Gini index can be greater than 1, and inequality even more widely spread (Chen et al. 1982).
Despite its stylized nature, the model demonstrates that if pursuing the best productive opportunity incurs costs, then having financial capital matters. Without that capital, and especially in the presence of debt, agents are unable to participate in the economic activity of finding the most efficient uses of their labor. As the effective N decreases, the options available to other agents become increasingly limited. Thus changes are fewer in the costs with credit scenario not only because perpetually indebted agents won't make moves, but also because there are fewer opportunities for improvement for the mobile agents. Van Bavel (2019) describes this lack of access and diminished participation as a hallmark of the downward trend in historic cycles of the rise and fall of market economies. Our current crisis of capitalism may actually be a lack of distributed financial capital. This is the basis of the freedom argument in support of universal basic income (Widerquist et al. 2013). Studies of a project by GiveDirectly in Kenya that provides a monthly subsistence income over twelve years claim a significant portion of recipients use that money toward entrepreneurial ends (Lowrey 2017). The town of Aarhus, Denmark has implemented a program that gives people seeking employment roughly $5000 to do whatever they need to do to find or create a job, whether training, tattoo removal, new wardrobes, job-hunting travel, or whatever their unique circumstances require (Urbact 2017). As mentioned in Sect. 1, other common mobility costs are relocation expenses or the need to cover gaps in health insurance or income. Costs may also arise in the form of externalities such as child or elderly care. These examples highlight that costs associated with job mobility are varied and individual, and are thus difficult to account for econometrically.
Hayek observed that when dealing with complex systems 'the aspects of the events to be accounted for about which we can get quantitative data are necessarily limited and may not include the important ones' (1975). Perhaps mobility costs aren't visible as such at the macroeconomic level, but still play a decisive microeconomic role in determining an individual's access to advantageous employment opportunities, which suggests a novel research direction: discovering the microeconomic empirical evidence that drives the complex interaction between wages, savings and debt.
Conclusion
We have developed a stylized model to explore the hypothesis that the observed decline in job mobility could be the consequence of a complex interaction between mobility, savings, wages and debt. We indeed found evidence of such a negative feedback loop, as well as evidence that this dynamic results in a level of wealth inequality on the same scale as we see today in the US. The EF model serves as a qualitative experiment that can generate testable hypotheses and justify efforts to generate datasets describing this dynamic, which could then be tested empirically. Before expanding the EF model in scale, we believe further modifications to the original Axtell model are required to better capture employment dynamics: reworking the utility function determining an agent's work effort to take into account worker-discipline dynamics, which would put a floor under possible effort values to accommodate subsistence or debt obligations (Bowles and Boyer 1988); allowing for dynamic modification of agent production values; and incorporating an evolving social network. We believe the resulting model could provide a sound basis for a future quantitative exploration of wages, savings, debt and firm size distributions.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflicts of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Fig. 11 Aggregate loan amounts for two alternative loan schemes, the first with agents able to take out loans up to ten times their wages with the interest rate equal to the savings rate, and the second where agents can take out loans up to ten times their wage, but with a lending rate of 10%
A.1 Run Parameter Selection
The EF model is a stylized small-scale model, and as such we minimized the runs, time steps and number of agents while still producing stable results in the free movement scenario. The plot on the left in Fig. 9 shows the cumulative number of firms over 100 runs, with our selected value of 30 runs indicated by the dotted line. The values from 30 to 100 runs are statistically linear with a slope of 0. The plot on the right of Fig. 9 shows the number of firms over time out to 1000 time steps, and our selected value of 500 is likewise indicated.
The mean number and size of firms, the scaling factors for the distributions, and output and wages all scale linearly with N, thus we selected 600 agents as a computationally tractable number.
The model results are also independent of starting conditions, whether singleton firms, a single firm, or starting anew with previous results obtained at t = 100. The results were very sensitive to the production parameters, with small variations in the range of values for the returns to scale exponent producing changes in the mean firm size as well as the shape of the firm size distributions.
A.2 Cost and Loan Scheme Variations
In Sect. 3.1 we describe a sensitivity analysis where the cost of changing jobs depends on the expected wage instead of the current wage. Simulation results for this analysis showing changes and thwarts for all scenarios are given in Fig. 10, which demonstrates that the pattern is the same as in the base model: fewer changes for the costs scenario compared to the free movement scenario, and even fewer changes for the costs with credit scenario.
As explained in Sect. 2, it is not our intention to model lending risk avoidance explicitly, so we used a simple lending scheme where interest rates matched mean savings rates, and borrowers couldn't obtain new loans if they had one currently unpaid. In Sect. 3.4 we describe a set of sensitivity analyses exploring how alternate loan schemes affect the model outcomes. In the left-hand plot in Fig. 11 we show the results for the analyses where agents can borrow up to ten times their current wage with a lending rate of 3%, which is the same as the mean savings rate, and a lending rate of 10% (still significantly lower than modern consumer credit interest rates, which are upward of 20%). When an agent can borrow at will, the agent behaves as if it is in a free movement scenario, so the changes decrease and thwarts increase at a significantly slower rate than in the base model. However, when we raise the interest rate on this unsecured debt to 10%, the model behaves as the base model does with the number of changes quickly falling to low levels. In both cases, the aggregate amount of loans still increases superlinearly, as demonstrated in the right-hand plot in Fig. 11.
A.3 Utility Improvement Thresholds
In the Axtell model, agents make changes to their employment whenever there is an opportunity to increase utility. If there are costs to making a change, it may be more reasonable to assume there is a minimum improvement in utility required before making a change. We ran the model for all scenarios with a series of increasing minimum improvement thresholds. For the free movement scenario, the number of changes decreases linearly with the threshold amount, such that the higher the threshold, the lower the changes, as expected. For the costs scenario, for a low threshold such as a 5% improvement, the number of changes actually increases slightly, before decreasing with increasing threshold values. This is because just a slight decrease in churn means more agents will have saved sufficiently before deciding to move, whereas this effect disappears for the larger reductions in churn caused by higher improvement thresholds. The costs with credit scenario produces linear decreases in both changes and thwarts. Figure 12 demonstrates that despite decreases in the amount of employment changes, aggregate loan amounts exhibit the same superlinear growth for the costs with credit scenario at least for threshold values up to 20%.
Time Multiplexed Active Neural Probe with 1356 Parallel Recording Sites
We present a high-electrode-density and high-channel-count CMOS (complementary metal-oxide-semiconductor) active neural probe containing 1344 neuron-sized recording pixels (20 µm × 20 µm) and 12 reference pixels (20 µm × 80 µm), densely packed on a 50 µm thick, 100 µm wide, and 8 mm long shank. The active electrodes or pixels consist of dedicated in-situ circuits for signal source amplification, located directly under each electrode. The probe supports the simultaneous recording of all 1356 electrodes with sufficient signal-to-noise ratio for typical neuroscience applications. For enhanced performance, further noise reduction can be achieved while using half of the electrodes (678). Both of these numbers considerably surpass state-of-the-art active neural probes in both electrode count and number of recording channels. The measured input-referred noise in the action potential band is 12.4 µVrms while using 678 electrodes, with just 3 µW power dissipation per pixel and 45 µW per read-out channel (including data transmission).
Introduction
The need for large-scale neural recording across multiple brain areas in behaving animals has driven the recent development of high density neural probes [1]. In this application, the implanted probe shank needs to be sufficiently long to reach deep brain structures (Figure 1a), but it also needs to have a reduced cross section to minimize tissue damage. Active silicon neural probes that have been recently developed consist of a large number of tiny active electrodes that can locally amplify/buffer the neural signals [1][2][3][4]. However, with such limited space for each active electrode, the CMOS (complementary metal-oxide-semiconductor) pixel amplifiers (PA) underneath the electrodes are restricted to a bare minimum, while most of the signal processing is done in the 'base' (i.e., non-implantable part) of the probe. This manuscript presents a thorough description and in vivo measurements of a probe architecture first published in [5]: an active neural probe that contains 1344 recording pixels (20 µm × 20 µm) and 12 reference pixels (20 µm × 80 µm), densely packed on a 50 µm thick, 100 µm wide and 8 mm long shank. This new type of probe features a 1:1 electrode-to-channel ratio and supports simultaneous recording of all of the 1356 electrodes (full-probe recording) and high-performance recording from 678 electrodes (half-probe recording), increasing the number of simultaneous recording channels by 3.5 times when compared to the state of the art [4]. Each active electrode (i.e., pixel) consists of dedicated in-situ circuits for signal source amplification that are located under each electrode.
Dedicated neural amplifier circuits [6][7][8] can provide the best electrical performance, however, they need to be connected to external passive probes (e.g., [9]) or arrays (e.g., [10]) that capture the signal. Such an arrangement is scalable to only tens or hundreds of channels [11], due to the limitation in the interconnection between the external probe and the amplifier circuit. Furthermore, this leads to an overall less compact system, which is where CMOS neural probes present an advantage, as they can integrate both of the electrodes and circuits in a single integrated circuit [12].
Prior active [2][3][4] and passive [9,[13][14][15] neural probes used a dedicated metal line per electrode to send the signal to the base circuitry. This one-to-one mapping results in either a limited number of electrodes present on the shank [9] or the recording of a statically selected subset [2,4,15] (Figure 2a). Naturally, these approaches limit the number of simultaneous recording electrodes to the number of metal lines fitted in the cross section of the shank (Figure 1b). The available routing space is shared amongst signal wires, local routing, power, and input coupling capacitors (Figure 1c). Smaller CMOS processing nodes may alleviate this problem by allowing a higher routing density; however, this approach comes with the drawback of increased crosstalk amongst channels [12]. The signal is further degraded by the thermal noise of the increased electrical resistance: when the wire connecting the electrode to the base reaches tens of kΩ or more, its overall noise contribution can become significant. Therefore, a smaller CMOS node may not be the solution to an increased number of simultaneously read electrodes.
To overcome the fundamental wiring bottleneck and achieve a denser simultaneous readout, a new architecture is proposed, which relies on time division multiplexing and techniques that reduce the associated drawbacks of implementing multiple sensitive and low noise switched circuits across a long and narrow shank. The shank imposes strict area and power limitations, resulting in a poor power supply with increased drop and ripple, dense layout prone to capacitive coupling, and a requirement for low complexity circuits, which provide the desired functionality and low noise.
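The wiring arithmetic behind this bottleneck can be made explicit (a sketch using the multiplexing factor of 8 and the 40 kHz per-channel rate given in Section 2):

```python
def shank_wiring(n_channels, m, fs_hz):
    """Signal wires needed, and the per-wire sampling rate, when M
    pixel amplifiers time-share a single shank wire."""
    assert n_channels % m == 0
    return n_channels // m, m * fs_hz

wires, f_mux = shank_wiring(1440, 8, 40_000)
print(wires, f_mux)  # 180 wires at 320 kHz, versus 1440 dedicated lines
```

The eightfold reduction is what lets a fully electrode-covered shank be read out simultaneously within the available routing space.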
This architecture and circuit implementations presented maximize the readout capability of a given inserted shank by simultaneously recording all of the available electrodes. Thus, a probe fully covered with electrodes, which are all simultaneously readable, will provide the neuroscientist with the maximum amount of information for the damage created by the probe insertion. This aspect is a crucial drive for further development, as current neural recordings are done at a scale that is minute when compared to the size of a brain. Furthermore, the high density was shown to improve the performance of spike sorting [16]. This new architecture further opens new ways in the scaling of neural probes by circumventing the barrier of a limited number of shank wires.
The paper is organized as follows: in Section 2 the operation principles of the probe are described in the context of our goal. Section 3 describes the overall architecture and functionality, continuing with further details of specific novelty circuit blocks described in Section 4. In Section 5 we describe the resulting fabricated device, following with the details of the supporting system blocks required to use the probe in Section 6. Test results with both electrical and in vivo measurement are outlined in Section 7, reaching to conclusions in Section 8.
Overview
Active neural probes improve recording quality when compared to the passive versions by buffering or amplifying the input signal close to its source (i.e., the electrode). This approach reduces the source impedance and minimizes the crosstalk caused by the coupling amongst the long and dense neighboring shank wires [2]. In such cases, the electronics under each active electrode (i.e., PA) has strict design constrains.
Within a shank that is fully covered by electrodes, the area available for each amplifier is limited by the electrode size, the power is limited by the acceptable tissue heating, and the noise requirements are imposed by the signal amplitude (as small as tens of µV). In previously active and passive probes, only a fraction of the electrodes present may be read out simultaneously [2,4,15]. Static switches need to be configured before recording (Figure 2a, top), as the amplifiers used have a long settling time required in order to capture neural signals down to <1 Hz, while still rejecting the DC offset of the electrode. This configuration allows for a certain degree of flexibility in choosing which probe area is read out, however the approach is limiting, as it does not give neuroscientists the opportunity of accessing all of the brain areas near the probe simultaneously. Overcoming this limitation is achieved by employing a multiplexing architecture which makes use of new types of amplifiers, capable of operating in such a multiplexed configuration (Figure 2a, bottom), while still maintaining the stability and performance required to record the neural signals.
Noise Folding
Within the limited pixel area, an obvious method to reduce noise is to increase the current consumption of the PA input transistor. This results in the PA having a high bandwidth. Since the neural signal band itself is limited to ~7.5 kHz, the PA output can be sampled at a frequency f_s > 15 kHz in the base. Therefore, a simple time division multiplexing could be embedded within the shank (Figure 2a), allowing M number of PA outputs on a single shank wire (using a sampling frequency f_MUX = M × f_s).
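The folding effect can be illustrated numerically (a generic sketch, not the probe's circuit simulation): wideband amplifier noise sampled directly keeps its full variance in the sampled sequence, while averaging over an integration window before sampling removes most of the out-of-band power. Here the fine grid and window lengths are illustrative stand-ins for the ~4 MHz PA bandwidth and the T_i window.

```python
import numpy as np

rng = np.random.default_rng(0)
step = 125                               # fine samples per multiplexer slot
noise = rng.normal(0.0, 1.0, 500_000)    # stand-in for wideband PA noise

# Direct sampling: all of the wideband noise power folds in-band.
sampled_raw = noise[::step]

# Boxcar-integrate over ~T_i (100 of the 125 fine samples) first:
kernel = np.ones(100) / 100
sampled_int = np.convolve(noise, kernel, mode="valid")[::step]

print(sampled_raw.var(), sampled_int.var())   # ~1.0 vs ~0.01
```

The integrated-then-sampled sequence carries roughly 1/100 of the noise power, which is the point of placing the integrators after the multiplexer.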
However, the lack of a traditional anti-aliasing filter limiting the high PA bandwidth increases the in-band noise (coming from both brain and circuits) due to spectral folding. Since it is not possible to fit low-pass filters within the limited area of the PA (before the sampling operation), we have employed an alternative method of noise reduction by integrating the signal over a period of time (T_i) (Figure 2c) [17]. Since the integration circuits are located after the sampling circuits, they may be placed within the base, not the area-restricted pixels. The integrate, sample, and reset operations strongly attenuate the signal beyond f_i = 1/T_i (f_i ≥ f_MUX), improving the signal-to-noise ratio while allowing certain circuit elements to be shared across multiple channels.
For the current probe design, a multiplexing factor of M = 8 was sufficient to overcome the shank-wire bottleneck and provide sufficient area for power lines and capacitors. To avoid in-band distortion, each channel is oversampled at f_s = 40 kHz (higher than the Nyquist rate of 15 kHz), producing a total multiplexing frequency of 320 kHz. This, in turn, limits the integration period to a maximum of 3.125 µs. We have used T_i = 2.5 µs, using the remaining time for circuit transitions between the adjacent channels selected for multiplexing. This process effectively results in a low-pass operation, strongly reducing the PA bandwidth from ~4 MHz to 400 kHz and limiting the noise folding [18].
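The timing numbers above fit together as follows (pure arithmetic from the stated parameters):

```python
M = 8                      # pixel amplifiers per shank wire
f_s = 40_000               # per-channel sampling rate, Hz
f_mux = M * f_s            # per-wire multiplexing rate: 320 kHz
slot = 1.0 / f_mux         # time available per channel: 3.125 us
T_i = 2.5e-6               # chosen integration window
f_i = 1.0 / T_i            # effective bandwidth after integration: 400 kHz

print(f_mux, slot, f_i)
```

The 0.625 µs left over in each slot (slot − T_i) is the settling margin for switching between adjacent multiplexed channels.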
Power Limitation
One of the most stringent restrictions of an implantable device is on power dissipation and the resulting heating of the nearby biological tissue. In this application, the power budget of the probe is determined by the limited amount of heat that may be dissipated without disturbing or damaging the surrounding brain tissue. For long-term experiments with a chronically implanted probe in the brain, a maximum increase of 1 °C [19] is considered acceptable, while for acute or shorter-term recordings a higher temperature increase may be acceptable. Using finite element method (FEM) simulations with Comsol Multiphysics®, we determined the maximum power that the circuit can dissipate, prior to its design. Thus, we have determined the power budget such that the hottest point in the brain sees at most a 1 °C temperature increase. A similar approach was used in previous designs [2,4], which have already been used in long-term recordings.
The critical part, the implanted shank, is modeled taking into account the variation of the power dissipation across its area, while the base is modeled with uniform power density ( Figure 3a). Furthermore, the complete probe as well as its fixture are modeled along with the skull, dura, and brain, including blood circulation. A model is used to determine the non-uniform power distribution across the shank, taking into account the dissipation of the amplifiers and power lines according to the layout (Figure 3a). The power dissipated in the power lines brings a significant contribution nearing the base. As shown in Figure 3b, the maximum temperature is reached at the edge between the brain and skull, as this area of the shank has the highest power density caused by the highest value of supply voltage as well as the highest current in the supply rail, taking into account the worst-case scenario that is expected from the probe.
According to the simulation results, the power dissipation limits that would produce a 1 °C increase in the tissue temperature are 4.5 mW for the entire implanted shank and 45 mW for the base. These limits were used as design specifications for the circuits: the complete power budget is used in the shank when all of the electrodes are turned ON in order to minimize noise, while the base circuits require less power than allowed, without a penalty on performance. Thus, the presented design further increases the electrode density within the available power and noise limits.
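A quick consistency check against the abstract's per-pixel figure (3 µW) shows that full-probe recording fits inside the shank budget (a back-of-envelope sketch that ignores power-line losses, which the text notes are significant near the base):

```python
pixel_power = 3e-6          # W per pixel amplifier (from the abstract)
n_pixels = 1344 + 12        # recording + reference pixels
shank_budget = 4.5e-3       # W, FEM-derived limit for a 1 degC rise

total = n_pixels * pixel_power
print(round(total * 1e3, 3), "mW")  # ~4.068 mW, under the 4.5 mW budget
```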
Architecture
Taking into account the previously described operating principles, we propose a neural probe architecture, which is described in detail in this and the following sections. Figure 4a shows the block level architecture of the complete probe, including the number of repeated instances for the relevant parts. In an array of eight PAs, the input signals (Vi<1:8>) are connected to each of them individually. The multiplexed output from this array is sent to the base through a shared shank wire. The signal is subsequently fed to an integrator the output of which is demultiplexed (DMUX block) using eight sample-and-hold circuits (Vo<1:8>). Each Vo signal then goes to its corresponding channel block ( Figure 4b) where the signal is further amplified and filtered, keeping only the band of interest. Together, these blocks implement the noise reduction technique described in Section 2.2, with individual blocks further described in Section 4.
The outputs of 20 channels are multiplexed and digitized with the help of a 10-bit successive approximation register (SAR) analog to digital converter (ADC) [4]. The number of multiplexed channels is selected based on the required sample rate per channel (20 kHz) and the performance of the selected ADC architecture.
The digital control block is responsible for generating the internal clocks for the ADCs and the MUX/DMUX blocks from a single external clock source. It also buffers and then serializes the parallel data from all of the ADCs to only six data lines. The number of data lines is a compromise between fewer output lines and a lower clock speed (i.e., lower dissipation in the I/O pads, which are part of the base). All of the channels, PAs and bias parameters are configurable through daisy-chained shift registers. The chip contains 1344 small electrodes (20 µm × 20 µm) and 12 larger electrodes (40 µm × 80 µm) that can be used as a reference. The shank is divided into 12 identical regions, each with a reference electrode in the center and 112 small electrodes around it. The pitch of the small electrodes is given by the compromise between the desired number of recording sites at high density and the signal quality that can be achieved with the available area and power budget. Similar to the internal shank reference electrodes, a 13th Ref-PA block without an exposed electrode contact is used to amplify an external reference signal provided through a bond pad. This block is placed at the beginning of the shank, near the base, to improve matching to the other reference signal amplifiers.
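The data-rate arithmetic behind the 20:1 channel multiplexing and the six serialized data lines can be checked with a short calculation, using only figures quoted in the text (1440 channels, 20 kHz per channel, 10-bit samples); framing and overhead bits are ignored here.

```python
# Back-of-the-envelope data-rate budget for the digital control block.
N_CHANNELS = 1440
FS_PER_CHANNEL = 20_000      # Hz, required sample rate per channel
BITS_PER_SAMPLE = 10         # SAR ADC resolution
N_DATA_LINES = 6

total_rate = N_CHANNELS * FS_PER_CHANNEL * BITS_PER_SAMPLE   # bit/s
rate_per_line = total_rate / N_DATA_LINES

adc_channels = 20                                            # channels per ADC
adc_rate = adc_channels * FS_PER_CHANNEL                     # samples/s per ADC

print(f"aggregate: {total_rate/1e6:.0f} Mbit/s")    # 288 Mbit/s
print(f"per line:  {rate_per_line/1e6:.0f} Mbit/s") # 48 Mbit/s
print(f"per ADC:   {adc_rate/1e3:.0f} kS/s")        # 400 kS/s
```

Fewer lines would push the per-line bit rate (and I/O clock) up proportionally, which is the compromise the text refers to.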
A total of 180 Integrator-DMUX blocks drive the 1440 channels (1:8 ratio), which are digitized by 72 ADCs (1 ADC per 20 channels). The extra channels (1357 and higher) are used for the external reference and for test purposes.
A global bias block contains a band-gap reference and the necessary circuits to generate the required voltages and currents for the chip. Hierarchical and active biasing is used to facilitate the biasing of such a high number of analog blocks spread across the whole base area.
Pixel
The integrator architecture, described in Section 2.2, is split in two parts. Within the limited area of the pixel, the PA acts as a voltage-to-current converter (Figure 5a), while the integration capacitor and sample-and-hold (S/H) circuits forming the de-multiplexer are located in the less area-restricted base. The current from the pixels is first integrated for a fixed period of time (Ti = 2.5 µs) over a capacitor (Ci = 15 pF) that is shared by eight channels. After Ti, the voltage on Ci is sampled and then the capacitor is discharged for the next cycle (Figure 5b). The S/H circuit is followed by a buffer, implemented as a flipped voltage follower [20] and using a deep N-well NMOS transistor. The buffer is necessary for the reference path (Ref DMUX), where one output may connect to multiple channels (Figure 4a). In the signal path, it is primarily used to closely match the reference path.

The PA employs an open-loop, AC-coupled, transconductance (gm) stage (M1). At the end of the DMUX, this produces an overall small-signal gain of 10, given by:

A = gm · Ti / Ci = 10

The cascode transistor, M2, reduces the clock feedthrough from the switches A and B to the gm stage. These switches are operated with temporal overlapping to ensure a constant ON current through M1. These aspects are crucial to maintaining DC operating point stability, as the gate (G) of M1 is a high-impedance node (~TΩ) produced by the high-pass filter. The filter (corner << 1 Hz) is necessary to reject the relatively high input DC level (up to hundreds of mV) produced by the electrode-tissue interface [21], while allowing through neural signals down to 1 Hz. Due to the small value of C1, the two transistors forming the pseudoresistor (M3) are considerably long, taking up a significant area in the pixel.
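A quick sanity check of the integrator gain, assuming the idealized relation A = gm·Ti/Ci (the gm stage's output current integrated on Ci for Ti); the C1 input attenuation and parasitics are neglected, so the resulting transconductance is only an estimate, not a stated design value.

```python
# Estimate the PA transconductance implied by the integrator gain.
Ti = 2.5e-6     # s, integration time (from the text)
Ci = 15e-12     # F, shared integration capacitor (from the text)
A_target = 10   # overall small-signal gain (from the text)

gm = A_target * Ci / Ti          # required transconductance, idealized
print(f"gm ≈ {gm*1e6:.0f} µS")   # ≈ 60 µS
```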
During normal operation, the cascode transistor M4, located between the current source (i.e., the PA) and the integrating capacitor (Ci), ensures that the shank wire connected at the source of M4 is at a constant voltage equal to the supply rail (Vs ≈ 1.2 V). By keeping all of the shank wires at a constant voltage, this approach reduces the crosstalk amongst channels caused by the capacitive coupling of the long shank lines. Furthermore, the shank power dissipation is reduced, as this constant voltage is higher than the average voltage on the top plate of the integrating capacitor, Ci, causing an overall smaller average VDS across M2.
Besides the stringent size restrictions, the layout of the pixel takes several other aspects into account. The pixels are isolated from each other through a dedicated guard ring. Furthermore, the routing within the pixel is such that M3, M1 and M2 are properly shielded from external disturbances caused by the switching elements, A and B, as well as the digital control lines.
With the exception of the high-threshold inverter used for calibration (Section 4.3), the transistors, shown in Figure 5, are thick-oxide transistors, in order to reduce gate leakage and facilitate operation at higher supply levels (i.e., 1.8 V).
Shank Power Supply
The choice of voltage levels for the supply rails of the PA is defined by multiple factors. The power budget (I DC × (V DD − V SS )) determined in Section 2.3, coupled with minimal noise requirement, induces a trade-off between the current through M1 and M2, and their V DS . However, the chosen operating point must account for the drop in the power supply lines across the shank. A 0.6 V supply voltage was found to be optimal. By using V SS = 1.2 V and V DD = 1.8 V, the current can be directly integrated over C i (within the range of 0 to 1.2 V), eliminating the need for a negative supply or mirroring circuitry. Furthermore, since the 1.2 V rail is used by the following stages, the current from the unselected pixels (switch A closed) can be fed back into the 1.2 V rails of other blocks in the base, reducing the overall consumption of the probe.
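The supply choice together with the thermal limit from Section 2.3 fixes the total current available to the shank. The arithmetic below uses the quoted figures; the even split across PAs is a simplifying assumption (in practice some budget goes to bias circuits).

```python
# Shank current budget implied by the 4.5 mW thermal limit and the
# 0.6 V PA supply (VDD = 1.8 V, VSS = 1.2 V).
P_SHANK = 4.5e-3            # W, dissipation limit for the implanted shank
V_SUPPLY = 1.8 - 1.2        # V, PA supply headroom
N_PA = 1356                 # pixel amplifiers (1344 small + 12 reference sites)

I_total = P_SHANK / V_SUPPLY        # total shank current
I_per_pa = I_total / N_PA           # assumed even split
print(f"total: {I_total*1e3:.1f} mA, per PA: {I_per_pa*1e6:.2f} µA")
```

This is consistent with the ~3 µW/PA (~5 µA at 0.6 V) reported in the measurements of Section 7.1.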
The extremely high aspect ratio of the shank (80:1), along with the limited area for supply routing (due to the large number of signal wires), results in a high voltage drop of ~120 mV across each of the shank power supply lines (Figure 6a). Two complementary solutions mitigate the negative consequences of this voltage drop. First, the gate bias, Vb, is generated locally and periodically across the shank using reference currents from the base, as the limited space does not allow for a more complex solution that is sufficiently accurate. There are 12 bias circuits, one for each of the 12 regions of the probe shank. Still, the voltage drop experienced within the same bias group creates sufficient differences among the PA bias voltages (∆Vb ≈ ∆VDDG/2) to affect the operation performance. To further mitigate this issue, a tree structure for the supply line is implemented by splitting the shank pixel amplifiers into branches, one for each of the 12 regions (serving 113 PAs each), each powered from the same bus through a single connection. Here, each half of a branch experiences an insignificant supply drop (∆Vb ≈ 0 V), due to a much lower consumption (i.e., only 56 pixels). This results in a more controlled bias current amongst different pixels in a group.

Figure 6. (a) With a single power supply line, the high current in the supply rail (consumed by all PAs) causes a voltage drop within a bias region (∆VDDG); (b) a tree-like power supply ensures that the supply change ∆VDL is close to zero within each region, due to the lower current in the local rail. Each region contains its dedicated local bias generator.
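The cumulative IR drop along a rail loaded by evenly spaced amplifiers can be illustrated with a toy model: the drop across each segment is the current still flowing past it times the segment resistance. The line resistance and per-tap current below are illustrative placeholders, not design values from the text.

```python
# Toy IR-drop model of a shank supply rail with evenly spaced loads.
def ir_drop_profile(n_taps, r_line_total, i_per_tap):
    """Cumulative voltage drop at each tap along the rail."""
    r_seg = r_line_total / n_taps
    drops, v = [], 0.0
    for k in range(n_taps):
        remaining = (n_taps - k) * i_per_tap   # current still flowing past segment k
        v += remaining * r_seg
        drops.append(v)
    return drops

# One 113-PA region, hypothetical 24 Ω total rail resistance, 5 µA per PA.
drops = ir_drop_profile(n_taps=113, r_line_total=24.0, i_per_tap=5e-6)
print(f"drop at far end: {drops[-1]*1e3:.2f} mV")
```

The far-end drop works out to I_total·R·(n+1)/(2n), roughly half of what a lumped load at the tip would produce — which is why splitting the load into short branches fed from a low-current bus keeps ∆Vb near zero within a branch.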
The power rails of the shank are carried over the top metal layers, 5 and 6, as shown in Figure 1c. A minimal amount of power supply decoupling is provided by using the two power rails to form the two layers of a metal-insulator-metal (MIM) capacitor, which is distributed across the shank (total 80 pF) and does not consume extra area. The input capacitor C1 of the PA (Figure 5) is also implemented on the top two metal layers, resulting in a trade-off between the area used for power rails and the input capacitance, which influences the noise performance. Additional decoupling capacitors for the shank power supply are present in the base and shank neck, as well as external components on the outside of the probe, in proximity to the power pins.
Calibration and Reset
The pixel circuits offer additional features that can be activated independently per pixel, one at a time, without requiring a dedicated memory element such as a shift register.
Both gain calibration (CAL) and electrode impedance characterization (IMP) are activated through switch E in the pixel (Figure 5a). By applying a known voltage (via the CAL/IMP port) while the electrode is floating (not connected to a sample or solution, e.g., before implantation), the end-to-end gain can be measured and calibrated. Similarly, the electrode-tissue interface impedance can be characterized by applying a known current from the circuit side while the probe is submerged in a grounded saline solution, and measuring the voltage that develops at the pixel input. Since this measurement requires the connection of a single PA input to the shared CAL/IMP signal, the selection of the corresponding switch E is done by temporarily lowering the wire voltage Vs to ~0.8 V by controlling the cascode voltage Vc2 when its switch B is ON. This triggers a high-threshold inverter only within the selected PA, thus setting switch E, while allowing normal operation of the PA, albeit with a higher power dissipation due to a higher VDS. This method of using the output line simultaneously as a select signal eliminates the need for dedicated registers within the PA, which would take up valuable area. The signals required for calibration are generated externally by the headstage, as described in Section 6.
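The impedance-characterization math itself is simple phasor division: inject a known current, read back the complex voltage at the pixel input. A minimal sketch; the current and voltage values below are illustrative, not measured data from the text.

```python
# Electrode impedance from an applied test current and the measured voltage.
import cmath

def electrode_impedance(v_measured, i_applied):
    """Return (|Z|, phase in degrees) from complex voltage/current phasors."""
    z = v_measured / i_applied
    return abs(z), cmath.phase(z) * 180 / cmath.pi

# Hypothetical 1 kHz measurement: 10 nA injected, 0.48 mV at -0.2 rad read back.
mag, ph = electrode_impedance(v_measured=4.8e-4 * cmath.exp(-1j * 0.2),
                              i_applied=10e-9)
print(f"|Z| = {mag/1e3:.1f} kΩ, phase = {ph:.1f}°")
```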
The low-frequency high-pass corner (<1 Hz) formed by the AC coupling filter leads to a significant settling time at startup or in response to large voltages induced by nearby brain stimulation. To reduce the time needed to reach a steady state, the filter resistor (M3) can be shorted using switch F, resulting in a settling time on the order of microseconds. Furthermore, this can be used in conjunction with optical stimulation, as the pseudoresistor is a light-sensitive structure. By preemptively activating the reset before the light pulse and releasing it after, the pixel can avoid being affected. Similar to switch E, this switch is controlled by the voltage present on the output line. Specifically, the switch is closed and a PA reset is triggered by a logic low level (<0.6 V), achieved by controlling the wire voltage Vs.
Recording Performance
Although small-scale designs have been proposed [22], multiplexing fast enough to capture the full signal band on the shank has not previously been demonstrated at a large scale, as it poses a multitude of challenges. Due to the switching nature of the circuits, and in order to maintain proper operation under large voltage drops while also accounting for supply ripple on the highly resistive power lines, any 6 of the 12 shank regions can be turned ON simultaneously without an additional penalty on noise or power dissipation (half-probe recording). This permits recording with good noise performance from six arbitrary regions on the 8-mm shank (~0.7 mm each, covering 4 mm), which is sufficient for covering multiple regions of a rat brain.
Moreover, the design also supports the simultaneous readout from all of the electrodes on the shank (1356) by featuring 1440 channels in the base (full-probe recording). This recording scenario comes with increased noise due to the very small amount of decoupling capacitance in the shank, as insufficient area is available to properly filter the power rails at the higher current consumption. However, as illustrated in Section 7.2, these recordings still provide a sufficient signal-to-noise ratio (SNR) for accurate analysis of the data.
Channel
Each channel receives a signal (Sx) and reference (Rx) line, from the corresponding DMUX (Figure 4a) that feeds the instrumentation amplifier (IA). The referencing and differential amplification allows for improvement of the common mode rejection ratio (CMRR). The reference (REF) line can be selected from (i) one of the local reference PA (Ref-PA), (ii) a few locally averaged Ref-PAs, or (iii) an external signal. The various reference signals facilitate the recording of different brain signals: action potentials (AP) and local field potential (LFP) have different spatial resolution and may benefit from a local or global reference (e.g., a screw attached to the animal skull), depending on specific recording conditions [23]. Furthermore, single ended operation is possible, which along with the readout of the reference channels, enables software referencing, potentially resulting in improved signal quality [23].
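Software referencing, enabled by the single-ended readout and the recorded reference channels, can be sketched offline as subtracting a common signal across channels. The median reference shown here is one common choice, not necessarily the method used by the authors.

```python
# Offline software re-referencing: subtract a common-mode estimate per sample.
import numpy as np

def rereference(data, mode="median"):
    """data: (n_channels, n_samples) array; return data minus a common reference."""
    ref = np.median(data, axis=0) if mode == "median" else data.mean(axis=0)
    return data - ref

# Synthetic demo: 32 channels sharing a large common-mode interference.
rng = np.random.default_rng(0)
common = rng.normal(size=1000)                          # shared interference
data = rng.normal(scale=0.1, size=(32, 1000)) + common  # per-channel noise + common
clean = rereference(data)
print(f"std before: {data.std():.3f}, after: {clean.std():.3f}")
```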
In order to preserve circuit symmetry and avoid distortions, each Ref-PA is de-multiplexed to eight outputs, such that for each channel the two inputs of the IA are de-multiplexed (i.e., sampled) simultaneously.
By providing a gain of 10, the integrator also relaxes the noise budget of the IA. The IA is implemented using an AC-coupled folded-cascode operational transconductance amplifier (OTA), with the bandwidth being limited to ~15 kHz. This prevents aliasing from the subsequent switched-capacitor (SC) band-select filter.
The SC filter is implemented as a first order RC-filter and operates at 80 kHz. Through a selection of switches (Figure 4b), it can be configured as high pass, low pass, or disabled. Furthermore, the corner can be programmed through the change in capacitance value to either 300, 500 or 1000 Hz. This allows a selection of the action potential band (AP: 300/500/1000 Hz to 7.5 kHz), the local field potential band (LFP: <1 Hz to 300/500/1000 Hz) or the full band (<1 Hz to 7.5 kHz), respectively, by bypassing the filter.
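The programmable corner follows from the standard switched-capacitor resistor emulation: a capacitor Cs switched at f_clk behaves as R = 1/(f_clk·Cs), giving a first-order corner fc = f_clk·Cs/(2π·Cf). The capacitor values below are hypothetical, chosen only to reproduce the quoted corners at the 80 kHz clock; the actual on-chip values are not given in the text.

```python
# First-order switched-capacitor filter corner from the SC-resistor relation.
import math

def sc_corner(f_clk, cs, cf):
    r_eq = 1.0 / (f_clk * cs)            # emulated resistance
    return 1.0 / (2 * math.pi * r_eq * cf)

F_CLK = 80e3                             # Hz, SC filter clock (from the text)
CF = 4.25e-12                            # F, hypothetical integrating capacitor
for cs in (0.1e-12, 0.167e-12, 0.334e-12):   # hypothetical switched capacitors
    print(f"Cs = {cs*1e15:.0f} fF -> fc ≈ {sc_corner(F_CLK, cs, CF):.0f} Hz")
```

Because the corner depends only on a clock frequency and a capacitor ratio, it is well controlled over process variation, which is why the programmable corners (300/500/1000 Hz) are implemented by switching capacitance values.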
A programmable gain amplifier (PGA) follows the filter and provides eight configurable gains between 1 and 50. The PGA is DC coupled to the previous stage and uses a capacitive feedback to provide the variable AC gain, while the DC gain is 1. The role of the PGA is to maximize the utilization of the dynamic range of the ADC, since neural signals will vary in amplitude based on the selected band and brain region. After the PGA, the signal passes through an anti-aliasing filter and is buffered, prior to being multiplexed and fed into the ADC. A class-AB ADC driver is used to reduce the static power consumption.
Each channel allows for independent band selection, gain configuration, reference selection, calibration selection, and power down through a chain of shift registers distributed across the chip.

Figure 7a shows the chip photograph and details of the shank and electrodes after fabrication. The probes were fabricated using a 6M1P 0.13 µm Al CMOS technology, and a 200-mm fab-compatible post-CMOS process was used for electrode deposition. The shank is 9 mm long, including a 1 mm neck, and 100 µm wide. A reliable shank thickness of 50 ± 3 µm and low bending of <100 µm were achieved by combining Si3N4 stress compensation with wafer backside thinning and polishing. The front-side deep Si etch process defining the shank outline was optimized to achieve very smooth shank etch walls for minimal damage during implantation in the rodent brain. The tip has a length of 300 µm and a sharp opening angle of 20°, a geometry targeting low tissue damage [24]. The probe base measures 11.9 mm × 13.5 mm (width × height) with a thickness of 50 µm (Figure 7a), the same as the shank, which is achieved through full-wafer thinning. To achieve the low-impedance and biocompatible TiN electrodes, a scalable and CMOS-compatible process was used. The 20 µm × 20 µm electrodes are arranged in a 4 × 336 array, with periodic interruptions for 12 large 20 µm × 80 µm reference electrodes (Figure 7b). Such a uniform arrangement of small electrodes covering the full shank allows for the capturing of spikes of single neurons with high spatial resolution. The center-to-center distance of neighboring small sites is 22.5 µm, as shown in Figure 7c. If the electrode pitch is maintained, mask changes can allow for smaller or differently shaped electrodes. The large reference electrodes are not a requirement; however, they were purposely designed based on the neuroscientists' recommendation, as larger sites average the spikes around the reference, improving the recording quality.
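The recording-site geometry can be captured as a coordinate map, which is also how such layouts are typically fed to spike-sorting software. The origin and row/column orientation below are arbitrary choices, and the map ignores the periodic reference-electrode interruptions.

```python
# Coordinate map of the 4 x 336 small-electrode grid at 22.5 µm pitch.
PITCH_UM = 22.5
N_COLS, N_ROWS = 4, 336

sites = [(col * PITCH_UM, row * PITCH_UM)
         for row in range(N_ROWS) for col in range(N_COLS)]

span_um = (N_ROWS - 1) * PITCH_UM
print(f"{len(sites)} sites spanning {span_um/1000:.2f} mm along the shank")
```

The resulting ~7.5 mm span matches the extent of brain tissue covered in the in-vivo recordings of Section 7.2.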
Multiple vias are used to connect the electrodes to the top CMOS metal line, which results in an increase in surface area and thus a reduction in the electrode impedance. The average electrode impedance for the 20 µm × 20 µm sites measured at 1 kHz in phosphate-buffered saline (PBS) of pH 7.4 was 48.1 ± 2.5 kΩ. After post-CMOS processing, the final probes are wire-bonded onto custom PCBs (Figure 8b). The probe base was covered by a metal-coated Si spacer that acts as a light-shield and reference surface during implantation. The bond-wires are finally sealed in a black bio-compatible epoxy (Master Bond EP42HT-2MED, Hackensack, NJ 0761, USA).
System
Due to the design constraints described in Sections 2.2 and 2.3 regarding the chip dimensions and power dissipation in close proximity to the brain, certain functions need to be pushed off-chip. As a result, auxiliary circuitry is present on a small PCB (printed circuit board), called a headstage (Figure 8b), which is placed in the vicinity of the neural probe.
The probe is wire bonded directly on a short and thin PCB, which attaches to the headstage through a zero insertion force (ZIF) connector (Figure 8b). The size of this short PCB is adaptable to the application and may be made flexible. The small 20 mm × 22 mm headstage weighs 1.25 g and connects to a back-end FPGA development board through a 3 m, flexible, dual micro-coax cable. The cable is selected for maximum flexibility and low weight (3.5 g/m) to minimize the strain in freely moving animal experiments.
To provide a reliable and high speed data link between the headstage and back-end, a dedicated gigabit multimedia serial link serializer IC (MAX9271) and its corresponding de-serializer (MAX9272A) are used. The pair of ICs provide high speed data link with low power consumption and include error correction and detection codes for a high reliability data path.
Data communication between the serializer and deserializer is provided through a high bandwidth, unidirectional connection used for streaming the neural data, as well as a low bandwidth, bi-directional serial link used for the controlling and the configuration of the neural probe. Both connections are carried out across the same coaxial cable by the serializer and de-serializer pair.
At the probe end, the headstage contains a small FPGA that is used for managing the neural probe configuration as well as generating the clock and analog calibration signals through an external digital-to-analog converter (DAC). The presence of a DAC on the headstage gives the neuroscientists the flexibility to envision other usages for the calibration or impedance measurement circuits.
The headstage and neural probe are powered using the second micro coaxial cable. Multiple low noise, low drop voltage regulators are used to generate the required power rails on the headstage.
At the back end, the system uses an off-the-shelf Xilinx Kintex 7 FPGA development board with an attached mezzanine PCB containing the de-serializer IC (Figure 8a,b).
A Gigabit Ethernet connection is used between the system and a PC to stream the data and control the probe. This connection allows for an increased distance to recording equipment, ground separation, as well as data splitting (i.e., sending data to multiple computers).
The FPGA development board provides 27 s of data buffering using the onboard RAM as well as preprocessing (including real time gain calibration). Additionally, 16 external digital signals are recorded simultaneously with the neural data to allow for synchronization with various external equipment. Furthermore, sufficient resources for closed loop neuroscience experiments are left available at the user's disposal on the back-end FPGA development board.
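The size of the 27 s buffer follows from the raw sample stream (1440 channels × 20 kHz × 10 bit). A padded 16-bit framing is also shown for comparison, as an assumption — the actual on-FPGA framing is not specified in the text.

```python
# Memory required for 27 s of buffered neural data.
N_CH, FS, BITS = 1440, 20_000, 10
BUFFER_S = 27

raw_bits = N_CH * FS * BITS * BUFFER_S          # 10-bit samples, tightly packed
padded_bits = N_CH * FS * 16 * BUFFER_S         # if samples were padded to 16 bit

print(f"raw:    {raw_bits/8/2**30:.2f} GiB")
print(f"16-bit: {padded_bits/8/2**30:.2f} GiB")
```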
Electrical Performance
Measurements were performed in a dark Faraday cage, using phosphate buffered saline solution to contact the electrodes (Figure 8a). The total power consumption is 31 mW for 678 channels, with 2.3 mW dissipated in the shank (3 µW/PA), and 28.7 mW in the base, including data transmission with 4 pF loading.
In half-probe recording mode (678 channels), the total input-referred noise, including the electrodes and using the broadest band, is 12.4 ± 0.9 µVrms in the AP band (300 Hz–7.5 kHz) and 50.2 ± 12 µVrms in the LFP band (1 Hz–1 kHz), as shown in Figure 9. A reduction of LFP noise is possible by software-averaging multiple channels, as LFP signals have low spatial resolution. In full-probe recording, 1356 channels can be simultaneously turned on for lower-fidelity recording purposes, in which case the noise may increase by up to 2.5 times, as explained in Section 4.4.

Figure 9. Adapted from [5]. Measurement results in half-probe readout, omitting the small number of defective channels; (a) distribution of noise in the AP and (b) LFP band; (c) noise density in the AP band (300 Hz–7.5 kHz) and LFP band (1 Hz–1 kHz); (d) different filter corner configurations, considering a fixed total gain of 1000; the LFP high-pass corner is below 1 Hz and not visible; (e) full-probe readout and half-probe readout allowing 6 random regions out of 12 to be active.
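As a cross-check on the reported figures, the thermal noise of the 48 kΩ electrode itself over the AP band can be estimated from v_n = sqrt(4·k·T·R·BW); treating the measured impedance magnitude as a purely resistive source is a simplification.

```python
# Thermal-noise floor of the electrode alone over the AP band.
import math

K_B = 1.380649e-23    # J/K, Boltzmann constant
T = 310.0             # K, body temperature
R = 48.1e3            # Ω, measured electrode impedance at 1 kHz (from the text)
BW = 7.5e3 - 300      # Hz, AP band

v_n = math.sqrt(4 * K_B * T * R * BW)
print(f"electrode thermal noise ≈ {v_n*1e6:.1f} µVrms")
```

At roughly 2–3 µVrms, the electrode is a minor contributor to the 12.4 µVrms total; the budget is dominated by the circuit noise.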
The crosstalk across the full signal chain is −63 dB at 1 kHz, with the measurement being limited by the noise floor. Table 1 compares this work with prominent passive and active neural probes, showing up to a 3.5 times increase in the total number of channels compared to the state of the art, while maintaining similar performance when using half-probe recording.
In-Vivo Neural Recordings
We performed in vivo recordings in the brain of anesthetized rats to validate the CMOS probes. All of the animal experiments were performed according to the EC Council Directive of 24 November 1986 (86/89/EEC) and all procedures were reviewed and approved by the local ethical committee and the Hungarian Central Agricultural Office (license number: PEI/001/695-9/2015).
For the acute experiments, Wistar rats (n = 5, body weight: 270–450 g, gender balanced) were anesthetized with an intramuscular injection of a ketamine/xylazine (KX) mixture (37.5 mg/mL ketamine and 5 mg/mL xylazine at 0.2 mL/100 g body weight injection volume). A craniotomy with an area of 3 × 3 mm² was drilled over the left hemisphere, then a small piece of the dura mater was removed above the target site (anterior-posterior: −2.5 mm; medial-lateral: 3 mm, with reference to bregma; Figure 10c, [27]). Before insertion, the probe was connected to the headstage, which was mounted on a stereotaxic micromanipulator (David Kopf Instruments, Tujunga, CA, USA). After that, the CMOS probe was driven into the brain tissue to a depth of 6.5–7.5 mm either manually (insertion rate: ~0.1 mm/s, n = 3 insertions) or using a motorized stereotaxic device (Neurostar GmbH, Tübingen, Germany) with a slow insertion rate (~2 µm/s, n = 2 insertions). The targeted brain areas were the trunk region of the somatosensory cortex and the underlying hippocampal and thalamic areas (Figure 10c). In the latter area, we could record activity simultaneously from various thalamic nuclei (e.g., nucleus reticularis thalami, ventrobasal complex). A stainless steel needle inserted in the nuchal muscle of the animal served as the external reference electrode during the recordings.

Figure 10. (a) Local field potentials (LFP) simultaneously recorded from the neocortex (red), hippocampus (green), and thalamus (blue). Traces were obtained from the raw data recorded in LFP mode (internal reference, gain 500, low-pass 500 Hz); (b) multi- and single-unit activity recorded simultaneously from the neocortex (red), hippocampus (green), and thalamus (blue). Traces were recorded in action potential (AP) mode (internal reference, gain 1000, high-pass 500 Hz). Dashed and dotted boxes indicate neocortical/thalamic up-states (U) and down-states (D); (c) schematic of a coronal rat brain section indicating the estimated position of neural recordings; (d) fast Fourier transform (FFT) plot of the recorded neural activity showing the dominant brain rhythms in the investigated brain areas during ketamine/xylazine anesthesia. Note that slow wave activity (1–1.5 Hz) appeared in all three brain structures, while high gamma activity (30–40 Hz) was present only in the hippocampus.
The hardware and software components of the electrophysiological recording system and the CMOS probe have been tested successfully; spontaneous local field potentials (LFP), multi-and single-unit activity (MUA and SUA, respectively) could be recorded from neocortical, hippocampal and thalamic locations of the rat brain (Figure 10a,b,d).
Furthermore, this provides an initial confirmation of the thermal model described in Section 2.3, as no degradation due to overheating was observed. However, since these recordings were short term, a more thorough evaluation through signal quality monitoring over longer experiments is needed. A brain rhythm, the so-called slow wave activity (SWA), with a characteristic peak frequency of about 1 Hz, which emerges in the thalamocortical system of rats during KX anesthesia (e.g., Figure 10a,d), was used as a benchmark to verify the recorded brain signals [28]. During SWA, the rhythmic alternation of two phases, both with a duration of a few hundred milliseconds, can be observed in the neocortex and in various thalamic nuclei: up-states with high spiking activity and down-states with ceased action potential (AP) firing [28]. These two states could be clearly recognized in the cortical and thalamic recordings acquired in AP mode from the brain tissue of the anesthetized rats (Figure 10b). Furthermore, the neocortical depth profile of the SWA constructed from LFP and MUA traces was found to be comparable to our previous findings obtained with a laminar 24-channel passive silicon probe in the somatosensory cortex of KX-anesthetized rats [29]. In the hippocampus, besides the SWA, another dominant brain oscillation can be detected during KX-induced anesthesia: 30–40 Hz gamma activity [30]. This KX-induced hippocampal gamma activity is indicated in the power spectrum (computed from a hippocampal trace recorded in LFP mode) by an increased spectral power in the frequency range of 20–40 Hz (Figure 10d).
By using full-probe recording, we were able to record brain electrical activity from more than 1250 electrodes simultaneously (Figure 11). This allowed us to monitor the spiking activity during SWA in the neocortex and in various nuclei of the thalamus at the same time, with both high spatial and temporal resolution. The SWA, which is thought to be generated in the thalamocortical network, has a complex spatiotemporal dynamic, with the underlying mechanisms still poorly understood due to the lack of appropriate apparatus to record brain activity from multiple, large areas of the neocortex and thalamus simultaneously. Therefore, high-channel-count, high-density neural probes have great potential to significantly further our knowledge of the SWA in the near future.

Figure 11. Representative spiking activity across more than 1250 channels of the probe shank, spanning approximately 7.5 mm of brain tissue. The raw data is shown. The spike-map was constructed from 1 s of data recorded in AP mode; the time series of each channel's data is plotted as a horizontal line, using brightness to encode the absolute amplitude, with darker areas indicating neural spiking activity. Ketamine/xylazine anesthesia induces slow wave activity (with a peak frequency of 1–1.5 Hz) or delta rhythm (1.5–4 Hz) in the neocortex and thalamus, which can be observed as a rhythmic alternation of high and low spiking activity. Notes: the first ~90 channels are not displayed, as they were outside of the brain and only recorded noise; the figure requires one line per channel (~1250), therefore the resolution of the provided image was scaled down. Occasionally, neurons near a reference electrode may spike, causing a line to be displayed on all channels using that specific local reference; such artefacts can be eliminated during offline processing.
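A spike-map of the kind shown in Figure 11 can be sketched as one image row per channel, with brightness encoding absolute amplitude. The normalization used here (global maximum, inverted so spikes appear dark) is an assumption; the paper does not specify its exact mapping. Synthetic data stands in for a recording.

```python
# Minimal spike-map rendering: (n_channels, n_samples) -> grayscale image.
import numpy as np

def spike_map(data):
    """Return a 0..1 image where dark pixels mark large |amplitude|."""
    amp = np.abs(data)
    amp = amp / (amp.max() + 1e-12)   # normalize to 0..1
    return 1.0 - amp                  # invert: spikes render dark

rng = np.random.default_rng(1)
data = rng.normal(scale=5.0, size=(1250, 2_000))   # ~0.1 s at 20 kHz, synthetic
img = spike_map(data)
print(img.shape, float(img.min()), float(img.max()))
```

The image could then be saved or displayed with any plotting library; downscaling along the time axis, as done for the published figure, is a separate resampling step.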
One of the fundamental analysis methods in the field of neuroscience is the examination of the spiking activity of individual neurons and the correlation of their activity with different brain states, external stimuli, or certain behaviors. Since the mammalian brain contains millions of neurons, it is essential to record the simultaneous activity of as many neurons as possible. State-of-the-art silicon-based probes can monitor the spikes of several dozen to a few hundred neurons at once [31][32][33]. To assess the single-unit yield of the CMOS probe, we performed spike sorting on the data recorded in AP mode using software capable of processing high-channel-count recordings [34]. Full-probe recordings obtained from three of five rats were analyzed. In total, 247 well-separable single units were sorted from the neocortex (mean ± standard deviation (SD) of neuron clusters, 29.67 ± 10.5) and the thalamus (52.67 ± 21.39; range, 34–76). The peak-to-peak amplitude of the mean spike waveform of these units usually exceeded 100 µV, suggesting good separation from other neuron clusters and the background activity (neocortex, mean ± SD, 293.23 ± 138.31 µV; range, 96–786 µV; thalamus, 268.86 ± 117.98 µV, 100–661 µV). Using a less conservative sorting approach (including units with spike amplitudes below 100 µV, but still with clear refractory periods on their autocorrelograms) would yield an additional two dozen neuron clusters in both structures. Therefore, by using full-probe recording, the activity of about a hundred or more neurons can be monitored simultaneously with a single CMOS probe. Using multiple probes in the same animal at the same time might further increase the unit yield. However, it is important to note that several factors may influence the number of separable single units, e.g., the actual brain state, the investigated brain areas, the spike sorting method used, or the tissue damage caused during probe implantation.
Furthermore, we used relatively short recordings (~5 min) for spike sorting, so a significant number of neurons with low firing rates might have been omitted. Hence, the single-unit yield provided here is likely an underestimate of the actual unit number.
To quantitatively assess the quality of the isolated neuron clusters, we calculated two measures commonly used for this purpose: the isolation distance and the percentage of spikes violating the absolute refractory period (<2 ms) of neurons (Figure 12, [35,36]). Furthermore, we also computed these measures for neuron clusters (n = 101) obtained from data recorded with passive silicon probes (laminar (A1x32-6mm-50-177) or Buzsaki64-type probes from NeuroNexus Technologies) from the somatosensory cortex of rats. Four 10-min-long recording files were analyzed, which were acquired either under ketamine/xylazine anesthesia (n = 3) or under urethane anesthesia (n = 1). Isolation distance values of single units recorded with the CMOS probe were significantly higher than those of neuron clusters recorded with traditional silicon probes (p < 0.001, Student's t-test). Furthermore, although the difference between the active and passive probe data was significant in terms of the second measure as well (p < 0.01, Student's t-test), most of the neuron clusters had refractory period violations below 1%, suggesting that the majority of clusters contained only a low number of spikes fired by other neurons. These results suggest that the CMOS probe is capable of recording single-unit activity with a quality as good as, or even better than, traditional silicon probes.

Figure 12. Cluster quality metrics (isolation distance and refractory period violations) calculated for single units recorded with the CMOS probe (n = 247) and with passive silicon probes (n = 101). Red line: median; blue box: 1st quartile–3rd quartile; whiskers: 1.5× interquartile range above and below the box; green dots: outliers.
Extreme outliers are not displayed (isolation distance: 12 data points from the CMOS probe data ranging from 183 to 475; refractory period violations: 22 data points from the CMOS probe data ranging from 2.2 to 13.1 percent and 3 data points from the passive silicon probe data ranging from 2.8 to 4.2 percent). **: p < 0.01; ***: p < 0.001.
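These two cluster quality metrics are straightforward to compute. The sketch below is not the authors' analysis code but a minimal illustration, assuming each cluster is given as spike timestamps (in ms) and per-spike feature vectors (e.g., waveform principal components), and taking the isolation distance as the squared Mahalanobis distance, with respect to the cluster's own statistics, of the n-th closest spike outside the cluster.

```python
import numpy as np

def refractory_violation_pct(spike_times_ms, refractory_ms=2.0):
    """Percentage of inter-spike intervals shorter than the refractory period."""
    isis = np.diff(np.sort(np.asarray(spike_times_ms, dtype=float)))
    if isis.size == 0:
        return 0.0
    return 100.0 * np.count_nonzero(isis < refractory_ms) / isis.size

def isolation_distance(cluster_feats, noise_feats):
    """Isolation distance: for a cluster of n spikes, the squared Mahalanobis
    distance (using the cluster's mean and covariance) of the n-th closest
    spike *outside* the cluster."""
    n = cluster_feats.shape[0]
    if noise_feats.shape[0] < n:
        return np.inf  # metric is undefined when too few outside spikes exist
    mu = cluster_feats.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(cluster_feats, rowvar=False))
    d = noise_feats - mu
    md2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)  # squared Mahalanobis distances
    return np.sort(md2)[n - 1]
```

A higher isolation distance means the cluster is better separated from the remaining spikes, while a low violation percentage indicates few contaminating spikes.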
The high spatial resolution of the probe allows for spikes of the same neuron to be recorded on multiple, adjacent electrodes, providing a two-dimensional map of the neuron's spike waveform with both high spatial and temporal resolution (Figure 13). The mean spike waveforms of a putative pyramidal cell calculated from the sorted spikes of the single unit recorded on 4 × 14 electrodes are shown in Figure 13a. Individual spikes of the isolated neuron cluster recorded on a single electrode (Figure 13b) and its autocorrelogram (Figure 13c) indicate good unit separation quality. Based on color-coded maps constructed from the two-dimensional mean spike waveform of the neuron (Figure 13d), the backpropagation of the AP into the apical dendritic shaft (propagation of the red patch in Figure 13d that corresponds to the negative peak of the spike waveform) could be observed during the time course of the action potential, a phenomenon typical of pyramidal cells [37]. Furthermore, these probes produce data similar to those recorded in the more traditional way, using passive silicon probes with an analogous layout of recording sites [38,39] and external amplifiers. In conclusion, our results suggest that the CMOS probe system may provide valuable neural data from multiple brain sites of rodents with high spatial resolution. High-resolution electrical images of action potentials provided by these probes allow for the detailed examination of the spatiotemporal dynamics of spikes recorded in vivo and, in the near future, may be applied to identify various types of neocortical neurons.

Figure 13. (a) The mean spike waveforms of a putative neocortical pyramidal cell captured on 4 × 14 electrodes. The waveform with the largest peak-to-peak amplitude is colored red; (b) individual spikes (waveforms in gray color, n = 90) of the same pyramidal cell recorded by the electrode corresponding to the red waveform in panel a. The mean spike waveform is displayed in red color; (c) the autocorrelogram of the demonstrated pyramidal neuron (bin size: 1 ms). The two peaks indicate burst firing (multiple spikes fired in rapid succession); (d) color-coded potential distribution maps corresponding to different time points of the mean spike waveform. The maps are visualized according to the layout of the 4 × 14 electrodes. The potential map corresponding to the time point of the negative peak of the mean spike waveform shown in panel b is indicated with an asterisk. Note the temporal propagation of the negative peak of the spike (red patch) from lower electrodes to upper electrodes. The spikes of the neuron were recorded in AP mode (internal reference, gain 500, high-pass 500 Hz).
Conclusions
Attempting to multiplex active electrodes on a long and narrow shank in order to increase the number of simultaneous readout channels comes with a series of drawbacks and limitations. By implementing various innovative circuit design techniques (required to mitigate power-supply drop and ripple, bias-generation issues, filter and amplifier instability, as well as noise folding) we have succeeded in designing a new type of neural amplifier. With the help of this amplifier, we have demonstrated the first high-density, multiplexed active neural probe capable of simultaneously recording from the complete set of electrodes present on the shank.
As such, this work demonstrates an active neural probe featuring 1356 simultaneous recording channels, a 3.5-fold increase over the state of the art. Extensive in vivo probe validation has been carried out to demonstrate the expected capabilities of the device.
By providing the possibility to record the entire length of the shank as well as providing high density and increased electrode count, this novel active neural probe opens the possibility of new types of neuroscience observations, as demonstrated briefly in the captured in vivo data.
Zeros of Dirichlet $L$-functions near the critical line
We prove an upper bound on the density of zeros very close to the critical line of the family of Dirichlet $L$-functions of modulus $q$ at height $T$. To do this, we derive an asymptotic for the twisted second moment of Dirichlet $L$-functions uniformly in $q$ and $t$. As a second application of the asymptotic formula we prove that, for every integer $q$, at least $38.2\%$ of zeros of the primitive Dirichlet $L$-functions of modulus $q$ lie on the critical line.
This improves Montgomery's result in the range
This result is proved using an asymptotic for the second moment of the Dirichlet L-functions in both the q and the t-aspect, twisted by a mollifier. As the Dirichlet L-functions have different functional equations depending on whether their associated Dirichlet character is odd or even (i.e. odd characters satisfy χ(−1) = −1, while even characters satisfy χ(−1) = 1), we split the sum over the characters into sums over the odd and the even characters. The sum over the odd characters and the sum over the even characters are denoted as Σ − χ (mod q) and Σ + χ (mod q) respectively. For the sake of simplicity, we focus on just the even sum and then address the minor differences in proof needed for the odd sum in Section 2.5. In total, there are φ * (q) primitive characters of modulus q. To distinguish the principal character of modulus q, we write it as χ 0,q . Theorem 1.2. Let q be a positive integer with T ≫ q ǫ . Let ψ(t) be a smooth real valued function supported on [1,2] with ψ (j) (t) ≪ T ǫ . Let α, β ∈ C satisfy α, β ≪ log log(qT )/ log(qT ). Suppose that 1/2 < κ < 1/2 + 1/66. For all n ∈ N, α n , β n ∈ C such that α n , β n ≪ n ǫ , By introducing the small shifts α and β, we not only derive a more general result, but calculating the second moment (by letting the shifts tend to zero and taking the limit) is actually easier. In the case that α = −β, the above result should be interpreted as a limit.
A natural choice of mollifier (and one that we shall use to prove Theorem 1.1) is M (s, χ) = . Proving an asymptotic that is uniform in both aspects has its own challenges, mostly due to terms that are negligible in the q-aspect no longer being negligible when the t-aspect is introduced. Previous results in just the q-aspect only work when q is prime, while this result applies to all positive integers q.
We demonstrate a second application of Theorem 1.3, using it to prove a result on the proportion of simple zeros on the critical line. Let N (T, χ) denote the number of zeros ρ = β + iγ of the Dirichlet L-function L(s, χ) for a character χ of conductor q, with 0 < γ < T . Let N 0 (T, χ) denote the number of these zeros that are simple with β = 1/2. By choosing Q to be a non-linear polynomial, we would obtain a lower bound on the number of zeros on the critical line, simple or otherwise. In fact it is conjectured that all non-trivial zeros are simple. By choosing R, P , and Q optimally, we arrive at the following corollary. Informally, this means that for integer q at least 38.2% of zeros up to a large height T of the primitive Dirichlet L-functions of modulus q lie on the critical line as we vary q such that log(q) ≪ log(T ). Theorem 1.4 comes from applying Levinson's method to Theorem 1.3. Levinson's method is an elegant and widely used technique for determining the proportion of critical zeros of an L-function. See [5] for a nice demonstration of the method, and [16] for an elegant application of the method to the Riemann zeta-function.
Levinson's method has been used by Conrey in just the t-aspect in [4] to show that at least 40.7% of the non-trivial zeros of the Riemann zeta-function are critical (this has since been improved to 41.7% in [13]), while in [5] Conrey, Iwaniec, and Soundararajan consider the q-aspect, averaged over q ≤ Q, to conclude that at least 56% of low-lying zeros lie on the critical line (see also [14]). In comparison, our result is uniform in q and t, and does not require averaging over q ≤ Q.
We begin by proving Theorem 1.2 and Theorem 1.3 in Section 2. Then we focus on the applications and prove Theorem 1.1 in Section 3 and Theorem 1.4 in Section 4.
Throughout this paper we shall use the convention that ǫ is an arbitrarily small positive constant that may change value between lines. Let χ be an even primitive character. Then we have the approximate functional equation
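Schematically, for an even primitive character χ the approximate functional equation takes the shape below. This is a sketch only: the precise smooth weights V± and the root factor X+, together with the normalization of their arguments, are as in Theorem 5.3 of [8].

```latex
L\!\left(\tfrac12 + \alpha + it,\, \chi\right)
  \;=\; \sum_{m \ge 1} \frac{\chi(m)}{m^{1/2+\alpha+it}}\, V^{+}(m)
  \;+\; X^{+}(\alpha, t) \sum_{n \ge 1} \frac{\overline{\chi}(n)}{n^{1/2-\alpha-it}}\, V^{-}(n).
```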
The proof is standard. For example, see Theorem 5.3 of [8].
The proof is a simple application of Stirling's approximation applied to The proof of this result is standard. See for example, (3.1) and (3.2) of [9]. Applying Lemma 2.3 to the approximate functional equation gives D − and O − are obtained from D + and O + by substituting α, β → −β, −α and replacing V + by V − . As the D + and O + cases are almost identical to the D − and O − cases, we shall only demonstrate the former.
2.1.1. The Diagonals. As the diagonals are made up of sums over the condition am = bn, we may write a, b, m, n = ad, bd, bn ′ , an ′ with (a, b) = 1 and (n ′ , q) = 1. Hence, by relabelling, Note that we chose the contour of integration in the V -function to be Re(s) = 2 at first so that the sum over n converges to the L-function, and then moved the contour back to Re(s)= ǫ with the pole at s = −(α + β)/2 being cancelled by the zero coming from X + (s, t). Similarly q abπ s L(1−α−β+2s, χ 0,q )ψ t T ds s dt.
2.1.2. Off-diagonals. The remaining terms (i.e. when am ≠ bn) are the off-diagonals. The following lemma will allow us to show that the terms in the sum with am and bn sufficiently far away from each other will contribute a negligible amount.
and hence this integral is vanishingly small unless Proof. By (1), for t ∈ [T, 2T ] By taking K → ∞, this becomes negligibly small unless | log(bn/am)| ≪ T −1 . Taking the Taylor expansion log(1 + x) = x + O(x²) for |x| < 1, we see that the t-integral is vanishingly small unless To help restrict to these non-negligible cases, we introduce a dyadic partition of unity to the sums over m and n: let W be a smooth non-negative function supported in [1,2] such that where M runs over a sequence of real numbers with |{M : X −1 ≤ M ≤ X}| ≪ log X. By the rapid decay of V ± , in (1) and (2) we may assume that M N ≪ (qT ) 1+ǫ . We also split up the mollifying coefficients α n , β n dyadically, supposing that α n (A) is supported on n ∈ [A, 2A] and β n (B) is supported on n ∈ [B, 2B] i.e. α a = A α a (A) and by the assumptions in Theorem 1.2, A, B ≪ (qT ) κ . In the next section, we extract the main term from the off-diagonal terms, and bound the rest into an error term.
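The defining property of such a dyadic partition of unity can be written as follows. The normalization is stated here as an assumption, since only the support and counting conditions on W and M are given above:

```latex
\sum_{M} W\!\left(\frac{x}{M}\right) = 1 \qquad (x \ge 1),
\qquad
\#\bigl\{\, M : X^{-1} \le M \le X \,\bigr\} \ll \log X .
```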
Main Propositions.
When the mollifier is short enough, a trivial bound is sufficient to bound the contribution from the off-diagonal term. However, to break the half-barrier, a more sophisticated method is needed as the off-diagonals begin to contribute to the main term. The trivial bound shall be of use later on in the proof.
Proof. We bound this sum trivially by summing over a and r. Then we sum over m ≡ af wr (mod h), of which there are ≪ 1 + M/f h possible values of m. Then we bound the sums over b and n using the divisor bound.
The next proposition shows how the off-diagonals contribute to the main term.
Proposition 2.1. Let T ≫ q ǫ for a positive integer q, and w|q. Let ψ(t) and W (x) be smooth real valued functions supported on [1,2] such that for all j ≥ 0 ψ (j) (t) ≪ T ǫ and W (j) (x) ≪ ǫ (qT ) ǫ . Let α, β ∈ C satisfy α, β ≪ log log(qT )/ log(qT ). Suppose that 1/2 < κ < 1/2 + 1/66. Suppose that for positive constants We begin by writing the am ≡ ±bn (mod w) condition as am = ±bn+wr. As am ≠ bn, r must be non-zero, and by Lemma 2.4 we may assume that |r| ≤ 2AM w −1 T ǫ−1 , so we sum over 0 < |r| ≤ R/w where R := 2AM T −1+ǫ . We remove the (mn, q) = 1 condition as follows: for any smooth function F (a, b, m, n) for a fixed a, b, w and r, Note that if (f, rw) > 1 then the sum is empty as then (am, q) > 1. Given this, we can then relax the condition that (m, q) = 1 to (m, q/f ) = 1, as m must be coprime to f by the residue condition am ≡ wr (mod bf ). Suppose for contradiction that p|(f, q/f ) then p 2 |q and hence as q/w is square free it must be the case that p|w. Hence p cannot divide f , so (f, q/f ) = 1. So . Let x 1 ≡ aq/f (mod bf ) and x 2 ≡ abf (mod q/f ), so that by appealing to the Chinese remainder theorem m ≡ āwr (mod bf ) and Then we apply Poisson summation to find that Summing over u gives a Ramanujan sum i.e.
f |q (f,rw) For the contributions when g ≠ 0, we expand out c q/f (g) to get We will be able to bound the size of g by integrating by parts j times i.e.
Bf h gdM j for any fixed j ≥ 0. So we may restrict the sum to 0 with For each w, h and f , we treat the error term differently depending on the size of hf . In short, when hf is large compared to qT , then the contribution to the error term (and the main term) can be trivially bounded to be small enough to be absorbed into the error term.
When hf is small, we need a more sophisticated method which is an adaptation of Bettin and Chandee's Theorem 1 in [1]. To this aim, define The contribution to the main term and the error term for a fixed w, f and h can be bounded trivially by reversing the Poisson summation to get the contribution Using the trivial bound Lemma 2.5 we see that Hence for When the trivial bound will not suffice, we use Mellin inversions to separate the variables in (4) to reduce to finding a bound for where we may assume without loss of generality that f ≤ h, otherwise we take Poisson summation modulo ah instead of bf . We may also factor out (w, h) from both w and h, so that we can assume w, h and f are all pairwise co-prime, and all divide q, hence whf ≤ q.
By an adapted theorem of Bettin and Chandee from [1], we arrive at the conclusion that the error For the first, second, and sixth terms substitute in f h = q γ and f ≤ h ⇒ f ≤ q γ/2 . For the third term, write h −1/5 ≤ f −1/5 then substitute in f 2/5 ≤ q γ/5 . For the fourth and fifth terms, we use the fact that whf ≤ q. Hence the error is .
The first and sixth terms are smaller than the second, and the fourth is smaller than the fifth so Now, we calculate the x integral. If r > 0 then the integral over x is restricted to x > wr/ab and if r is negative then we have x > 0. For absolute convergence, if r > 0, we impose the condition Re(α + β + 2s + u + v) > 0, Re(β + s + v) < 1/2 and if r < 0 we impose the condition Under these assumptions, the x-integral is equal to (see for example 17 if r < 0 and hence In the M − cases, due to the extra minus sign in the x-integral, we arrive at the same result but with H + (s) replaced by Writing H(s) := H + (s)+H − (s), and summing over A, B ≤ (qT ) κ in the dyadic decomposition allows us to write where Proof. Writing (for the sake of clarity) which has poles at It is easy to check that these have residue 2. Also note that if x + y = 1 then the second fraction vanishes (as there is a pole in the denominator from Γ(1 − x − y)) and the first fraction is (using Γ(s)Γ(1 − s) = π/ sin(πs)) Γ(1) Γ(1/2 + x)Γ(1/2 + y) π sin(π(1/2 + y)) + π sin(π(1/2 + x)) = 0 as sin(π(1/2 + y)) = sin(π + π(1/2 − x)) = − sin(π(1/2 − x)) = − sin(π(1/2 + x)).
Returning to O + (M, N ) we move the contours to replace the r-sum with a zeta-function. Choose c 1 = 0, c 2 = ǫ and move the s-contour to the right to 1/2 − ǫ/3 crossing a simple pole of H(s) at s = 1/2 − β + it − v. Write P + ′ (M, N ) as the integral along the new line and R + (M, N ) as the residue. We can then move the u contour in the residue to Re(u) = 2ǫ, which hits no poles and allows us to replace the r-sum with a zeta-function, i.e.
Using the following lemma, we can simplify R + and P + .
Proof. If p|f and p|q/f then p 2 |q but (f, w) = 1 so p 2 |q/w ⇒ µ(q/w) = 0. Hence we may factorise φ(q/f ) = φ(q)/φ(f ), so Given that as if p 2 |q and p|w then either µ(w) = 0 or p|(w, q/w) so either the sum is empty (i.e. equal to zero) or the product is empty (equal to 1). Then rearranging gives that By Lemma 2.7, With the P + ′ (M, N ) term, we replace the r-sum with a zeta-function as before, apply Lemma 2.7 and shift the s-contour back to Re(s) = ǫ. This crosses the same pole at s = 1/2−β +it−v, while the pole from the zeta function at s = (1 − α − β − u − v)/2 is cancelled out by the zero of H(s) at this point. Denote the contribution from the first pole as R + ′ (M, N ) and the new integral with the r-sum replaced as P + ′′ (M, N ) so The difference between the two residue terms is in the u-contour i.e. integrating over Re(u) = 0, 2ǫ. Therefore We can now write H(s) as [17] is equal to . Hence, Applying the functional equation and the change of variable s → −s gives O + 0 as To summarise: The O − case is identical by replacing X + with X − and the substitution α, β → −β, −α.
Combining the Main Terms.
We have shown that where for instance q π −α−β Note that the pole at s = (α + β)/2 of the L-function is cancelled by the function G. A similar expression holds for the sum of the other two terms, giving the result in Theorem 1.2 for the sum over even Dirichlet characters.
This manifests itself in our definition of the function H(s) at (6). In this setting we must redefine H(s) := H + (s) − H − (s). The same method still works as our new H(s) has zeros in the same positions, and no poles so there are not any residue terms to deal with. To show at (7) that 2.6. Proof of Theorem 1.3. This proof is the same as Theorem 1.2, except that we use the Vaughan identity with the Möbius function to split up E w,f,h in (4) into three sums, which are then bounded separately. where (using Mellin transforms to separate variables) Let W = A 1/4 . This means that E 2 (A, B, M, N ) is an empty sum as the sequence α n (A) has support on [A, 2A] ∩ [1, we write it as a linear combination of at most O ǫ ((qT ) ǫ ) sums, each of which is with A 1 A 2 A 3 = A and where we may assume that a 1 , a 2 , a 3 , d are all pairwise coprime and square-free due to the presence of the Möbius function. By the definition of c 4 we see that A 1 , A 2 ≫ W/d and without loss of generality A 1 ≤ A 2 . By defining we change (8) into sums of the form Note that we may bound c 4 (n), c 5 (n) by O ǫ ((qT ) ǫ ). By applying Lemma 6 of [7] (slightly adapted to include the extra h, f in the trilinear fraction) with to bound the sums in (9) by for κ < 4/7 and κ 1 ≤ κ/2. This bound is less effective when A 1 is small, so another bound is needed in this case. Using Lemmas 10 and 11 from [10] with we may bound the sums in (9) by for κ < 1/2 + 5/128 and κ 1 < κ/2. We use the first bound when κ 1 ≤ κ − 39/128 and the second bound when κ 1 ≥ κ − 39/128, resulting in the bound may be bounded by a sum of at most O ǫ ((qT ) ǫ ) sums of the form This means that A 1 ≤ W 2 = A 1/2 . When A 1 ≫ A 1/4 we use the same method as for E 1 , but when A 1 ≪ A 1/4 we shall apply the Weil bound for Kloosterman sums.
This implies that By partial summation over a 2 we may bound the sums above by By the trivial bound in Lemma 2.5 and (5), we may assume that for 1/2 < κ < 1/2 + 5/128 This concludes the proof of Theorem 1.3.
Also, for σ − 1/2 ≥ 28 log log(qT )/ log(qT ), the theorem is true by Montgomery's result. So it is sufficient to prove the following proposition and κ < 1/2 + 5/128, * To prove this proposition, we rely on Littlewood's lemma (see [15] Theorem 9.16), which reduces the problem of bounding where ψ(t) is a smoothing function as in Theorem 1.2. By expanding out the square in the integral, we get three terms We look first at the term (10). Using methods similar to those used by Iwaniec and Sarnak in [9], if our mollifier is of the form n σ then the optimal mollifier (with the normalisation that v(1) = 1) can be shown to be close to for 1 ≤ n ≤ x and 0 otherwise. Note that which is a standard mollifier on the half-line. This choice of mollifier satisfies the conditions of Theorem 1.3 and so by defining then we see by Theorem 1.3 that (10) is equal to φ * (q)Tψ(0)L(2σ, χ 0,q )S 1 ((qT ) κ ) + O ǫ (qT ) 2−2σ L(2 − 2σ, χ 0,q )|S 2 ((qT ) κ )| + (qT ) 1−ǫ .
To deal with S 1 , we will need the following lemma.
Inserting the definition of v(n) into the definition of y n gives by partial summation. As 2σ > 1, the sum converges so we may write As 2σ is close to 1, it is not sufficient to bound the error by O(t 1−2σ /(2σ − 1)). Instead, we write This means Note that for square-free n, where rad(m) = ∏ p|m p. Therefore So, supposing that f (t) is a differentiable function with f (x) = 0, partial summation shows that Hence, We now bound S 2 (x).
Proof. Similar to before, we see that Hence by Lemmas 3.2 and 3.3, we arrive at the conclusion that (10) is equal to We turn our attention to the first moment.
Moving the contour of integration to have real part −1 + ǫ, we hit a pole at s = 0. The integral at the new contour may be bounded by the exponential decay of the Gamma function, and by the functional equation for Dirichlet L-functions, By (13), * χ (mod q) If an > 1 then, by integration by parts K times, ∫(an) it ψ(t/T )dt ≪ K log(an) −K T 1+ǫ−K , so we may make the error term arbitrarily small. When an = 1, the integral is just Tψ(0). Hence * χ (mod q) L(σ + it, χ)M (σ + it, χ)ψ(t/T )dt = φ * (q)Tψ(0) + O (qT ) ǫ+κ(1−σ) .
By (11) and (12), we see that * Levinson's original proof was long and allegedly had a reputation for being difficult. In this section we shall follow the elegant reformulation of the method by Young in [16], but in the context of families of Dirichlet L-functions. Assume the conditions of Theorem 1.4 and let L = log(qT ), and where P (x) = Σ i a i x i with P (0) = 0, P (1) = 1 and for convenience we shall write P [a] = P (log(X/a)/ log(X)). Levinson's method (see for example Corollary A of [5]) shows that as qT → ∞. Additionally, restricting Q(x) to be a linear polynomial restricts N 0 (T, q) to only counting simple zeros. Defining if a > X The next step is to choose R, P, and Q to maximise 1 − (1/R) log(c(P, Q, R)) subject to the conditions that R is a positive constant, P (0) = 0, P (1) = 1 and Q(0) = 1. We shall stipulate that Q is a linear polynomial, in order to determine a lower bound on the proportion of simple zeros on the critical line. The optimisation process can be found in Section 4 of Conrey's paper [4]. This method demonstrates that the optimal choice for P (x) is of the form P (x) = (e rx − e sx )/(e r − e s ) for r, s constants. While this is not a polynomial, it may be uniformly approximated by real polynomials. Choosing Q(x) = 1 − 1.035x, R = 1.179 gives 1 − (1/R) log(c(P, Q, R)) = 0.382156 and hence N 0 (T, q)/N (T, q) ≥ 0.382 for large enough qT .
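In this notation, the output of Levinson's method (see Corollary A of [5]) has the shape sketched below; with the stated choices of P, Q(x) = 1 − 1.035x and R = 1.179, the right-hand side evaluates to 0.382156. This display is a schematic reconstruction, since the inequality itself is compressed in the text:

```latex
\frac{N_0(T, q)}{N(T, q)} \;\ge\; 1 - \frac{1}{R} \log c(P, Q, R) + o(1)
\qquad (qT \to \infty).
```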
Azithromycin Protects Oligodendrocyte Progenitor Cells against Lipopolysaccharide-Activated Microglia-Induced Damage
Oligodendrocyte progenitor cells (OPC) are the primary cellular targets of brain white matter injury (WMI) in very low-birth weight (VLBW) infants. Microglia plays a significant role in inflammation-induced WMI. Our previous study showed that lipopolysaccharide (LPS)-induced OPC damage is mediated by activated microglia in vitro. We hypothesized that azithromycin (AZ) could protect OPCs against LPS-induced cytotoxicity by blocking microglial activation. Highly enriched primary rat microglia and OPCs were treated with LPS. There were 4 groups: control, LPS + Veh, AZ, and LPS + AZ. Microglia conditioned medium (MCM) was used to determine inflammatory cytokines by enzyme-linked immunosorbent assay or for subsequent treatment of OPCs. We found that AZ significantly suppressed TNF-α, IL-1β, and IL-6 in LPS + Veh-treated microglial MCM and blocked microglial nuclear factor-κB p65 nuclear translocation. AZ prevented LPS-MCM-induced OPC death and improved OPC survival as measured by activated caspase-3 immunostaining and XTT assay, respectively. AZ ameliorated LPS-MCM-induced differentiation arrest and myelin basic protein deficit in oligodendrocytes. Our data suggest that AZ is a potent inhibitor of microglial activation and may hold therapeutic potential for WMI in VLBW infants.
Introduction
There is a critical lack of knowledge for the prevention and treatment of white matter injury (WMI), the major form of brain injury in premature infants that is associated with a spectrum of motor, cognitive, visual, social-behavioral, attention, and learning disabilities in 25-50% of very low birth weight (<1,500 g) survivors [1][2][3]. WMI is characterized by initial loss and/or dysmaturation of oligodendrocyte (OL) progenitor cells (OPCs) followed by subsequent hypomyelination and dysmaturation events of neuroaxonal structures evolving over a prolonged period [1]. OPCs are the dominant OL lineage population at 24-32 weeks of gestation, which is the high-risk window for developing brain WMI [4]. Mounting epidemiological evidence shows that perinatal infections and associated inflammation are major risk factors for WMI [5][6][7]. Clinical and experimental evidence suggests that activated microglia play a pivotal role in mediating OPC injury [8][9][10][11][12][13].
Microglia, the resident immune cells of the central nervous system, can mount innate immune responses upon inflammatory challenges. While such a response is intended to protect the brain, dysregulated microglial activation also leads to injury or developmental disturbance of neighboring neurons and OPCs. Bacterial endotoxin lipopolysaccharide (LPS)-induced damage and developmental disturbances of OPCs are primarily mediated by pro-inflammatory mediators released by activated microglia [12,13]. Therefore, approaches to suppress microglial activation and/or inflammation may protect OPCs and ameliorate the burdens of WMI. For example, the tetracycline derivative minocycline has been demonstrated to afford neuroprotection in various models of brain injury, primarily by suppressing microglial activation [14]. However, due to its potential adverse effects on development, minocycline has limited therapeutic potential for WMI [15]. As concerns about safety, especially long-term effects on neurodevelopment, remain a challenge in developing novel anti-inflammatory agents to fight WMI, drug repurposing may be a viable strategy in this endeavor. In this regard, drugs that have a track record of safety profiles for use in pregnancy should be considered.
Interestingly, macrolides, a group of antibiotics widely used to treat infections in pediatric patients, have been demonstrated as potent anti-inflammatory agents in suppressing various systemic immune cells. Macrolides' most common adverse effects are GI disturbances, elevated transaminases, sensorineural hearing loss, prolonged QTc (in adults and those with underlying heart disease), and most importantly, in neonates, infantile hypertrophic pyloric stenosis [16,17]. Among several major classes of macrolides, azithromycin (AZ) appears to be a potent anti-inflammatory agent with better safety profiles [17]. Therefore, we consider AZ an excellent candidate for suppressing microglial activation and protecting the developing brain against WMI. Still, we performed preliminary experiments to compare the effects of 4 major macrolides, clarithromycin, erythromycin, AZ, and roxithromycin, in suppressing the pro-inflammatory cytokine IL-6 from LPS-activated microglia. We tested 4 different doses of macrolides (0.1, 0.5, 2.5, and 12.5 μg/mL) and measured IL-6 by enzyme-linked immunosorbent assay (ELISA) to evaluate the dose-response. The preliminary data showed that AZ had a consistent response at all doses on suppression of IL-6 compared to other macrolides. Thus, this study focused on AZ.
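The dose-response comparison above reduces to computing, for each macrolide and dose, the percent suppression of the cytokine relative to the LPS + vehicle group. A minimal sketch follows, with hypothetical IL-6 ELISA readings (the pg/mL numbers are invented for illustration and are not data from this study):

```python
def pct_suppression(treated_pg_ml: float, lps_veh_pg_ml: float) -> float:
    """Percent reduction of a cytokine (e.g., IL-6 by ELISA) relative to LPS + vehicle."""
    return 100.0 * (lps_veh_pg_ml - treated_pg_ml) / lps_veh_pg_ml

# Hypothetical IL-6 readings (pg/mL) for one macrolide across the 4 tested doses
doses_ug_ml = [0.1, 0.5, 2.5, 12.5]
il6_lps_veh = 800.0
il6_treated = [720.0, 560.0, 400.0, 240.0]

suppression = [pct_suppression(x, il6_lps_veh) for x in il6_treated]
```

A "consistent response at all doses" would show up as non-trivial suppression at every entry of the dose series.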
The immunomodulatory effect of macrolides has been well studied in the treatment of several chronic respiratory conditions and chronic gastritis caused by Helicobacter pylori [18,19]. Macrolides reduce cytokine production by LPS-stimulated monocytes [20]. Macrolides affect several pathways of the inflammatory process, such as the migration of neutrophils, the oxidative burst in phagocytes, and the production of pro-inflammatory cytokines in monocytes [21][22][23][24]. However, it is unknown whether they also inhibit inflammatory responses from microglia. In this study, we first evaluated the effect of AZ on pro-inflammatory cytokine production and nuclear factor-κB (NF-κB) activation in LPS-activated microglia and then tested whether AZ protects OPCs against microglia-mediated cytotoxicity and developmental disturbance upon LPS exposure in vitro.
Primary Cell Culture
Preparation of Mixed Glia Culture from Neonatal Rat Brain
The protocols for primary mixed glia, OPCs, and microglia culture were based on our previously described methods [12,25] but with significant modifications. Since a large quantity of pure microglia is needed in this study, a papain-based tissue dissociation protocol was developed for mixed-glia culture. Our preliminary tests show that this modification results in a significantly higher yield of viable total neural cells and microglia. The process of tissue dissociation was in accordance with the protocol provided with the Neural Tissue Dissociation kit. All procedures for animal care were conducted in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals, and were approved by the Institutional Animal Care and Use Committee at the University of Mississippi Medical Center. Every effort was made to minimize the number of animals used and their suffering. In brief, 1-day-old Sprague Dawley rat pups were decapitated and their brain cortices were dissected in ice-cold HBSS under a stereomicroscope. The meninges were carefully stripped away and the cortices from 6 pups were chopped into small pieces using a surgical blade in 1 mL cold HBSS. Next, the tissue was transferred into a 15-mL conical tube, mixed with 1,950 μL of Enzyme A, and gently rotated for 10 min at 37°C. After adding 30 μL enzyme mix 2, tissue was rotated for an additional 10 min at 37°C. The tissue was then mechanically dissociated by passing through a 1-mL pipette tip 5 times and rotated again for 10 min. This process was repeated once, after which the dissociated cells were passed through a 40-μm strainer. Next, cells were pelleted by centrifugation at 350 g for 10 min and resuspended in 10 mL prewarmed cell culture medium (DMEM/F12 consisting of 10% FBS). The percentage of viable to dead cells was estimated based on trypan blue staining.
Typically, this protocol can yield about 5 × 10⁸ total cells from 6 neonate rats, with >90% viability. Cells were maintained in 75-cm² cell culture flasks at 37°C in a CO₂ incubator.
Isolation of Microglia from Mixed Glial Culture
The primary cultures consisting of all 3 glial cell types were maintained in DMEM/F12 with 10% FBS. The medium was changed every 4th day. About 2/3 of the old medium was replaced with fresh medium, and precaution was taken not to disturb the loosely attached microglia on the top. Cells reached confluence in about 10-14 days. The many round, phase-bright cells growing on top of the dense cell layers, or detached and floating in the medium, were mainly microglia. Flasks were gently shaken by hand or struck against the palm 3 days after the last medium change to isolate microglia. The medium was collected in 50 mL conical tubes and centrifuged at 350 g for 10 min. The cell pellet was resuspended in 10 mL fresh medium. If necessary, the process was repeated for a second round after 1 week of culture to obtain more microglia. Based on 6 independent primary cultures, we estimated that at least 5 × 10⁶ microglia could be harvested from mixed glia culture from 6 rat pups. Microglia were plated on poly-L-lysine-coated glass coverslips placed in 12-well plates at a density of 1.5 × 10⁵ cells/coverslip and were maintained in 10% FBS/DMEM-F12 for 24 h before experimental treatments. To estimate the purity of microglia, cells on coverslips were fixed with 4% paraformaldehyde and immunostained with markers for microglia (CD11b and Iba1), astrocytes (GFAP), and OL lineage cells (Olig2). We found that cells were exclusively immunostained with CD11b and Iba1 (Fig. 1a), while very few GFAP- and Olig2-positive cells were detected. The estimated purity of microglia is >95%.
Isolation and Culture of OPCs

Following the isolation of microglia, the flasks were shaken in an orbital shaker at 180 rpm overnight, followed by 200 rpm for 2 h. The medium was collected and filtered through a 40-μm cell strainer. Cells were pelleted by centrifugation, resuspended in 10 mL of medium, and transferred to a non-coated 75-cm² flask. After incubation at 37°C for 10 min, the flasks were shaken gently by hand to detach loosely attached OPCs, while microglia and astrocytes remained strongly attached to the surface. The supernatant, containing predominantly OPCs, was centrifuged at 350 g for 10 min. Cells were resuspended in a 1:1 mixture of NBM-B27 and chemically defined medium. The chemically defined medium consists of DMEM/F12, 0.1% BSA, 100 μM putrescine, 20 nM progesterone, 10 nM sodium selenium, 20 nM biotin, 5 μg/mL cysteine, 5 nM hydrocortisone, 5 μM insulin, 50 μM transferrin, 2 mM L-glutamine, and penicillin/streptomycin. Platelet-derived growth factor and basic fibroblast growth factor (10 ng/mL each) were included in the medium to promote OPC growth. Cells were passaged 3 times before experimental treatments.
The purity and identification of OPCs were determined by immunocytochemistry using a panel of well-defined antibody markers (i.e., NG2, O4, and Olig2 to identify OPCs, GFAP for astrocytes, and CD-11b/Iba1 for microglia). The estimated purity of OPCs was close to 99% after 3 generations.
Preparation of Microglia Conditioned Medium and Treatment of OPCs
Microglia conditioned medium (MCM) was used in 2 experimental settings: measurement of cytokine levels and treatment of OPCs. To maintain optimal survival and differentiation and to eliminate the confounding effects of serum on OPC differentiation, serum-free NBM/B27 was used to prepare the MCM used to treat OPCs. For the MCM used in ELISA to detect pro-inflammatory cytokines, DMEM was used instead, to avoid potential interference from proteins and other medium components.
Cell Survival/Death Assay
OPCs were seeded in a poly-L-lysine-coated 96-well plate at 1.5 × 10⁴ cells per well and incubated overnight. Cells were washed with prewarmed NBM/B27 and treated with MCM for 24 h. Cell survival was quantified by the XTT method following the manufacturer's instructions. The optical density (OD) at 492 nm was acquired with a plate reader (BioTek), and the cell survival rate was calculated as the percentage (%) of the OD in the treated group over that in the control group, as previously described [12].
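The percentage calculation described above can be sketched as follows. The OD values and the medium-only blank subtraction step are illustrative assumptions, not data or procedures taken from the study.

```python
# Sketch of the XTT survival-rate calculation: survival (%) is the mean OD492
# of a treated group over the mean OD492 of the control group. The blank
# subtraction and all OD values are illustrative assumptions.

def survival_rate(treated_od, control_od, blank_od=0.0):
    """Percent survival relative to control, after optional blank subtraction."""
    mean = lambda xs: sum(xs) / len(xs)
    treated = mean(treated_od) - blank_od
    control = mean(control_od) - blank_od
    return 100.0 * treated / control

rate = survival_rate(
    treated_od=[0.62, 0.58, 0.60],  # OD492, MCM-treated wells (example)
    control_od=[0.90, 0.88, 0.92],  # OD492, control wells (example)
    blank_od=0.10,                  # medium-only blank (example)
)
```

With these example readings the treated group would come out at 62.5% of control, i.e., a substantial loss of viable OPCs.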
Immunocytochemistry
Cells grown on glass coverslips were rinsed twice with ice-cold PBS and fixed with 4% paraformaldehyde for 15 min at room temperature (RT). After washing in PBS, cells were permeabilized with 0.2% Triton X-100 and blocked with 5% normal serum/1% BSA/0.1% Triton X-100 in PBS for 1 h. Cells were sequentially incubated with primary antibodies, biotin-conjugated secondary antibodies, and avidin-conjugated Alexa Fluor 488 or 555, each for 1 h at RT, with 3 PBS washes between steps. Coverslips were mounted on slides and viewed under a fluorescence microscope (Olympus BX60). Images were acquired with a monochrome digital camera.
Quantification of OL Differentiation
To quantify the OL differentiation rate, treated cells were double-immunostained with NG2 (for OPCs) and APC (for mature OLs). Ten random high-power images (×40 objective) were captured for each coverslip, and positive cells were counted manually in a double-blinded fashion. The counts from the 10 images per coverslip were averaged to represent 1 sample, and 3 samples were included in each experimental group.

Enzyme-Linked Immunosorbent Assay

At 24 h following treatment, MCM was collected to determine the levels of IL-1β, IL-6, and TNF-α by ELISA (R&D Systems) following the manufacturer's instructions. Samples were run in duplicate in a 96-well plate. Cytokine contents are presented as pg/mL of medium.
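The counting scheme used for quantifying OL differentiation (10 images averaged per coverslip, undifferentiated NG2+ cells versus mature APC+ cells) can be sketched as below. All counts are invented example data, and summarizing differentiation as the APC+ fraction of counted cells is one plausible reading of the quantification, not the study's stated formula.

```python
# Sketch of per-coverslip averaging of manual cell counts. Counts are
# made-up examples; the APC+ fraction as a differentiation summary is an
# assumption for illustration.

def per_coverslip_mean(image_counts):
    """Average cell counts across the 10 images taken from one coverslip."""
    return sum(image_counts) / len(image_counts)

# Ten images per coverslip for each marker (example data):
ng2_images = [12, 10, 11, 13, 9, 10, 12, 11, 10, 12]   # undifferentiated OPCs
apc_images = [28, 30, 31, 27, 29, 30, 28, 32, 30, 25]  # mature OLs

ng2 = per_coverslip_mean(ng2_images)
apc = per_coverslip_mean(apc_images)
differentiation_pct = 100.0 * apc / (ng2 + apc)
```

Each coverslip yields one pair of averaged counts (here 11.0 NG2+ and 29.0 APC+ cells per field), which then serves as a single sample in the group comparison.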
Western Blot
Cleaved caspase-3 and MBP in whole-cell extracts of OLs, and NF-κB/p65 in the nuclear fraction of microglia, were assessed by Western blot (WB). To prepare whole-cell lysates from OLs, cells were washed once with ice-cold PBS and detached from the culture surface with a cell scraper. Cells were pelleted by centrifugation, washed twice with ice-cold PBS, and incubated for 30 min on ice in cell lysis buffer containing 10 mM Tris, 100 mM NaCl, 1 mM EDTA, 1 mM EGTA, 1 mM NaF, 20 mM Na₄P₂O₇, 2 mM Na₃VO₄, 1% Triton X-100, 10% glycerol, 0.1% SDS, 0.5% deoxycholate, 1 mM PMSF, and protease inhibitor cocktail (Sigma), with vortexing at 10-min intervals. Microglial nuclear proteins were extracted using the nuclei extraction kit. Total protein was determined with the BCA kit (ThermoFisher). Samples were denatured, subjected to SDS-PAGE on Bio-Rad TGX stain-free gels, and transferred to nitrocellulose membranes. The membranes were blocked with 5% non-fat milk in PBS for 1 h at RT and incubated with primary antibodies overnight at 4°C. After washing, the membranes were incubated with HRP-conjugated secondary antibodies. Signals were detected using the ECL Select kit. Images were acquired on the ChemiDoc MP Imaging System, and data were analyzed with Image Lab software (Bio-Rad). The OD of each target band was normalized to that of the total protein bands, which was acquired prior to developing the chemiluminescent signals.

Statistics

Data were presented as median with range using box-and-whisker plots. Differences between groups were tested using two-way ANOVA followed by post hoc Holm-Sidak analysis. p < 0.05 was considered significant. SigmaPlot (version 11; Systat Software, Inc., San Jose, CA, USA) was used for statistical tests.

Results

Compared to the control, LPS treatment significantly increased TNF-α (Fig. 1b), IL-1β (p = 0.001, Fig. 1c), and IL-6 (p = 0.002, Fig. 1d), and these increases were significantly reduced by pretreatment with AZ (LPS + AZ, p < 0.001, Fig. 1b-d).
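The densitometry step described in the Western blot methods (target band OD normalized to the lane's total-protein OD, then expressed as fold change versus the control mean) might look like this in code. All OD values and the lane layout are hypothetical.

```python
# Sketch of WB quantification: normalize each target band's OD to its lane's
# total-protein OD, then express results as fold change relative to the mean
# normalized control value. Densitometry numbers are invented examples.

def normalized_fold_changes(target_od, total_protein_od, control_indices):
    """Normalize band OD to total protein, then to the control-lane mean."""
    norm = [t / tp for t, tp in zip(target_od, total_protein_od)]
    control_mean = sum(norm[i] for i in control_indices) / len(control_indices)
    return [v / control_mean for v in norm]

# Lanes: control, control, LPS, LPS (example densitometry values)
folds = normalized_fold_changes(
    target_od=[1.0, 1.2, 2.4, 2.6],
    total_protein_od=[1.0, 1.0, 1.0, 1.0],
    control_indices=[0, 1],
)
```

Normalizing to total protein rather than a single housekeeping band is what the stain-free gel workflow enables; the fold changes are what then enter the two-way ANOVA.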
There was no difference in TNF-α, IL-1β, or IL-6 between control and AZ as well as AZ and LPS + AZ (Fig. 1).
AZ Inhibited NF-κB Pathway Activation in LPS-Activated Microglia
NF-κB/p65 nuclear translocation was determined by both immunocytochemistry and WB. As shown in the WB in Figure 2a, minimal immunoreactivity of NF-κB/p65 was detected in the nuclear extracts of the control and AZ groups. In contrast, a robust, time-dependent increase in NF-κB/p65 immunoreactivity was observed following LPS + Veh. Pretreatment of microglia with AZ significantly reduced nuclear NF-κB/p65 immunoreactivity. Quantification of nuclear NF-κB/p65 shows that the effect of LPS depended upon whether microglia were pretreated with AZ; there were statistically significant interactions between LPS and AZ (two-way ANOVA, F [1, 11] = 4.4, 8.7, 21.4; p = 0.07, 0.018, 0.002 at 0.5, 1, and 4 h, respectively). On post hoc analysis, compared to the control, LPS + Veh caused a 2-fold increase in nuclear NF-κB/p65 at 0.5 h (p = 0.011) and a 2.5-fold increase at 1 h (p = 0.002) and 4 h (p < 0.001). This effect was significantly reduced by pretreatment with AZ (LPS + AZ, p = 0.011, p = 0.002, p < 0.001 at 0.5, 1, and 4 h, respectively). There was no significant difference between control and AZ, or between AZ and LPS + AZ (Fig. 2b). Consistent with the WB data, immunofluorescence staining clearly showed that AZ blocked LPS-induced NF-κB/p65 translocation from the cytoplasm to the nucleus. As shown in Figure 2c, relatively weak NF-κB/p65 immunoreactivity in the nucleus but strong signals in the cytoplasm were observed in microglia in the control and AZ groups. In contrast, LPS + Veh-treated microglia showed a marked increase of immunoreactivity in the nucleus with a reduction in the cytoplasm. Pretreatment with AZ markedly suppressed LPS-induced NF-κB translocation, to a level comparable to the control.
AZ Protected OPCs against LPS-MCM-Induced Damage

OPC survival was determined using XTT at 24 and 30 h following treatments. The results show that the effect of LPS depended upon whether OPCs were treated with MCM from AZ-pretreated microglia; there were statistically significant interactions between LPS and AZ (two-way ANOVA, F [1, 31] = 15.9 and 24.7 at 24 and 30 h, respectively, p < 0.001). On post hoc analysis, compared to the control, LPS + Veh reduced XTT at 24 h (p < 0.001) and 30 h (p < 0.015); these reductions were significantly prevented by pretreatment with AZ (LPS + AZ, p < 0.001). A minimal increase in XTT was noted following AZ compared to control at 24 h (p = 0.044) and 30 h (p = 0.015). There was no difference between AZ and LPS + AZ at 24 h, but there was a minimal increase in XTT at 30 h (p < 0.001, Fig. 3a). To determine whether the increase in OPC survival measured by XTT was primarily due to a reduction in cell death rather than an increase in proliferation, we quantified cleaved caspase-3 in OPCs by immunofluorescence and WB. Figure 3b shows the quantification of cleaved (activated) caspase-3-positive OPCs at 24 h following treatment with MCM from LPS- and AZ-treated microglia. The effect of LPS depended upon whether OPCs were treated with MCM from AZ-pretreated microglia; there were statistically significant interactions between LPS and AZ (two-way ANOVA, F [1, 11] = 9.5, p = 0.015). Compared to the control, LPS + Veh significantly increased the number of caspase-3-positive OPCs (p = 0.003), which was significantly prevented by pretreatment with AZ (LPS + AZ, p = 0.001). Consistently, WB demonstrated a marked increase of activated caspase-3 at 24 h following treatment with MCM-LPS (Fig. 3c). Quantification of the WB data in Figure 3d showed statistically significant interactions between LPS and AZ (two-way ANOVA, F [1, 11] = 32.2, p < 0.001).
On post hoc analysis, compared to the control, LPS + Veh caused a 2.5-fold increase in activated caspase-3 (p < 0.001), which was significantly prevented by pretreatment with AZ (LPS + AZ, p < 0.001). No differences in cleaved caspase-3-positive OPCs or activated caspase-3 protein were noted between control and AZ, or between AZ and LPS + AZ.
AZ Ameliorated OPC Differentiation Arrest Induced by LPS-Activated Microglia
Next, we investigated whether AZ could prevent OPC differentiation arrest by LPS-activated microglia. The differentiation of OLs was assessed by quantifying the relative abundance of OPCs (NG2 immunopositive) versus mature OLs (APC immunopositive). As shown in Figure 4, after 5 days of exposure to the MCM, the majority of OPCs in the control cultures differentiated into APC+ mature OLs, leaving only a minority of cells remaining as undifferentiated NG2+ OPCs. In contrast, significantly higher numbers of NG2+ cells but lower numbers of APC+ cells were found in LPS-MCM-treated cultures. This LPS-MCM-mediated OL differentiation arrest was ameliorated by pretreatment with AZ, as shown by a significant increase in APC+ cells and a decrease in NG2+ cells in the LPS + AZ group compared to the LPS + Veh group.

Fig. 2. a Representative WB of NF-κB p65 and Lamin B1 in the nuclear extract. b Quantification of NF-κB p65 band intensities normalized to Lamin B1; data are presented as fold changes relative to the control. Box and whiskers represent the median, 25th and 75th percentiles, and 5th and 95th percentiles, respectively; the dotted line represents the mean; there were no outliers. *p < 0.05, **p < 0.01, n = 3. c Representative double-immunofluorescence staining of NF-κB p65 and actin in microglia. In the control, microglial NF-κB p65 immunostaining is primarily co-localized with actin (cytoplasm, orange) but not DAPI (nuclei). Following LPS treatment, NF-κB p65 immunostaining is predominantly co-localized with DAPI (appears magenta) but not actin, suggesting NF-κB p65 is activated. AZ pretreatment shows a similar NF-κB p65 immunostaining pattern as that of the control. Arrows indicate co-localization. Scale bar, 50 μm. AZ, azithromycin; LPS, lipopolysaccharide; NF-κB, nuclear factor-κB; DAPI, 4′,6-diamidino-2-phenylindole; WB, Western blot.
Statistical analysis showed that the effect of LPS depended upon whether OPCs were treated with MCM from AZ-pretreated microglia; there were statistically significant interactions between LPS and AZ (two-way ANOVA, F [1, 11]).

Fig. 4. a, b AZ ameliorated LPS-MCM-induced OL differentiation arrest. OPCs were exposed to conditioned medium for 5 days and double-immunostained with NG2 and APC. The extent of cell differentiation was determined by counting the numbers of undifferentiated OPCs (NG2+, red) and differentiated mature OLs (APC+, green), which are mutually exclusive. The control culture contains predominantly differentiated APC+ OLs with only a minority of undifferentiated NG2+ OPCs. Exposure to LPS-MCM led to arrest of OL differentiation, as indicated by a higher ratio of NG2+ versus APC+ cells. Treatment with AZ reversed the adverse effect of LPS on OL differentiation. c, d Box and whiskers represent the median, 25th and 75th percentiles, and 5th and 95th percentiles, respectively; the dotted line represents the mean; there were no outliers. *p < 0.05, **p < 0.01, n = 3. Scale bar, 50 μm. AZ, azithromycin; LPS-MCM, lipopolysaccharide microglia conditioned medium; OPC, oligodendrocyte progenitor cell; OL, oligodendrocyte; NG2, neuron-glia antigen 2; APC, adenomatosis polyposis coli.
AZ Prevented LPS-MCM-Mediated MBP Deficits in OLs
To further assess the differentiation and myelination potential of OLs, MBP expression was quantified by WB. As shown in Figure 5, LPS-MCM significantly reduced MBP expression in OPC cultures following a 5-day exposure. LPS-MCM-mediated MBP deficit was prevented by the pretreatment with AZ. Quantification data showed that the effect of LPS depended upon whether OPCs were treated with MCM from AZ pretreated microglia; there were statistically significant interactions between LPS and AZ (two-way ANOVA, F [1, 11] = 13.4, p = 0.006). On post hoc analysis, compared to the control, LPS-MCM exposure significantly reduced MBP protein (p = 0.010). In contrast, pretreatment with AZ completely blocked MBP reduction (LPS + AZ, p = 0.002). There were no differences between the control and AZ only as well as AZ and LPS + AZ.
Discussion
The major findings of our study are twofold. First, we demonstrate that AZ is a potent inhibitor of microglial activation, an effect associated with its ability to block NF-κB activation. Second, AZ is protective against OPC injury and differentiation arrest induced by LPS-activated microglia.
A large number of studies suggest that OPC injury and developmental disturbance are linked to inflammatory mediators from activated microglia [12,13,[26][27][28]; thus, anti-inflammatory drugs have the potential to prevent and/or treat WMI. One of the hurdles in translating basic research findings into clinical therapy in neonatology is the concern that many anti-inflammatory drugs have potential long-term adverse effects on brain development. Therefore, our current study aimed to identify strong anti-inflammatory reagents that have already been extensively used in pediatric patients. AZ is well known for its systemic immunomodulatory effects, owing to properties such as extensive tissue distribution, high accumulation in phagocytes, the ability to be delivered at high concentrations at infection sites, and suppressive effects on cytokine production [29,30]. Evidence suggests that AZ reduces brain infiltration of neutrophils and inflammatory macrophages during infection [30]. Furthermore, AZ has excellent permeability through the blood-brain barrier, as demonstrated by a wide distribution into brain tissues, but not the cerebrospinal fluid, following systemic administration [31]. AZ inhibited LPS-induced pregnancy loss in pregnant rats by reducing TNF-α and increasing IL-10 levels [32]. AZ is used to treat various inflammatory diseases, such as cystic fibrosis and bronchopulmonary dysplasia, because, in addition to its antimicrobial effects, AZ has anti-inflammatory effects linked to its ability to suppress NF-κB activation and TNF-α production [33,34]. However, it is not clear whether AZ suppresses microglia-mediated neuroinflammation and ameliorates perinatal brain injury. As a first step, we focused on investigating the anti-inflammatory and neuroprotective effects of AZ in cell culture models, which have been used successfully to study interactions between LPS-activated microglia and OL development [13].
We started with a pilot experiment to determine whether AZ would suppress IL-6 in the supernatant of LPS-stimulated microglia and to identify optimal doses. All incremental doses tested (0.1, 0.5, 2.5, and 12.5 μg/mL) had a similar inhibitory effect on IL-6. Considering that 0.5 μg/mL has been used in other in vitro studies [21,35], this dose was used throughout our study. Activated microglia release pro-inflammatory cytokines, especially TNF-α, IL-1β, and IL-6, among others [9,36,37]. The inflammatory response in baboons following Escherichia coli challenge began with the appearance of TNF-α and IL-1β in plasma, followed by a slow but continuous rise of IL-6 [38]; a similar trend was also noted in human subjects [39]. Previously, we showed that these 3 cytokines exhibit distinct temporal profiles in the neonatal rat brain upon LPS challenge; that is, TNF-α and IL-1β start to increase within hours of LPS treatment, while the increase in IL-6 is relatively delayed [40]. In the current study, we found that LPS-activated microglia produce high levels of TNF-α, IL-1β, and IL-6 at 24 h. Conversely, AZ strongly inhibited LPS-induced microglial activation, as demonstrated by significant suppression of TNF-α, IL-1β, and IL-6 secretion into the conditioned medium, as well as of NF-κB activation in microglia. NF-κB is one of the major signaling pathways regulating LPS-mediated microglial activation. NF-κB exists in an inactive form in the cytoplasm; on stimulation by inflammatory triggers, it undergoes nuclear translocation and acts as a transcription factor, binding to regulatory DNA to drive the expression and production of pro-inflammatory cytokines. A previous study reported that AZ prevents inflammation-induced activation of NF-κB and the subsequent release of IL-6 in tracheal aspirate cells from premature infants [41].
Consistent with this study, our WB data showed a significant, time-dependent increase of NF-κB in the nuclear fraction upon LPS treatment, and NF-κB translocation from the cytoplasm to the nucleus was clearly visualized by immunofluorescence. Pretreatment with AZ significantly blocked LPS-induced NF-κB translocation, as indicated by predominant retention of cytoplasmic versus nuclear NF-κB fluorescence signals. It should be noted that our data do not establish that blocking NF-κB is principally responsible for the AZ-mediated anti-inflammatory effect, since other signaling pathways, including MAPK-ERK and p38, are also involved in LPS-mediated microglial activation [42,43].
Preterm WMI is a complex sequential process. Increasing evidence suggests that activated microglia play a critical role in the pathogenesis of WMI. Microglia can be activated by both infectious and hypoxic-ischemic insults, the 2 major risk factors for WMI. We have previously shown that activated microglia instigate not only OPC damage but also differentiation arrest, which is considered a major mechanism underlying myelination deficits in WMI as well as in adult demyelinating disorders such as multiple sclerosis. Therefore, anti-inflammatory reagents are also under investigation for multiple sclerosis treatment. For WMI in preterm infants, several drugs targeting neuroinflammation are being studied in animal models [44][45][46][47]. For example, it was shown that melatonin promotes OL maturation through its specific receptors [48], with no significant effect on proliferation. Minocycline given immediately after cerebral hypoperfusion promoted OPC proliferation and decreased apoptosis. Though several immunomodulatory therapies are commercially available, their safety and efficacy in preterm neonates, especially with respect to the developing brain, are unknown. In the current study, we found that AZ provides strong protection of OPCs against cytotoxicity by LPS-activated microglia, as indicated by a reduction in activated caspase-3. These protective effects are likely due to the inhibitory effects of AZ on pro-inflammatory cytokine release. Importantly, we demonstrated that AZ not only prevented LPS-MCM-induced OPC death but also promoted their differentiation. The ultimate goal of therapeutics in WMI is to improve myelination and functional outcomes of preterm infants. OPCs are the predominant OL lineage cells in the embryonic period; they migrate to the developing white matter and differentiate into mature OLs, which ultimately form the myelin sheath around axons.
Recent human studies indicate that disturbances in OPC maturation rather than cell loss are the principal underlying cause for myelination failure in WMI [49,50]. Our study found that AZ treatment prevented LPS-MCM-induced OL lineage progression and downregulation of MBP, suggesting that AZ could restore OL differentiation and potentially myelination ability in an inflammatory environment.
In a recent study using a neonatal rodent model of hypoxic-ischemic brain injury, Barks et al. [51] showed that AZ improved functional and neuropathology outcomes. The authors hypothesized that anti-inflammatory mechanisms were likely involved in AZ-mediated neuroprotection, but those mechanisms were not evaluated in that study. Thus, further in vivo studies using neonatal WMI models are merited to confirm our in vitro findings.
In summary, we demonstrated that AZ not only reduced pro-inflammatory cytokine release, in association with inhibition of the NF-κB pathway in microglia, but also protected OPCs against damage and differentiation arrest following exposure to LPS-activated microglia. Thus, this proof-of-concept study provides first-hand evidence that AZ, and possibly other macrolides, may be valuable anti-inflammatory candidates for protecting the developing brain against inflammatory insults.
Statement of Ethics
This study is approved by our Institutional Animal Care and Use Committee (IACUC), protocol #1177B.
Exploring the “Black Box” of Recommendation Generation in Local Health Care Incident Investigations: A Scoping Review
Background Incident investigation remains a cornerstone of patient safety management and improvement, with recommendations meant to drive action and improvement. There is little empirical evidence about how—in real-world hospital settings—recommendations are generated or judged for effectiveness. Objectives Our research questions, concerning internal hospital investigations, were as follows: (1) What approaches to incident investigation are used before the generation of recommendations? (2) What are the processes for generating recommendations after a patient safety incident investigation? (3) What are the number and types of recommendations proposed? (4) What criteria are used, by hospitals or study authors, to assess the quality or strength of recommendations made? Methods Following PRISMA-ScR guidelines, we conducted a scoping review. Studies were included if they reported data from investigations undertaken and recommendations generated within hospitals. Review questions were answered with content analysis, and extracted recommendations were categorized and counted. Results Eleven studies met the inclusion criteria. Root cause analysis was the dominant investigation approach, but methods for recommendation generation were unclear. A total of 4579 recommendations were extracted, largely focusing on individuals’ behavior rather than addressing deficiencies in systems (<7% classified as strong). Included studies reported recommendation effectiveness as judged against predefined “action” hierarchies or by incident recurrence, which was not comprehensively reported. Conclusions Despite the ubiquity of incident investigation, there is a surprising lack of evidence concerning how recommendation generation is or should be undertaken. Little evidence is presented to show that investigations or recommendations result in improved care quality or safety. 
We contend that, although incident investigations remain foundational to patient safety, more enquiry is needed about how this important work is actually achieved and whether it can contribute to improving quality of care.
The "Black Box" of Recommendation Generation

Since the inception of the patient safety "movement," efforts to improve patient safety within hospitals have relied heavily on the retrospective investigation of adverse events. 1 Retrospective incident investigations as a mechanism for safety improvement are founded on an interpretation of safety theory, which proposes that errors are multifactorial in nature and that identifying and addressing organizational latent failures through investigation and recommendations will reduce future recurrence. 2,3,6,7 This interest has occurred in parallel with the establishment of national-level independent investigatory bodies (e.g., HSIB in the UK, Norwegian Healthcare Investigation Board in Norway), 8,9 and, in the UK, an ever increasing number of public inquiries and an ever expanding set of associated recommendations (e.g., Kirkup, 10 Ockenden, 11 Infected Blood Inquiries 12 ). Therefore, exploring the act of recommendation generation is of increasing relevance as the number of recommendations across both local and national investigation activity grows.
Although there are a plethora of aims and processes for investigations, a consistent feature is the production of recommendations. Despite 3 decades of incident investigation activity in health care, 13 few studies have critically examined the process. 5,14 In addition to the lack of empirical work examining recommendation generation, there is a lack of practical guidance on the generation of recommendations. 6 One systematic review used a modified version of the National Institute for Occupational Safety and Health hierarchy of risk controls to categorize the recommendations from included studies, 5,15 concluding that 80% of recommendations were "weak," that is, unlikely to result in significant improvements in safety or risk reduction. Furthermore, Hibbert and colleagues 16 undertook a retrospective study, following investigations within an Australian regional health system. The study used and modified the U.S. Department of Veterans Affairs action hierarchy (AH) to categorize recommendations as strong, medium, or weak, and concluded that only a small number of recommendations were strong; the most common types of recommendations involved reviewing or enhancing policies/guidelines/documentation as well as training and education. 16 It is important to note that these issues extend beyond health care. Indeed, evidence suggests that a lack of guidance and a plethora of other sociotechnical factors impede the generation, implementation, and evaluation of recommendations across safety investigations in contexts such as rail, maritime, and nuclear. 6,17
Recommendation Generation Within Local Health Care Investigations
Despite the centrality of incident investigation and recommendation generation within patient safety policy globally, there is a surprising lack of understanding about what actually happens in local health care settings with respect to this important activity. In particular, there is a lack of empirical focus and consensus about recommendation generation by people conducting investigations at the local health care organization level. 4,13 This review therefore aims to examine the extant empirical knowledge about this issue. 20
Scoping Review Aims
The purpose of this review was to consider the following questions, concerning internal hospital investigations: (1) What approaches to incident investigation are used before the generation of recommendations? (2) What are the processes for generating recommendations after a patient safety incident investigation? (3) What are the number and types of recommendations proposed? (4) What criteria are used, by hospitals or study authors, to assess the quality or strength of recommendations made?
METHODS
We conducted a scoping review, following the preferred reporting items for systematic reviews and meta-analyses extension for scoping reviews guidance. 21
Sources and Searches
Searches were performed on February 28, 2019, and January 30, 2021, using MEDLINE, EMBASE, PsychINFO, and CINAHL. Search terms were iteratively developed to capture the key phases of incident investigation, including terms for the incident, the investigation, and subsequent recommendations (see Appendix 1 for search terms, http://links.lww.com/JPS/A565). Searches were restricted to the English language and to studies published since 1999, when the Institute of Medicine's seminal report, To Err Is Human, was published, 22 prompting greater focus on patient safety.
Study Selection
The aim of this review was to examine the routine investigation and recommendation generation processes that occur in hospitals.
Studies were included if they reported on a series of incidents occurring in the hospital, which were chosen for investigation by hospital-based staff, who also generated subsequent recommendations.Studies reporting on incidents from any clinical context or level of harm were included.
Studies were excluded if they reported data from the following:

1. Community, primary care, or primarily mental health care
2. Investigations/recommendations carried out or proposed outside of a hospital, for instance, by an external research team or regional organization
3. Investigations primarily carried out for the purposes of research
4. Studies not published/peer-reviewed (e.g., conference papers)

Searches yielded 15,010 articles. Article titles and abstracts were reviewed by W.L. A random sample of 5% (n = 720) was screened independently by both J.O.H. and R.L. to check congruence. A total of 246 articles were selected for full-text review. Full-text screening was undertaken by W.L., with 10% independently screened by each of J.O.H. and R.L. (n = 20). Any discrepancies were discussed and resolved between authors. Eleven articles met the inclusion/exclusion criteria (all agreed by W.L., J.O.H., and R.L.) and contributed to the review (Fig. 1). Regular meetings with the other author (C.V.) allowed discussion of article eligibility.
Data Extraction and Quality Assessment
The purpose of the review was to examine the nature of recommendations proposed within hospitals, which was not the primary aim of all the included studies, but those included did contain empirical data on recommendations.
We assessed study quality using the Quality Assessment for Diverse Studies (QuADS) tool. 23 This tool is a well-cited approach to assessing the quality of methodologically heterogeneous studies, which demonstrates reliability and validity. 23,24 After discussion of the application of the tool and the relevance of quality scoring by all the authors, W.L. reviewed and scored all included articles. A random sample (n = 4 [36%]) of studies was independently reviewed and scored by J.O.H. and R.L., with disagreements resolved through discussion.
Data Synthesis and Analysis
To address research questions 1, 2, and 4, we undertook content analysis of the included studies using 4 stages: decontextualization, recontextualization, categorization, and compilation. 25 First, the authors read and familiarized themselves with the included studies before extracting "meaning units" of text relevant to the aims of the review (decontextualization). After extraction of meaning units, the remaining article text was checked for further relevant content (recontextualization). Next, the extracted meaning units were grouped into specific areas relevant to each research question, and the word count was reduced without losing meaning or content (categorization). The research questions were answered by condensing the extracted text using the original study terms and language, as well as by providing numerical counts of how often content was reported across the studies (compilation).
To address research question 3, recommendations from the included studies were discussed by all the authors across 2 meetings and assigned to the core categories of the Action Hierarchy (AH),27-29 then counted, to report frequency. If, after discussion, it was felt that a recommendation or category of recommendations did not fit into one of the AH categories, a new category was created and agreed.
RESULTS
The characteristics of included studies (n = 11) are summarized in Table 1.Included studies contained 4680 recommendations from 2818 investigations carried out across 171 hospitals.
Country of Origin
Included studies were conducted in the United States (n = 4), the United Kingdom (n = 2), and Australia (n = 2), with one each from the Netherlands, Brazil, and Hong Kong.
Clinical Context and Incident Harm
Studies reported data from across all clinical specialties (n = 6), pharmacy/medication (n = 1), anesthesia and intensive care (n = 2), and pediatric care (n = 2). Incidents reported within studies varied in their type (e.g., delay in care, fall, dispensing of medication) and resulting harm (see Table 1 for more detail).
Quality Assessment
The included studies demonstrated an average QuADS score of 56% (range, 26%-69%). Five of 11 studies lacked theoretical underpinning, such as the discussion of an accident causation model. Half of the studies did not report, in sufficient detail, the justification of sampling or the selection of data collection tools. Six studies showed no evidence that research stakeholders had been involved in their planning or conduct. Four studies had limited or no discussion of their strengths or limitations. No studies were excluded based on quality.
RQ1) The Approaches to Incident Investigation Used Before the Generation of Recommendations

As part of the investigation process, 3 studies reported interviewing staff,33,36,37 one of which specified that incidents were reconstructed from a median of 6 interviews (range, 3-15).36 One study reported that parents of children involved in incidents were interviewed "if felt to be useful," and this occurred in 2 of 17 incidents.36 Four studies reported on the time spent undertaking investigations. This was highly variable, ranging from 3 to 90 hours.26,34,36,37 Three studies reported that investigations should be completed within a set period of time, ranging from 30 to 60 days,28,30,33 although they did not specify whether this was from when the incident occurred or was reported, or from when the decision to investigate was made.
RQ2) The Processes for Generating Recommendations After A Patient Safety Incident Investigation
None of the included studies reported using specific tools or methods for recommendation generation. One article reported that staff and parents were invited to suggest recommendations, whereas none of the remainder reported this kind of stakeholder involvement.36 Eight studies proposed that recommendations should prevent incident recurrence16,27,28,30,33,34,36,37 and eliminate, mitigate, or reduce a risk, hazard, or "root cause."28,30,33,34 No purpose or aim for recommendations was stated in the remaining 3 studies.
RQ3) The Number and Types of Recommendations Proposed
A variety of terms were used to describe the recommendations generated after investigations. We present these terms in Table 2; because the terms were not clearly defined within the studies, we could not determine differences or similarities between them and have therefore reported them as written. A total of 4579 recommendations were extracted from 10 included studies (Table 3), with an average of 3.7 (range, 1-5) per investigation. Recommendations were not extracted from the 11th included study because of insufficient detail to enable categorization.34 Six studies assigned recommendations to predetermined categories based on (i) the U.S. Department of Veterans Affairs' criteria or the AH,16,26-28 (ii) the factors influencing clinical practice devised by Woloshynowych et al,3,36 or (iii) the "hierarchy of intervention effectiveness" (people versus system focused).34 Education or training represented the most common recommendation (27.2% [n = 1257]), followed by new procedure/memorandum/policy (15% [n = 676]), change of process or routine (10.7% [n = 500]), and adjustment/improvement to policy or guideline (6.7% [n = 306]). Fourteen percent of the extracted recommendations were too vague or unclear to categorize: 656 recommendations were categorized as "vague/unclear" either by the authors of the included studies or by the authors of this review during analysis. Examples of vague/unclear recommendations included "Medication incident action plan implemented" (n = 3),32 "policy, procedure and process actions" (n = 5),30 and "provide counseling" (n = 280).31 Table 3 shows the full breakdown of recommendations by category; categories 1 to 26 are from the AH,16,26-28 and categories 27 to 36 are those proposed by the study authors.
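The category tallies above can be recomputed from the reported counts. A minimal sketch follows; the recomputed shares may differ slightly from the published percentages because of rounding and denominator choice, and only the largest categories are included.

```python
# Sketch recomputing category shares from counts reported in the
# review (Table 3). Shares may differ slightly from the published
# percentages due to rounding and denominator choices.

recommendation_counts = {
    "education or training": 1257,
    "new procedure/memorandum/policy": 676,
    "change of process or routine": 500,
    "adjustment/improvement to policy or guideline": 306,
    "vague/unclear": 656,
    # remaining categories omitted for brevity
}

TOTAL_EXTRACTED = 4579  # recommendations extracted across 10 studies

shares = {
    category: round(100 * n / TOTAL_EXTRACTED, 1)
    for category, n in recommendation_counts.items()
}

for category, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{share:5.1f}%  {category}")
```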
RQ4) Criteria Used to Assess the Quality or Strength of Recommendations Made
Two of 11 articles reported that the original internal hospital investigations made judgments of recommendation "quality" or "strength."30,37 One study reported that the hospital prospectively tagged incidents to identify trends and thereby monitor for process improvements, although it did not report any data in relation to this.37 Another study reported that the effectiveness of implemented actions (n = 277) was rated by local managers as "much better" (47.4%), "better" (37.0%), "same" (7.4%), "worse" (0%), or not reported or measured (8.2%).30 Although none of the studies provided comprehensive data on incident recurrence, one study reported that similar incidents did recur despite multiple investigations.33 The included studies, in secondary analysis, used a range of terms or phrases to "judge" recommendations, as follows.
• Effectiveness (Hibbert et al,16 Kwok et al,28 Corwin et al,30 Figueiredo et al,31 Kellogg et al,33 van der Starre et al,36 Robbins et al34)
• Strength (Hibbert et al,16 Morse and Pollack,26 Hamilton et al,27 Kwok et al,28 Kellogg et al33)
• Whether implemented (Morse and Pollack,26 Hamilton et al,27 Corwin et al,30 Kellogg et al,33 van der Starre et al36)
• Aimed at system-level improvements or modifying processes (Morse and Pollack,26 Kwok et al,28 Kellogg et al33)
• Likelihood they would prevent incident recurrence (Morse and Pollack,26 Kellogg et al,33 van der Starre et al36)
• Quality (Morse and Pollack,26 Robbins et al34)
• Efficacy (Hamilton et al27)
• Innovation (Robbins et al34)
• Level of impact (Morse and Pollack26)

One study referenced a "Model of Sustainability and Effectiveness in RCA Solutions,"33,38 whereas another reported the effectiveness of recommendations according to the "Hierarchy of Intervention Effectiveness," which proposes that "system-focused changes have greater impact."34,35 One article commented on the likelihood of recommendations preventing incident recurrence,36 based on a classification of recommendation strength (weak, medium, strong) proposed by the New South Wales Root Cause Analysis Review Committee.39
DISCUSSION
To the authors' knowledge, this represents the first review of the extant empirical evidence for the practice of generating recommendations in hospitals, specifically examining how and what recommendations were generated, as well as the way in which their effectiveness was judged. This process is central to efforts to improve patient safety and health care quality globally.
Our review highlights the paradoxical situation that, despite the ubiquity of recommendation generation, very little is known about it in practice. Our findings suggest that, although RCA dominates as the approach to investigation, no specific tools or approaches are used to generate recommendations. Recommendations focus on training or on adding or improving policies; in other words, they largely target staff knowledge and skills.
There is a lack of agreement in the literature on how the effectiveness of recommendations should be judged, meaning that there is very little understanding of what makes a "good" recommendation. These findings raise some important issues, which we will address in turn.
Recommendation Generation Is Confused and Unclear
The variety of terms used to describe recommendations (Table 2) and the lack of consensus on categorization suggest differences in vision and purpose at best, and confusion and disagreement at worst. Although this review provides some steer in terms of the espoused investigation techniques, the actual process by which investigation outcomes result in specific recommendations remains opaque. We found that, beyond the investigators, there are committees or teams within hospitals, as well as within local or regional organizations, that review investigations and their findings, although what role these groups had in selecting or modifying recommendations is unclear. Studies in the wider literature have attempted to explore this process in practice. Braithwaite et al40 found a number of challenges to RCA, such as time constraints, lack of resources, and unwilling colleagues. Another study suggested that recommendations may actually be related to other ongoing improvement work; that is, the incident was used to support existing agendas rather than to generate new findings.41 Furthermore, an ethnography of investigations identified attempts by investigators to manage scrutiny and maintain reputations, and concluded that a failure to appreciate the complex organizational agendas as well as the social and political influences on recommendation generation would likely hamper improvements in patient safety.42 Beyond health care, studies of investigations in other domains, such as nuclear and rail, have demonstrated that the design of approaches to investigation and the associated manuals lack emphasis or detail on the generation and evaluation of recommendations.6 Another cross-domain study identified a large number of cognitive, political, and contextual factors that influence the investigation and recommendation generation process, such as cost-benefit analysis, the willingness of stakeholders to engage, or the experience or knowledge base of the investigator.
New approaches and tools for recommendation generation44-47 are more likely to be successful if adapted and designed relative to the unique and complex context of health care.48,49 Further research to understand the reality of the movement from investigation to recommendation generation is therefore important.
Recommendations Are Classified as Weak and Lack System Focus
This review identified that less than 7% of the extracted recommendations might be considered "strong" or system-focused, such as standardizing equipment, making architectural changes, or simplifying processes. Our findings provide further evidence of the continued tendency toward "weaker" recommendations that focus on improving individuals' behavior and practice rather than the wider system deficiencies that contribute to incidents. This tendency, shown in numerous studies from across the globe,5,50-57 suggests explanatory reasons beyond national culture or specific differences in health care systems and is completely at odds with health care policy and safety research.3,29,39 Furthermore, it would suggest that, globally, health care organizations may have some way to go toward achieving a more just culture, with this focus on weaker, individual-focused recommendations both reflecting this and serving to reinforce it.2 Root cause analysis and the frameworks used to support investigation have themselves been identified as narrowing the view of causation4 or giving greater attention to causative factors relating to individuals.58 With a tendency for investigations to identify individual factors,58 it is perhaps not surprising that recommendations are targeted at the same level. Other reasons for a lack of system-level recommendations include a lack of investigator training or expertise,5 a lack of health care-tailored guidance,3 and the difficulty of designing and implementing recommendations at the system level.15,48,61-63

It Is Not Clear How to Judge Recommendations

Although the focus of recommendations at the weaker individual level has been widely challenged, a further compounding problem with recommendation generation is the lack of agreement on how to judge their effectiveness and on what makes a "good" recommendation.
The range of terms, in our included studies, such as "strength," "quality," "sustainability," and "implementability," indicates the complex nature of judging recommendations.Our review found 2 broad approaches: (i) the use of predefined hierarchies of recommendation effectiveness and (ii) assessing the effectiveness of recommendations over time.
These hierarchies, largely originating from non-health care settings,48,67 are used in health care with minimal empirical evidence. Before this review, there had been challenges to the use of hierarchies to predict recommendation effectiveness,47,48 with arguments that recommendations should instead be judged on how well they align with the identified risks and context,46 their likelihood of effecting necessary change,68 or the level of the system targeted for change.47 Our review suggests that hierarchies may not yet be widely used in practice, but with the growing number of variations and the lack of consensus, they have the potential to cause confusion for hospital safety teams looking to adopt evidence-based approaches. Beyond the need for empirical evaluation of these options, we suggest that future research will also need to consider their practical application in health care.
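One way such hierarchies are operationalized is by tagging each recommendation category with a strength tier and computing the share of system-focused actions. Below is a sketch under assumed tier assignments; the mapping and the counts are illustrative only, not the review's data.

```python
# Illustrative sketch (assumed tier mapping, not the review's data):
# applying a hierarchy of intervention effectiveness by tagging each
# recommendation category as system-focused ("strong"), intermediate,
# or person-focused ("weak"), then computing the share of strong ones.

STRENGTH_TIERS = {
    # stronger, system-focused actions
    "standardize equipment": "strong",
    "simplify process": "strong",
    "architectural/physical plant change": "strong",
    # intermediate actions
    "checklist/cognitive aid": "intermediate",
    "redundancy": "intermediate",
    # weaker, person-focused actions
    "education or training": "weak",
    "new policy/procedure": "weak",
    "counseling": "weak",
}

def share_strong(counts: dict[str, int]) -> float:
    """Percentage of recommendations whose category is tier 'strong'."""
    total = sum(counts.values())
    strong = sum(n for cat, n in counts.items()
                 if STRENGTH_TIERS.get(cat) == "strong")
    return round(100 * strong / total, 1)

# Toy counts shaped like the review's finding that <7% were "strong".
toy_counts = {
    "education or training": 1257,
    "new policy/procedure": 676,
    "checklist/cognitive aid": 120,
    "standardize equipment": 90,
    "simplify process": 45,
}

print(share_strong(toy_counts))  # 6.2, echoing the <7% finding
```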
The second approach to judging recommendation effectiveness relies on "post-hoc" measures, more specifically assessing what difference is made to processes and outcomes, as well as to future incident occurrence. In problem solving, determining the effectiveness of solutions is a key step.67 There is a surprising absence of post-hoc measures reported within the included studies, with none comprehensively reporting rates of incident recurrence. With "the prevention of incident recurrence" being the most commonly quoted reason for incident investigation, it is notable that these data are lacking within this review, as well as in the wider literature.4,5,43 Incident recurrence may, in any case, be a poor marker of investigation success if reporting remains unreliable.70,71 We contend that more research is needed to consider specifically which measures are appropriate for measuring recommendation or investigation effectiveness.
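A minimal example of such a post-hoc measure, comparing the rate of similar incidents before and after recommendations were implemented, might look like the following sketch. The dates and the function are ours; as noted above, such data were rarely reported, and recurrence is a weak marker if reporting is unreliable.

```python
# Hedged sketch of a simple post-hoc measure: the monthly rate of
# similar incidents before and after recommendation implementation.
# Toy data only; not drawn from the included studies.

from datetime import date

def monthly_rate(incident_dates: list[date], start: date, end: date) -> float:
    """Incidents per month within the half-open window [start, end)."""
    months = (end.year - start.year) * 12 + (end.month - start.month)
    in_window = [d for d in incident_dates if start <= d < end]
    return round(len(in_window) / months, 2)

similar_incidents = [
    date(2022, 1, 10), date(2022, 3, 2), date(2022, 5, 21),
    date(2022, 11, 4),  # one recurrence after implementation
]
implemented = date(2022, 7, 1)

before = monthly_rate(similar_incidents, date(2022, 1, 1), implemented)
after = monthly_rate(similar_incidents, implemented, date(2023, 1, 1))
print(before, after)  # 0.5 0.17
```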
Although Reason's organizational accident model is central to much of health care investigation practice,2,3 the included studies demonstrate a lack of translation of the complexity and nuance of the original model. For instance, the recommendations largely focus on reducing error rates rather than on putting in place defenses that more broadly improve system safety and quality or reduce the impact of an error if it does occur. The studies included within this review provide no evidence that carrying out investigations and generating recommendations improves the quality or safety of care. Furthermore, there seems to be little consideration of the potential negative consequences of recommendations themselves.
Limitations
Despite the volume of incident reporting and investigation within health care, there is a relative lack of peer-reviewed research with empirical data from "real-world" hospital investigations. Relevant studies may have been excluded if there was ambiguity as to whether they reported data from usual practice within hospitals, as this was the focus of the review. Because of the lack of studies exploring the specific aims of this review, the included studies' aims were not necessarily aligned with those of the review; rather, relevant empirical data were extracted. Many of the included studies do not report the entire investigation process in detail or the effect of recommendations, which has limited our ability to answer some of the review questions. It was not possible to analyze recommendations at the incident level, which would have allowed us to identify the proportion of recommendations at the individual and system levels; we recognize that this would be an important area for future research. Because we have focused on internal hospital investigations, as opposed to those at a regional or national level, this may be one reason that fewer recommendations were observed targeting contributory factors or organizations external to the hospital; internal hospital investigations may be more likely to focus on what they perceive they can change.17 This review has focused on the generation of recommendations, but no assumption is made that "good" recommendations will necessarily improve safety. Implementation of recommendations, with its challenges and barriers, is another important factor to consider but was beyond the scope of this review.
CONCLUSIONS
The aim of this review was to explore hospitals' approaches to incident investigation and recommendation generation, the types of recommendations proposed, and how their effectiveness is judged. Although RCA dominates as the approach to investigation, how recommendations are selected remains unclear. Recommendations are generally classified as weak, focusing on improving individuals' skills, knowledge, and understanding so as to change behavior, rather than addressing deficiencies in the systems in which staff work. Our review demonstrates a lack of evidence and consensus regarding how recommendations should be judged for effectiveness. We argue that greater clarity is needed in terms of the purpose of investigations and the language used to describe them. Furthermore, empirical work needs to explore and explicate how to generate appropriate recommendations, as well as how these approaches are adopted within the complex sociotechnical context of health care.
Finally, we suggest that, although incident investigations remain foundational to patient safety measurement and improvement, more enquiry is needed into their effectiveness and impact. The generation of recommendations is only one step in the process. Both policy and practice will also need to engage with the growing body of literature and adopt a more evidence-based approach to investigation and recommendation selection.
1. What approaches to incident investigation are used before the generation of recommendations?
2. What are the processes for generating recommendations after a patient safety incident investigation?
3. What are the number and types of recommendations proposed?
4. What criteria are used, by hospitals or study authors, to assess the quality or strength of recommendations made?
TABLE 1. Characteristics of Included Studies (*Lacked detail to enable categorization and therefore not included in Table 3.)

TABLE 2. Terms Used to Describe the Recommendations After Investigations

TABLE 3. Recommendations Extracted From Included Studies (Note: hierarchies of intervention effectiveness generally propose that recommendations targeted at the individual level, e.g., training and reminders, are weaker than those at the system level, e.g., equipment design.)
|
SENSORY PROPERTIES OF ANALOG COFFEE FROM BANANA PEELS
Article history: Received 27 December 2019; Revised 19 June 2020; Accepted 28 August 2020

ABSTRACT: Banana peel has the potential to be used for analog coffee. The sensory properties of analog coffee determine consumer acceptance and its potential for production. This research aimed to study the effect of banana peel maturity and baking (oven) time on the sensory characteristics of banana peel analog coffee. Ripe and unripe Kepok banana peels were dried and then baked in an oven for 5, 10, or 15 minutes per 50 grams. The peels were then reduced in size and sieved through a 60-mesh sieve. Sensory evaluation used a hedonic test performed by untrained panelists and a descriptive method performed by trained panelists. The results showed that the panelists' preference for the color and aroma of the analog coffee powder increased with baking time. The preference for ripe banana peel analog coffee was greater than that for unripe banana peel. The panelists' preference for the color, aroma, and taste of the brewed analog coffee also increased with longer baking time. The flavor of brewed analog coffee from ripe banana peel was stronger than that from unripe peel. The most preferred banana peel coffee was from ripe banana peel baked at 180°C for 15 minutes, characterized by a darker powder color and a stronger coffee aroma. The most preferred brewed analog coffee was that with the darkest color, the strongest coffee aroma, and the strongest bitter taste.

Keywords: analog coffee; banana peel; sensory; descriptive
INTRODUCTION
Banana peels constitute 30% of the fruit, an inedible portion (González-Montelongo et al. 2010). This part is generally thrown in the trash, whereas in industries that process bananas, the peel is usually collected and taken by farmers to be used as feed. Developing banana peels into food products is therefore a worthwhile challenge.
Banana peels have been made into analog coffee using peels from both ripe and unripe bananas (Mentari et al. 2019; Sofa et al. 2019). Making analog coffee consists of reducing the size of the peel, drying, baking, grinding, and sieving. Banana peel coffee has been proven to contain phenolic antioxidant compounds and to have antioxidant activity, as evidenced by testing with the DPPH radical (Mentari et al. 2019).
Therefore, banana peel coffee has the potential to be developed on a production scale. However, a product is only worth trading if consumers like it (Sidel & Stone 1993), and the role of consumers is very important in testing product acceptance in the market (Costa & Jongen 2006). This research aims to study consumer preference, represented by panelists, for banana peel coffee, and to explore its descriptive sensory profile.
Materials and Tools
The main ingredients were Kepok banana peels (unripe and ripe) obtained from Karangawen, Demak, Central Java. Materials for sensory analysis were drinking water and reference standards for descriptive testing. The tools used were a cabinet dryer, a blender, a 60-mesh sieve, and shot glasses.
Experiment Design and Data Analysis
The research used a randomized factorial design, with the first factor being the level of fruit maturity (unripe and ripe) and the second factor being the baking time (5, 10, and 15 minutes). Hedonic data were analyzed using ANOVA, while descriptive data were analyzed using the Pearson product-moment correlation. Both analyses were performed using SPSS 16 software.
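The 2 x 3 factorial analysis described above (the study itself used SPSS) can be sketched in plain Python as a balanced two-way ANOVA on hedonic scores. The data and layout below are illustrative only, not the study's measurements.

```python
# Minimal two-way ANOVA sketch for the 2 x 3 factorial design
# (maturity: unripe/ripe; baking time: 5/10/15 min) with toy hedonic
# scores on the 1-6 scale. Illustrative only; not the study's data.

from itertools import product

maturity = ["unripe", "ripe"]
baking = [5, 10, 15]

# Toy replicate scores per (maturity, baking time) cell.
data = {
    ("unripe", 5): [3, 4, 3], ("unripe", 10): [4, 4, 3], ("unripe", 15): [4, 5, 4],
    ("ripe", 5): [4, 4, 5],   ("ripe", 10): [5, 5, 4],   ("ripe", 15): [5, 6, 5],
}

def mean(xs): return sum(xs) / len(xs)

all_scores = [y for cell in data.values() for y in cell]
grand = mean(all_scores)
n = 3                              # replicates per cell
a, b = len(maturity), len(baking)  # factor levels

mean_a = {m: mean([y for (mm, _), ys in data.items() if mm == m for y in ys]) for m in maturity}
mean_b = {t: mean([y for (_, tt), ys in data.items() if tt == t for y in ys]) for t in baking}
cell_mean = {k: mean(v) for k, v in data.items()}

# Sums of squares for main effects, interaction, error, and total.
ss_a = b * n * sum((mean_a[m] - grand) ** 2 for m in maturity)
ss_b = a * n * sum((mean_b[t] - grand) ** 2 for t in baking)
ss_ab = n * sum((cell_mean[(m, t)] - mean_a[m] - mean_b[t] + grand) ** 2
                for m, t in product(maturity, baking))
ss_err = sum((y - cell_mean[k]) ** 2 for k, ys in data.items() for y in ys)
ss_tot = sum((y - grand) ** 2 for y in all_scores)

ms_err = ss_err / (a * b * (n - 1))
print(f"F(maturity)    = {ss_a / (a - 1) / ms_err:.2f}")
print(f"F(baking time) = {ss_b / (b - 1) / ms_err:.2f}")
print(f"F(interaction) = {ss_ab / ((a - 1) * (b - 1)) / ms_err:.2f}")
```

The decomposition SS_A + SS_B + SS_AB + SS_error = SS_total holds exactly for a balanced design, which makes a convenient self-check.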
Banana Peel Preparation
Kepok banana peel was separated from the fruit flesh, reduced in size (2 x 2 cm), and spread in a pan. It was then dried in a cabinet dryer at 60°C for 24 hours. The dried samples were stored in airtight plastic until the treatment stage.
Making Banana Peel Coffee
Fifty grams of dried banana peels were baked in an oven at 180°C for 5, 10, or 15 minutes. The sample was ground with a dry blender and then sieved through a 60-mesh sieve. The analog coffee powder was stored in aluminum foil packaging until analysis.
Sensory Hedonic Analysis
Consumer preference was assessed by the hedonic method with 25 untrained panelists. The panel was asked to rate banana peel coffee powder on the parameters of color and aroma, while brewed coffee was assessed on the parameters of color, aroma, taste, and overall liking. Scores ranged from 1 to 6, where 1 = dislike very much and 6 = like very much.
Sensory Descriptive Analysis
The descriptive profiles of the coffee powder and the brewed banana peel coffee were analyzed with the descriptive method. Twenty-five panelists were trained and introduced to the standards and intensity of each parameter. After the panelists were familiar with all the assessment attributes, the profiles of the coffee powder and brewed coffee were tested. The intensity scale ranged from 1 to 10. The assessment parameters for the powder were coffee aroma, banana aroma, caramel aroma, and dark color, while the brewing parameters were banana aroma, caramel aroma, coffee aroma, bitter taste, sweet taste, astringent taste, sour taste, and dark color.
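Aggregating the descriptive-panel ratings into a profile is essentially a per-attribute mean across panelists. A sketch with invented ratings on the 1-10 scale follows (panelist IDs and scores are hypothetical).

```python
# Sketch of aggregating descriptive-panel data: mean intensity per
# attribute across panelists, as used to build a sensory profile.
# Toy ratings, not the study's data.

brew_attributes = ["coffee aroma", "banana aroma", "caramel aroma",
                   "bitter taste", "sweet taste", "dark color"]

# panelist -> {attribute: intensity on the 1-10 scale}
ratings = {
    "P01": {"coffee aroma": 7, "banana aroma": 3, "caramel aroma": 5,
            "bitter taste": 6, "sweet taste": 4, "dark color": 8},
    "P02": {"coffee aroma": 6, "banana aroma": 2, "caramel aroma": 5,
            "bitter taste": 7, "sweet taste": 3, "dark color": 7},
    "P03": {"coffee aroma": 8, "banana aroma": 3, "caramel aroma": 4,
            "bitter taste": 6, "sweet taste": 4, "dark color": 9},
}

profile = {
    attr: round(sum(p[attr] for p in ratings.values()) / len(ratings), 1)
    for attr in brew_attributes
}

for attr, intensity in profile.items():
    print(f"{attr:13s} {intensity}")
```

In the study the resulting per-attribute means would be plotted as the spider/profile charts referenced in Figures 5 and 6.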
RESULTS AND DISCUSSION

Sensory Hedonic Characteristics
The hedonic test was carried out on banana peel coffee powder to determine the panelists' level of preference. The difference in maturity level and baking time had a significant influence on the hedonic scores of the banana peel coffee powders in several treatments (Figure 1). For coffee derived from unripe banana peels, the panelists' preference for powder color was significantly different from that for coffee from ripe banana peels. Baking time did not have a significantly different effect on panelists' preference for the color of unripe banana peel coffee powder. This differed from coffee from ripe banana peel, for which the panelists' liking score for powder color differed significantly with baking time. Panelists said they preferred the color of coffee produced from ripe banana peels.
The color of the dried banana peel turns light to dark brown after 5 and 10 minutes of baking, respectively, while a 15-minute bake produces a dark brown powder. This change to a brown, dark color is due to non-enzymatic browning: Maillard and caramelization reactions occur during baking at high temperatures. Banana peels contain reducing sugars such as glucose, fructose, and maltose (Chandraju et al. 2011). The total sugar content is 29%, with 2.4% glucose, 6.2% fructose, and 2.6% sucrose (Mohapatra et al. 2010).
Meanwhile, according to Emaga et al. (2007), ripe banana peel contains no sucrose but contains 15% glucose and 26% fructose; banana peels contain 38% free sugar (Emaga et al. 2011). The protein content of banana peels is 1.8%, with leucine, valine, phenylalanine, and threonine as the dominant amino acids and total amino acids ranging from 4.3% to 8.1% (Emaga et al. 2007). The sugars and amino acids in banana peels are precursors of the Maillard reaction and caramelization. The Maillard reaction occurs between reducing sugars and amino acids at high temperatures, whereas caramelization occurs in total sugar at high temperatures; both reactions produce brown compounds. The Maillard reaction produces melanoidin, which has a high molecular weight and causes brown, dark colors (Coghe et al. 2006). Increasing the temperature and lengthening the heating cause more 5-(hydroxymethyl)-2-furfural (HMF) to form; HMF condenses into high-molecular-weight polymers that give the brown color called melanoidin (Agila & Barringer 2012). Caramelization produces brown polymer compounds (Ajandouz et al. 2001). Heating sugar at high temperatures causes it to melt into a dark brown viscous liquid called caramel. Caramelization involves the formation of glucose and fructose anhydrides, which condense into caramelan and caramelen; these compounds condense further to form humin.

The difference in the maturity level of the banana peel caused the panelists' preference scores for the aroma of the coffee powder to differ significantly at baking times of 10 and 15 minutes (Figure 2). Baking time alone showed no significant difference in the aroma scores of the coffee powders. Panelists preferred the aroma of coffee powders produced from ripe banana peels.
The aroma of banana peel coffee is formed during the baking process: Maillard and caramelization reactions occur at oven temperature and produce volatile compounds that contribute to the scent of the coffee powders. Maillard reactions produce volatile compounds such as furans, pyrazines, pyrroles, oxazoles, thiophenes, thiazoles, and other heterocyclic compounds that contribute to aroma (Mottram 1994; Yanagimoto et al. 2002). Brewed coffee was made by dissolving 5 grams of coffee powder in 100 ml of water at 95°C, then stirring and filtering, and was served warm to the panelists. The panelists' preference for brewed banana peel coffee was tested on the parameters of color, aroma, taste, and overall liking (Figure 3). The difference in maturity level caused panelists to give significantly different preference scores for the color of the brewed coffee (Figure 3A). Baking time had a significantly different effect on panelists' preference for the color of brewed ripe banana peel coffee, while for brewed unripe banana peel coffee the effect was not significant. Panelists tended to like the color of brewed coffee from unripe banana peel.
The panelists' preference for the aroma of the brewed coffee was not significantly different between maturity levels (Figure 3B). Baking time also tended not to cause a significant difference in the panelists' preference for the aroma of brewed banana peel coffee. Even so, panelists tended to prefer the aroma of brewed ripe banana peel coffee over that of brewed unripe banana peel coffee.
The difference in maturity level did not have a significantly different effect on panelists' preference for the taste of brewed banana peel coffee (Figure 3C). Baking time likewise tended not to cause a noticeable difference in the panelists' preference for the taste of brewed unripe and ripe banana peel coffee. Even so, the preference score for the taste of brewed ripe banana peel coffee tended to be higher than that of brewed unripe banana peel coffee, showing that panelists preferred the taste of ripe banana peel coffee.
Overall, the panelists' preference for brewed ripe and unripe banana peel coffee was not significantly different (Figure 3D); differences in maturity level and baking time tended not to cause statistical differences in preference. However, the preference scores tended to be higher for brewed ripe banana peel coffee than for unripe, meaning that coffee brewed from ripe banana peel was preferred by panelists.
Powder Sensory Descriptive Characteristics
The descriptive profile of banana peel analog coffee was obtained by a descriptive test with trained panelists. Banana peel analog coffee powder has a sensory profile described by the parameters of color and aroma (Figure 5). The descriptive profile of unripe banana peel coffee powder shows a brown color that is not too dark (Figure 4), whereas ripe banana peel analog coffee powders tend to have a darker color. The content of sugars such as sucrose and reducing sugars in ripe banana peels may be higher than in unripe bananas (Emaga et al. 2007). Sucrose and reducing sugars undergo caramelization at oven temperatures of 180°C and above. In addition, reducing sugars react with amino acids contained in the peel in the Maillard reaction, which produces brown melanoidin, and carbon compounds such as cellulose in the peel undergo pyrolysis at high temperatures, turning the color blackish. Longer baking time causes the coffee color to darken; this is related to the internal temperature of the material, which increases with baking time. Longer exposure to heat causes more thermal degradation, so the caramelization reaction, the Maillard reaction, and pyrolysis occur more intensely, increasing the intensity of the brown, dark colors.
The coffee aroma increased in intensity with longer oven time for ripe banana peels. Many of the coffee's volatile aroma compounds can form through the Maillard reaction and caramelization in ripe banana peels. The banana aroma tends not to be dominant and decreases slightly in intensity with longer oven time, whereas the caramel aroma tends to be stable.
Brewed Sensory Descriptive Characteristics
Descriptive profiles of the banana peel coffee brews are shown for the color, aroma, and flavor parameters (Figure 6). The profile of the unripe banana peel analog coffee brews shows that a longer oven time causes the brew color to darken (Figure 6a). A longer oven time also intensifies the caramel and coffee aromas, while the banana aroma tends not to change.
The brewing profile of ripe banana peel coffee shows that the longer the oven time, the darker the brew color (Figure 6b). The dark color appears more intense than in brews from unripe banana peels (Figure 7). This is likely because ripe banana peel coffee contains more melanoidin, caramelan, and caramelen, and more burned carbon compounds; ripe banana peels contain more precursors of non-enzymatic browning, such as sugars and amino acids. The coffee aroma profile increases in intensity with oven time, likely because a longer oven time allows more aroma-forming reactions to occur and produce more volatile coffee aroma compounds. The bitter taste of the coffee also increases with oven time, likely due to the more intensive degradation of banana peel components during oven treatment.
The brews were prepared by dissolving 5 grams of coffee in 100 ml of hot water. Brewing dissolves the water-soluble compounds and leaves insoluble compounds such as pulp behind. The colored compounds dissolve, making the brew color more concentrated and dark (Figure 7). Brewing also intensifies the aroma, because volatile aroma compounds evaporate together with the water at the high temperature of the brewing water.
Correlation of Sensory, Physical and Chemical Properties
The relationships between the various parameters are indicated by Pearson correlation values (Table 1). The correlation analysis showed that the relationship between the hedonic scores of the powder and of the banana peel coffee brews was strong and significant for all parameters except taste, as indicated by coefficient values close to one. The coefficients are positive, meaning the relationship is synergistic: if the hedonic score of the powder increases, the hedonic score of the brew also increases, and vice versa.
The relationship between the hedonic scores of the powder and the descriptive profiles of dark color and coffee aroma of the powder was very close and significant, with a positive direction: as the intensity of the dark color and coffee aroma of the powder increases, the hedonic score also increases.
The relationship between the hedonic scores of the brews and the descriptive profiles of dark color, coffee aroma, bitter taste, and sour taste of the brews was also very close and significant, as indicated by coefficient values close to one. The positive coefficients mean that as the intensity of dark color, coffee aroma, bitter taste, and sour taste in the brew increases, the hedonic brew score also increases.
The phenol content was very closely and significantly related to the hedonic brew color, the hedonic color and aroma of the powder, the descriptive color and aroma profiles of the powder, and the descriptive dark color and caramel aroma profiles of the brew. The positive coefficients indicate that as the phenol content increases, these scores also increase.
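To illustrate the interpretation used above (a coefficient near +1 means the two attributes rise and fall together), the sketch below computes a Pearson correlation with SciPy. The paired scores are invented for illustration only and are not data from this study.

```python
from scipy.stats import pearsonr

# Hypothetical hedonic scores (not data from this study), chosen only to
# show a strong positive, i.e., synergistic, relationship: when the powder
# score rises, the brew score rises with it.
hedonic_powder = [3.1, 3.4, 3.9, 4.2, 4.6]
hedonic_brew = [3.0, 3.5, 3.8, 4.3, 4.5]

r, p = pearsonr(hedonic_powder, hedonic_brew)  # r close to +1, small p
```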
CONCLUSION
Based on the hedonic test, the most preferred treatment was banana peel coffee ovened at 180°C for 15 minutes. The powder from this treatment had the darkest color and the strongest coffee aroma, with banana and caramel aromas similar to the other treatments. The brew from this treatment had the darkest color, the strongest coffee aroma, and the most pronounced bitter taste compared to the other treatments.
A Neural Network Framework for Predicting the Tissue-of-Origin of 15 Common Cancer Types Based on RNA-Seq Data
Sequencing-based identification of tumor tissue-of-origin (TOO) is critical for patients with cancer of unknown primary lesions. Even if the TOO of a tumor can be diagnosed by clinicopathological observation, reevaluation by computational methods can help avoid misdiagnosis. In this study, we developed a neural network (NN) framework using the expression of a 150-gene panel to infer the tumor TOO for 15 common solid tumor cancer types, including lung, breast, liver, colorectal, gastroesophageal, ovarian, cervical, endometrial, pancreatic, bladder, head and neck, thyroid, prostate, kidney, and brain cancers. To begin with, we downloaded from The Cancer Genome Atlas the RNA-Seq data of 7,460 primary tumor samples across the above-mentioned 15 cancer types, with each cancer type having between 142 and 1,052 samples. Then, we performed feature selection by the Pearson correlation method, obtaining a 150-gene panel; the genes were significantly enriched in GO:2001242 Regulation of intrinsic apoptotic signaling pathway, GO:0009755 Hormone-mediated signaling pathway, and other similar functions. Next, we developed a novel NN model using the 150 genes to predict tumor TOO for the 15 cancer types. The average prediction sensitivity and precision of the framework are 93.36 and 94.07%, respectively, for the 7,460 tumor samples based on 10-fold cross-validation; for a few specific cancers, like prostate cancer, the prediction sensitivity and precision reached 100%. We also tested the trained model on a 20-sample independent dataset of metastatic tumors and achieved 80% accuracy. In summary, we present here a highly accurate method to infer tumor TOO, which has potential for clinical implementation.
INTRODUCTION
Worldwide, almost one in three cancer patients is clinically diagnosed with distant metastases. In most cases, primary and metastatic lesions are identified simultaneously; however, some primary tumors cannot be found even after systematic clinicopathological diagnosis (Tomuleasa et al., 2017). Cases with cancer of unknown primary (CUP) lesions account for approximately 3-5% of all newly diagnosed cancers (Richardson et al., 2015); due to its poor prognosis, CUP is the fourth-highest cause of cancer-related deaths worldwide (Pavlidis and Fizazi, 2005; Kamposioras et al., 2013). CUP patients are generally treated with non-selective empirical chemotherapy, which leads to a very low short-term survival rate (Kurahashi et al., 2013). Thus, identifying the primary site is critical for improving long-term survival in CUP patients, especially when considering cancer-type-specific targeted therapy (Hudis, 2007; Varadhachary et al., 2008; Hyphantis et al., 2013).
To identify the primary lesion of CUP, a systematic assessment is performed, consisting of physical examination, patient-history analysis, serum markers, radiological imaging, and immunohistochemical analysis. Immunohistochemical markers are very important for determining tissue-of-origin (TOO; MacReady, 2010; Molina et al., 2012; Oien and Dennis, 2012; Pavlidis and Pentheroudakis, 2012); however, the expressed markers may sometimes be non-specific (Handorf et al., 2013; Montezuma et al., 2013; Tothill et al., 2013). Recently, studies have shown that cellular-origin signatures, which are sufficiently retained in primary tissue, persist after primary cancer cells undergo dedifferentiation and colonization in different tissue types (Ma et al., 2005; Tothill et al., 2005). Molecular profiling is a promising technique that can improve primary-site diagnosis in CUP patients (Ma et al., 2005; Lazaridis et al., 2008; Meiri et al., 2012); it is based on expression microarrays and the quantitative real-time polymerase chain reaction (qRT-PCR) experimental platform (Ma et al., 2005; Lazaridis et al., 2008; Greco et al., 2012; Meiri et al., 2012).
In recent years, cancer classification based on gene expression data such as RT-PCR has attracted great interest and has been implemented in different studies (Lapointe et al., 2004; Mramor et al., 2007; Liu et al., 2008). Single studies are prone to laboratory-specific bias; they are usually limited to a relatively small number of samples and fail to yield novel markers for clinical application. However, applying Next Generation Sequencing (NGS) technology helps alleviate the issue of batch effects by providing gene expression datasets from multiple studies; thus, the integrative analysis of such data can serve as a basis for cancer classification. In this regard, establishing a robust classification model is a challenging task; bioinformatics feature selection techniques for establishing such models have been introduced in a previous review (Saeys et al., 2007). Support vector machines (SVMs) based on the recursive feature elimination (RFE) algorithm are embedded methods used for feature selection and classification modeling based on microarray gene expression data; one such study reduced 11,925 genes to 154 genes with definite biological significance (Xu et al., 2016). More than 20,000 genes were generated from NGS RNA-Seq data in other studies (Bhowmick et al., 2019); this number is almost twice as much as that from microarray gene expression data. Hence, RNA-Seq data from nine cancer types (lung, liver, colon, thyroid, prostate, bladder, kidney, brain, and skin) were analyzed with different algorithms, and Artificial Bee Colony (ABC) yielded better results than Ant Colony Optimization, Differential Evolution, and Particle Swarm Optimization. Among different cancer types, lower grade brain glioma had the highest accuracy (99.1%) based on the ABC algorithm (Bhowmick et al., 2019).
However, the robustness of feature selection and classification modeling methods still needs to be comprehensively evaluated; different algorithms might yield different results depending on their model (Chopra et al., 2010; Bhowmick et al., 2019). Therefore, it is necessary to design a robust classification algorithm based on NGS data that yields accurate cancer type classification and supplements clinical examination.
In the present study, genome-wide gene expression profiles were established based on comprehensive RNA-Seq data. The gene expression data of ∼8,000 tumor samples were used to identify gene signatures for 15 common human cancer types (lung, breast, liver, colorectal, gastroesophageal, ovarian, cervical, endometrial, pancreatic, bladder, head and neck, thyroid, prostate, kidney, and brain). To screen gene features and evaluate cancer classifiers, Pearson correlation feature selection and a neural network (NN) algorithm were implemented in this study to identify tumor origins.
RNA-Seq Datasets
NGS-based gene expression profiling data of 7,480 tumor samples were collected from The Cancer Genome Atlas (TCGA, release version v26), and the tissue origins of those samples were confirmed through histopathological analysis. The downloaded data comprised RNA-Seq data of 21 cancer types belonging to projects from the United States, all sequenced using the same protocols. Among them, melanoma had a distribution distinct from the other cancer types (80 samples from primary tumors and 352 from metastatic tumors) and was excluded. Thus, the expression profiles of 15 common cancer types (lung, breast, liver, colorectal, gastroesophageal, ovarian, cervical, endometrial, pancreatic, bladder, head and neck, thyroid, prostate, kidney, and brain) were studied in this work. The normalized expression data downloaded from TCGA provided the expression levels of 20,501 unique genes for the 15 chosen cancer types.
To perform the bioinformatics analysis in this study, the transcript levels of genes were normalized again to form a matrix with samples as rows and genes as columns. The normalization was done by dividing each gene expression value by the sum of the expression values of its sample. The normalized gene expression data were represented as a matrix with m rows and n columns, where m = 7,480 tumor samples and n = 20,501 unique genes.
For log transformation, we used log2 to transform the original dataset after replacing zeros with 0.1 × the global minimum. No further normalization was done after feature selection.
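A minimal sketch of the two transformations described above, assuming NumPy arrays with samples as rows and genes as columns. The zero floor is taken as 0.1 × the smallest nonzero value, which is an assumption: the text says "global minimum × 0.1", and a global minimum taken over all values including zeros would itself be zero.

```python
import numpy as np

def normalize_expression(X):
    """Divide each sample (row) by its total expression, as described above."""
    return X / X.sum(axis=1, keepdims=True)

def log_transform(X):
    """log2-transform after replacing zeros with 0.1 x the smallest
    nonzero value (an assumption; the text says 'global minimum x 0.1')."""
    floor = 0.1 * X[X > 0].min()
    return np.log2(np.where(X == 0, floor, X))
```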
Among all the samples, 7,460 were sampled from primary tumors; the remaining 20 were sampled from metastatic tumors.
Gene Feature Identification
To identify an optimal gene signature, we introduced a strategy of feature selection and multi-class classification modeling in this study. Following the mechanism of feature selection, the sets of genes were screened by the Pearson correlation algorithm (Hall, 1998; Saeys et al., 2007). This consisted of the following steps: (i) create a binary array of C columns for the m tumor samples, labeling a sample as "True" if it belongs to the cancer type and "False" otherwise, where C is the total number of cancer types and m is the number of samples; (ii) for each cancer type, calculate the correlation of each gene's expression level with the "True" labels, then sort the genes in decreasing order of correlation; (iii) take the most important signatures, i.e., the top N genes of each list, where N is an integer; and (iv) combine the C lists of top N genes and remove the redundant genes, generating a gene set. Gene expression values for this gene set were then extracted for further use.
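Steps (i)-(iv) above can be sketched as follows with NumPy. One detail is an assumption: the genes are ranked here by signed correlation (as the text's "decreasing order according to their correlation" suggests); ranking by absolute correlation would be an equally defensible reading.

```python
import numpy as np

def select_genes(X, labels, cancer_types, top_n):
    """Steps (i)-(iv): binarize labels per cancer type, rank genes by
    Pearson correlation with the binary indicator, keep the top N per
    cancer type, and merge the lists into one deduplicated gene set."""
    selected = set()
    for cancer in cancer_types:
        y = (labels == cancer).astype(float)            # (i) binary indicator
        Xc = X - X.mean(axis=0)                         # (ii) Pearson correlation
        yc = y - y.mean()
        corr = Xc.T @ yc / np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
        top = np.argsort(-corr)[:top_n]                 # (iii) top N by correlation
        selected.update(top.tolist())                   # (iv) merge and dedupe
    return sorted(selected)
```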
Feature Performance Assessment
We used a NN (Hinton, 1989) to train the classification model. The gene expression values were used as input signatures for the NN. The NN was designed with three layers: the input layer has N units, where N is the number of genes in the input matrix; the hidden layer has 50 units; and the output layer has 15 units, one per cancer type. The output layer of the NN was fed to the Softmax function to obtain the probabilities for each cancer type. To prevent overfitting, the L2 penalty was set to 0.0001. For comparison, we used logistic regression as a baseline method, with the parameter C set to 10,000. The algorithms were implemented using the scikit-learn package (Pedregosa et al., 2011).
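A scikit-learn sketch matching the setup described: one 50-unit hidden layer, L2 penalty (`alpha`) of 0.0001, and softmax outputs (`MLPClassifier` applies softmax automatically for multi-class problems, with one output unit per class inferred at fit time). The `max_iter` and `random_state` values are assumptions not stated in the text.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Three-layer network: N input units (one per gene), a 50-unit hidden
# layer, and softmax outputs (one per cancer type, taken from the labels
# at fit time). alpha is the L2 penalty stated in the text.
nn = MLPClassifier(hidden_layer_sizes=(50,), alpha=0.0001,
                   max_iter=500, random_state=0)

# Baseline model with the weak regularization stated in the text (C=10000).
baseline = LogisticRegression(C=10000, max_iter=1000)
```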
Gene Ontology Analysis
To perform the Gene Ontology (GO) analysis of the identified gene features, the GO consortium resource (Ashburner et al., 2000) was used. The enrichment result was generated by clusterProfiler, which performs a hypergeometric test between the tested genes and the gene sets in GO terms (Yu et al., 2012). The biological significance of the selected genes was examined by GO enrichment analysis to identify the most enriched biological-process terms. The Benjamini-Hochberg procedure was used to adjust the p-values.
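The actual analysis used clusterProfiler in R; the Python sketch below only illustrates the two statistics involved, the hypergeometric enrichment test and the Benjamini-Hochberg adjustment, under the usual definitions.

```python
import numpy as np
from scipy.stats import hypergeom

def go_enrichment_p(n_universe, n_term, n_selected, n_overlap):
    """P(X >= n_overlap): probability of at least this much overlap
    between the selected genes and a GO term's gene set by chance."""
    return hypergeom.sf(n_overlap - 1, n_universe, n_term, n_selected)

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / (np.arange(m) + 1)
    adj = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
    out = np.empty_like(adj)
    out[order] = np.clip(adj, 0, 1)
    return out
```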
Collection of Gene Expression Datasets of Common Human Cancer Types
The main objective in this study is to identify putative gene biomarkers to classify cancer type. The workflow of the present study is shown in Figure 1. For this analysis, the TCGA was used to obtain gene expression profiles of 15 common solid tumor cancer types via NGS-based RNA-Seq, including lung, gastroesophageal, colorectal, liver, breast, thyroid, cervical, brain, pancreatic, ovarian, endometrial, bladder, kidney, head and neck, and prostate. In total, the expression data of 7,480 tumor samples were collected. Among those, the gene expression profiles of lung adenocarcinoma and lung squamous cell carcinoma samples were merged into lung cancer; those of colon adenocarcinoma and rectum adenocarcinoma were merged into colorectal cancer; those of kidney renal clear cell carcinoma and kidney renal papillary cell carcinoma were merged into kidney cancer; and those of glioblastoma multiforme and lower grade glioma were merged into brain cancer.
Twenty of the 7,480 samples were sampled from metastatic tumors, whereas 7,460 were sampled from primary tumors. Thus, we split the dataset into a 7,460-sample training dataset and a 20-sample test dataset according to the sampled tumor type. All cancer types in the training dataset had more than 100 samples; the largest sample size was that of breast cancer (1,056 samples), whereas the smallest was that of pancreatic cancer (142 samples). Table 1 summarizes the datasets and provides information on the tumor samples.
One Hundred and Fifty as a Feature Number Works Well With the Neural Network
A classification modeling database of 15 common cancer types was established based on the expression data of 20,501 unique genes obtained from TCGA. However, the large number of samples per cancer type might introduce variation due to intra-tumor heterogeneity; hence, it is critical to identify the gene expression features from high-dimension datasets. Pearson correlation-based feature selection is a multivariable filter method for high-dimension data analysis (Hall, 1998; Saeys et al., 2007); it is fast in operation and simple in computation, and it was used to assess the correlation between cancer type and the corresponding gene expression features.
Here, we used Pearson correlation to select the gene expression signature from NGS-based mRNA expression data for each cancer type. In this study, we used the integers from 1 to 20 as candidates for the gene number per cancer type, giving rise to 20 possible gene-set sizes of 15, 30, ..., 300 (a step of 15). The regression model is an important mathematical model for classification. NNs, as types of deep learning algorithms, are advanced techniques that can analyze complex and high-dimensional data. NNs have been applied in protein classification (Asgari and Mofrad, 2015) and anomaly classification (Suk and Shen, 2013; Plis et al., 2014; Hua et al., 2015). Here, we used NNs as the classification model to assess the performance of different numbers of features. The gene expression levels formed the input layer of the NN; the 15 cancer types formed the output layer.
Usually, 10-fold cross-validation is used to minimize over-fitting and obtain good performance. Hence, to avoid overfitting of the NN algorithm, we ran 10-fold cross-validation 10 times using the 7,460-sample training dataset to obtain relatively stable and reliable results, minimizing the percentage of false positives and false negatives. The 10-fold cross-validation was performed as follows: (a) randomly split the whole training dataset into 10 disjoint parts; (b) use 9 parts as the training set (the 9/10 training set); (c) choose N genes using Pearson correlation from the 9/10 training set, where N is the gene number, which might be 15, 30, ..., 300 with a step of 15; (d) train a model on the selected genes using the 9/10 training set; (e) use the remaining part as the validation set for the trained model; (f) repeat (b)-(e) 10 times, with each part serving as the validation set once, until all samples have been predicted; and (g) merge the predictions from the validation parts and evaluate the metrics.
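Steps (a)-(g) can be sketched as follows. This is a minimal version: `StratifiedKFold` stands in for the random split, `select_features` is any per-fold feature selector (such as the Pearson ranking described earlier), and the classifier hyperparameters follow the model section; the seed is an assumption.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier

def cross_validate(X, y, select_features, n_splits=10, seed=0):
    """Steps (a)-(g): split, select features on the 9/10 training part only,
    train, predict the held-out part, then pool all predictions."""
    pred = np.empty_like(y)
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train, test in folds.split(X, y):               # (a)
        genes = select_features(X[train], y[train])     # (c) per-fold selection
        clf = MLPClassifier(hidden_layer_sizes=(50,), alpha=1e-4,
                            max_iter=500, random_state=seed)
        clf.fit(X[train][:, genes], y[train])           # (b), (d)
        pred[test] = clf.predict(X[test][:, genes])     # (e), (f)
    return (pred == y).mean()                           # (g) pooled accuracy
```

Selecting features inside each fold, rather than once on the full dataset, is what keeps the reported accuracy honest: information from the held-out part never influences which genes the model sees.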
The cross-validation was run with different gene numbers, and the accuracies from each 10-fold cross-validation were plotted. For comparison, we also used logistic regression as a baseline model (Figure 2). We achieved good accuracy when the selected gene number was 150. Although slightly better accuracy could be achieved with 200 or more features, the accuracy-versus-gene-number curve flattens beyond 150, which can be seen as its turning point. Thus, we finally chose 150 as the feature number. The results, calculated by averaging 10 runs of 10-fold cross-validation, showed that the overall accuracy across cancer types was 94.87% using 150 features; the sensitivity was on average 93.36% and the precision on average 94.07%, relative to the actual numbers of cancer samples (Table 2). Among the 15 cancer types, the classifier sensitivity for 13 of them (lung, breast, liver, colorectal, gastroesophageal, ovarian, endometrial, pancreatic, head and neck, thyroid, prostate, kidney, and brain) was more than 90%, with prostate cancer having the highest sensitivity (100%). The remaining two cancer types had sensitivities below 90% (74.75% for bladder cancer and 71.63% for cervical cancer) (Figure 3 and Table 2).
We also attempted to use the log-transformed data in the cross-validation, since log transformation is a common transformation for gene expression profiles. For a reasonable comparison, we selected 10 genes for each cancer in each fold of cross-validation. However, the overall accuracy over 10 runs of 10-fold cross-validation only reached 80.90% (Supplementary Table S1), which is not satisfactory. In contrast, the data prepared by the previously described transformation method yielded 94.87%, showing that further optimization would be needed to obtain a better result with the log-transformed data.
The Identified Genes Were Enriched in Several Organ-Specific Pathways
A 150-gene set was identified using the whole training dataset for subsequent processing (Table 3). To understand how frequently those genes show up in the cross-validation phase, we counted the genes in all 100 gene sets used in the cross-validation and found that 117 of the 150 genes appeared in every gene set, showing the robustness of the Pearson correlation-based feature selection (Supplementary Table S2). To investigate the biological processes involving the signature genes, GO enrichment analysis was performed. The most functionally enriched terms related to our 150-gene panel were biological-process terms (Figure 4 and Table 4). Among those, GO:0048568 Embryonic organ development, GO:0061458 Reproductive system development, GO:0007389 Pattern specification process, GO:0043062 Extracellular structure organization, GO:0002009 Morphogenesis of an epithelium, and GO:0048732 Gland development were related to tissue or organ morphogenesis. Our signature genes were involved in these biological processes and might be useful for classifying distinct cancer types. Hence, the enrichment analysis in the present study might provide a basis to improve our understanding of lung, gastroesophageal, colorectal, liver, breast, thyroid, cervical, brain, pancreatic, ovarian, endometrial, bladder, kidney, head and neck, and prostate cancers.
FIGURE 3 | Prediction of cancer type by confusion matrix analysis. The confusion matrix is from one 10-fold cross-validation and displays the relationship between the reference diagnosis and the predicted cancer type. The first column represents reference diagnoses; the cancer types predicted from the transcript levels of the 150 genes are shown across the top row.
Frontiers in Bioengineering and Biotechnology | www.frontiersin.org
Several hallmark studies indicated that the cellular-origin signatures expressed in primary tissue are sufficiently retained even after primary cancer cells undergo dedifferentiation and colonization in different tissue types (Ma et al., 2005; Tothill et al., 2005). A recent study compared four different algorithms and indicated that modeling performance differed between them when analyzing RNA-Seq data from 4,127 primary tumor tissue samples of nine cancer types (Bhowmick et al., 2019). Among those, ABC yielded the best results, with an average precision of 91.16% and an average sensitivity of 96.5% for the nine cancer types (Bhowmick et al., 2019). Our study demonstrated an average precision of 94.07% and an average sensitivity of 93.36% for 7,460 cancer samples across 15 common cancer types. Although the average sensitivity in our study was slightly lower than that of the ABC algorithm, we managed to dramatically minimize the false-positive rate to 0.34% (Table 2). Moreover, the overall accuracy, averaging 94.87%, is higher than that of other gene expression-based signatures, which ranged from 79 to 91% (Ma et al., 2005; Monzon et al., 2009; Kerr et al., 2012). Furthermore, the performance of the 150-gene panel was higher than that of the immunohistochemistry technique (75%), which represents the current clinical practice standard, as tested with a 10-antibody panel (Park et al., 2007).
In the present study, GO analysis revealed several overrepresented biological processes related to tissue morphogenesis, such as embryonic organ development, reproductive system development, pattern specification process/regionalization, extracellular structure organization, epithelial morphogenesis, and glandular development (Figure 4 and Table 4).
FIGURE 4 | The most represented biological processes associated with our signature genes. Dot plot displaying the number of signature genes involved in each biological process, determined by enrichment analysis. Dot size represents the number of genes, and dot color represents p-value; a lower p-value represents a higher probability of a biological process being enriched with the signature genes.
Additionally, the expression patterns of several signature genes of the 150-gene panel were previously reported to be related to tissues of specific tumor types. For example, GRHL3 (Grainyhead-Like Transcription Factor 3) encodes a cancer suppressor that is a member of the grainyhead-like transcription factor family (Darido et al., 2011). Downregulation of GRHL3 was associated with head and neck squamous cell carcinomas (Frisch et al., 2018); overexpression of the oncogenic miR-21 resulted from decreased GRHL3 (Bhandari et al., 2013). In addition, KLKs (kallikrein-related peptidases) are genes encoding serine proteases whose expression is deregulated in prostate cancer. In our study, KLK2, KLK3, and KLK4 were identified as gene signatures for prostate cancer; KLK3 is the prostate-specific antigen, a gold-standard clinical biomarker widely employed in the diagnosis and monitoring of prostate cancer (Fuhrman-Luck et al., 2014), and KLK2 has shown promise as a prostate cancer biomarker as well. Additionally, the deregulated expression of KLKs has been utilized in designing novel therapeutic targets for prostate cancer (Fuhrman-Luck et al., 2014).
GATA DNA-binding proteins, commonly abbreviated as GATAs, are zinc-finger transcription factors that regulate tissue differentiation and specification (Chou et al., 2010; Zheng and Blobel, 2010). In our study, GATA3 and GATA6 transcripts were identified as gene signatures for breast cancer and gastroesophageal cancer, respectively. Previous studies have indicated that GATA3 is weakly expressed in a wide variety of normal tissues, while its expression is remarkably elevated in breast cancer (Yang and Nonaka, 2010; Liu et al., 2012); moreover, GATA3 has been identified as a novel clinical marker for detecting primary and metastatic breast cancer (Cimino-Mathews et al., 2013; Krings et al., 2014; Shield et al., 2014; Braxton et al., 2015; Sangoi et al., 2016; Yang et al., 2017). GATA6 was initially cloned from rat gastric tissue and designated GATA-GT1 (Tamura et al., 1994); however, recent studies have indicated that GATA6 is frequently overexpressed and/or amplified in human gastroesophageal cancer (Sulahian et al., 2014; Chia et al., 2015; Song et al., 2018). There are some limitations to our study. First, we assessed the model based on NGS RNA-Seq data from formalin-fixed, paraffin-embedded materials rather than fresh materials; we did not evaluate it on fresh materials mainly because formalin-fixed, paraffin-embedded materials are the most common diagnostic materials in routine practice. Second, some solid tumor cancer types, such as sarcoma, were not included due to the unavailability of RNA-Seq data; non-solid tumors were also excluded, and melanoma was excluded due to data scarcity and the distinct distribution of its primary and metastatic tumor sample numbers. Thus, further efforts should be made toward a broader application scope. Third, the training dataset could be further expanded.
Since the final gene set contains some organ development-related genes, we can infer that the gene set does not only classify cancer types but also organs. Staub et al. have already made efforts to expand the training dataset and achieved a better result (Staub et al., 2009). Thus, expression profiles from normal tissues could be added to our training dataset for better performance. Another limitation is that our method is based on the expression values without any manipulation. Recently, an algorithm called TSP was applied to this problem; it generates gene pairs instead of single-gene features, giving rise to a leap in prediction accuracy (Shen et al., 2020). We believe that combining such gene-pair features with our framework could further improve prediction accuracy.
CONCLUSION
In the present study, our 150-gene panel exhibited promising results as a tumor classifier for inferring the tissue-of-origin of tumors. First, we obtained NGS-based RNA-Seq data for 7,460 tumor samples from TCGA. Second, we built a pipeline to identify gene signatures based on their transcript levels for 15 common cancer types. Third, we utilized a neural network to evaluate the performance of the genes; on average, the precision was 94.07% and the sensitivity was 93.36%. In addition, GO enrichment analysis revealed several biological processes, including tissue morphogenesis; notably, most of the gene signatures are involved in key oncogenic pathways, supporting our 150-gene panel. Therefore, the 150-gene biomarker signature in our study might prove clinically useful for identifying cancers of unknown origin and confirming initial clinical diagnoses. In future studies, we will focus on the application of this model in metastatic cancer patients, in addition to patients with cancer of unknown origin, to evaluate their therapy outcomes.
DATA AVAILABILITY STATEMENT
Publicly available datasets were analyzed in this study. This data can be found here: https://dcc.icgc.org/releases/release_26.
AUTHOR CONTRIBUTIONS
GT, JY, and HL conceived the concept of the work. BH, BW, YL, and JL performed the experiments. YZ wrote the manuscript. ZZ, HL, PB, LY, and DS reviewed the manuscript. All authors approved the final version of this manuscript.
FUNDING
This study was partially funded by Hunan Provincial Innovation Platform and Talents Department (Nos. 19A060 and 19C0185), and the Talents Science and Technology Program of Changsha (No. kq1907035).
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fbioe.2020.00737/full#supplementary-material

TABLE S1 | Results of 10 repetitions of 10-fold cross-validation using 10 genes for each cancer.
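The repeated cross-validation reported in Table S1 follows a standard pattern that can be sketched as below; the index bookkeeping is an illustrative sketch, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def repeated_kfold_indices(n_samples, k=10, repeats=10):
    """Yield (train, test) index arrays for `repeats` runs of k-fold
    cross-validation, reshuffling the samples before each repetition."""
    for _ in range(repeats):
        order = rng.permutation(n_samples)
        folds = np.array_split(order, k)
        for i in range(k):
            test = folds[i]
            train = np.concatenate([f for j, f in enumerate(folds) if j != i])
            yield train, test

splits = list(repeated_kfold_indices(50, k=10, repeats=10))
print(len(splits))  # 100 train/test splits in total
```

Reporting the spread of scores over all 100 splits, rather than a single split, gives a more stable estimate of classifier performance.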
Hole pockets in the t-J model
We present an exact diagonalization study of the electron momentum distribution n(k) in small clusters of the t-J model for different hole concentrations and t/J. Structures in n(k) which were previously interpreted as a `large' Fermi surface are identified as originating from the well-known many-body backflow. To obtain reliable information about the true Fermi surface, we focus on the regime t <= J, where the backflow effect is weak, and suppress the formation of a bound state by introducing a density repulsion between holes. We find clear signatures of a Fermi surface which, contrary to widespread belief but in agreement with recent photoemission experiments and Monte Carlo studies for the Hubbard model, takes the form of small hole pockets. Spin ordering is shown to be irrelevant for this form of the Fermi surface. Comparison of the scaling of n(k) and that of the quasiparticle weight with t/J indicates that these pockets persist also for t > J.
I. INTRODUCTION
The unusual properties of high-temperature superconductors have led to great interest in the physics of correlated electrons near a Mott-Hubbard metal-to-insulator transition. Thereby a particularly intriguing problem is the volume of the Fermi surface (FS) for the slightly less than half-filled band: should one model the doped insulator by a dilute gas of quasiparticles corresponding to the doped holes (this would imply that the volume of the FS is proportional to the hole concentration), or do all electrons take part in the formation of the Fermi surface, so that its volume is identical to that of free electrons? It is the purpose of this paper to present evidence that for finite clusters of the t−J model the first picture is the correct one: the FS as deduced from the momentum distribution takes the form of small hole pockets. The t−J model reads: H = −t Σ_{<i,j>,σ} (ĉ†_{i,σ} ĉ_{j,σ} + H.c.) + J Σ_{<i,j>} (S_i · S_j − n_i n_j/4). (1) The S_i are the electronic spin operators, ĉ†_{i,σ} = c†_{i,σ}(1 − n_{i,−σ}), and the sum over <i,j> stands for a summation over all pairs of nearest neighbors. Various authors [1][2][3] have computed the momentum distribution n_σ(k) = ⟨ĉ†_{k,σ} ĉ_{k,σ}⟩ for the two-hole ground state of small clusters of this model (corresponding to a nominal hole concentration of ∼10%) and found it roughly consistent with a free-electron picture: n(k) is maximum at k=(0, 0), minimum at k=(π, π). It has become customary [2] to cite this as evidence that already at such fairly low hole concentrations the t−J model has a free-electron-like ('large') FS. It is straightforward to see, however, that this shape of n(k) is simply the consequence of elementary sum rules and has no significance for the actual topology of the FS [4]. We have therefore performed a systematic study of n(k) for various doping levels and t/J.
II. SINGLE HOLE CASE
As compared to the uniform value of 1/2 for the half-filled case, the introduction of only a single hole changes n(k) in a rather complex way. Fig. 1 shows n(k) for the single-hole ground states with momentum k_0 = (π/2, π/2) in the 16-site cluster and momentum k_0 = (2π/3, 0) in the 18-site cluster. The k dependence of n(k) is roughly consistent with free electrons, i.e. n(k) is large near (0, 0) and small near (π, π). This structure, which simply ensures negative kinetic energy [4], is less pronounced the smaller t/J. The second characteristic feature are 'dips' at k_0 for the minority spin (i.e. the 'hole spin') and at k_0 + (π, π) for both spin directions. These dips are more pronounced for smaller t/J. The question arises which of these features should be associated with the FS, i.e. do we have a 'large' FS already for a single hole, or is there a 'hole pocket' at k_0? We note that the magnitude of the discontinuity in n(k) has to be equal to the weight of the quasiparticle peak in the single-particle spectral function, Z_h. Since Z_h has a pronounced [6] (and therefore characteristic) dependence on t/J, a potential FS discontinuity must have the same characteristic dependence on t/J. Then, the 'depth' of the dip at k_0 can be estimated by comparing with a symmetry-equivalent k-point, i.e. for k_0 = (π/2, π/2) we consider ∆_dip = n_↓(−π/2, π/2) − n_↓(π/2, π/2), and for k_0 = (2π/3, 0) we study ∆_dip = n_↓(0, 2π/3) − n_↓(2π/3, 0). In Fig. 2 these differences are compared to Z_h (obtained from the single-particle spectral function for momentum transfer k_0 at half-filling) for various values of t/J. Obviously, ∆_dip = Z_h over the entire range of t/J, so that the dip clearly originates from the Fermi level crossing of the quasiparticle band, i.e. we have a 'hole pocket' at k_0.
On the other hand, differences ∆n(k) across the 'large' FS always show the opposite behaviour under a variation of t/J as Z_h, indicating that these drops in n(k) are unrelated to any FS crossing. This suggests associating this structure in n(k) with the well-known 'backflow' for interacting Fermi systems [5]. Such a strong backflow effect is by no means surprising if we consider the change in the single-particle Green's function upon removing one electron with momentum k_0 near the FS: naively one might expect that the only effect would be the shift of the quasiparticle peak at k_0 from the photoemission to the inverse photoemission spectrum. If this were true, however, the integrated photoemission weight (which equals the total number of electrons) would have decreased only by Z_h ≪ 1. Hence the bulk of the spectral weight shift from photoemission to inverse photoemission must occur for momenta k ≠ k_0, i.e. the strong backflow. What is remarkable is the wide spread of the backflow in k-space. This implies that for each individual k the change of n(k) due to removal of an electron at k_F is ∼ 1/N, with N the system size. For a small finite hole concentration δ it seems reasonable that the backflow contributions from the individual holes are additive, so that the total backflow contribution would scale with δ. What remains to be explained are the 'satellite dips' at k_0 + (π, π). The most natural explanation are antiferromagnetic spin correlations. To see this, let us consider the case t ≪ J, where the state (1/√2) ĉ_{k_0,↓}|Φ_0⟩ (with |Φ_0⟩ the half-filled ground state) to good approximation is an eigenstate. For this state n(k) can be expressed in terms of S(q), the static spin structure factor of |Φ_0⟩. Since the latter is peaked sharply at q = (π, π), we have a natural explanation for the 'satellite dips', and it seems reasonable to adopt this explanation also for larger values of t/J.
Summarizing the results obtained so far, we may say that the introduction of a single hole changes n(k) in a rather complex way: there is a dip at the momentum of the hole, which originates from the Fermi level crossing of the quasiparticle band and thus represents the 'Fermi surface'. The dip is superimposed over a smooth free-electron-like variation, the familiar many-body backflow. As a consequence of the small quasiparticle weight, this backflow is very pronounced in the t − J model. For later reference we note that the backflow contribution to n(k) to good approximation is a function of |k_x| + |k_y| only (this is also confirmed by investigating n(k) for other k_0). Finally, the strong antiferromagnetic spin correlations produce dips also at k_0 + Q. This shape of n(k) can be easily understood by recalling [7] that the elementary excitations near the Fermi energy are spin bags, where the hole is dressed by antiferromagnetic spin fluctuations; a simple calculation in terms of the string picture [8] reproduces the numerical results quantitatively.
III. TWO HOLE CASE
We proceed to the ground state with two holes. Various authors [1][2][3] have found that the free-electron-like variation of n(k) observed already for a single hole becomes more pronounced for this doping level, and based on the criterion n(k) > 1/2 [2] the 'Luttinger Fermi surfaces' in Fig. 3 would be assigned. However, by the same arguments as for a single hole, these Luttinger Fermi surfaces are ruled out: Fig. 4 compares the t/J dependence of differences ∆n(k) across the respective Luttinger FS to that of the quasiparticle weight in the spectral function for the two-hole ground state. Z_h decreases sharply, whereas the ∆n(k) increase monotonically with t/J. The drop in n(k) upon crossing the large FS thus is obviously unrelated to any true Fermi level crossing. Instead, comparison with Fig. 2 shows that the t/J dependence of the ∆n(k) is very similar to the backflow contribution for a single hole. More precisely, if we assume that the backflow for the two holes is simply additive, we expect for the 'large FS' differences in the two-hole ground state: ∆n^(2h)(k) = ∆n^(1h)_↑(k) + ∆n^(1h)_↓(k), (2) where the ∆n^(1h)_σ(k) are the corresponding differences in the single-hole ground state. Fig. 5 compares the 'large FS' ∆n(k) with the estimates obtained from (2) by using the ∆n^(1h)_σ(k) shown in Fig. 2. Both the magnitude and the t/J scaling are predicted very well by (2), which clearly suggests associating the 'large FS' with the backflow contribution. The question then is: what is the true FS and how can we make it visible in n(k)? Fig. 1 shows that for a single hole the true Fermi surface (i.e. the hole pocket) is most clearly visible for large J/t. This is simply related to the fact that the quasiparticle weight is large in this parameter region (see Fig. 2). Since the t/J scaling of Z_h is essentially the same at half-filling and in the two-hole ground state (see Figs. 2 and 4), we thus may expect to see the clearest FS signatures for large J/t also in the two-hole ground state.
For more than one hole, however, we face an additional problem: the strong interaction between the holes, which manifests itself e.g. in a sizeable negative binding energy [9,10]. An interacting state of two 'quasiparticles' reads |Ψ⟩ = Σ_k ∆(k) â_{k,↑} â_{−k,↓}|Φ_0⟩, (3) with â_{k,σ} the annihilation operator of a quasiparticle with momentum k and spin σ. Thus, whereas for a single hole we could fix the location of the pocket simply by choosing the total momentum, in the two-hole ground state the holes will be distributed over different momenta with probability ∼ |∆(k)|², and one may not hope to observe any FS signature unless ∆(k) is well localized in k-space, i.e. ∆(k) ∼ δ_{k,k_0} with the quasiparticle ground state k_0. This in turn necessitates that the interaction energy be smaller than differences in single-particle energy between neighboring k-points, i.e. weak interaction and sufficiently strong dispersion. With this in mind we add a density interaction term H_V = V Σ_{<i,j>} n_i n_j to the Hamiltonian; adjusting the parameter V, one may hope to reach a situation where H_V to a certain degree 'cancels' the intrinsic attractive interaction of the holes. In addition, we include a small next-nearest-neighbor hopping term in the Hamiltonian, so as to lift the unfavourable (near) degeneracy of the quasiparticle dispersion along the surface of the magnetic Brillouin zone; we fix the value of the respective hopping integral to be t′ = −0.1t. For the 16-site cluster this term has the additional advantage that it breaks the spurious additional symmetry due to the mapping to a 2⁴ hypercube and selects a unique two-hole ground state with momentum (0, 0). Let us stress the following: due to the addition of these terms we are, strictly speaking, no longer considering the original t−J model. It seems quite plausible, however, that if a FS exists at all, its volume should be changed neither by changing the kinetic energy (t′-term) nor by introducing an additional interaction (V-term).
To demonstrate the adjustment of V, Fig. 6 shows the variation with V of the hole density correlation function in the two-hole ground states of the 16- and 20-site clusters (due to a subtle but understandable pathology in its geometry, analogous results cannot be obtained for the 18-site cluster, see Appendix). Its essentially identical behaviour in both clusters clearly signals a change of the net interaction between the holes from attraction to repulsion. For intermediate values of V, on the other hand, g(R) is quite homogeneous, indicating that the single-particle delocalization energy of the holes dominates over their interaction. Given this plus the large Z_h, we should therefore be in an optimal position for observing the FS. Figs. 7 and 8 show the single-particle spectral function A(k, ω) for J/t=2 and the momentum distribution for J/t=1 and J/t=2 (and the respective 'optimal' V). In the spectral function, the chemical potential E_F is located near the top but within a group of pronounced peaks, well separated from another such group in the inverse photoemission spectrum. There are pronounced peaks both immediately above and below E_F which comprise the bulk of spectral weight for the respective momenta. Corresponding to the well-defined 'quasiparticle peaks' in the spectral function, n(k) exhibits a sharp variation: hole pockets at (π, 0) and (0, π). They are superimposed over the familiar backflow contribution, which again has the generic free-electron-like form so as to ensure negative kinetic energy. Fig. 8 also gives the values of the quasiparticle weight for the 'Fermi momenta'. For (2π/5, 3π/5) in the 20-site cluster the 'quasiparticle peak' in the photoemission spectrum (PES) actually consists of two peaks with approximately equal weight; we consider these as a single 'broadened' peak, so that the two weights should be added.
This is supported by the good agreement with Z_h at (π/2, π/2) in the 16-site cluster (where no splitting occurs) and the reasonable agreement with the Z_h deduced from the inverse photoemission spectrum (IPES) at (π, 0) (where no splitting occurs either). The 'depth' of the pockets approximately equals Z_h, and both quantities consistently decrease with decreasing J/t. A somewhat astonishing feature of these results is the location of the pockets at (π, 0) rather than at (π/2, π/2). This can be traced back to the point-group symmetry of the two-hole ground state: when the symmetry of the half-filled ground state is A_1 (or s), that of the two-hole ground state is B_1 (or d_{x²−y²}) and vice versa (the former situation is realized in the 16- and 18-site clusters, the latter in the 20-site cluster). Addition of two holes thus is always equivalent to adding an object with d_{x²−y²} symmetry, which implies that the pair wave function ∆(k) in (3) should have this symmetry as well. This in turn implies ∆(k) = 0 for k along (1, 1), so that occupation of (π, 0) is favoured. Fig. 9 shows the 'FS discontinuities' in the 4 × 4 cluster under a variation of the repulsion strength V. They show maxima when the density correlation function is most homogeneous, precisely as one would expect for Fermions with a variable interaction strength. In the spectral function, the reduction of the discontinuities as V → 0 manifests itself in the reduced intensity of the big IPES peak at (π, 0) and the appearance of small low-energy IPES peaks at the momenta next to (π, 0): the pockets are 'washed out'. We note that the ∆n(k) across the 'large' FS remains unaffected, another indication that it is unrelated to low-energy physics.
A possible explanation for the hole-pocket FS would be spin-density-wave-type broken symmetry: although the ground states under consideration are spin singlets, this might be realized if the fluctuations of the staggered magnetization M_S were slow as compared to the hole motion, so that the holes move under the influence of an 'adiabatically varying' staggered field. A possible criterion for this situation would be τ_tr · ω_AF ≪ 2π, where τ_tr is the time it takes for a hole to traverse the cluster and ω_AF is the frequency of fluctuations of M_S. We estimate (for J/t = 2) the group velocity of the holes from the dispersion of the 'quasiparticle peak' in the PES spectrum and, using the energies indicated by arrows in Fig. 7 for the 20-site cluster and the peaks at (π/2, 0) and (π/2, π/2) for the 16-site cluster, we find τ_tr ≃ 2π/0.5t (2π/0.2t) for the 20 (16)-site cluster. Typical frequencies for fluctuations of M_S can be obtained from its correlation function, which, up to a constant, equals the dynamical spin susceptibility for momentum transfer (π, π); a rigorous lower bound on ω_AF thus can be obtained by subtracting the ground-state energy from the energy of the lowest state with total momentum (π, π) and the same point-group symmetry as the ground state.
This gives ω_AF > 0.9t (1.2t) for the 20 (16)-site cluster, i.e. τ_tr · ω_AF > 2π. 'Almost static' Néel order thus can be ruled out as the origin of the small FS, even for this fairly large value of J/t. As an additional check we have introduced exchange terms J′ between second and third nearest neighbors to reduce the spin correlations and again optimized the repulsion to enable 'free' hole motion. Ground-state properties of this (highly artificial) model are summarized in Fig. 10: the momentum distribution, hole density correlation function and spin correlation function S(|R|) = exp(iQ · R) ⟨S_i · S_{i+R}⟩ (with Q = (π, π)). The density correlation function is homogeneous (no charge ordering), the spin correlations decay rapidly (no long-range antiferromagnetic or spiral ordering), but still there are unambiguous hole pockets in n(k). The only possible conclusion is that it is only the large Z_h which makes the pockets visible in the large-J region, and not the onset of any kind of ordering. While the hole pockets can be made clearly visible for large J, the situation is more involved for t > J. In this parameter region the small overlap between 'quasiparticle' and 'bare hole' (as manifested by the small Z_h) makes the V-term (which couples only to the bare hole) increasingly inefficient in enforcing a noninteracting state: rather than separating from each other, the two holes remain bound on second-nearest neighbors up to fairly large values of V, and the 'crossover' from attraction to repulsion, which gave an unambiguous prescription for choosing V, cannot be obtained any more. We thus abandon both the V and t′ terms and adopt a more indirect way of reasoning.
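The adiabaticity estimate above can be checked arithmetically from the numbers quoted in the text (setting t = 1 for convenience):

```python
import math

t = 1.0
# Transit times estimated from the quasiparticle dispersion (20- and 16-site clusters)
tau_tr = {20: 2 * math.pi / (0.5 * t), 16: 2 * math.pi / (0.2 * t)}
# Lower bounds on the staggered-magnetization fluctuation frequency
omega_af = {20: 0.9 * t, 16: 1.2 * t}

for n in (20, 16):
    ratio = tau_tr[n] * omega_af[n] / (2 * math.pi)
    print(n, ratio)  # must exceed 1 to rule out quasi-static Neel order
```

For the 20-site cluster the product is 1.8 · 2π and for the 16-site cluster 6.0 · 2π, both safely above the 2π threshold, consistent with the conclusion in the text.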
In the single-hole case, we found that the pocket was superimposed over the smooth backflow contribution. We assume that the situation for two holes is similar, only with the additional complication that the pockets are now 'washed out' due to the interaction between holes (see Fig. 9). Therefore we expect that n(k) can be written as n(k) = n_back(k) − Z_h |∆(k)|², (4) with the pair wave function ∆(k) introduced in (3). As discussed above, the point-group symmetry of the two-hole ground state necessitates that ∆(k) has d_{x²−y²} symmetry, so that the pockets are located at (π, 0). Then, since the symmetry of the ground state is unchanged by adding either the t′ or V term, we conclude that this should also hold true in the absence of these terms. Since n_back(k) to good approximation is a function of |k_x| + |k_y| only (see Fig. 1), this contribution can be eliminated by forming the difference of two momenta with (almost) equal |k_x| + |k_y|. Next, if we choose one of these momenta along (or near) the (1, 1) direction, where the d_{x²−y²} symmetry requires that ∆(k) vanishes (or is small), and the other at (or near) (π, 0), we should obtain ∆n(k) = Z_h · |∆(π, 0)|², so that, in contrast to the 'large FS' differences indicated in Fig. 3, this difference should scale with Z_h. To check this prediction, the t/J dependence of various such differences is shown in Fig. 11, and obviously they are to excellent approximation proportional to Z_h over a wide range of t/J. The scaling of n(k) with t/J thus is completely consistent with the assumptions (a) that there are washed-out hole pockets at (π, 0), and (b) that these are superimposed over the smooth backflow contribution, which is the sum of the backflows for the two individual holes (see Fig. 5).
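The backflow-elimination trick can be illustrated by enumerating the allowed momenta of the 4 × 4 cluster and grouping them by |k_x| + |k_y|; the grouping code below is our own illustration, not part of the paper's analysis:

```python
import math
from collections import defaultdict

# Allowed momenta of the 4x4 cluster: k = (pi/2)*(nx, ny), nx, ny in {-1, 0, 1, 2}
momenta = [(nx * math.pi / 2, ny * math.pi / 2)
           for nx in range(-1, 3) for ny in range(-1, 3)]

# Group momenta by |kx| + |ky|; within one group the smooth backflow
# contribution cancels when differences n(k) - n(k') are formed.
groups = defaultdict(list)
for kx, ky in momenta:
    groups[round(abs(kx) + abs(ky), 9)].append((kx, ky))

for s, ks in sorted(groups.items()):
    print(f"|kx|+|ky| = {s:.3f}: {len(ks)} momenta")
```

The group with |k_x| + |k_y| = π contains both (π, 0) and (±π/2, ±π/2), which is exactly the pairing of a near-(1,1) momentum with a near-(π, 0) momentum used in the text.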
IV. COMPARISON WITH OTHER NUMERICAL STUDIES AND EXPERIMENT
While the hole pockets are very clearly visible for large J, the evidence for their existence in the physical regime is of a more indirect character. A comparison with other numerical calculations and with experiments on high-temperature superconductors is therefore necessary. As far as numerical studies on small clusters are concerned, hole pockets and/or rigid-band behaviour upon doping are consistently suggested by most of the available numerical calculations. For the t − J model, Poilblanc and Dagotto [11] studied A(k, ω) for single-hole states and concluded that the two-hole ground state in the 4×4 cluster shows hole pockets at (π, 0), in agreement with the present result. On the other hand, Stephan and Horsch [2] studied n(k) and A(k, ω) for the two-hole ground state and concluded that there is neither rigid-band behaviour nor hole pockets. However, these authors based their conclusions solely on the qualitative inspection of a rather limited data set, which is largely irrelevant [4] for deciding the FS topology. In addition to the inconsistent scaling behaviour found above (Fig. 4), numerical calculation of A(k, ω) for the 20-site cluster [14] rules out the Luttinger FS postulated by Stephan and Horsch. Castillo and Balseiro [12] computed the Hall constant and found its sign near half-filling to be consistent with a hole-like FS, i.e. with hole pockets. Gooding et al. [13] studied the doping dependence of the spin correlation function in clusters with special geometry and also found indications of rigid-band behaviour. Finally, a systematic study of the doping dependence of the single-particle spectral function [14] shows rigid-band behaviour, i.e. holes are filled into the quasiparticle band present at half-filling (which naturally implies hole pockets). The situation is quite similar for the Hubbard model.
While the generic [4] free-electron-like shape of n(k) found in earlier Monte Carlo studies [15] was initially considered as evidence against hole pockets, more careful and systematic analysis [16] showed that hole pockets are in fact remarkably consistent with the numerical data, their non-observation in the earlier studies being simply the consequence of thermal smearing. It seems fair to say that the available numerical results for small clusters of both the Hubbard and t−J models, when interpreted with care, are all consistent with rigid-band behaviour and/or hole pockets. Let us next discuss experimental results on high-temperature superconductors, assuming that the hole pockets found in the cluster studies persist in the real systems. The volume of the FS associated with the Cu−O plane-derived bands in these materials presents a well-known puzzle: early photoemission experiments [18] show bands which disperse towards the Fermi energy and vanish at points in k-space which are roughly located on the free-electron FS corresponding to electron density 1 − δ, where δ is the hole concentration; on the other hand, transport properties can be modelled well [19,20] by assuming a FS with a volume ∼ δ. In a Fermi liquid, the apparently contradicting quantities actually fall into distinct classes: photoemission spectra depend on Z_h, transport properties do not. Hence, if one wants to resolve the discrepancy entirely within a Fermi-liquid-like picture, the simplest way would be to assume a 'small' FS and explain the photoemission results by a systematic variation of Z_h along the band which forms the FS, similar to the 'shadow band' picture [21]. A trivial argument for such a strong k-dependence of the quasiparticle weight is that a distribution of PES weight in the Brillouin zone (and hence an n(k)) that resembles the noninteracting FS always optimizes the expectation value of the kinetic energy.
Therefore it is favourable if those parts of the band structure which lie inside the free-electron FS have large spectral weight, and the parts outside small weight. Indeed, in a recent photoemission study by Aebi et al. [22], structures which are very consistent with such a shadow-band scenario have been observed. Moreover, another key feature of the dispersion relation for a single hole, namely the extended flat region near (π, 0) [23,24], has also been found as a universal feature of high-temperature superconductors [25,26]. Adopting a rigid-band/hole-pocket scenario thus would explain many experiments in a very simple and natural way, which moreover is remarkably consistent with the existing numerical data as a whole.
V. CONCLUSION
We have discussed the problems in directly determining the FS from n(k) in the t−J model: small quasiparticle weight, a pronounced 'backflow' effect and strong interaction between the doped holes. We have then examined the single particle spectral function and n(k) in a situation where these problems were largely avoided and found signatures of a FS which takes the form of small hole-pockets. Analysis of the scaling of n(k) with t/J suggested that these pockets also persist in the regime t > J. Available numerical data all support this picture, and we have outlined a possible scenario to reconcile experiments on high-temperature superconductors.
The assumption of a small Fermi surface implies that the phase of some given basis state is determined by a Slater determinant of rank N − N_e (N_e being the number of electrons) rather than N_e (as it would be e.g. in a Gutzwiller-projected Fermi sea). Moreover, it would not be the positions of the electrons which enter this Slater determinant, but those of the hole-like quasiparticles, so that we have a very different nature of long-range phase coherence. The Fermi surface in an interacting system, being a 'remnant' of the noninteracting one, is obviously a consequence of the requirements to have minimum kinetic energy and to satisfy the Pauli principle. However, close to half-filling most of the electrons are immobile, so that the gain in kinetic energy from creating the long-range phase coherence between electrons (which is responsible for the singularity of n(k)) may not be very large. The vacancies, on the other hand, are almost unconditionally mobile, so that phase coherence between holes may be more favourable.
It is a pleasure for us to acknowledge numerous instructive discussions with Professor S. Maekawa. Financial support of R. E. by the Japan Society for the Promotion of Science is most gratefully acknowledged. Computations were partly carried out at the Computer Center of the Institute for Molecular Science, Okazaki National Research Institutes.
VI. APPENDIX
The 18-site cluster has a pathological geometry, which does not allow for an unbound state of two particles with d_{x²−y²} symmetry. The ultimate reason is that the primitive lattice translations in this cluster are (3, 3) and (3, −3). Writing an interacting two-hole state in real space as a superposition of hole pairs at relative distance R with amplitude φ(R), we note the following: if we choose R = (2, 1), rotate counterclockwise by π/2, reflect by the x-axis and add (3, 3), we recover the original vector. A state with d_{x²−y²} symmetry picks up a factor of (−1) under these operations, hence φ(2, 1) = 0. Analogous reasoning shows that φ(3, 0) = 0, so that all large distances between particles are 'symmetry forbidden' (the possible distances in this cluster are (1, 0), (1, 1), (2, 0), (2, 1) and (3, 0)). The 'unbinding transition' for this cluster thus can occur only via a level crossing, and this is indeed the case: when V is switched on in the d_{x²−y²} ground state, the holes stay close to each other even for fairly large values of V. Instead, at V ∼ 3t (for J/t = 2) a level crossing occurs, and a new ground state with momentum (2π/3, 0) is stabilized.
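The symmetry argument for φ(2, 1) = 0 can be verified mechanically; the helper functions below are a small sketch of the combined operation (rotation, reflection, lattice translation) described in the text:

```python
def rotate(v):
    """Counterclockwise rotation by pi/2: (x, y) -> (-y, x)."""
    x, y = v
    return (-y, x)

def reflect_x(v):
    """Reflection about the x-axis: (x, y) -> (x, -y)."""
    x, y = v
    return (x, -y)

# Relative hole-hole vector in the 18-site cluster
r = (2, 1)
image = reflect_x(rotate(r))
image = (image[0] + 3, image[1] + 3)   # add the lattice translation (3, 3)
print(image)  # (2, 1): the combined operation maps r onto itself
```

Since a d_{x²−y²} pair wave function acquires a factor (−1) under this combined operation while the argument is unchanged, φ(2, 1) must vanish; the same check with r = (3, 0) reproduces φ(3, 0) = 0.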
FIG. 1. Momentum distribution for the single-hole ground state with S_z = 1/2 (i.e. with a '↓-hole') of the 4 × 4 cluster with t/J = 4 (top), of the 4 × 4 cluster with t/J = 1 (middle), and of the 18-site cluster with t/J = 2 (bottom). The upper values refer to the majority spin, the lower values to the minority spin; the ground-state momentum k_0 is marked by a black box and k_0 + (π, π) by a dotted box.
Hemodynamic changes in progressive cerebral infarction: An observational study based on blood pressure monitoring
Abstract Progressive cerebral infarction (PCI) is a common complication in patients with ischemic stroke that leads to poor prognosis. Blood pressure (BP) can indicate post‐stroke hemodynamic changes which play a key role in the development of PCI. The authors aim to investigate the association between BP‐derived hemodynamic parameters and PCI. Clinical data and BP recordings were collected from 80 patients with cerebral infarction, including 40 patients with PCI and 40 patients with non‐progressive cerebral infarction (NPCI). Hemodynamic parameters were calculated from the BP recordings of the first 7 days after admission, including systolic and diastolic BP, mean arterial pressure, and pulse pressure (PP), with the mean values of each group calculated and compared between daytime and nighttime, and between different days. Hemodynamic parameters and circadian BP rhythm patterns were compared between PCI and NPCI groups using t‐test or non‐parametric equivalent for continuous variables, Chi‐squared test or Fisher's exact test for categorical variables, Cox proportional hazards regression analysis and binary logistic regression analysis for potential risk factors. In PCI and NPCI groups, significant decrease of daytime systolic BP appeared on the second and sixth days, respectively. Systolic BP and fibrinogen at admission, daytime systolic BP of the first day, nighttime systolic BP of the third day, PP, and the ratio of abnormal BP circadian rhythms were all higher in the PCI group. PCI and NPCI groups were significantly different in BP circadian rhythm pattern. PCI is associated with higher systolic BP, PP and more abnormal circadian rhythms of BP.
At present, the diagnosis of PCI mainly depends on standardized neurological scales, which can only be applied after the deterioration of neurological function and the aggravation of clinical symptoms. Considering the increasing incidence of ischemic stroke, there is a high clinical need for early detection of PCI. [3,6] Many physiological, clinical, and neuroimaging features have been studied as potential risk factors for PCI, [7] for example, diabetes, homocysteine, National Institutes of Health Stroke Scale (NIHSS) score on admission, fibrinogen, and intracranial artery stenosis. [4] Excess thrombin generation and fibrin turnover with a high D-dimer level have been observed in patients with PCI compared with stable and improving patients. [3] However, existing early indicators of PCI are inconclusive. In addition, biomarkers derived from lab tests are not always available in clinical practice. Currently, there is a lack of a well-recognized and clinically available test for reliable early detection of PCI.
Blood pressure (BP), as one of the commonest physiological parameters in clinical practice, can reflect PCI-related hemodynamic changes. [8] PCI is associated with hemodynamic changes in the collateral circulation, decreased brain tissue perfusion, and edema, which may influence the regulation of BP and its variability. [9] On the other hand, hypertension is a main risk factor for ischemic stroke. [10] A post-stroke BP increase is associated with a higher risk of neurological deterioration in patients with ischemic stroke. [11] High values of systolic blood pressure (SBP), SBP variability, mean arterial pressure (MAP), and pulse pressure (PP) are associated with poor functional outcome, early neurological deterioration, recurrence of stroke, and high mortality. [10,12] Zhao and coworkers performed a dynamic analysis of BP in PCI patients, where BP was measured every 8 ± 1 h from 16 h to 5 days after admission, with SBP, diastolic blood pressure (DBP), and MAP recorded. [5] They found that high SBP, an abnormal circadian rhythm of BP (extreme-dipper), and a medical history of hypertension for over 5 years were associated with PCI. The existing studies indicate that hemodynamic parameters derived from BP may enable the early detection of PCI. However, there is a lack of comprehensive investigation of the association between BP-derived hemodynamic parameters and the occurrence of PCI.
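The circadian BP patterns referred to above (e.g. "extreme-dipper") are conventionally classified by the relative nocturnal fall in SBP. A minimal sketch follows; the 10%/20% cut-offs are the commonly used conventions in the dipping literature, not values stated in this study:

```python
def dip_ratio(day_sbp, night_sbp):
    """Nocturnal SBP dip as a fraction of the daytime mean."""
    return (day_sbp - night_sbp) / day_sbp

def circadian_pattern(day_sbp, night_sbp):
    """Conventional dipping categories; thresholds are the widely
    used 10% and 20% conventions (an assumption, not study-specific)."""
    d = dip_ratio(day_sbp, night_sbp)
    if d >= 0.20:
        return "extreme-dipper"
    if d >= 0.10:
        return "dipper"
    if d >= 0.0:
        return "non-dipper"
    return "reverse-dipper"

print(circadian_pattern(150, 148))  # non-dipper (dip of ~1.3%)
print(circadian_pattern(150, 110))  # extreme-dipper (dip of ~26.7%)
```

Under this scheme, any category other than "dipper" would count toward the abnormal circadian rhythms compared between the PCI and NPCI groups.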
To fill this research gap, we comprehensively analyzed the relationship between PCI and changes in BP-derived hemodynamic parameters, based on post-admission BP measurements over 7 days in ischemic stroke patients. The underlying scientific hypothesis is that the BP-derived hemodynamic parameters differ significantly between ischemic stroke patients with PCI and those with non-progressive cerebral infarction (NPCI). (3) PCI was defined as neurological deficits that did not show any improvement and continued to progress within 1 week under regular treatment, with the NIHSS score increased by at least 2 points. 13,14 The exclusion criteria were as follows: (
Medical history, NIHSS, and blood test
For all included patients, clinical information was collected: history of hypertension, diabetes mellitus, and atrial fibrillation; NIHSS at admission and 7 days after admission; and time to the clinical event of neurological deterioration after admission. The blood test results included total cholesterol, triglycerides, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, creatinine, urea nitrogen, fibrinogen, vitamin B12, and folic acid.
Data collection of blood pressure
The monitoring equipment was applied after the patient was admitted to the Acute Stroke Unit. Noninvasive BP monitoring was performed with an automated sphygmomanometer on the non-hemiplegic arm, with the patient lying supine for at least 3 min (Figure 1A). The first BP recording after admission provided the baseline SBP and DBP. BP was measured every 2 h, along with other physiological parameters, during the acute phase of cerebral infarction (Figure 1B). All patients completed the entire BP measurement process in the Acute Stroke Unit.
In this study, the BP metrics from the original BP recordings were retrospectively collected for data analysis.The BP recordings of the 80 included patients (PCI and NPCI, 40 for each) in the first seven consecutive days after admission were retrieved.In total, 9152 BP recordings were included for analysis.Each BP recording included four BP metrics: SBP, DBP, MAP, and PP (Figure 1C).
The hemodynamic parameters were calculated from the BP metrics for daytime (diurnal: 6:00 to 22:00) and nighttime (nocturnal: 22:00 to 6:00 the next day), respectively (Figure 1B). The hemodynamic parameters include: the diurnal mean SBP (DMSBP, i.e., the mean of SBP values measured after 6:00 and before 22:00), nocturnal mean SBP (NMSBP), 24-h mean SBP (24-hMSBP), diurnal mean DBP (DMDBP), nocturnal mean DBP (NMDBP), 24-h mean DBP (24-hMDBP), 24-h mean PP, 24-h mean MAP (24-h averaging is the mainstream processing method for PP and MAP), and the nocturnal BP decrease rate (NBPDR). The related formulas are: PP = SBP − DBP, MAP = DBP + 1/3 × (SBP − DBP), and NBPDR = (DSBP − NSBP)/DSBP, where DSBP and NSBP denote the diurnal and nocturnal mean SBP. Regarding the circadian rhythm of BP, the dipper, extreme-dipper, non-dipper, and reverse-dipper patterns were defined as 10% ≤ NBPDR < 20%, NBPDR ≥ 20%, 0 ≤ NBPDR < 10%, and NBPDR < 0, respectively. Among these rhythms, extreme-dipper, non-dipper, and reverse-dipper were deemed abnormal BP rhythms. 15

Multicollinearity was evaluated using the tolerance test, where a variance inflation factor below 5 was considered to indicate no collinearity with other variables. 16 In addition, Pearson correlation analysis was performed to double-check whether any linear relationship existed between two variables, defined as an absolute correlation coefficient above 0.3. 17 When collinearity was observed, principal components analysis was performed, with principal components retained in order until the proportion of variance explained exceeded 0.9. The selected principal components were used to substitute for the variables with multicollinearity.
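As a concrete illustration, the derived metrics and the circadian classification described here can be computed as follows (a minimal sketch; the function names are ours, not from the study):

```python
def pulse_pressure(sbp, dbp):
    """PP = SBP - DBP."""
    return sbp - dbp

def mean_arterial_pressure(sbp, dbp):
    """MAP = DBP + (SBP - DBP) / 3."""
    return dbp + (sbp - dbp) / 3

def nbpdr(diurnal_mean_sbp, nocturnal_mean_sbp):
    """Nocturnal BP decrease rate: (DSBP - NSBP) / DSBP."""
    return (diurnal_mean_sbp - nocturnal_mean_sbp) / diurnal_mean_sbp

def circadian_pattern(rate):
    """Classify the circadian rhythm of BP from the NBPDR."""
    if rate >= 0.20:
        return "extreme-dipper"   # abnormal
    if rate >= 0.10:
        return "dipper"           # normal
    if rate >= 0.0:
        return "non-dipper"       # abnormal
    return "reverse-dipper"       # abnormal
```

For example, a DMSBP of 140 mmHg with an NMSBP of 120 mmHg gives an NBPDR of about 0.14, a normal dipper pattern.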
Statistical analysis
Time-to-event analysis based on the first PCI event was performed using Cox proportional hazards regression for all covariates. Binary logistic regression analysis was used to identify independent risk factors of PCI. For both the Cox and logistic regression analyses, covariates with p < .1 in univariate analysis were selected for multivariate analysis. 18 Statistical significance was defined as p < .05.
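The two-stage selection (univariate screening at p < .1, then multivariate modelling) can be sketched generically; the covariate names and p-values below are hypothetical placeholders, not study results:

```python
def screen_covariates(univariate_pvalues, alpha=0.1):
    """Keep covariates whose univariate p-value is below alpha
    for entry into the multivariate model."""
    return [name for name, p in univariate_pvalues.items() if p < alpha]

# Hypothetical univariate results (illustrative only)
pvals = {"hypertension": 0.04, "fibrinogen": 0.002,
         "baseline_SBP": 0.08, "age": 0.45}
selected = screen_covariates(pvals)
# → ['hypertension', 'fibrinogen', 'baseline_SBP']
```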
Characteristics of two groups at baseline
There was no significant difference between the two groups in any factor except NIHSS on the 7th day after admission, history of hypertension, and fibrinogen (Table 1).
3.2.1 Difference between PCI and NPCI groups in SBP and DBP

As shown in Figure 2, the PCI group had higher SBP and lower DBP than the NPCI group, although the majority of these differences were not statistically significant. The DMSBP on the first day and the NMSBP on the third day were significantly higher in the PCI group (p < .05 for both).
There was no significant difference in DBP parameters.
Differences between diurnal, nocturnal, and 24 h values in PCI and NPCI groups
Regarding SBP, significant pairwise differences between DMSBP, NMSBP, and 24-hMSBP appeared on the first and third days after admission in the NPCI group, and only on the first day in the PCI group (p < .05 for all, Figure 2A). Regarding DBP, we only observed significant differences between DMDBP, NMDBP, and 24-hMDBP in the NPCI group on the first day of admission (Figure 2B).
Comparison between values of different days in PCI and NPCI groups
In the PCI group, there was a sharp decrease of DMSBP on the second day after admission, followed by a milder but significant increase on the third day (Figure 3A). By contrast, the DMSBP of patients in the NPCI group decreased gradually over time; the only significant decrease appeared on the sixth day. There was no significant difference in NMSBP between days in either the NPCI or the PCI group.
Regarding 24-hMSBP, there were no significant differences between any 2 days in the NPCI group. In the PCI group, the only significant difference in 24-hMSBP was a sharp decrease from the first to the second day (p = .012).
The DMDBP of patients in the PCI group was significantly higher on the first day of admission than on the following 4 days (Figure 3B).
In the NPCI group, the DMDBP on the third day was significantly higher than on the 4th and 7th days. Regarding NMDBP, there was no significant difference between days in the PCI group; in the NPCI group, the NMDBP on the second day was significantly lower than on the sixth day (p = .013). As for 24-hMDBP, the value on the second day was significantly lower than on the first day in the PCI group (p = .012). In the NPCI group, the 24-hMDBP on the seventh day was significantly lower than on the third and sixth days. The intra-group comparisons of DMDBP showed minor differences between the PCI and NPCI groups.
PP: Significant differences between PCI and NPCI
The 24-h mean PP in the PCI group was consistently higher than that in the NPCI group, with significant differences on the 1st, 3rd, 4th, 5th, and 6th days (Figure 2C).
MAP: No significant difference
There was no significant difference in 24-h MAP between the PCI and NPCI groups.
Blood pressure circadian rhythm
As shown in Figure 4(A), in the PCI group, the NBPDR of the first day was significantly different from those of the 2nd, 3rd, and 5th days, with no significant difference among the 2nd to 7th days. In comparison, in the NPCI group, there were significant differences between the 1st day and the 2nd, 4th, 6th, and 7th days, and between the 2nd day and the 3rd and 7th days. In general, the variation of circadian BP rhythm was less obvious in the PCI group.
In Figure 4(B), on the first day of admission, there was no significant difference between the PCI group and the NPCI group (p > .05 in chi-square test).On the third day of admission, compared with NPCI patients, the percentage of dipper pattern was significantly lower and the percentage of abnormal BP rhythm (non-dipper, reverse-dipper) was significantly higher in the PCI group (p < .001).
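The between-group comparison of circadian-pattern frequencies relies on a chi-square test. As an illustration, the Pearson chi-square statistic for a pattern-by-group contingency table can be computed directly (the counts below are made up, not the study data):

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table
    given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: PCI, NPCI; columns: dipper, non-dipper, reverse-dipper (illustrative counts)
stat = chi_square_statistic([[5, 20, 15], [18, 15, 7]])
```

In practice the p-value would come from the chi-square distribution with (r − 1)(c − 1) degrees of freedom, for example via `scipy.stats.chi2_contingency`.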
Potential risk factors of PCI
3.6.1 Time-to-event analysis: Effect of baseline clinical parameters on PCI

Among the sixteen baseline variables (i.e., clinical parameters measured at admission), three (hypertension, fibrinogen, and baseline SBP) had p values below .1 in univariate Cox regression analysis and were selected for multivariate analysis, where no collinearity was observed.
The multivariate analysis showed that high fibrinogen and high SBP at admission were associated with the development of PCI (Table 2).
Logistic regression analysis: Potential risk factors of PCI
For continuous variables with p < .1 in univariate analysis, multicollinearity was found among baseline SBP, 24-h DMSBP, 72-h NMSBP, and 5-day mean PP (correlation coefficients from 0.31 to 0.94, variance inflation factor > 5), so principal components analysis was performed. The first three principal components (proportion of variance explained: 90.69%) were selected and entered, together with hypertension history and fibrinogen, into the multivariate logistic regression analysis based on a backward stepwise approach. In the final multivariate regression model, the first principal component (odds ratio 1.915, p = .022) and fibrinogen (odds ratio 2.268, p = .001) were statistically significant (Table 3). Compared to other BP-derived hemodynamic
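The collinearity-handling step (retain principal components until the explained variance reaches 0.9) can be sketched with NumPy on synthetic data; this is an illustrative reconstruction, not the study's SPSS implementation:

```python
import numpy as np

def pca_components(X, variance_threshold=0.9):
    """Return the scores of the leading principal components that together
    explain at least `variance_threshold` of the total variance."""
    Xc = X - X.mean(axis=0)                  # center each variable
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]        # re-sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    explained = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(explained, variance_threshold) + 1)
    return Xc @ eigvecs[:, :k], explained[:k]

# Synthetic example: two strongly collinear variables plus one independent one
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
X = np.hstack([x,
               2 * x + rng.normal(scale=0.1, size=(200, 1)),
               rng.normal(size=(200, 1))])
scores, explained = pca_components(X)
```

The retained scores would then replace the collinear covariates in the regression model, as described in the Methods.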
Summary of results
The prognosis of stroke depends on progressive cerebral hemodynamic damage. 7,19 In this study, we reported the temporal features of BP-derived hemodynamic parameters in patients with PCI. We investigated the relationships between these hemodynamic parameters and PCI by comparing results between the PCI and NPCI groups. The pattern of temporal changes in BP-derived hemodynamic parameters differed significantly between the PCI and NPCI groups.
In the PCI group, both SBP and PP were significantly higher, accompanied by abnormal circadian BP rhythms.High baseline SBP, PP and fibrinogen were associated with PCI and might be potential risk factors of PCI.
Consecutive BP recordings: Towards higher accuracy
To investigate the temporal fluctuations of BP-derived hemodynamic parameters, we used consecutive BP recordings with strict inclusion/exclusion criteria. In existing studies, the relationship between BP and PCI was investigated mainly on the basis of cross-sectional BP values rather than consecutive BP recordings. 9 Consecutive non-invasive BP recording not only enables the observation of temporal fluctuations of BP, but also reduces the effect of measurement errors and observer bias compared with cross-sectionally recorded BP. 20,21 As far as we know, our study is among the first attempts towards BP-based PCI detection from a hemodynamic perspective.
Role of SBP in PCI-related hemodynamic changes
In this study, we observed a post-admission decrease in BP with different patterns in PCI and NPCI patients. In the PCI patients, SBP (including DMSBP and 24-hMSBP) dropped sharply on the 2nd day, with a milder increase on the 3rd day (Figures 2A and 3A). High SBP at admission was associated with PCI, and PCI patients had significantly higher 24-h DMSBP and 72-h NMSBP than NPCI patients. These parameters might be potential risk factors of PCI. Our findings are in accordance with existing clinical observations. On the one hand, in both PCI and NPCI patients, SBP fluctuates in a time-dependent manner in the first few days after an ischemic stroke. 22 An abrupt elevation in BP after an ischemic stroke was observed in the majority (approximately 80%) of patients within a few hours or 1−2 days, and post-stroke BP elevations are generally transient, returning to baseline within a week in two-thirds of patients. 10,23 On the other hand, PCI patients seem to have larger SBP fluctuations, which may indicate hemodynamic instability. Castillo and coworkers found that a fall in BP during the first day after admission is detrimental and associated with worsening neurological function in patients with acute ischemic stroke. 24 A sudden drop in SBP is considered the strongest predictor of poor prognosis of ischemic stroke and is associated with a final infarct volume of more than 60 mL. 24,25 Patients with a larger drop in BP in the first 24 h had a higher risk of recurrent ischemic stroke. 26 In addition, SBP is independently associated with early mortality after acute ischemic stroke. 10 Arterial BP at admission independently predicts PCI, with high SBP at admission indicating poor prognosis. 3,27 In a study of ischemic stroke patients who received ambulatory BP monitoring within 72 h of admission, high daytime SBP was significantly associated with the recurrence of ischemic stroke. 2 High SBP was related to the deterioration of ischemic stroke, and high 24-h SBP could be used as a predictor of clinical deterioration caused by brain swelling. 28 Therefore, both post-stroke SBP and its fluctuations deserve more attention as potential indicators of PCI.
Our observations on SBP and PCI indicate a possible pathologic mechanism of PCI. Ischemic tissues are particularly vulnerable to fluctuations of SBP. 29 When SBP decreases, cerebral vessels with an impaired vasomotor response may not be able to dilate and increase cerebral blood flow, leading to ischemia and PCI. 10,15 Autoregulation is impaired in CI patients, which makes cerebral perfusion more dependent on SBP, so that a low perfusion pressure might increase the infarct area. 27 We speculated that the DMSBP increase on the third day might reflect a compensatory mechanism maintaining cerebral perfusion. 25 In addition, studies have shown that hypertension within a few hours after stroke can lead to disruption of the blood-brain barrier, an increase in cerebral perfusion pressure, and finally the formation of edema. 28 CI patients with well-developed collateral circulation often have a lower post-stroke BP. 22,30 In the first few hours after onset, stroke deficits are unstable and prone to sudden worsening with clot propagation or collateral failure. 31 Collateral failure is a likely mechanism for most ischemic stroke-in-progression, 32 of which SBP may reflect the severity from a hemodynamic perspective. The underlying pathophysiology linking SBP and PCI deserves more in-depth investigation.
4.4 Pulse pressure

In this study, we observed that PP has strong explanatory power in the first principal component of the BP-derived parameters, which indicates that PP might be associated with PCI during its development.
Studies have shown that 24-h PP is an independent risk factor for stroke and an independent predictor of long-term mortality. 21,33 Grabska and coworkers calculated the mean PP over 7 days after stroke in 1677 patients and found that elevated PP in the acute phase of CI was an independent predictor of poor early prognosis. 34 PP plays an important role in predicting the recurrence of CI and is positively correlated with the deterioration (i.e., progression) of neurological dysfunction. 1,35 High PP is recognized as a marker of large arterial stiffening and widespread atherosclerosis, 36 which affects the macro- and microcirculation and induces remodeling of vessel walls. 34,37 For the cerebral circulation, increased pulsatile stretching implies damaged adaptive properties as a result of endothelial damage and stiffening, as well as low perfusion during diastole, both leading to the aggravation of CI. 5,38 Our observations highlight the significance of PP monitoring in the early detection of PCI. Of note, the observations were based on a small cohort. Large-scale validation is essential and may yield quantitative PP thresholds/metrics to estimate the risk of PCI.
Circadian BP rhythm
In this study, we found that the variation of circadian BP rhythm was reduced in the PCI group, where pathological circadian rhythms were more frequent and lasted longer. In the PCI group, on the third day after admission, we observed a low percentage of the normal dipper pattern with high percentages of abnormal circadian rhythm patterns (non-dipper and reverse-dipper). We speculated that the abnormality in the circadian rhythm of BP might be associated with a metabolic imbalance caused by higher sympathetic activity in PCI patients. 39 We also observed instability in NBPDR over time, which might reflect hemodynamic instability in the early stage of PCI.
The pathologically reduced or abolished dipper pattern after an acute ischemic stroke may lead to more target organ damage, which could worsen the neurological outcome. 40,41 Patients with a dipper pattern after an ischemic stroke showed good prognosis, whilst abnormal circadian rhythm patterns were associated with poor prognosis. 42 The abnormal circadian rhythm patterns of BP (non-dipper and reverse-dipper) are independent predictors of stroke in patients with hypertension, 43 and are associated with more severe baseline stroke and poorer short-term functional recovery. 42 In accordance with existing studies, our observations suggest that an abnormal circadian rhythm of BP might be an early indicator of PCI. In the future, large-scale prospective studies may further clarify the trends of NBPDR in PCI patients.
Significance for clinical practice and research
The occurrence of PCI involves multifactorial pathology, including hemodynamic and metabolic factors. 9 The underlying pathophysiological mechanisms include expansion of thrombus, angiemphraxis of the collateral circulation, decreased brain tissue perfusion, brain cell edema, and apoptosis. 14 Our results may provide new material for pathological research on PCI and its hemodynamic factors. Our study also highlights the significance of BP monitoring and management in clinical practice, since BP is recognized as a treatable risk factor for CI. 40 We found that fibrinogen was associated with PCI and may be a potential risk factor for PCI, which is consistent with previous studies.
A high fibrinogen level has been found to be associated with neurological deterioration and poor functional prognosis in early CI. 4,44 The measurement of fibrinogen is inexpensive and widely available. Fibrinogen measurement and continuous monitoring of BP dynamics at the early stage of ischemic stroke may provide more information on cerebral hemodynamics, enabling earlier detection of PCI.
Limitations and future directions
There are some limitations in this study. First, it was a single-center study based on a small cohort. Possible inaccurate or incomplete case identification may have biased patient selection. Although all patients received standardized treatment in the same stroke unit, we did not have complete data on whether some patients were taking antihypertensive medications during the study period, and therefore could not assess BP differences attributable to antihypertensive treatment. Second, this was a retrospective study. It was not our aim to study the sequence of BP-related changes and PCI from a pathologic perspective, and our results do not demonstrate a direct causal relationship between BP changes and PCI outcomes. It is worthwhile to further explore long-term BP changes in PCI patients. In addition, our estimation of NBPDR was based on a relatively low number of BP recordings; frequent BP measurement during daily activities is essential for an accurate estimation of NBPDR and the circadian rhythm of BP. In the future, prospective multicenter studies can validate our findings and elucidate the underlying mechanisms. Recent studies have reported that BP and hemodynamic changes are effective indicators in the diagnosis and treatment of a broad range of brain disorders. 45,46 Therefore, with methodological adaptation (e.g., including other proposed metrics of cerebral circulation), the findings might be extended to cohorts with other brain disorders, for example, hydrocephalus and cerebral atrophy, to enable early detection of severe symptoms and improve diagnosis and treatment. 47
CONCLUSIONS
We observed a decrease of BP in post-stroke patients, with different patterns in patients with PCI and NPCI. PCI patients showed higher daytime and nighttime SBP, higher PP, and more abnormal circadian rhythms of BP. PCI was associated with high SBP, PP, and fibrinogen at admission. These factors deserve further attention in the early detection of PCI.
F I G U R E 1
Flowchart and research question of the study. (A) Physiological measurement and data collection. (B) Automatic blood pressure measurement at a 2-h interval. (C) Data analysis and calculation of hemodynamic parameters. (D) Clinical imaging of PCI. Cranial diffusion-weighted magnetic resonance images at a similar level were recorded before and after progression in a patient with cerebral infarction. (E) Research question. BP, blood pressure; DBP, diastolic blood pressure; MAP, mean arterial pressure; NPCI, non-progressive cerebral infarction; PCI, progressive cerebral infarction; PP, pulse pressure; SBP, systolic blood pressure.

F I G U R E 2
Intra- and inter-group comparisons of BP-derived hemodynamic parameters on each day in PCI and NPCI patients. (A) Systolic blood pressure. (B) Diastolic blood pressure. (C) Pulse pressure. DMSBP, diurnal mean systolic blood pressure; NMSBP, nocturnal mean systolic blood pressure; 24-hMSBP, 24-hour mean systolic blood pressure; DMDBP, diurnal mean diastolic blood pressure; NMDBP, nocturnal mean diastolic blood pressure; 24-hMDBP, 24-hour mean diastolic blood pressure; PP, pulse pressure. The thin arms show significant differences among diurnal, nocturnal, and 24-h results. The thick arms show significant differences between the PCI and NPCI groups. * and ** denote p < .05 and p < .001, respectively.

F I G U R E 3
Intra-group comparison of blood pressure values on different days. (A) DMSBP. (B) DMDBP. The p values of the paired t-test are shown in the rectangles.
F I G U R E 4
NBPDR in PCI and NPCI groups.(A) Comparison of NBPDR within 7 days after admission in PCI and NPCI patients.(B) The percentage of circadian pattern of BP on the first and third days of admission in PCI and NPCI groups.
TA B L E 1 Data characteristics of the two groups of patients.

Statistical analysis was performed using SPSS software (version 26.0; IBM Corp, USA). Quantitative data with a normal distribution were expressed as mean ± standard deviation, and quantitative data with a skewed distribution were expressed as M (P25, P75), where M, P25, and P75 are the median and the 25% and 75% quartiles, respectively. For normally distributed quantitative data, the independent-samples t-test was used for comparisons between the PCI and NPCI groups, and the paired-samples t-test was used for intra-group comparisons of values on different days or of corresponding BP parameters. For quantitative data with a skewed distribution, the Mann-Whitney U-test was used for comparisons between groups, and the Wilcoxon signed-rank test was used for comparisons within groups. Count data were compared using the chi-squared test or Fisher's exact test.
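The descriptive-statistics convention above (mean ± SD for normal data, median with quartiles otherwise) can be sketched in Python; `summarize` is a hypothetical helper, and the normality decision (made via a separate test in practice) is passed in as a flag:

```python
from statistics import mean, stdev, quantiles

def summarize(values, normal):
    """Format a sample as 'mean ± SD' (normal) or 'M (P25, P75)' (skewed)."""
    if normal:
        return f"{mean(values):.1f} ± {stdev(values):.1f}"
    # quantiles(n=4) returns the three quartile cut points P25, P50, P75
    p25, p50, p75 = quantiles(values, n=4)
    return f"{p50:.1f} ({p25:.1f}, {p75:.1f})"
```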
TA B L E 2 Results of univariate and multivariate Cox regression analyses.

TA B L E 3 Results of multivariate logistic regression analysis.
VARIATION IN LOBES AND FISSURES OF LUNG
Address for Correspondence: Dr. Jaideo Ughade, Associate Professor, Department of Anatomy, Late Lakhiram Agrawal Memorial Government Medical College, Bendrachua, Raigarh 496001. E-Mail: drjaideoughade@gmail.com Introduction: Lungs are the paired vital organs meant for respiration, situated in the thoracic cavity on either side of the heart. The right lung is divided into superior, middle & inferior lobes by the oblique and horizontal fissures, while the left lung is divided into superior & inferior lobes by an oblique fissure. The fissures permit distension of the lobes during respiration. The fissures may be complete, incomplete or absent. Aim: To find out the variations in fissures and lobes, along with their patterns, in human lungs collected from cadavers. Result: Out of 50 right lung specimens, the horizontal fissure was absent in two cases, whereas the horizontal fissure was incomplete in 18 specimens. An incomplete oblique fissure was seen in 7 right-sided lungs. We reported accessory fissures and accessory lobes in 14 specimens. The oblique fissure was absent in 4 left lungs and incomplete in 14 lungs. Accessory fissures and lobes were present in 8 specimens. Conclusion: Knowledge of any variations is necessary in performing segmental resection and lobectomy. Accessory fissures indicate persistence of prenatal fissures.
Lungs are the paired vital organs meant for respiration, situated in the thoracic cavity on either side of the heart. The right lung is divided into superior, middle & inferior lobes by the oblique and horizontal fissures. The left lung is divided into superior & inferior lobes by an oblique fissure. The oblique fissure cuts the vertebral border of both lungs at the level of the 4th or 5th thoracic spine. The oblique fissure runs downward on the costal surface, continues across the diaphragmatic surface & ends below the hilum on the medial surface. The horizontal fissure is present only in the right lung. It starts from the oblique fissure, runs laterally & transversely over the costal surface then to the anterior margin & finally back to the hilum [1]. The fissures permit distension of the lobes during respiration [2]. The fissures may be complete, incomplete or absent.

When the lobes of the lung are attached to each other only at the hilum by the pulmonary blood vessels and bronchi, the fissures are said to be complete. When the fissures are partly fused by parenchymal tissue between the lobes, they are incomplete. Fissures assist in locating the bronchopulmonary segments [3]. Knowledge of such variations is necessary in performing segmental resection and lobectomy. Accessory fissures indicate persistence of prenatal fissures. An incomplete fissure may cause postoperative air leakage [4]. The present study was conducted to note the variations in fissures and lobes of the lung in human cadavers and to compare the findings with previous studies. Aim and objectives: To find out the variations in fissures and lobes, along with their patterns, in human lungs collected from cadavers.
MATERIALS AND METHODS

A descriptive study to assess the variations in the presence and completeness of the fissures and lobes of the lung in human cadavers was conducted in the Departments of Anatomy of the Government medical colleges of Raigarh, Nanded and Yavatmal over a period of four years. Formalin-fixed cadaveric lungs were collected during undergraduate dissection classes. The total number of specimens examined was 100 (50 of the right side and 50 of the left side). Variations in the form of complete, incomplete, absent, or accessory fissures and lobes, if any, were noted. The lobes and fissures of the lungs were observed for variations in morphological features (i.e., complete, incomplete, presence, absence) and for any variant or accessory fissure, and the specimens were photographed.

RESULTS

Among the total 100 specimens, 50 were of the right side and 50 were of the left side. The laterality was judged based on the hilar structures. Right lung: Out of 50 right lung specimens, the horizontal fissure was completely absent in two cases, whereas the horizontal fissure was incomplete in 18 specimens. Complete absence of the oblique fissure was not seen in any of the 50 right lung specimens. An incomplete oblique fissure was seen in 7 right lungs. We reported accessory fissures and accessory lobes in 14 specimens.
Left lung: The oblique fissure was absent in 4 left lungs and was incomplete in 14 lungs. Accessory fissures and lobes were present in 8 specimens.
Fig. 4: Accessory fissure and lobe on diaphragmatic surface of lung.

Fig. 7: Incomplete fissure not cutting the parenchyma of lung.
The percentage of variations was higher in the study conducted by Dutta et al. The presence of an accessory fissure was noted in 38% of right-sided and 32% of left-sided specimens by Ambali MP et al [8]. Azmera Gebregziabher et al [9] showed the presence of an accessory fissure and accessory lobe in 2 lungs (8.69%) out of 23 right lungs, and an accessory fissure was noted in 3 (15%) left lung specimens. We reported accessory fissures and accessory lobes in 14 right-sided specimens, and accessory fissures and lobes were present in 8 left-sided specimens. The mucosal linings of the bronchi and the epithelial cells of the alveoli are endodermal in origin. The vasculature, muscles & cartilage of the bronchi are derived from mesoderm. The lung bud appears as an outgrowth from the ventral wall of the foregut, which bifurcates into right and left primary bronchial buds. The right bronchial bud branches into three secondary bronchial buds while the left one branches into two. At about the 6th week of intrauterine life, the secondary bronchial buds branch into tertiary bronchial buds to form the bronchopulmonary segments. The spaces between the bronchopulmonary segments get obliterated except along the line of division of the principal bronchi, where deep complete fissures remain, dividing the right lung into 3 lobes and the left lung into 2 lobes. These fissures are oblique and horizontal in position in the right lung, whereas only an oblique fissure is present in the left lung [10]. Incomplete or absent oblique and horizontal fissures could be due to defective obliteration of these fissures, either completely or incompletely, whereas accessory fissures may be present due to non-obliteration of spaces which should normally get obliterated [11]. The percentage of variations in lobes and fissures in the present study was similar to the study conducted by Varalakshmi et al.
Table 1: Showing the percentage of lobes in the present study.

Table 2: Showing the present study findings in comparison with previous studies.
M Ughade, Poorwa B Kardile, Pawan R Tekade. VARIATION IN LOBES AND FISSURES OF LUNG
Adenosine A1-A2A Receptor-Receptor Interaction: Contribution to Guanosine-Mediated Effects
Guanosine, a guanine-based purine nucleoside, has been described as a neuromodulator that exerts neuroprotective effects in animal and cellular ischemia models. However, guanosine’s exact mechanism of action and molecular targets have not yet been identified. Here, we aimed to elucidate a role of adenosine receptors (ARs) in mediating guanosine effects. We investigated the neuroprotective effects of guanosine in hippocampal slices from A2AR-deficient mice (A2AR−/−) subjected to oxygen/glucose deprivation (OGD). Next, we assessed guanosine binding at ARs taking advantage of a fluorescent-selective A2AR antagonist (MRS7396) which could engage in a bioluminescence resonance energy transfer (BRET) process with NanoLuc-tagged A2AR. Next, we evaluated functional AR activation by determining cAMP and calcium accumulation. Finally, we assessed the impact of A1R and A2AR co-expression in guanosine-mediated impedance responses in living cells. Guanosine prevented the reduction of cellular viability and increased reactive oxygen species generation induced by OGD in hippocampal slices from wild-type, but not from A2AR−/− mice. Notably, while guanosine was not able to modify MRS7396 binding to A2AR-expressing cells, a partial blockade was observed in cells co-expressing A1R and A2AR. The relevance of the A1R and A2AR interaction in guanosine effects was further substantiated by means of functional assays (i.e., cAMP and calcium determinations), since guanosine only blocked A2AR agonist-mediated effects in doubly expressing A1R and A2AR cells. Interestingly, while guanosine did not affect A1R/A2AR heteromer formation, it reduced A2AR agonist-mediated cell impedance responses. Our results indicate that guanosine-induced effects may require both A1R and A2AR co-expression, thus identifying a molecular substrate that may allow fine tuning of guanosine-mediated responses.
Introduction
Guanosine is a guanine-based purine nucleoside that has been shown to exert neuroprotective and neurotrophic effects in both in vitro and in vivo studies (for review, see [1]). Thus, it has been postulated as a good candidate for the management of several central nervous system (CNS) disorders, including neurodegenerative diseases (i.e., Parkinson's, Alzheimer's) or ischemia [1,2]. Brain ischemia is one of the major causes of health disability worldwide [3]. It occurs after a blood supply collapse that leads to a reduced level of oxygen and glucose within the affected brain area. Similarly, excitotoxicity and oxidative stress lead to a failure of cellular bioenergetics [4]. Importantly, a neuroprotective role of guanosine has been extensively investigated in animal and cellular models of ischemia, excitotoxicity and oxidative stress [5][6][7][8][9][10]. Indeed, we have demonstrated that guanosine prevents reactive oxygen species (ROS) generation and cell death in hippocampal slices subjected to oxygen/glucose deprivation (OGD) [11].
The mechanism by which guanosine exerts its neuroprotective effects remains elusive. Despite the identification of a putative guanosine binding site in rat brain membranes [12], a specific guanosine receptor has not yet been discovered. Importantly, it has been hypothesized that adenosine receptors (ARs) may play a role in mediating guanosine effects, although with some controversy. For instance, it has been reported that AR selective ligands do not compete for guanosine binding to rat brain membranes [13,14], whereas AR ligands were able to block some of the guanosine-dependent neuroprotective effects [15]. In line with this, a selective adenosine A 1 receptor (A 1 R) antagonist (DPCPX, 8-cyclopentyl-1,3-dipropylxanthine) and a selective A 2A receptor (A 2A R) agonist (CGS21680, 2-(4-(2-carboxyethyl)phenethylamino)-5′-N-ethylcarboxamidoadenosine) inhibited guanosine-mediated neuroprotection in hippocampal slices subjected to OGD [11]. Overall, these findings, including those using multimodal A 1 R and A 2A R ligand treatments, supported the notion that both A 1 R and A 2A R would participate in guanosine-mediated effects.
Interestingly, it has been hypothesized that adenosine A 1 and A 2A receptor-receptor interactions (i.e., heteromerization) might be behind some of the guanosine-mediated effects, thus pointing to the A 1 R/A 2A R heteromer as a putative molecular target for guanosine [16]. Indeed, the existence of A 1 R/A 2A R heteromers has been demonstrated in presynaptic terminals of striatal neurons controlling glutamate release [17], thus acting as an adenosine concentration-dependent switch [18]. Consequently, low to moderate concentrations of adenosine predominantly activate A 1 R within the A 1 R/A 2A R heteromer (i.e., inhibiting glutamate release), whereas moderate to high concentrations of adenosine also activate A 2A R, which, by means of the A 1 R-A 2A R intramembrane negative allosteric interaction, antagonizes A 1 R function, therefore facilitating glutamate release. Altogether, in view of the already known experimental indications, the A 1 R/A 2A R heteromer might be viewed as a potential target for guanosine, thus deserving further attention. Here, we aimed to assess the role of A 1 R and A 2A R interaction in guanosine-mediated effects. First, we studied the neuroprotective effects of guanosine in an ex vivo model of brain ischemia, both in wild-type and A 2A R deficient (A 2A R −/− ) mice; subsequently, we aimed to elucidate, in vitro, both the putative guanosine binding and activation of the A 1 R/A 2A R heteromer.
Animals
Wild-type and A 2A R −/− CD-1 male and female mice [20] weighing 25-50 g were used at 2-3 months of age. The University of Barcelona Committee on Animal Use and Care (CEEA-UB) approved the protocol (Code 10033, 04/02/2018). Animals were housed and tested in compliance with the guidelines described in the Guide for the Care and Use of Laboratory Animals [21] and following the European Union directives (2010/63/EU), FELASA and ARRIVE guidelines. Mice were housed in groups of five in standard cages with ad libitum access to food and water and maintained under a 12-h dark/light cycle (starting at 7:30 AM), at 22 °C and 66% humidity (standard conditions).
OGD Protocol
Mice were euthanized by cervical dislocation and the hippocampi rapidly removed and placed in an ice-cold Krebs-Ringer bicarbonate buffer (KRB) ( PO 4 , and 5 HEPES), in which 10 mM d-glucose was replaced by 10 mM 2-deoxy-glucose, equilibrated with a 95% N 2 /5% CO 2 gas mixture, as described previously [5]. After 15 min of OGD, the medium of the slices was replaced by oxygenated KRB and maintained for 2 h for the evaluation of cellular viability and ROS generation. Guanosine (100 µM), when present, was added 15 min before (in KRB) and during OGD (in OGD buffer), and maintained during the re-oxygenation period (2 h), when the OGD buffer was replaced by physiological KRB.
Measurement of ROS Production
For evaluating ROS generation, subsequent to the OGD/re-oxygenation protocol, slices were washed twice with KRB and maintained for 15 min before being incubated with 80 µM 2′,7′-dichlorofluorescein diacetate (DCFH-DA; Sigma-Aldrich) for 30 min [23]. DCFH-DA diffuses through the cell membrane and is hydrolyzed by intracellular esterases to the non-fluorescent form dichlorofluorescin (DCFH). DCFH can then react with intracellular H 2 O 2 to form dichlorofluorescein (DCF), a green fluorescent dye. Slices were then transferred to a 96-well black plate containing 200 µL of KRB, and fluorescence was read (excitation 480 nm, emission 525 nm) using a POLARStar plate reader (BMG Labtech).
Plasmid Constructs
The cDNA encoding the human A 1 R tagged at its N-terminal tail with the O6-alkylguanine-DNA alkyltransferase (i.e., A 1 R SNAP ) cloned in the pRK5 vector (BD PharMingen, San Jose, CA, USA) was a gift from Prof. Jean-Philippe Pin (CNRS, Montpellier, France). Thus, to perform functional assays, A 2A R SNAP [24] and A 1 R SNAP were used. Also, A 2A R RLuc and A 1 R YFP constructs [17] were used to perform classical BRET (Bioluminescence Resonance Energy Transfer) assays. Finally, to perform NanoBRET experiments with the MRS7396 fluorescent antagonist, we created an A 2A R NanoLuc sensor (A 2A R NL ). To this end, the cDNA encoding the human A 2A R was amplified by polymerase chain reaction from the pECFP-A 2A R vector using the primers FA2AEco (5′-GCCGGAATTCCCCATCATGGGCTCCTCGGTGTAC-3′) and RA2ANot (5′-CGCGGCGGCCGCtcaggacactcctgctccatcctggg-3′). The amplified A 2A R insert was then cloned into the EcoRI/NotI sites of the pNLF1-secN vector (Promega, Stockholm, Sweden) containing a hemagglutinin (HA) epitope tag. All the constructs were verified by DNA sequencing.
NanoBRET Experiments
The NanoBRET assay was performed on HEK-293T cells stably expressing A 2A R NL , transiently transfected (or not) with A 1 R SNAP , according to [25]. In brief, cells were re-suspended in HBSS and seeded into poly-ornithine-coated white 96-well plates. After 24 h, cells were challenged with or without the non-labelled A 2A R antagonist (SCH442416) or guanosine and incubated for 1 h at 37 °C. Subsequently, the fluorescent ligand (MRS7396) was added and the plate returned to 37 °C for 1 h. Finally, coelenterazine-h (Life Technologies Corp.) was added at a final concentration of 5 µM, and readings were performed after 5 min using a CLARIOstar plate reader (BMG Labtech). The donor and acceptor emissions were measured at 490-510 nm and 650-680 nm, respectively. The raw NanoBRET ratio was calculated by dividing the 650 nm emission by the 490 nm emission. In competition studies, results were expressed as a percentage of the maximum signal obtained (mBU; milliBRET units).
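The ratio arithmetic described above can be sketched in a few lines of Python; note that the donor-only background subtraction used below to obtain milliBRET units is an assumed convention (the text reports mBU but does not spell out the correction step):

```python
def bret_ratio(acceptor_650: float, donor_490: float) -> float:
    """Raw NanoBRET ratio: long-wavelength acceptor emission (650-680 nm)
    divided by the NanoLuc donor emission (490-510 nm)."""
    return acceptor_650 / donor_490

def milli_bret_units(raw_ratio: float, donor_only_ratio: float) -> float:
    """Background-corrected BRET signal in milliBRET units (mBU).
    Subtracting the ratio measured in donor-only cells is an assumed
    convention, not stated explicitly in the text."""
    return (raw_ratio - donor_only_ratio) * 1000.0

# Illustrative photon counts (not real data)
raw = bret_ratio(1500.0, 100000.0)       # 0.015
signal = milli_bret_units(raw, 0.010)    # 5.0 mBU
```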
cAMP Assay
cAMP accumulation was measured using the LANCE ® Ultra cAMP Kit (PerkinElmer, Waltham, MA, USA) as previously described [26]. In brief, transfected (A 2A R SNAP or A 2A R SNAP + A 1 R SNAP ) HEK-293T cells were first incubated for 1 h at 37 °C with stimulation buffer (0.1% BSA, 0.5 units/mL ADA, 2 µM zardaverine; in serum-free DMEM) and then with CGS21680 for 30 min at 37 °C. Thereafter, cells were transferred to a 384-well plate in which reagents were added following the manufacturer's instructions. After 1 h at room temperature, Time-Resolved Fluorescence Resonance Energy Transfer (TR-FRET) was determined by measuring light emission at 620 nm and 665 nm by means of a CLARIOstar plate reader (BMG Labtech).
Intracellular Calcium Determinations
The A 1 R-mediated intracellular Ca 2+ accumulation was assessed by means of a luciferase reporter assay based on the expression of the nuclear factor of activated T-cells (NFAT), as previously described [27]. In brief, cells were transfected with the cDNA encoding the A 1 R, the NFAT-luciferase reporter (pGL4-NFAT-RE/luc2p; Promega) and the yellow fluorescent protein (pEYFP-N1; Promega). At 36 h post-transfection, cells were incubated with the indicated drugs for 6 h. Subsequently, cells were harvested with passive lysis buffer (Promega), and the luciferase activity of cell extracts was determined using the Bright-Glo TM luciferase assay (Promega) in a POLARStar plate reader (BMG Labtech) using a 30-nm bandwidth filter at 535 nm.
Label-Free Cellular Impedance Assay
The xCELLigence Real-Time Cell Analyzer (RTCA) system (ACEA Biosciences, San Diego, CA, USA) was employed to measure changes in cellular impedance correlating with cell spreading and tightness, and is thus widely accepted as a morphological and functional biosensor of cell status [28][29][30]. Briefly, 16-well E-plates (ACEA Biosciences) were coated with 50 µL fibronectin (10 µg/mL) at 37 °C for 1 h and washed three times with 100 µL MilliQ water before use. The background index for each well was determined with 90 µL of stimulation buffer (supplemented DMEM with ADA 0.5 U/mL and zardaverine 10 µM) in the absence of cells. Data from each well were normalized to the time point just before compound addition using the RTCA software, providing the normalized cell index (NCI). Subsequently, HEK-293T cells permanently expressing the A 2A R SNAP construct [31], in the absence or presence of A 1 R SNAP (90 µL resuspended in stimulation buffer), were plated at a density of 40,000 cells/well and grown for 18 h in the RTCA SP device station (ACEA Biosciences) at 37 °C in an atmosphere of 5% CO 2 before ligand (i.e., CGS21680 and/or guanosine) addition. Cell index values were obtained every 15 s immediately following ligand stimulation for a total time of at least 50 min. For data analysis, the area under the curve (AUC) of each NCI trace response was quantified and normalized to the basal value.
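A minimal sketch of the trace processing just described (normalization to the pre-addition time point, then area under the normalized cell-index curve); the trapezoidal rule is an assumption about how the AUC is computed:

```python
def normalized_cell_index(trace, pre_addition_ci):
    """Normalize each cell-index reading to the value recorded just
    before compound addition, giving the normalized cell index (NCI)."""
    return [ci / pre_addition_ci for ci in trace]

def auc(nci_trace, dt=15.0):
    """Area under an NCI trace sampled every dt seconds (trapezoidal rule,
    an assumed quantification method)."""
    return sum((a + b) * dt / 2.0 for a, b in zip(nci_trace, nci_trace[1:]))

# Illustrative flat trace: NCI stays at 1.0 over two 15-s intervals
nci = normalized_cell_index([2.0, 2.0, 2.0], 2.0)
area = auc(nci)  # 30.0
```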
Statistics
Data are represented as mean ± standard error of the mean (SEM). The number of samples/animals (n) in each experimental condition is indicated in the corresponding figure legend. Comparisons among experimental groups were performed by Student's t-test or ANOVA, as indicated, using GraphPad Prism 6.01 (San Diego, CA, USA). Statistical significance was accepted at p < 0.05.
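For reference, the mean ± SEM summary used throughout can be computed as follows (a minimal sketch using the sample standard deviation):

```python
import math

def mean_sem(values):
    """Mean and standard error of the mean (SEM = sample SD / sqrt(n))."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)

m, sem = mean_sem([1.0, 2.0, 3.0])  # mean 2.0, SEM ~0.577
```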
Guanosine-Mediated Neuroprotection in Hippocampal Slices Depends on A 2A R Expression
It has been postulated that ARs might be involved in guanosine-mediated responses in vivo [16]. Within this line of inquiry, we first interrogated whether A 2A R expression is necessary for guanosine-mediated neuroprotection, a well-known guanosine effect in vivo [1]. To this end, we subjected hippocampal slices from wild-type (i.e., A 2A R +/+ ) and A 2A R −/− mice to an OGD protocol in the presence or absence of guanosine. Indeed, significant cell death (p < 0.001) and ROS production (p = 0.0359) were observed in A 2A R +/+ hippocampal slices subjected to the OGD protocol ( Figure 1A,B). Interestingly, guanosine (100 µM) was able to prevent these effects, thus cellular viability significantly increased (p = 0.0012) and ROS production decreased (p = 0.0389) ( Figure 1A,B), as previously reported [5,11]. Importantly, under the same experimental conditions, in hippocampal slices obtained from A 2A R −/− mice, guanosine failed to prevent OGD-mediated cell death (p = 0.005) and ROS production (p = 0.0279) ( Figure 1A,B), thus losing its neuroprotective effect. Overall, these results suggested that A 2A R expression was necessary for guanosine-mediated neuroprotection.
A 2A R Ligand Binding is Affected by Guanosine upon A 1 R Coexpression
Once we had demonstrated that the neuroprotective effect of guanosine was A 2A R-dependent, we aimed to assess the putative direct interaction of guanosine with A 2A R through ligand binding studies. To this end, we engineered a fluorescent-ligand BRET-based assay to assess A 2A R ligand binding in living cells (Figure 2A). We used a fluorescent A 2A R antagonist (MRS7396) that is able to engage in a BRET process upon interacting with a cell surface A 2A R tagged with NanoLuciferase (NL) at its N-terminus (i.e., A 2A R NL ) (Figure 2A). MRS7396 is a BODIPY630/650 derivative of SCH442416 [19], which upon A 2A R binding can act as an acceptor chromophore for the NanoLuciferase emission (490 nm) in a BRET process. Thus, we challenged stable A 2A R NL -expressing cells with increasing concentrations of MRS7396, in the presence/absence of non-labelled SCH442416. Interestingly, a saturation binding hyperbola, with a K D = 4.8 ± 2.7 nM, was obtained for MRS7396, while in the presence of a saturating concentration of SCH442416 (1 µM) the binding was displaced ( Figure 2B). Our results showed that the NanoBRET binding assay was a robust and reliable way to assess A 2A R ligand binding. Accordingly, we next assessed possible guanosine effects on A 2A R orthosteric binding by performing a competition assay with a fixed concentration of MRS7396 (10 nM) (occupying ~80% of receptors at equilibrium) and increasing concentrations of guanosine. Interestingly, under these experimental conditions, guanosine was unable to alter MRS7396 binding to A 2A R NL ( Figure 2C), thus indicating that guanosine does not orthosterically bind to A 2A R, as previously reported [12,13].
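The occupancy figure quoted for MRS7396 follows from the one-site binding law; the sketch below illustrates the law itself (not the authors' fitting procedure):

```python
def fractional_occupancy(ligand_nM: float, kd_nM: float) -> float:
    """One-site binding law: B/Bmax = [L] / ([L] + KD).
    At [L] = KD, exactly half of the receptors are occupied."""
    return ligand_nM / (ligand_nM + kd_nM)

# With the reported KD of 4.8 nM for MRS7396:
half = fractional_occupancy(4.8, 4.8)   # 0.5
high = fractional_occupancy(48.0, 4.8)  # >0.9 at a 10-fold excess over KD
```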
Since A 2A R heteromerizes with A 1 R [17], and some of the physiological effects of guanosine were modulated by A 1 R ligands [32,33], we investigated whether A 1 R/A 2A R heteromer formation affected AR-related guanosine-dependent effects. To this end, we first recreated the formation of A 1 R/A 2A R heteromers in HEK-293T cells by transfecting A 2A R RLuc and A 1 R YFP constructs and monitoring A 2A R/A 1 R heteromerization by a classical BRET approach ( Figure A1). Interestingly, neither adenosine nor guanosine incubation altered A 1 R/A 2A R heteromer formation ( Figure A1). Subsequently, we assessed the impact of A 1 R co-expression on A 2A R binding of MRS7396 using our NanoBRET binding assay. Notably, in A 1 R-A 2A R doubly expressing cells, guanosine (100 µM) was able to significantly reduce, by 19 ± 4% (p = 0.0138), the binding of MRS7396 to the A 2A R NL , thus indicating that the A 1 R/A 2A R heteromer might play a potential role in AR-related guanosine-dependent effects ( Figure 2C).
A 2A R Signalling, but Not A 1 R, is Modulated by Guanosine in an A 1 R Coexpression-Dependent Manner
Given that guanosine reduced A 2A R binding in an A 1 R-expression-dependent manner, we next aimed to determine whether guanosine also impinged on A 2A R signaling. Accordingly, we determined the effects of guanosine on A 2A R-mediated cAMP accumulation upon agonist incubation. In A 2A R-expressing cells, the selective A 2A R full agonist CGS21680 induced a concentration-dependent cAMP accumulation (pEC 50 = 7.98 ± 0.08), indicating that the receptor was expressed and functional at the plasma membrane ( Figure 3A). Subsequently, we challenged cells with a fixed concentration of CGS21680 (200 nM) and evaluated the effects of increasing concentrations of guanosine on A 2A R-dependent cAMP accumulation. As shown in Figure 3B, guanosine did not preclude A 2A R-mediated cAMP accumulation. Conversely, in cells doubly expressing A 1 R and A 2A R, guanosine (100 µM) was able to significantly reduce, by 19 ± 3% (p = 0.0460), the A 2A R-mediated cAMP accumulation ( Figure 3B). These results supported the hypothesis that the effects of guanosine might be dependent on an A 1 R-A 2A R interaction.
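For reference, the reported potency (pEC 50 = 7.98) corresponds to an EC 50 of roughly 10 nM; a minimal sketch of the conversion and of the simple Hill model that such concentration-response fits commonly assume:

```python
def ec50_from_pec50(pec50: float) -> float:
    """EC50 in molar units from pEC50 = -log10(EC50)."""
    return 10.0 ** (-pec50)

def hill_response(conc_M: float, ec50_M: float,
                  emax: float = 1.0, hill_n: float = 1.0) -> float:
    """Hill equation for a concentration-response curve
    (hill_n = 1 gives the simple hyperbolic case)."""
    return emax * conc_M ** hill_n / (conc_M ** hill_n + ec50_M ** hill_n)

ec50 = ec50_from_pec50(7.98)  # ~1.05e-8 M, i.e., ~10.5 nM
```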
Interestingly, our NanoBRET-based binding results and cAMP determinations in the absence and presence of A 1 R suggested a direct involvement of this receptor in the guanosine-mediated blockade of A 2A R ligand binding and signaling. Thus, to ascertain whether guanosine would directly interact with A 1 R, we assessed its impact on A 1 R-dependent signaling. To this end, A 1 R-mediated calcium responses in HEK-293T cells were determined through a homogeneous bioluminescence reporter assay system using an NFAT response element controlling luciferase gene expression. While the activation of A 1 R, via application of the agonist N 6 -R-phenylisopropyladenosine (R-PIA, 50 nM), increased intracellular Ca 2+ , the incubation with guanosine (100 µM) did not promote intracellular Ca 2+ mobilization ( Figure 4A). Similarly, when A 1 R-expressing cells were treated with R-PIA in the presence of increasing concentrations of guanosine, A 1 R-dependent intracellular Ca 2+ mobilization was not affected, as also observed in doubly A 1 R and A 2A R transfected cells ( Figure 4B). Overall, these results indicated that guanosine did not interact with A 1 R, thus ruling out any orthosteric A 1 R-dependent trans-inhibition of A 2A R function in A 1 R-A 2A R expressing cells.
Finally, we assessed the functional activity of guanosine using label-free technology. To this end, whole-cell guanosine-mediated impedance responses were monitored in living cells expressing A 2A R in the absence or presence of A 1 R using a biosensor method, as previously reported [34]. First, we tested CGS21680-mediated changes in morphology (i.e., impedance) of A 2A R SNAP -expressing HEK-293T cells, which were recorded in real time. Interestingly, addition of CGS21680 resulted in a significant (p = 0.015) increase in impedance, which was blocked by incubation with the selective A 2A R antagonist ZM241385 ( Figure 5A,B). In addition, guanosine affected neither the basal cell morphology (p = 0.6105) nor the CGS21680-mediated changes (p = 0.1217) ( Figure 5B). However, in doubly expressing A 1 R/A 2A R cells, guanosine significantly reduced (p = 0.0106) the basal cell morphology and precluded (p < 0.0001) the CGS21680-induced increase in cellular impedance ( Figure 5B). Again, these results indicated that A 1 R-A 2A R co-expression may play a potential role in AR-related guanosine-dependent cellular effects.
Discussion
Guanosine is a purine nucleoside with widely demonstrated extracellular neuromodulatory effects in the CNS, but so far without an identified receptor. Based on the use of selective ligands, ARs have been proposed as possible targets to explain guanosine-mediated effects in animal and cellular models of ischemia. However, at present, the mechanism of action of guanosine is not clear. Here, we show that A 2A R expression was crucial for guanosine-mediated protective effects in an ex vivo model of brain ischemia. In addition, when examining guanosine effects in a controlled heterologous system, we were able to reveal the importance of a proposed A 1 R-A 2A R interaction mediating guanosine effects, both in A 2A R-ligand binding and in receptor function.
In the OGD ischemia model in hippocampal slices, we previously showed that guanosine induced a neuroprotective effect (an increase in glutamate uptake) that was inhibited by activation of A 2A R by CGS21680 [11]. This effect of CGS21680 in abolishing a guanosine-evoked increase in glutamate uptake under an OGD protocol was also observed in cultured astrocytes expressing the astrocytic glutamate transporter Glt-1 [15]. Therefore, here we evaluated guanosine's neuroprotective effects in A 2A R −/− mice and revealed an important role for this receptor. Thus, in A 2A R −/− hippocampal slices, we observed a loss of the neuroprotective effects of guanosine (increasing viability and controlling ROS production under OGD conditions) that were observed in slices from wild-type mice ( Figure 6A). This result, consistent with previous data, pointed to ARs as possible targets for guanosine [35,36], prompting us to further explore the mechanism by which guanosine might act.
Our NanoBRET-based sensor data suggested that, as previously reported [13], guanosine apparently does not bind directly to the A 2A R. However, in A 1 R/A 2A R cells, it was possible to observe a guanosine-mediated partial displacement of A 2A R-ligand binding ( Figure 6B). Together with the ex vivo data, this result would indicate that the mechanism of action of guanosine would be mediated by this receptor-receptor entity. Indeed, previous data showing both DPCPX- and pertussis toxin-dependent blockade of the protective effects of guanosine in hippocampal slices subjected to OGD [11] supported the dependence on functional A 1 Rs coupled to a G-protein to mediate guanosine effects.
We found that guanosine reduced A 2A R orthosteric binding only in A 1 R-A 2A R expressing cells. Thus, we evaluated whether guanosine could modulate A 2A R-dependent signaling under the same experimental conditions. Interestingly, while guanosine did not preclude CGS21680-induced cAMP accumulation in A 2A R-expressing cells, it reduced A 2A R-mediated cAMP accumulation in doubly A 1 R-A 2A R transfected cells, as observed in the ligand-binding assay ( Figure 6B). Additionally, the evaluation of guanosine effects on the functional activity of ARs using the label-free technology confirmed that guanosine-mediated cell impedance responses were dependent on A 1 R-A 2A R co-expression. Hence, our results indicate that guanosine could attenuate A 2A R signaling (i.e., agonist-mediated cAMP accumulation and cell impedance responses) in an A 1 R-dependent manner ( Figure 6B). On the other hand, when A 1 R-dependent signaling (i.e., intracellular Ca 2+ mobilization) was assessed, guanosine was unable to modulate the receptor's function in both singly and doubly A 1 R-A 2A R transfected cells. Taken together, our results suggest that while guanosine did not signal through A 1 R, it requires this receptor to exert its A 2A R modulatory effect, which could indicate that the A 1 R/A 2A R heteromer might be a molecular substrate for guanosine.
The A 1 R/A 2A R heteromer displays some functional characteristics similar to those reported for other AR-containing oligomers, for instance A 2A R combined with the dopamine D 2 receptor (D 2 R) or the cannabinoid CB 1 receptor (CB 1 R) [37]. Interestingly, these receptor heteromers have been shown to exert reciprocal receptor-receptor allosteric antagonistic interactions [38]. Precisely, an A 1 R/A 2A R heteromer-mediated transmembrane-dependent negative allosteric interaction at the ligand-receptor binding level has been described [39]. In addition, co-activation of both receptors led to a canonical Gs-Gi protein antagonistic interaction at the level of the adenylyl cyclase [40]. This situation makes it difficult to conclude whether an effect in a given signaling pathway is caused by either the allosteric or the canonical interaction. Thus, our data showing that guanosine was able to modulate AR functioning (i.e., cAMP assay) only in cells expressing A 1 R and A 2A R do not permit a clear determination of the interaction at the intracellular level (i.e., the canonical Gs-Gi protein antagonistic interaction). However, considering the whole picture, it seems likely that guanosine effects in the physiological context may depend on the co-expression of both receptors and their interaction. Indeed, guanosine did not disrupt the A 1 R/A 2A R heteromer, as observed by a saturable BRET signal, similar to that obtained following adenosine treatment, and by membrane co-localization of A 1 R and A 2A R in guanosine-treated cells ( Figure A1).
Overall, our data suggest an important role for the A 1 -A 2A receptor-receptor interaction in guanosine-mediated effects. Thus, while our results seem to rule out a possible guanosine-mediated A 1 R-A 2A R canonical antagonistic interaction, further investigation is needed to ascertain whether guanosine modulates the well-known A 1 R-A 2A R allosteric interaction or acts through an indirect mechanism yet to be discovered.
Conclusions
In summary, our results revealed that certain AR-related guanosine-mediated effects rely on A 1 R and A 2A R co-expression. Indeed, in ex vivo experiments, the well-known guanosine-mediated neuroprotective effect depended on A 2A R expression; thus, guanosine failed to protect A 2A R −/− mouse hippocampal slices from ischemia-induced damage. In addition, while guanosine did not interfere with A 1 R-mediated signaling, it modulated A 2A R binding and intracellular signaling only in A 1 R-A 2A R co-expressing cells. Overall, our results suggest that A 1 R and A 2A R may constitute a molecular substrate involved in guanosine effects, but the precise mechanism of action of guanosine involving ARs remains to be established.

Appendix A

BRET measured in HEK-293T cells co-expressing A 2A R RLuc and increasing amounts of A 1 R YFP yielded a saturable BRET curve (BRET 50 = 0.38 ± 0.07 and BRET max = 90 ± 6), thus indicating the formation of constitutive A 1 R-A 2A R complexes in living cells ( Figure A1B). Importantly, under the same experimental conditions, treatment with either adenosine (100 µM) or guanosine (100 µM) for 2 h did not alter the physical proximity of A 1 R and A 2A R. Thus, neither the BRET 50 [F (2,30) = 1.524, p = 0.2343] nor the BRET max [F (2,30) = 0.3135, p = 0.7333] was significantly affected by adenosine or guanosine incubation ( Figure A1B). Overall, these results corroborated the formation of A 1 R/A 2A R heterocomplexes in living cells, as previously described [17], and showed that these complexes were not affected by adenosine or guanosine, consistent with the general notion that GPCR homo- and heteromerization is often constitutive.

Figure A1. (A) Cells transiently transfected with A 2A R SNAP and A 1 R SNAP were incubated with vehicle or guanosine (100 µM) for 2 h and processed for immunocytochemical (ICC) detection of A 2A R (red) and A 1 R (green) using specific antibodies (see Appendix A1). Merged images reveal co-distribution of A 2A R SNAP and A 1 R SNAP (yellow). Scale bar: 100 µm. (B) BRET saturation curve between A 2A R and A 1 R. BRET was measured in HEK-293T cells co-expressing A 2A R RLuc and A 1 R YFP constructs and incubated with vehicle, adenosine (100 µM) or guanosine (100 µM) for 2 h. Cells were co-transfected with a fixed amount of A 2A R RLuc and increasing amounts of A 1 R YFP . The X-axis shows the fluorescence value obtained from the YFP, normalized to the luminescence value of the RLuc construct 10 min after coelenterazine h incubation, and the Y-axis the corresponding BRET ratio (× 1000). mBU: milliBRET units. Results are expressed as mean ± SEM of four independent experiments grouped as a function of the amount of acceptor fluorescence.
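The BRET 50 and BRET max values above come from fitting a single-site saturation model to BRET-vs-acceptor data; the sketch below recovers such parameters from synthetic, noise-free data with a coarse grid search (the authors' actual fitting software is not stated, so this is only an illustration of the model):

```python
def bret_model(acceptor: float, bret_max: float, bret50: float) -> float:
    """Single-site BRET saturation: the signal rises hyperbolically with
    the acceptor/donor ratio toward BRETmax; BRET50 is the acceptor
    level giving half-maximal energy transfer."""
    return bret_max * acceptor / (acceptor + bret50)

def fit_bret(acceptors, signals):
    """Coarse grid search minimizing the sum of squared errors.
    Grid ranges are arbitrary and chosen to bracket the example data."""
    best_sse, best_bmax, best_b50 = float("inf"), None, None
    for bmax2 in range(100, 301):          # BRETmax from 50.0 to 150.0 (step 0.5)
        bmax = bmax2 / 2.0
        for b50_100 in range(1, 101):      # BRET50 from 0.01 to 1.00 (step 0.01)
            b50 = b50_100 / 100.0
            sse = sum((bret_model(a, bmax, b50) - s) ** 2
                      for a, s in zip(acceptors, signals))
            if sse < best_sse:
                best_sse, best_bmax, best_b50 = sse, bmax, b50
    return best_bmax, best_b50

# Synthetic, noise-free data generated with BRETmax = 90, BRET50 = 0.38
xs = [0.1, 0.2, 0.4, 0.8, 1.6]
ys = [bret_model(x, 90.0, 0.38) for x in xs]
bmax_hat, b50_hat = fit_bret(xs, ys)       # recovers ~90.0 and ~0.38
```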
Impact of Iron and Aluminum on the Aggregate Stability of Some Latosols in Central and Southern Liberia (West Africa)
1 Department of Soil Science, Faculty of Agronomy and Agricultural Science, University of Dschang, Dschang, Cameroon. 2 Department of Earth Sciences, Faculty of Science, University of Yaounde I, Yaounde, Cameroon. 3 Department of Mining and Mineral Engineering, National Higher Polytechnic Institute, University of Bamenda, Bambili, Cameroon. 4 Department of Plant and Soil Sciences, Faculty of Agriculture and Sustainable Development, Cuttington University, Suakoko, Bong County, Republic of Liberia. 5 Department of Soil Science, Faculty of Agriculture, Lasbela University of Agriculture, Water and Marine Sciences, Uthal, Balochistan, Pakistan. 6 Key Laboratory of Plant Nutrition and Fertilization, Ministry of Agriculture, Institute of Agricultural Resources and Regional Planning, Chinese Academy of Agricultural Sciences, Beijing 100081, P.R. China.
INTRODUCTION
Soil aggregation contributes to soil quality improvement as it promotes root penetration, plant growth, soil erosion prevention, soil nutrient recycling and soil compaction reduction [1][2][3][4]. Moreover, soil aggregation can increase SOC sequestration by protecting carbon from decomposition [5]. Soil aggregate stability is commonly used as an indicator of soil structure [6], because better soil structure and higher aggregate stability are key factors to soil fertility improvement, sustainability and productivity [7].
Soil aggregate stability is estimated by investigating the process of aggregate fragmentation or the factors that stabilize aggregates. Soil-stabilizing factors are principally linked to soil mineralogy and organic matter, which may be influenced by agricultural practices [8]. Indeed, SOM, Al or Fe oxides, and colloidal silica or calcium carbonate are cementing substances that control aggregate formation [9]. The importance of these cementing substances is relative and depends on their abundance and associations, as well as on the environmental conditions under which soil aggregates are formed [10]. In highly weathered, humid tropical soils, Fe and Al oxides bind with soil particles to form stable structural units, promoting the stability of soil aggregates and therefore reducing soil erosion [11]. In kaolinitic soils with low clay and organic carbon contents, Fe oxides may form granular soils, enhance the strength of soil aggregates, and improve soil aggregate stability [12]. Fe oxides interact with positively charged oxides and negatively charged clay minerals to form organo-mineral complexes [13]. Fe oxides associated with organic and inorganic compounds, or with aggregates via cation bridges, are among the most dynamic components improving soil structure [14]. Al oxides improve the stability of aggregates by acting as flocculants, binding fine particles to organic molecules and precipitating as gels on clay surfaces [15,16]. Previous studies have identified amorphous aluminous compounds [17] as binding substances for silt-size particles [18]. At low clay content, Alo controls the formation of large and resistant aggregates, maintaining the resistance of soil to wind erosion [19].
In addition to aggregate stability, soil dispersion is among the main factors controlling the stability of soil structure in topsoil [20]. Soil dispersion indices such as water-dispersible clay (WDC), the dispersion ratio (DR) [21], the clay dispersion index (CDI), the clay flocculation index (CFI), and the silt + clay aggregate content (ASC) are widely used to estimate microaggregate stability [22,23]. Previous studies report a strong correlation between aggregate stability and WDC, indicating that variations in WDC mainly affect flocculation and aggregation in microaggregates [24]. Moreover, WDC contents correlate significantly with wetting and drying cycles, indicating that WDC is influenced by these cycles [25]. The relationship between Fe and Al sesquioxide contents and microaggregate stability indices, such as the DR and ASC, is extensively documented [22,26]. Soils with high microaggregate indices (CFI and ASC) have greater structural stability than those with low indices [22,26].
There are three dominant soil classes in Liberia, namely lateritic (latosols), sandy (Regosols), and alluvial (Fluvisols) soils [27]. Latosols are widely distributed, covering about 75% of the land surface in Liberia, where they are very acidic (pH 3-5) and contain abundant Al and Fe oxides [28]. Mining (iron, gold, and diamond) and deforestation remove vegetation and expose the soil to erosion. As Fe and Al are abundant in the latosols, the different forms of these oxides and their effects on aggregate stability need to be elucidated in order to protect the soils from erosion and enhance soil fertility. Consequently, the objective of this paper was to evaluate the effect of different forms of Fe and Al on aggregate stability in the latosols. The results obtained will serve as baseline data for designing strategies to protect and manage latosols in Liberia for more efficient crop production.
Laboratory Procedures
In the laboratory, soil samples were air-dried, crushed, and passed through a 2 mm sieve to remove extraneous material such as roots and plant debris. The particle size distribution of the < 2 mm fraction was determined by the Robinson pipette method [29] after H2O2 pre-treatment [30], and the results were used to determine the total clay content. Water-dispersible clay (WDC) and silt (WDSi) were determined using the same particle size analysis [29], with the exception that no chemical dispersant was applied; only mechanical agitation with an end-over-end shaker was employed after soaking the samples for 16 h. The indices of microaggregate stability were determined using the relationships below [11,22], of which the clay dispersion index is:

CDI (clay dispersion index) = (WDC × 100) / TC    (4)

where TS is the total silt content (from chemically dispersed soil), TC the total clay content (from chemically dispersed soil), WDSi the water-dispersible silt content, and WDC the water-dispersible clay content.
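As a concrete illustration of Eq. (4), a short Python sketch is given below; the total clay value is hypothetical, chosen only so that the result lands near the CDI reported for Lat1:

```python
def clay_dispersion_index(wdc: float, tc: float) -> float:
    """Clay dispersion index, Eq. (4): CDI = (WDC * 100) / TC.

    wdc -- water-dispersible clay content (g/kg)
    tc  -- total clay content from chemically dispersed soil (g/kg)
    """
    if tc <= 0:
        raise ValueError("total clay content must be positive")
    return wdc * 100.0 / tc

# Illustrative values (not measured data): WDC = 50 g/kg as in Lat1 and an
# assumed TC of 305 g/kg give a CDI of ~16.4, the order reported for Lat1.
print(round(clay_dispersion_index(50.0, 305.0), 1))  # → 16.4
```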
The pH of the bulk soil before fractionation was determined in distilled water at a 1:2.5 soil:water ratio. The CEC was measured using the ammonium acetate (1 M, pH 7.0) method. SOM was determined by the K2Cr2O7 wet oxidation method. Feo and Alo were determined by the ammonium oxalate method, Fed and Ald by the dithionite-citrate-bicarbonate (DCB) method, and Fep and Alp were extracted by the sodium pyrophosphate method [31].
Data analysis of the stability indices and the Fe- and Al-oxide contents of soils and aggregates was performed using SPSS 17.0 and Origin 9 Pro.
Soil Chemical and Physical Properties
The physico-chemical properties of the studied soils are shown in Table 1 (… in Lat2, 10.28 mmol.kg-1 in Lat3, and 11.98 mmol.kg-1 in Lat4). The sand and clay fractions are the dominant particle size fractions in Lat1, Lat2, Lat3, and Lat4; the studied soils belong to the sandy clay textural class (Table 2).
Distribution of Iron and Aluminum Oxides in the Studied Soils
The distributions of Feo and Alo, Fed and Ald, and Fep and Alp are compiled in Table 3. The dithionite-extracted (free) forms are the most dominant, followed by the amorphous and chelated forms (Table 3).
Aggregate Stability Indices of Studied Soils
The aggregate stability indices of the studied soils are compiled in Table 4. The WDSi, which is used to estimate instability, ranges from 313 to 343 g.kg-1, with the lowest value in Lat1 and the highest in Lat4. The water-dispersible clay and the instability indices range from 50 g.kg-1 in Lat1. The ASC is 106 g.kg-1 in Lat2 and 151 g.kg-1 in Lat4.
Relationships between Aggregate Indices and Aggregating Agents (SOC, Fe and Al)
The correlation matrix between the aggregate indices and the aggregating agents (SOC, Fe and Al) is shown in Table 5. WDC is only positively correlated with SOC and Ald. The DR correlates positively with Feo and Alo. Also, there is a positive correlation between CDI and SOC, Fed and Ald. In addition, CFI correlates positively with different forms of Fe and Al, except for Ald.
For ASC, there is a positive correlation with SOC, Fed, and Fep. In the studied soils, ASC is an important aggregate stability index, so a regression analysis was performed between ASC, clay, Feo, and Alo. The best regression equations (Fig. 1) show that silt + clay aggregates correlate inversely with amorphous Alo and Feo, whereas ASC correlates positively with SOC and total clay (Fig. 1).
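The regression step can be sketched as an ordinary least-squares fit of ASC on clay, Feo, and Alo. The numbers below are invented for illustration; the paper's measured values are not reproduced here:

```python
import numpy as np

# Hypothetical predictors (columns: total clay, Feo, Alo, all g/kg) and
# response (ASC, g/kg) for four soils -- invented values for illustration.
X = np.array([
    [310.0, 2.1, 1.2],
    [295.0, 2.8, 1.5],
    [330.0, 1.9, 1.0],
    [342.0, 1.5, 0.8],
])
asc = np.array([120.0, 106.0, 140.0, 151.0])

# Ordinary least squares: ASC = b0 + b1*clay + b2*Feo + b3*Alo.
A = np.column_stack([np.ones(len(X)), X])
coef, residuals, rank, _ = np.linalg.lstsq(A, asc, rcond=None)
pred = A @ coef
# With 4 observations and 4 parameters the fit is exact; with real data,
# residuals quantify how well the oxides explain ASC.
print(bool(np.allclose(pred, asc)))  # → True
```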
Aggregate Stability of the Latosols
The WDC, DR, and WDSi are all indicators of the rate of soil dispersion. It has been reported that soils with low WDC, low WDSi, and low to moderate DR are stable and less erodible [22]. In this study, the variability of WDSi values among the studied soils is not considerable; to distinguish the stability of the studied soils, the WDC and CDI are therefore considered. As previously reported, high WDC and dispersion indices have negative consequences on soils and the entire environment in terms of water and wind erosion [32]. In the present study, the WDC is high in Lat1 and Lat2. Moreover, the CDI is high in Lat1 (16.4 g.kg-1) and Lat2, showing that Lat1 and Lat2 are less stable than Lat3 and Lat4. Lat3 and Lat4 appear to be more stable, but in the field there are several signs of degradation that contrast with the results obtained in the laboratory. Previous studies indicate that external factors such as climate, pedogenic processes, land use, deforestation, and biological factors can impact aggregate stability [14,22]. In the studied sites, the annual precipitation reaches 2500 mm and could cause the degradation observed in the field. Previous findings show that rainfall breaks main structural units and causes the rapid formation of crusts, resulting in depositional seals [33]. According to the author, the consequence of this sequence of events is that soil aggregate breakdown is the prime control in the erosional system.
Role of SOC, Fe and Al Oxides in Aggregate Stability
In this study, SOC correlated positively with WDC, CDI, and ASC, indicating that SOC acts both as an aggregating agent and as a dispersing agent. This finding contrasts with the work of [22]. However, it agrees with reports on the key role of SOC in soil aggregation [34][35][36]. Previous studies reveal that the role of SOC as an aggregating or dispersing agent is strongly linked to the quality and quantity of SOC [22]. The quantity of SOC is low in the studied latosols, varying from 4.50 g.kg-1 in Lat3 to 10.23 g.kg-1 in Lat1, indicating that SOC may act as an aggregating agent independently of its quantity in soil, although the significance of this action increases with SOC content.
The present study reveals that WDC correlates positively with Ald, and DR correlates positively with Feo and Alo. In addition, CDI correlates positively with Fed and Ald, CFI correlates positively with the different forms of Fe and Al except Ald, and ASC correlates positively with Alp and Fed. These results underscore the significant role of the different forms of Fe and Al in the aggregate stability of these soils. There is ongoing debate on the forms of Al and Fe that may be responsible for aggregate stability in soils of tropical regions. For instance, some studies suggest that oxalate-extractable forms of Fe oxide may be responsible for the aggregation of some subtropical and tropical soils [21,35,37]. Other works argue that Al oxides are better aggregating agents than Fe oxides in some tropical soils [38]. The present study confirms that Al and Fe oxides, independent of their forms, contribute to soil aggregate stability.
CONCLUSION
The present work investigated the role of Fe and Al oxides in the aggregate stability of some Liberian Latosols. All four studied Latosols were acidic, with low SOC. The dominant forms of Al and Fe were free Fe and Al, followed by amorphous Fe and Al, and then chelated Fe and Al. The high values of water-dispersible clay and clay dispersion index in some of the studied Latosols imply lower aggregate stability. Fe and Al in all their different forms appear to contribute to soil aggregate stability. SOC, although very low, also contributes to soil aggregate stability. The present study suggests that Fe and Al, as well as SOC, are cementing materials that affect aggregate stability in the four Latosols. However, further studies are required to investigate the relationship between these cementing agents and the mechanisms underlying the aggregate stability of these West African Latosols.
Ventricular fibrillation mechanism and global fibrillatory organization are determined by gap junction coupling and fibrosis pattern
Abstract Aims Conflicting data exist supporting differing mechanisms for sustaining ventricular fibrillation (VF), ranging from disorganized multiple-wavelet activation to organized rotational activities (RAs). Abnormal gap junction (GJ) coupling and fibrosis are important in initiation and maintenance of VF. We investigated whether differing ventricular fibrosis patterns and the degree of GJ coupling affected the underlying VF mechanism. Methods and results Optical mapping of 65 Langendorff-perfused rat hearts was performed to study VF mechanisms in control hearts with acute GJ modulation, and separately in three differing chronic ventricular fibrosis models; compact fibrosis (CF), diffuse fibrosis (DiF), and patchy fibrosis (PF). VF dynamics were quantified with phase mapping and frequency dominance index (FDI) analysis, a power ratio of the highest amplitude dominant frequency in the cardiac frequency spectrum. Enhanced GJ coupling with rotigaptide (n = 10) progressively organized fibrillation in a concentration-dependent manner; increasing FDI (0 nM: 0.53 ± 0.04, 80 nM: 0.78 ± 0.03, P < 0.001), increasing RA-sustained VF time (0 nM: 44 ± 6%, 80 nM: 94 ± 2%, P < 0.001), and stabilized RAs (maximum rotations for an RA; 0 nM: 5.4 ± 0.5, 80 nM: 48.2 ± 12.3, P < 0.001). GJ uncoupling with carbenoxolone progressively disorganized VF; the FDI decreased (0 µM: 0.60 ± 0.05, 50 µM: 0.17 ± 0.03, P < 0.001) and RA-sustained VF time decreased (0 µM: 61 ± 9%, 50 µM: 3 ± 2%, P < 0.001). In CF, VF activity was disorganized and the RA-sustained VF time was the lowest (CF: 27 ± 7% vs. PF: 75 ± 5%, P < 0.001). Global fibrillatory organization measured by FDI was highest in PF (PF: 0.67 ± 0.05 vs. CF: 0.33 ± 0.03, P < 0.001). PF harboured the longest duration and most spatially stable RAs (patchy: 1411 ± 266 ms vs. compact: 354 ± 38 ms, P < 0.001). DiF (n = 11) exhibited an intermediately organized VF pattern, sustained by a combination of multiple-wavelets and short-lived RAs. 
Conclusion The degree of GJ coupling and pattern of fibrosis influences the mechanism sustaining VF. There is a continuous spectrum of organization in VF, ranging between globally organized fibrillation sustained by stable RAs and disorganized, possibly multiple-wavelet driven fibrillation with no RAs.
Introduction
Over the last five decades, multiple competing mechanisms have been implicated in sustaining ventricular fibrillation (VF). However, amongst experts in the field no consensus exists on a single unifying mechanism. Epicardial VF mapping studies in patients undergoing cardiac surgery on cardio-pulmonary bypass have shown evidence to support both disorderly perpetual multiple-wavelet activity in some patients and highly organized re-entrant waves sweeping the whole myocardium in others. 1 These reentrant wavefronts are often referred to as scroll waves, rotors, rotational activity (RA), or rotational drivers, and are characterized by pivoting around a phase singularity (PS) point and implicated in driving fibrillatory wavefronts. RAs have been mapped transmurally during VF in ex vivo perfused cardiomyopathic human hearts 2 and more recently with noninvasive body surface mapping. 3 It has been postulated that catheter-based ablation of regions localizing RAs may present a suitable therapeutic strategy in prevention of VF in VF survivors, however, the role and existence of RAs in patients remains highly controversial and largely unproven.
Whilst no consensus exists on a unifying fibrillatory mechanism, there is some evidence that differing degrees of cardiac organization and complexity exist in fibrillation. Optical mapping studies of coronary-perfused sheep ventricular slabs have previously shown a spectrum of VF complexity as characterized by dominant frequency (DF) analysis, although with few reported instances of sustained RAs. 4 Fibrosis and gap junction (GJ) remodelling are important substrates for the initiation and perpetuation of VF. A high ventricular fibrosis burden post-myocardial infarction correlates with a higher incidence of VF and ventricular tachycardia (VT). 5 In limited perfused-heart studies, areas of high fibrosis anchor RAs in VF. 2 In vitro experiments with co-cultures of myocytes and myofibroblasts have shown that an increase in the volume of myofibroblasts relative to myocytes can increase the complexity of propagation, increase wavefront fractionation, and reduce the stability of re-entrant drivers that emerge. 6 However, the link between the complexity of the fibrillatory mechanism, RAs, and the degree and pattern of fibrosis in intact hearts is not clearly defined.
Cell-cell connectivity via GJs is important in electrical propagation between neighbouring cardiomyocytes. Abnormal expression and distribution of connexin43 has been implicated in increased vulnerability to developing ventricular tachyarrhythmias, 7,8 whilst pre-treatment with GJ coupling enhancers reduces inducibility of VF 9 in perfused hearts. However, the mechanism by which GJ coupling modulates underlying fibrillatory mechanisms is also uncertain.
In this study, fibrillatory dynamics were studied in ex vivo perfused rat hearts with optical mapping of transmembrane potentials in VF. We hypothesized that there is a continuous spectrum of fibrillatory organization and mechanisms, modulated by two important electroarchitectural components, namely the pattern and degree of fibrosis, and GJ coupling.
Methods
The detailed methods are in the Supplementary material online.
Ethical approval
This work was performed in accordance with standards set out in the UK Animals (Scientific Procedures) Act 1986, ARRIVE guidelines and was approved by Imperial College London Ethical Review Board under the project licence PEE7C76CD and PCA5EE967. All animal procedures conformed to the guidelines from Directive 2010/63/EU of the European Parliament on the protection of animals used for scientific purposes. For ex vivo studies requiring explantation of the heart, the rats were anaesthetized with 5% isoflurane (95% oxygen mix) in an induction chamber and euthanized with cervical dislocation.
Experimental protocols
VF optical mapping of transmembrane fluorescence was performed in 65 explanted Sprague-Dawley (SD, Charles River, Harlow, UK) rat hearts. The SD rats were 9-12 weeks old, weighing 250-300 g. VF mechanisms were studied with pharmacological GJ modulation of control hearts in VF with a GJ coupling enhancer, rotigaptide (RTG) (n = 10), a GJ uncoupler, carbenoxolone (CBX) (n = 10), or control perfusate (n = 5). VF mechanisms were separately studied in a chronic 4-week ventricular fibrosis model with compact fibrosis (CF) (n = 11), diffuse fibrosis (DiF) (n = 11), and patchy fibrosis (PF) (n = 13). A sham surgery (n = 5) group was used as a control. In addition, to study the effect of enhanced GJ coupling in chronic fibrotic hearts, the DiF hearts above were also infused with RTG 80 nM after initial VF optical mapping. A schematic of the study protocol is shown in Supplementary material online, Figure S1.
Ventricular fibrosis
Three groups of ventricular fibrosis were generated in 35 rats. Separately, a sham surgical procedure was performed in five rats. All surgical recovery procedures were carried out with aseptic technique. The rats were first anaesthetized with 5% isoflurane (95% oxygen mix) inhalation in an induction chamber, intubated with a modified cannula, and ventilated using a Harvard rodent ventilator (MA, USA). Carprofen (5 mg/kg), enrofloxacin (5 mg/kg), vetergesic (0.05 mg/kg), and marcaine (0.5%) were administered subcutaneously as a single dose. Surgical permanent left anterior descending (LAD) artery ligation (n = 11) was performed to generate CF. Twenty minutes of LAD-territory ischaemia followed by reperfusion (n = 13) was used to generate PF. The methodology for inducing DiF was adopted from a study by Messroghli et al., 10 whereby an osmotic mini pump (Alzet 2ML4, CA, USA) pre-loaded to deliver 500 ng/kg/min of angiotensin (Abcam, Cambridge, UK) was implanted in the abdominal cavity. In the sham surgery group (n = 5), a suture was passed around the LAD without ligation. After surgery the rats were reviewed twice daily for adverse complications of the procedure (bleeding, infection, wound dehiscence) and pain was monitored by assessing behavioural changes, such as reduced feeding, loss of weight, ruffled coat, hunched posture, porphyrin staining, reduced mobility, ocular or nasal discharge, diarrhoea, and laboured breathing. Analgesia (buprenorphine 0.05 mg/kg, subcutaneous administration) was given twice daily for the first 3 days, then reduced to once a day for another 4 days, and extended if needed beyond this period. After 4-week maturation, the hearts were explanted and Langendorff-perfused with KHB for VF optical mapping studies as previously described. 11 Selected rats underwent in vivo cardiac magnetic resonance imaging (MRI) with late-gadolinium enhancement (LGE), after anaesthetization with isoflurane (2.5%/95% oxygen mix), prior to optical mapping.
VF optical mapping
VF was induced with provoked electrical stimulation using an extra-stimulus protocol (8-beat S1 train, cycle length 100 ms, 2 mA, and successively earlier S2, S3, and S4 stimuli) or a burst pacing protocol (20-beat train, 2 mA, cycle length 40-100 ms) in all hearts from an implanted electrode. All hearts were treated with a potassium channel opener, pinacidil (30 µM), to aid VF maintenance prior to optical mapping studies. Propensity to sustained VF induction was scored on an arrhythmia provocation scoring system (described in Supplementary material online). The excitation-contraction uncoupler blebbistatin (Tocris Bio-Sciences, Cambridge, UK) was infused through a side port at a loading dose of 30 µM, followed by a maintenance concentration of 10 µM set up to recirculate in the perfusate. The hearts were stained with a voltage-sensitive dye (40 µl of 5 mg/mL RH237 in dimethyl sulfoxide; Thermo-Fisher, MA, USA) given as a slow bolus through the side port. A custom-made 128 × 80 pixel complementary metal-oxide-semiconductor camera (Cairn, Faversham, UK) was used to record the optical fluorescence signals (Supplementary material online, Figure S2). All recordings were of the left ventricular (LV) anterior epicardial surface, 10 s in duration, with a sampling rate of 1000 frames/s.
VF phase analysis
Optical fluorescence data were filtered and phase processed as previously described. 12 All raw optical fluorescence signals were processed in MATLAB R2018 (MathWorks, MA, USA) using custom-made scripts. The data were first filtered using methodology and code adapted from the Efimov laboratory mapping toolbox. 13 Briefly, the signals were spatially filtered by binning in a 3-by-3 pixel matrix, high-frequency noise was removed with a 0-100 Hz low-pass filter, baseline drift was removed, and the signals were normalized. The filtered VF optical fluorescence data were analysed with a custom-made MATLAB (R2018, MathWorks) fibrillation analysis script. 14 The methodology for phase analysis has been previously described. 12,15 Briefly, each pixel of optical fluorescence data was tagged for the minima and maxima, filtered to remove small-amplitude fluctuations in the signals, and fitted to cubic splines; the average of the minima and maxima splines was subtracted to generate a zero-mean signal. The real and imaginary parts of the Hilbert transform of this zero-mean signal were plotted in the phase plane and the phase angle calculated from this. A phase map of VF at each sampled time point was constructed and PSs/RAs tagged using our algorithm (Supplementary material online, Figure S2).
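The phase-angle step can be reproduced with an FFT-based analytic signal, equivalent to the Hilbert transform construction described above. A minimal numpy sketch, using a synthetic sinusoid rather than optical data:

```python
import numpy as np

def analytic_phase(v):
    """Instantaneous phase of a zero-mean signal via the analytic signal.

    The FFT-based construction (zero the negative frequencies, double the
    positive ones) matches scipy.signal.hilbert followed by np.angle.
    """
    n = len(v)
    spec = np.fft.fft(v)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.angle(np.fft.ifft(spec * h))

# A zero-mean 8 Hz sinusoid sampled at 1000 frames/s (the optical mapping
# frame rate) gives a phase that advances linearly and wraps in [-pi, pi].
t = np.arange(0, 1.0, 0.001)
phase = analytic_phase(np.sin(2 * np.pi * 8 * t))
print(phase.shape)  # → (1000,)
```

A phase singularity is then a pixel around which this phase angle spans the full −π to π range.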
VF data analysis
Prior to induction of sustained VF, baseline conduction velocity (CV) and APD90 data in response to pacing at differing cycle lengths were obtained for the fibrosis and GJ groups, as shown in Supplementary material online, Figure S4. From the phase-processed data, the edge of each wavefront was tracked in a 9 × 9 pixel window and RAs were characterized by quantification of rotations, duration, rotational frequency, and meander. Furthermore, the total number of RAs/s and the total duration of all RAs/s of fibrillatory recordings were calculated. A minimum two-rotation filter was used to threshold and define a significant RA and to construct RA heat maps. The methodology and parameters for wavefront tracking and RA characterization, such as wavefront length, RA spatial gap, and temporal gap, were determined by sensitivity testing as previously described. 14 The path of the longest-duration RAs (defined as those with >5 rotations) was tracked and meandering expressed as the centre shift per RA rotation. Centre shift was calculated as √(x² + y²), where x and y are the numbers of pixels of displacement in the x and y planes from the initiation point to the termination point of the RA. The centre shift was normalized to the number of rotations for a given RA by dividing the centre shift by the number of rotations, and expressed as pixels of centre-shift displacement per rotation.
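The centre-shift normalization above amounts to the following; the values are illustrative, not measured:

```python
import math

def centre_shift_per_rotation(x_px: float, y_px: float, rotations: int) -> float:
    """Meander of a rotational activity: the straight-line displacement
    sqrt(x^2 + y^2) from initiation to termination point, divided by the
    number of rotations."""
    if rotations <= 0:
        raise ValueError("rotations must be positive")
    return math.hypot(x_px, y_px) / rotations

# An RA displaced 3 px in x and 4 px in y over 10 rotations meanders
# 0.5 px per rotation -- illustrative numbers only.
print(centre_shift_per_rotation(3, 4, 10))  # → 0.5
```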
Frequency dominance index
Cardiac fibrillatory organization was measured using the frequency dominance index (FDI). The FDI is defined as the area occupied by the largest organized DF region in the global fibrillatory spectrum divided by the total area of all regions with a defined DF (Supplementary material online, Figure S3). The methodology for calculating the FDI is similar to an index proposed by Berenfeld et al., 16 domain density, which quantifies the number of DF domains per cm². The methodology for calculating DF has been previously described in detail. 12
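A simplified sketch of the FDI computation follows: per-pixel DFs from FFT power spectra, with the largest set of equal-DF pixels taken as the dominant domain. This ignores the spatial contiguity of domains, which the full analysis accounts for, and the signals are synthetic:

```python
import numpy as np

def dominant_frequency_map(signals, fs):
    """Per-pixel dominant frequency (Hz): the largest non-DC spectral peak
    of each pixel's time series. signals has shape (n_pixels, n_samples)."""
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signals, axis=1)) ** 2
    power[:, 0] = 0.0  # ignore the DC component
    return freqs[np.argmax(power, axis=1)]

def frequency_dominance_index(df_map):
    """FDI: area of the largest single-DF domain over the total area with a
    defined DF. Here every pixel has a defined DF and a 'domain' is
    approximated as the set of pixels sharing a DF bin."""
    _, counts = np.unique(df_map, return_counts=True)
    return counts.max() / counts.sum()

# Synthetic example: 60 of 100 pixels oscillate at 10 Hz and 40 at 16 Hz,
# sampled at 1000 frames/s as in the optical recordings.
fs = 1000.0
t = np.arange(1000) / fs
sigs = np.vstack([np.sin(2 * np.pi * 10 * t)] * 60 +
                 [np.sin(2 * np.pi * 16 * t)] * 40)
print(frequency_dominance_index(dominant_frequency_map(sigs, fs)))  # → 0.6
```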
Statistical analysis
Kolmogorov-Smirnov normality tests were applied to the data. When the distribution was normal, Student's t-test or ANOVA (post hoc Bonferroni) statistical analyses were performed. n represents the number of experiments performed with rat hearts. For repeated measures with a single variable (i.e. GJ coupling experiments with differing concentrations), a repeated-measures ANOVA (post hoc Bonferroni) was applied. A P-value <0.05 was considered statistically significant. Statistical analysis was performed with a commercially available software package, GraphPad Prism 5.0. All values are expressed as mean ± standard error of the mean or median with inter-quartile range (25-75%).
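The normality-then-parametric-test pipeline can be sketched as follows; the two groups are simulated with means echoing the baseline vs. 80 nM rotigaptide FDI values, not the study's actual data, and the spread is assumed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated FDI-like measurements for two groups of n = 10 hearts; the
# group means (0.53 vs. 0.78) echo the text, the SDs are invented.
baseline = rng.normal(0.53, 0.12, 10)
rtg80 = rng.normal(0.78, 0.09, 10)

# Normality screen (KS test on standardized values), then the unpaired
# parametric comparison used when distributions look normal.
z = (baseline - baseline.mean()) / baseline.std(ddof=1)
ks_p = stats.kstest(z, "norm").pvalue

t_stat, p_val = stats.ttest_ind(baseline, rtg80)
print(p_val < 0.05)  # → True
```

With repeated measures across drug concentrations, a repeated-measures ANOVA with Bonferroni correction replaces the t-test, as in the paper.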
The degree of GJ coupling alters the VF electrocardiogram
Initially, to establish whether GJ coupling altered fibrillatory mechanisms, changes in the global field electrocardiogram (ECG) morphology were studied in response to different degrees of GJ coupling. Pharmacological GJ modulation altered the periodicity and global organization of VF on the ECG. Enhancing GJ coupling with RTG regularized VF in a concentration-dependent manner, progressively organizing VF to VT at the 80 nM concentration (Figure 1A). The DF power spectral density (PSD) analysis of the ECG traces showed a number of DFs in the global fibrillation spectrum at baseline, and as GJ coupling was enhanced with RTG this organized to a single DF with a high PSD (Supplementary material online, Figure S5A). In contrast, GJ uncoupling with CBX progressively reduced the amplitude of the VF ECG and disorganized VF in a concentration-dependent manner (Figure 1B). The DF PSD analysis of the ECG traces showed no well-defined DF as GJ uncoupling increased (Supplementary material online, Figure S5B). To examine these changes in VF ECG morphology further, we studied the underlying mechanism in detail with phase analysis of high-resolution optical mapping recordings of VF activation patterns. Enhanced GJ coupling with high-dose RTG (80 nM) demonstrated VF sustained by spatially stable, long-duration RAs on phase mapping (Figure 1C); however, at high degrees of GJ uncoupling (with 30 and 50 µM CBX), there was no discernible activation pattern or periodicity identifiable on phase mapping (Figure 1D).
In addition to phase analysis, VF activation was studied with analysis of the global frequency spectrum of fibrillatory activation. Enhanced GJ coupling with RTG progressively organized multiple DFs in the fibrillation spectrum to only one predominant DF sustaining fibrillation (Figure 3A and Supplementary material online, Figure S6A). In contrast, GJ uncoupling with CBX progressively disorganized fibrillation, with multiple DFs sustaining VF and no clearly organized region of maximum DF (Figure 3B and Supplementary material online, Figure S6B). The FDI, a measure of global organization derived from DF analysis, increased with RTG (baseline: 0.53 ± 0.04, 50 nM: 0.74 ± 0.04, 80 nM: 0.78 ± 0.03, P < 0.001, Figure 3C) and decreased with CBX (baseline: 0.60 ± 0.05, 30 µM: 0.20 ± 0.03, 50 µM: 0.17 ± 0.03, P < 0.001, Figure 3D). In the control group, over a time period exceeding the GJ coupling experiments, no significant changes were observed in fibrillatory mechanism, the percentage of VF time sustained by RAs, the stability of RAs, or the underlying DF spectrum of fibrillation (Supplementary material online, Figures S7 and S8). Other parameters for measuring the stability of RAs, including maximum duration, average rotations, total duration/s, and total RAs/s, were significantly higher with enhanced GJ coupling in the RTG group compared to baseline (Supplementary material online, Figure S9) and reduced progressively with increasing CBX-mediated GJ uncoupling (Supplementary material online, Figure S10).
The pattern of ventricular fibrosis alters VF ECG
We next studied whether physical heterogeneity in ventricular tissue generated through fibrosis caused a similar spectrum of changes in global VF organization and mechanisms. As with the GJ coupling experiments, during validation of the fibrosis models the periodicity and global organization of VF on the ECG were found to vary between the differing fibrosis groups, with PF hearts demonstrating a relatively stable activation frequency and CF the most variable (Figure 4A). ECG DF PSD analysis showed a lack of a well-defined DF in the CF group. The PF group demonstrated a single large DF with a high PSD value. The DiF group showed a DF with an intermediate PSD value and a small number of DFs clustered around it (Supplementary material online, Figure S11). Histological validation of the differing fibrosis models showed differing quantities and complexity of fibrosis patterns: CF demonstrated dense confluent fibrosis with a thinned LV anterior wall and the highest quantity of fibrosis, DiF demonstrated interlacing interstitial fibrosis, and PF demonstrated islands of fibrosis amongst normal myocardial tissue (Figure 4B and C). The propensity to induction of sustained VF varied between fibrosis groups and was highest in the CF group (Supplementary material online, Figure S12). Whilst MRI with LGE reliably detected compact and patchy ventricular fibrosis, the sensitivity was poor for detection of diffuse fibrosis (Figure 4D and Supplementary material online, Figure S13). Action potential duration (APD) showed no significant differences between the fibrosis groups in remote regions. However, APD dispersion was increased in the infarct border zone regions in the LAD-ligation/CF group and the ischaemia-reperfusion/PF model (Supplementary material online, Figure S14).
Pattern of fibrosis is a key determinant of underlying fibrillatory mechanism
VF was sustained predominantly by disorganized activity in CF and was most globally organized in PF, with the DiF group exhibiting an intermediate VF phenotype. The percentage of time VF was sustained by RAs was highest in PF; sham: 36 ± 7% vs. patchy: 75 ± 5%, P < 0.001, compact: 27 ± 7% vs. patchy, P < 0.001, and diffuse: 51 ± 3% vs. patchy, P < 0.001 (Figure 5A and B). PF frequently sustained spatially stable RAs localized to discrete areas, DiF harboured transient and meandering RAs, whilst in the CF group stable RAs were rarely detected (Figure 5A). The stability of RAs was influenced by fibrosis patterns, with PF harbouring the most stable, longest-duration RAs; sham: 242 ± 80 ms vs. patchy: 1411 ± 266 ms, P < 0.01 and diffuse: 620 ± 57 ms vs. patchy, P < 0.01 (Figure 5C). Numerous other parameters for stability of RAs, including maximum rotations, average rotations, and total RAs/s, were all highest in the PF group (Supplementary material online, Figure S15).
The DF map in PF showed that VF was predominantly sustained by a single large amplitude DF, compared with the other two groups, where multiple DFs were seen in VF (Figure 6A and Supplementary material online, Figure S16). FDI was highest in the PF group and lowest in the CF group, with DiF displaying intermediate FDI values (sham: 0.36 ± 0.02 vs. patchy: 0.67 ± 0.05, P < 0.001, compact: 0.33 ± 0.03 vs. patchy, P < 0.001) (Figure 6B). Furthermore, in the PF group, the areas harbouring stable RAs frequently localized to regions where fibrosis interfaced with viable myocardium (Figure 6C).
Enhanced GJ coupling and PF reduced meander of RAs
We studied the spatial stability of the longest duration RAs (>5 rotation threshold) in VF by tracking their path and quantifying the meander with centre shift measures. Enhanced GJ coupling with RTG stabilized RAs to discrete regions and reduced their meander. The centre shift per rotation of the longest duration RAs reduced progressively with RTG in a concentration-dependent manner (baseline: 2.52 ± 0.28, 50 nM: 0.28 ± 0.03, 80 nM: 0.29 ± 0.10 pixels per rotation, P < 0.001) (Figure 7A). Similarly, with fibrosis, despite PF harbouring the longest duration RAs, they localized to a small area, whereas in the DiF group RAs meandered significantly. Centre shift in the PF group was significantly lower than in DiF (patchy: 0.36 ± 0.08 vs. diffuse: 1.10 ± 0.08 pixels per rotation, P < 0.05, Figure 7B). The CF group harboured only a few RAs that met the five-rotation threshold; however, these demonstrated a higher degree of meander compared with the other fibrosis groups (patchy vs. compact: 1.62 ± 0.30 pixels per rotation, P < 0.001).
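The centre shift measure reported above can be sketched from a tracked core trajectory. The sketch assumes one (x, y) core position per rotation as input (the study's tracking pipeline is not described in this excerpt, so the input format is an assumption); centre shift per rotation is then simply the mean Euclidean displacement between consecutive rotations, in pixels.

```python
import numpy as np

def centre_shift_per_rotation(core_xy):
    """Mean Euclidean displacement (pixels) of a rotor core between
    consecutive rotations, given one (x, y) position per rotation."""
    core_xy = np.asarray(core_xy, dtype=float)
    steps = np.diff(core_xy, axis=0)           # displacement per rotation
    return np.linalg.norm(steps, axis=1).mean()

# A spatially stable rotor barely moves; a meandering one drifts across the map.
stable = [(50.0, 50.0), (50.2, 50.1), (49.9, 50.0), (50.1, 49.8)]
meandering = [(50, 50), (52, 51), (54, 53), (57, 55)]
print(centre_shift_per_rotation(stable))      # small (< 0.5 pixels/rotation)
print(centre_shift_per_rotation(meandering))  # large (> 2 pixels/rotation)
```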
Enhanced GJ coupling organized and terminated VF in chronic fibrosis
After determining that VF mechanisms were influenced by both GJ coupling and fibrosis, we next studied the effects of maximally enhancing GJ coupling (with the 80 nM RTG dose) on VF in hearts with chronic DiF. Maximally enhancing GJ coupling during VF in chronic DiF hearts resulted in VF regularizing in periodicity and organizing to VT before terminating in 5/11 hearts (Supplementary material online, Figure S17A). Enhanced GJ coupling with RTG increased FDI significantly (baseline: 0.47 ± 0.03; 80 nM: 0.72 ± 0.03) and altered the VF mechanism from multiple DFs to a predominant single DF prior to termination. However, the mean DF across the mapped surface was also lower after RTG infusion, and it is uncertain whether this change itself leads to VF termination (Supplementary material online, Figure S17B).
Discussion
This study shows that VF is sustained by a continuous spectrum of mechanisms, which range from globally organized fibrillation sustained by stable RAs to disorganized fibrillation without stable RAs. Fibrillatory mechanisms are critically influenced by the changes in the underlying electroarchitectural components, specifically, fibrosis patterns and the degree of GJ coupling. We propose this may provide an explanation for the differing mechanisms previously reported in sustaining fibrillation. We also demonstrate an intrinsic link between temporally and spatially stable RAs with globally organized fibrillation.
Spectrum of fibrillation organization and mechanisms
VF mapping in at-risk patient groups is highly challenging, and the understanding of mechanisms sustaining VF remains poor. Limited insight into VF mechanisms comes from animal studies, where conflicting data have emerged implicating multiple wavelets, focal activation, and anatomical and functional re-entrant drivers. 17 The potential existence of stable RAs in VF has led some investigators to explore mechanism-guided ablation as a therapeutic option. 3 In this study, we provide an explanation for the differing mechanisms reported and show that only some forms of VF have stable RAs. Data from this study show that VF is sustained by a continuous spectrum of global fibrillatory organization and that the level of fibrillatory organization relates specifically to the mechanism sustaining it. Globally organized fibrillation was sustained by a predominant large area of a single DF with stable RAs localized to a small area. Globally disorganized fibrillation, however, had no clearly organized region of maximum DF and was sustained by disorganized activity with no stable RAs. In between the two ends of the organizational scale, fibrillation was sustained by a mixture of disorganized activity and transient unstable RAs, with multiple small areas with a defined DF. A previous study by Zaitsev et al. 4 also demonstrated a spectrum of complexity of VF in coronary-perfused normal sheep ventricular slabs from endocardial and epicardial mapping studies, although the complexity of the arrhythmia was attributed to the rate of VF (as measured by the mean DF of the recording).
Whilst the FDI quantified global fibrillatory organization in this study, the sites of the highest DF were not considered. The role of the highest DF site and the spatial distribution of DFs in fibrillation is not entirely clear. In atrial fibrillation (AF), ablation of regions with the highest DF, previously thought to be driving the fibrillatory mechanism, was shown not to improve outcomes. 18 However, ablation of the maximal DF regions reduces intra-atrial DF gradients and homogenizes the spatial DF distribution in patients who eventually become AF free. 19 Whilst our work was in a VF model, it may be reasonable to extrapolate our mechanistic findings with caution to AF, where mechanisms remain intensely debated. Whilst the atria and ventricles differ significantly in anatomy, geometry, APD profiles, and the role of certain critical initiation sites, similar mechanisms have been shown to maintain fibrillation in both. As such, common mechanistic considerations may be given to the concept of 'myocardial fibrillation'. In some AF optical mapping studies, stable RAs are rarely seen, and 98% of fibrillation has been shown to be sustained by wavelets resulting from the breakup of more organized high-frequency waves. 20 To the contrary, the Haïssaguerre group reported the presence of RAs in 82% of AF patients, some exhibiting up to eight rotations, and numerous instances of acute AF termination with RA-targeted ablation. 21 In contrast, the endocardial-epicardial hypothesis frames fibrillation as a largely disorganized phenomenon of continuous and chaotic focal breakthroughs that propagate transmurally with few connections and continuous regeneration. 22 In this work, we have systematically shown the presence of a spectrum of fibrillatory organization, its relationship to the mechanism sustaining it, and the underlying electroarchitecture. This may explain these discrepancies in findings in AF, although further work in experimental atrial models is needed to corroborate the findings here.
GJ coupling determines VF organization and mechanism
In this study, the degree of GJ coupling was found to determine the degree of fibrillatory organization along a continuous spectrum between globally organized fibrillation sustained by stable RAs and disorganized fibrillation with no RAs. Reduction in GJ function and connexin expression, specifically connexin43, has been implicated in the initiation and maintenance of VF. 23 Our results provide strong, direct evidence for its role by demonstrating a concentration-dependent change in fibrillatory mechanisms with the degree of GJ coupling.
GJ uncoupling with CBX reduces CV and increases conduction heterogeneity without affecting ionic currents or APD. 24 CV slowing is implicated in arrhythmia susceptibility; however, its impact on the mechanism of fibrillation is unknown. In the atria, CV heterogeneity and slowing have been implicated in stabilizing sites of RAs. 25 To the contrary, we found that high degrees of uncoupling increased the tortuosity of conduction paths and a more disorganized form of fibrillation developed. Similar increases in the complexity of fibrillatory activity in cell culture experiments using a GJ uncoupler have been reported. 26 Zlochiver et al. 6 demonstrated that low and heterocellular coupling, through silencing of connexin43 in myocyte and myofibroblast cell cultures, reduced CV and the complexity of wavefront propagation whilst destabilizing RAs.
GJ coupling was enhanced in this work with RTG, which phosphorylates connexin43 serine residues. Enhanced GJ coupling reduces the inducibility of ventricular tachyarrhythmias 27 and reduces the energy needed for VF cardioversion in pre-treated perfused hearts. 28 GJ remodelling has also been described in AF and implicated in increased vulnerability to developing AF. 29 As with VF, pre-treatment with GJ coupling enhancers also reduces the inducibility of AF. 30 In a VT/VF mechanistic study in a perfused rabbit model prepared with cryoablation, GJ modulation fundamentally influenced the stability of RAs. 31 Whilst the mechanisms defined differed from ours, possibly due to differences in species, dosing, and their use of an excitation-contraction uncoupler that alters ionic currents (2,3-butanedione monoxime), enhanced GJ coupling was also found to terminate VF, as per our finding, and GJ uncoupling perpetuated it. Myocardial fibrosis is associated with tortuous and slow conduction and disorganized connexin43 distribution. 7 Enhanced GJ coupling is known to normalize CV slowing in physiologically stressed states such as ischaemia and disease remodelling. 32 Myocardial metabolic demand during VF is also greatly increased beyond that of a normally beating heart, 33 and VF itself may drive a relatively ischaemic state. This may provide an explanation for the progressive organization of fibrillation, stabilization of RAs to discrete regions, and eventual termination seen with enhanced GJ coupling in fibrotic hearts.
The role of cardiac fibrosis in VF mechanisms
We systematically demonstrated that the fibrosis pattern alters the fibrillatory organization and its mechanism of sustaining VF. The role of cardiac fibrosis is well established, and there is a correlation between the quantity of fibrosis and VF propensity, 34 although it is unclear how differing fibrotic patterns affect the underlying fibrillatory mechanisms. Limited evidence exists to show that RAs localize to areas of greater fibrosis in VF in perfused cardiomyopathic hearts 2 and to low voltage areas, a surrogate for fibrosis. 35 Similarly, in AF, RAs localized to areas of MRI-identified complex fibrotic patterns in a limited study of perfused human left atria. 36 In keeping with our findings, in silico studies have suggested that anisotropy and CV heterogeneity in PF regions stabilize and anchor RAs, 37 whereas in DiF, RAs are less stable and meander randomly. 38 Our results provide direct experimental evidence of a link between different fibrotic patterns and specific fibrillation mechanisms. CF models exhibited a disorganized fibrillatory pattern, and this fibrosis pattern did not harbour any stable RAs, possibly because the dense non-conductive nature of CF is unable to anchor or sustain stable re-entrant circuits. The sham surgery group, like CF, harboured few RAs and had a VF phenotype predominantly sustained by disorganized activity. In the PF groups, the areas harbouring the most stable RAs were frequently localized to areas of fibrosis interspersed amongst viable myocardium. These findings suggest that structural and electrophysiological heterogeneity resulting from the interface between complex fibrosis and viable myocardium, such as in PF and DiF, is a critical substrate for anchoring stable RAs and sustaining globally organized fibrillation.
Clinical implications
Understanding the organization and underlying mechanism of fibrillation can facilitate a patient-tailored treatment approach towards VF prevention in VF survivors. Organized fibrillation sustained by spatiotemporally stable drivers may be considered for targeted ablation. Disorganized fibrillation dynamics may be better suited for conventional pharmacotherapy. These findings may hold some relevance in AF for selecting patients for pharmacotherapy, based on a disorganized mechanism, instead of ablation. However, more work is needed in AF to corroborate the findings of this study, and caution should be exercised in extrapolating the findings here.
Conclusion
In summary, we demonstrated that the underlying fibrillatory organization and its mechanism are influenced by GJ coupling and the pattern of fibrosis, two important electroarchitectural components of the arrhythmic substrate. A continuous spectrum of global fibrillatory organization and its specific relationship to the mechanism sustaining VF was shown in this study, and we propose that this provides an explanation to reconcile hitherto apparently conflicting reports on fibrillation mechanisms in the literature. Fibrillatory mechanisms exist along a continuum from globally organized fibrillation sustained by stable RAs, through intermediately organized fibrillation with a mix of unstable meandering RAs and disorganized activity, to disorganized fibrillation with no RAs. Characterizing global fibrillatory organization and the mechanism sustaining VF may facilitate a patient-tailored treatment approach towards VF prevention in VF survivors.
Data availability
The data underlying this article will be shared on reasonable request to the corresponding author.
A multivariate data analysis approach for investigating daily statistics of countries affected with COVID-19 pandemic
Background To understand the impact and volume of coronavirus (COVID-19) crisis, univariate analysis is tedious for describing the datasets reported daily. However, to capture the full picture and be able to compare situations and consequences for different countries, multivariate analytical models are suggested in order to visualize and compare the situation of different countries more accurately and precisely. Aims We aimed to utilize data analysis tools that display the relative positions of data points in fewer dimensions while keeping the variation of the original data set as much as possible, and cluster countries according to their scores on the formed dimensions. Methods Principal component analysis (PCA) and Partitioning around medoids (PAM) clustering algorithms were used to analyze data of 56 countries, 82 countries and 91 countries with COVID-19 at three time points, eligible countries included in the analysis are those with total cases of 500 or more with no missing data. Results After performing PCA, we generated two scores: Disease Magnitude score that represents total cases, total deaths, total actives cases, and critically ill cases, and Mortality Recovery Ratio score that represents the ratio between total deaths to total recoveries in any given country. Conclusion Accurate multivariate analyses can be of great value as they can simplify difficult concepts, explore and communicate findings from health datasets, and support the decision-making process.
Introduction
On December 31, 2019, the outbreak began in Wuhan, a city in Hubei province, China. Reported cases of "pneumonia of unknown origin" originated from the Huanan Seafood Wholesale Market, where some animals like bats, snakes, and rabbits are raised in captivity for consumption and are illegally sold. A few days later, the Chinese government confirmed that the outbreak was caused by a novel coronavirus, which was later named COVID-19 by the World Health Organization (WHO) (Bai et al., 2020).
On March 11, 2020, based on further assessments, the WHO Director-General announced that COVID-19 could be characterized as a pandemic (Wu and McGoogan, 2020). By March 16, 2020, the outbreak outside China had increased drastically, and the number of countries, states, or territories reporting infections to WHO had reached 143.
As the situation escalates day by day, there is a growing need for visualization tools to guide better understanding of the pandemic nature of the disease (Yoo, 2020). Reported data from the affected countries are important to understand the disease risk and guide different preventive measures. The reports include confirmed cases, confirmed deaths, total recoveries, severe cases, and the recovered cases ratio. The data show how countries are promptly working to control the pandemic and trying to preserve the resources needed to fight the disease spread. They are also sharing practices and strategies needed to ensure that patients are best managed (Dey et al., 2020).
It is very important to consistently record and report epidemiological information for a better understanding of disease transmission, geographic spread, risk factors for infection, and different routes of transmission, and to provide the baseline for various epidemiological models that can guide authorities in optimum planning to minimize the disease burden. This detailed and accurate information is very important to decide where surveillance should be prioritized.
To capture clearer information effectively, statistical analyses along with data visualization are needed, serving as applications of the powerful models of data science. The role of data scientists is now more important than ever for identifying different trends, patterns, and outliers to help researchers and decision makers act more effectively in medical research and preventive public health measures (Valdiserri and Sullivan, 2018).
Healthcare professionals have long acknowledged the importance of conventional disease mapping and geographic information systems (GIS) as some of the most important tools in the fight against an outbreak. The very first disease map drawn to visualize the relationship between a disease and its origin was produced in 1694, depicting a plague outbreak in Italy (Lipton, 2019). Disease maps would be valued and used over the following centuries to understand and track many infectious diseases such as yellow fever, cholera, and influenza (Lyseen et al., 2014).
There are many clinical outcomes reported from different countries affected by COVID-19, and these outcomes are likely to be correlated with each other. Multivariate analysis is needed to explain interactions among the variables present in the dataset, allowing data dimension reduction for better visualization, better hypothesis testing, and a better explanation of relationships within the dataset, so that we can better understand the data reported from affected countries (Clark, 2013; Riley et al., 2017; Williams and Babbie, 1976).
The current study aims to utilize the widely applied multivariate statistical procedures, PCA and PAM algorithms, for efficient visualization and comparative inference of COVID-19 status in different countries. PCA is commonly used to reduce the number of variables in datasets that exhibit multicollinearity, which hampers visualization and the application of many statistical techniques and algorithms. Among the featured advantages of PCA, it results in orthogonal components, i.e. uncorrelated/independent factors. On the other hand, PCA may result in illogical or non-interpretable factors when they are formed from a non-homogeneous set of variables; since it relies on analyzing the correlation matrix of the variables, the results may not make sense in some cases. Hence, careful evidence-based naming and interpretation of the formed factors should be conducted. Finally, PCA may lead to losing some information, since the resulting factors usually explain only a percentage of the variability existing in the original dataset. The cumulative percentage of total variance explained is therefore used as a criterion to judge the quality of PCA, with results explaining at least 70% of the total variability generally considered acceptable (Karamizadeh et al., 2013; Lloret et al., 2017; Stewart et al., 2014).
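The 70% cumulative-variance criterion described above can be sketched in a few lines. The study's analysis was performed in R (FactoMineR); the following is an illustrative numpy version operating on the correlation matrix (equivalent to PCA on z-scored columns), with synthetic data standing in for the country statistics.

```python
import numpy as np

def components_for_variance(X, threshold=0.70):
    """Smallest number of principal components whose cumulative share of
    total variance reaches `threshold` (PCA on the correlation matrix)."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]  # descending
    cum = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(cum, threshold) + 1)
    return k, cum

# Toy data mimicking the paper's structure: four strongly correlated
# "magnitude" variables plus one unrelated ratio-like variable.
rng = np.random.default_rng(1)
base = rng.lognormal(size=200)
X = np.column_stack([base * rng.uniform(0.9, 1.1, 200) for _ in range(4)]
                    + [rng.normal(size=200)])
k, cum = components_for_variance(X)
print(k, cum.round(2))
```

Because four of the five variables share a common factor, the first component alone already clears the 70% threshold on this toy data.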
We performed PCA on five originally reported variables (total confirmed cases, active cases, total deaths, critically ill cases, and mortality recovery ratio). We further performed PAM clustering on the scores of the different countries on the reduced dimensions (PC scores); thus, we were able to better visualize, categorize, and describe the status of countries affected by the COVID-19 pandemic.
Methods
We captured the available data about Coronavirus statistics from Worldometer website https://www.worldometers.info/coronavirus/ for March 30, April 15, and April 25, 2020. Data were captured on the next day to these specified dates. Countries with COVID-19 total cases less than 500 or countries with missing data were omitted from the analysis to keep good representability of each variable. Number of countries included in the analysis was 56 countries on March 30, 82 countries on April 15, and 91 countries on April 25. Data manipulation and analysis were performed using R software (R Core Team, 2019).
We used the following description for each of the variables included: in any given country, total cases refers to the total cases confirmed with COVID-19; active cases refers to the total number of open cases (mild, serious, or critical); total deaths refers to total deaths with COVID-19; critically ill cases refers to the number of serious/critically ill cases; and mortality recovery ratio refers to the ratio between total deaths and total recovered patients.
Correlation matrices were visualized using the PerformanceAnalytics package (Peterson et al., 2014). Principal component analysis (PCA) was performed using the FactoMineR package (Lê et al., 2008). Observations within each variable were converted to Z-scores and subjected to PCA at each time point. The main aim of PCA was to summarize patterns of a relatively large number of observed variables into a smaller number of latent factors that should reflect the underlying processes that eventually caused the correlations among the variables. Mathematically, PCA develops linear combinations of observed variables, each of which is a factor; these factors summarize the pattern of correlations in the observed correlation matrix (Tabachnick and Fidell, 2007). Contributions and correlations of variables with the formed factors were determined at each time point.
We performed cluster analysis using the cluster and factoextra R packages (Kassambara et al., 2017; Maechler et al., 2013). The Partitioning around medoids (PAM) algorithm was utilized to cluster the countries according to their PC-1 and PC-2 scores at the latest time point (April 25). The PAM algorithm is a robust alternative to K-means clustering that is less sensitive to noise and outliers (Salgado et al., 2016). The optimum number of clusters was determined according to the highest average silhouette width (Kaufman and Rousseeuw, 1990). We performed successive waves of removal of noise clusters and then reassessed the contributions and correlations of variables with the formed dimensions.
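The PAM-plus-silhouette workflow described above (cluster the countries, then choose the number of clusters with the highest average silhouette width) can be sketched as follows. The paper used R's cluster package; this is a simplified numpy k-medoids (alternating assignment and medoid update with random restarts, not the full BUILD/SWAP PAM) run on synthetic data in place of the PC scores.

```python
import numpy as np

def pam(X, k, restarts=5, seed=0):
    """Minimal k-medoids in the spirit of PAM: alternate point-to-medoid
    assignment with in-cluster medoid updates, keeping the best of several
    random restarts. A sketch, not the full BUILD/SWAP algorithm."""
    rng = np.random.default_rng(seed)
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    best = None
    for _ in range(restarts):
        medoids = rng.choice(n, k, replace=False)
        for _ in range(100):
            labels = np.argmin(D[:, medoids], axis=1)
            # New medoid of each cluster: member minimizing total within-cluster distance
            new = np.array([
                np.flatnonzero(labels == j)[
                    np.argmin(D[np.ix_(labels == j, labels == j)].sum(axis=1))]
                for j in range(k)])
            if np.array_equal(new, medoids):
                break
            medoids = new
        cost = D[np.arange(n), medoids[labels]].sum()
        if best is None or cost < best[0]:
            best = (cost, labels)
    return best[1]

def mean_silhouette(X, labels):
    """Average silhouette width: mean of (b - a) / max(a, b) over points."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    n, s = len(X), []
    for i in range(n):
        same = labels == labels[i]
        a = D[i, same & (np.arange(n) != i)].mean() if same.sum() > 1 else 0.0
        b = min(D[i, labels == c].mean() for c in set(labels) if c != labels[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

# Pick k by the highest average silhouette width, as in the study.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(loc, 0.3, size=(30, 2)) for loc in ((0, 0), (4, 0), (0, 4))])
scores = {k: mean_silhouette(X, pam(X, k)) for k in range(2, 6)}
best_k = max(scores, key=scores.get)
print(best_k)
```

On three well-separated synthetic blobs the silhouette criterion recovers k = 3, mirroring how the study selected four clusters in its first wave.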
We made a projection onto the March 30 model utilizing the data of April 15 and April 25. The initial PC scores on March 30 and the projected PC scores at the two later time points were compared with Friedman ANOVA for the countries whose data were available at all three time points. The experimental methodology and design are summarized in a flow chart (Supplementary material 1).
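Projecting later time points onto the March 30 model amounts to reusing that model's standardization parameters and loadings rather than refitting PCA on the new data. A hedged numpy sketch (the study used R; the data below are synthetic stand-ins, and the dictionary model format is an illustrative choice):

```python
import numpy as np

def fit_pca(X, n_comp=2):
    """Fit PCA on z-scored data and keep everything needed to score new
    observations in the same component space later."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    eigvals, V = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_comp]   # top components by eigenvalue
    return {"mu": mu, "sd": sd, "V": V[:, order]}

def project(model, X_new):
    """Score (possibly later) data on the previously fitted components."""
    return (X_new - model["mu"]) / model["sd"] @ model["V"]

rng = np.random.default_rng(3)
X_march = rng.lognormal(size=(56, 5))                    # stand-in for March 30 data
model = fit_pca(X_march)
scores_march = project(model, X_march)
scores_april = project(model, rng.lognormal(mean=0.5, size=(56, 5)))  # later snapshot
print(scores_march.shape, scores_april.shape)
```

The resulting paired score matrices are what a Friedman ANOVA would then compare across time points.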
Results
Descriptive statistics of the original variables at each time point (March 30, April 15, and April 25, 2020) are presented in (Table 1). The univariate outlier analysis showed the presence of many outliers across all tested variables. However, removal of univariate outliers would have caused a large portion of the data to be excluded; so, we made successive waves of noise removal after performing PCA and cluster analysis. Correlation matrices between the variables at each time point are presented in (Figure 1). Total cases, total deaths, active cases, and critically ill cases were consistently strongly correlated. On the other hand, mortality recovery ratio had a unique pattern of variance through the tested time points.
Upon performing multivariate PCA at each of the three time points, the variables total cases, total deaths, active cases, and critically ill cases formed one principal component (PC-1), which we called "Disease Magnitude", as they had higher loading scores on this factor in all three models; mortality recovery ratio formed another principal component (PC-2), as it had a higher score on that factor. The percentage contributions of the original variables to the formed factors at each time point are presented in (Table 2). The correlations of the original variables with the formed factors are presented in (Figure 2), which shows strong correlations of the original variables with their relevant formed factors (r > 0.8). The models retained about 87%, 95%, and 95% of the total variance within the original variables at each time point, respectively. The loading scores on PC-1 suggested a nearly equal contribution of each variable in forming the principal component. Communalities of PC-1 variables (the percentage of variance in each variable explained by the formed principal component) were consistently above 80%, while PC-2 explained about 100% of the variance of mortality recovery ratio.
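The communalities reported above follow directly from the loadings: for PCA on the correlation matrix, a variable's loading on a component equals the eigenvector entry times the square root of the eigenvalue (i.e. the variable-component correlation), and the communality is the sum of squared loadings over the retained components. An illustrative numpy sketch on synthetic data shaped like the study's (four correlated magnitude variables plus one independent ratio-like variable):

```python
import numpy as np

def loadings_and_communalities(X, n_comp=2):
    """Loadings (variables x components) and communalities for PCA on the
    correlation matrix: loading = eigenvector entry * sqrt(eigenvalue);
    communality = sum of squared loadings over retained components."""
    eigvals, V = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_comp]
    L = V[:, order] * np.sqrt(eigvals[order])
    return L, (L ** 2).sum(axis=1)

rng = np.random.default_rng(4)
base = rng.lognormal(size=300)
X = np.column_stack([base + rng.normal(0, 0.2, 300) for _ in range(4)]
                    + [rng.normal(size=300)])     # 4 correlated + 1 independent
L, h2 = loadings_and_communalities(X)
print(h2.round(2))   # communalities near 1 when two components capture the structure
```

Since loadings are correlations here, their magnitudes are bounded by 1, and high communalities (as in the study's models) mean the retained components reproduce each variable almost completely.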
The sign of the loading scores on PC-1 was positive in all three models, so an increment in PC-1 scores indicates higher total cases, total deaths, active cases, and critically ill cases. The sign of the PC-2 loading score likewise indicates that an increment in PC-2 refers to a higher mortality recovery ratio.
At each time point, each country had two scores on two dimensions: the first score (PC-1, or Disease Magnitude score) simultaneously representing the counts of total cases, total deaths, active cases, and critically ill cases, and the other score (PC-2, or Mortality Recovery Ratio score) representing the ratio between total deaths and total recoveries. The two formed PC-score variables efficiently stored the information within the original five variables at each time point. The descriptive statistics for both PC scores of countries in the three models are presented in (Table 3). The PC-1 and PC-2 scores of the 91 countries on April 25 were subjected to successive waves of cluster analysis utilizing the PAM algorithm. Each cluster was represented by one country as a "medoid". The medoid country had minimal average dissimilarity with the other members of the cluster and was considered the centroid of each cluster. The medoid was presented by the relevant country's scores on PC-1 and PC-2. At the first wave of cluster analysis, the highest average silhouette width suggested that the optimum number of clusters was four. Hence, the first wave resulted in four clusters: USA (16.263, -0.113) solely represented cluster 1; Italy (3.416, 0.171) was the medoid of cluster 2, which contained Spain, Italy, France, Germany, and Brazil; and Moldova (-0.452, -0.220) was the medoid of cluster 3, which contained 84 of the 91 countries. Finally, Norway (-0.224, 8.546) represented cluster 4 (Figure 3).
Our approach builds on the idea of decomposing the biggest cluster produced in the former wave into an optimum number of clusters until a meaningful endpoint is reached. To achieve that, we first perform a new PCA on the original dataset of this cluster in order to detect any new correlation pattern between the tested variables and the subsequent changes in loading scores on the principal components, away from the influence of noise clusters. PCA was re-performed on the original dataset of countries in cluster 3 of the previous model and followed by PAM cluster analysis. The optimum number of clusters in the second clustering step was 2. Iran (7.801, -0.823) was the medoid of cluster 1, which contained Turkey, Iran, Russia, and Belgium; while Finland (-0.618, -0.346) was the medoid of the remaining 80 countries.
Upon performing the final PCA on the original dataset of the 80 countries in cluster 2 of the previous model, significant weak to moderate correlations between mortality recovery ratio and the rest of the variables on PC-1 were observed (Figure 4). Changes were accordingly detected in the correlations between the original variables and the formed factors compared with the model performed initially on the 91 countries (Figure 5).
We were also concerned with tracking changes in PC scores across the three time points. This was vital for detecting significant changes in the variables that contribute to each principal component. We made a projection onto the March 30 model with the data of April 15 and April 25. The PC-1 and PC-2 scores of the March 30 model and the projected PC scores of April 15 and April 25 were tested by the Friedman ANOVA test. PC-1 scores were found to change significantly (P = 0.000), while PC-2 scores did not change significantly (P = 0.946).
Discussion
An overwhelming number of studies shed light on COVID-19 from various dimensions: medical, biological, and epidemiological dimensions, its social correlates and implications, and its impact on economic status worldwide and even at the micro-level. A few studies focused on tracking COVID-19 data, for the purpose of summarizing and organizing these data and finding solutions for how this huge amount of data should be visualized and presented in one or two representative graphs. Among the initial descriptive mathematical models for COVID-19 was that introduced by N. E. Huang and F. Qiao. They aimed at tracking the disease course while detecting the efficacy of the local interventions made for disease containment. Despite being robust, it did not provide real-time commentary on the disease burden and progression across countries (Huang and Qiao, 2020). Q. Lin et al. developed a conceptual model based on a 1918 influenza pandemic modeling framework in London, UK, taking into consideration governmental actions and individual reactions, trying to forecast the behavior patterns of COVID-19 under different scenarios. The model functioned well in forecasting COVID-19 behavior when applied to data from Wuhan, China, but it was built on a unidimensional dependent variable, total confirmed cases (Lin et al., 2020). Dey and colleagues exerted valuable efforts to gather and analyze epidemiological data on the COVID-19 outbreak from many open datasets. They utilized visual exploratory data analysis procedures on the available datasets for certain provinces of China and outside China, from 22 January to 16 February 2020. The datasets contained numbers of confirmed cases, deaths, and recovered cases. They drew heat-maps and heat-bar graphs for China and outside; this was done for each indicator separately, and comparisons were made in a univariate manner of analysis (Dey et al., 2020).
Another study aimed to develop a predictive model for COVID-19 cases, deaths, and recoveries. The researchers utilized SEIR modelling to forecast the COVID-19 outbreak inside and outside China based on daily observations. According to the developed model, they assumed that the outbreak would reach its peak in late May 2020 and would start to decline around early July 2020. They also found that negative sentiments about the virus were more prevalent than positive ones: positive sentiments were mainly reflected in articles about "collaboration and strength of individuals in facing this epidemic", while negative articles related to "uncertainty and poor outcomes of the disease such as deaths" (Binti Hamzah et al., 2020). Another modelling study tried to identify individuals at high risk of severe COVID-19 and how this varies between countries. The identification was based on age, sex, country-level disease prevalence, multimorbidity fractions, and infection-hospitalization ratios. This study concluded that men are at higher risk than women, that elderly people fall into the highest risk categories and that, at the macro level, the share of the population in the highest risk categories is greater in countries with older populations and a high prevalence of HIV/AIDS, chronic kidney disease, diabetes, cardiovascular disease, and chronic respiratory disease (Clark et al., 2020). It is clearly noticeable that all of the previous studies analyzed COVID-19 data items with univariate techniques, forecasting future outcomes or relating individual features/variables on a one-to-one basis. In other words, none of those studies dealt with COVID-19 data items using multivariate analysis techniques.
A real challenge has emerged: how to identify the proper time to escalate or de-escalate nationwide intervention measures along the course of the pandemic. There is a current need for a robust tool that incorporates the at-hand variables, based on the available data, into one multivariate analysis; our work presented here is an example of how visual representation can be enhanced using multivariate analysis techniques. The visual graphs available on websites tracking COVID-19 status rely on univariate presentation of data, showing the progression of confirmed cases or deaths as a function of time (CDC, 2020; Worldometer, 2020). Despite being informative, advanced inference for better decision making requires a more advanced methodology that reduces high-dimensional data to fewer dimensions, which should facilitate the description and comparison of countries. Serving that purpose, we developed multivariate models aiming at studying and visualizing the current situation of every country affected by COVID-19 using PCA and cluster analysis, in terms of disease burden against mortality/recovery ratio at a given time point. This should support further inference by governments and non-governmental organizations (NGOs) committed to responding to the COVID-19 burden in their countries and help them implement priority public health measures in support of national plans and interventions.
In the current study, each affected country was described by two derived numerical variables in which the information within the original five variables is efficiently stored. The PCA algorithms were performed on the calculated Z-scores of the original variables, which is why the averages of the PC scores on the formed dimensions were consistently equal to zero (Table 3). Hence, countries with positive disease magnitude scores (PC-1 score > 0) had relatively more confirmed cases, deaths, active cases and/or critically ill cases. Similarly, countries with positive mortality recovery ratio scores had a relatively higher ratio of mortality to recovered cases, while negative values of either score indicated a relatively controlled status. This can be illustrated by the PC scores of the USA in the first wave of cluster analysis (16.263, -0.113): despite being an outlier in terms of disease magnitude (PC-1 score, 16.263), its mortality recovery ratio was relatively controlled (PC-2 score, -0.113). This strongly indicates a well-established healthcare system that could absorb the relatively high disease magnitude without increasing the ratio of mortality to recovered cases.
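The standardize-then-PCA step described here can be sketched as follows. The country indicators are simulated (not the study's data); the sketch verifies that PC scores computed from Z-scores average to zero by construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical country-level indicators (rows = countries, cols = the five
# original variables, e.g. confirmed, deaths, recovered, active, critical).
X = rng.lognormal(mean=8.0, sigma=1.5, size=(91, 5))

# Standardize to Z-scores, as described in the text.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA via eigendecomposition of the correlation matrix of the Z-scores.
corr = (Z.T @ Z) / Z.shape[0]
eigval, eigvec = np.linalg.eigh(corr)
order = np.argsort(eigval)[::-1]           # sort components by variance
eigval, eigvec = eigval[order], eigvec[:, order]

scores = Z @ eigvec                         # PC scores per country
print("mean PC scores:", scores.mean(axis=0).round(10))
```

Because each Z-score column has zero mean, every linear combination of the columns (i.e. every PC score) also averages to zero, matching the text's observation about Table 3.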
On 25 April, the first wave of cluster analysis detected a meaningful number of noise clusters. The USA solely represented cluster 1 with the maximum disease magnitude score; Italy (3.416, 0.171) was the medoid of cluster 2, with a relatively higher disease magnitude score compared to the main cluster 3 (84 of the 91 countries in total). Norway (-0.224, 8.546) solely represented cluster 4, being by far the highest scorer on the mortality recovery ratio (PC-2). Of note, the second cluster, whose medoid is Italy, represents a group of countries with shared borders between Italy, Germany, France, and Spain, which may partly account for their grouping in one cluster.
Further PCA was performed on the data of the countries in cluster 3 of the previous model, followed by PAM cluster analysis. The detected changes in the correlations between the tested variables, and the subsequent changes in loading scores on the principal components, indicated that noise reduction was needed to extract data overlapped by the noise clusters in the previous PCA. The number of clusters in this step was 2. Iran (7.801, -0.823) was the medoid of cluster 1, which contained Turkey, Iran, Russia, and Belgium, while Finland (-0.618, -0.346) was the medoid of the remaining 80 countries. Again, geographical proximity appears to contribute to the data explanation by our model. The final multivariate analysis of the data of the 80 countries in cluster 2 of the previous model showed significant weak to moderate correlations between the mortality recovery ratio and the rest of the variables on PC-1; it also showed subsequent changes in the contributions to each PC, denoting changes compared to the model initially performed on 91 countries. The 80 countries were further optimally clustered into 2 groups. Romania (1.249, 0.165) was the medoid of the first group, which contained 24 countries; Cameroon (-1.015, -0.184) was the medoid of the remaining 56 countries. The change in correlations between the mortality recovery ratio and the variables on PC-1, along with the pattern of signal homogeneity in both PC-1 and PC-2 simultaneously and reciprocally in clusters 1 and 2 of this wave of multivariate analysis, revealed that our model had reached a logical stopping point. Each cluster finally represents a disease pattern where PC-1, representing disease magnitude, changes in the same direction as PC-2, representing the mortality recovery ratio. This means that successive waves of PCA and cluster analysis were needed to properly group countries with similar disease patterns for better visualization and subsequent data extraction and projection.
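The PAM (k-medoids) grouping used here can be sketched on a small simulated (PC-1, PC-2) dataset. For brevity this sketch uses an exhaustive medoid search, which is exact but only feasible for tiny datasets; production PAM implementations use the BUILD/SWAP heuristic instead.

```python
import numpy as np
from itertools import combinations

def pam_exhaustive(points, k):
    """Exact k-medoids for small data: choose the k data points that
    minimise the total distance of every point to its nearest medoid."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    best_cost, best_medoids = np.inf, None
    for medoids in combinations(range(len(points)), k):
        cost = d[:, medoids].min(axis=1).sum()
        if cost < best_cost:
            best_cost, best_medoids = cost, medoids
    return best_medoids, best_cost

rng = np.random.default_rng(2)
# Two hypothetical, well-separated country groups in (PC-1, PC-2) space.
cluster_a = rng.normal([5.0, 0.0], 0.3, size=(6, 2))   # high disease magnitude
cluster_b = rng.normal([-1.0, 0.0], 0.3, size=(6, 2))  # controlled status
pts = np.vstack([cluster_a, cluster_b])

medoids, cost = pam_exhaustive(pts, k=2)
print("medoid indices:", medoids)
```

With clearly separated groups, the optimal solution places one medoid in each group, which is the role Iran and Finland play in the text above.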
Moreover, the interpretation of this last step showed significant weak to moderate correlations between the mortality recovery ratio on PC-2 and the rest of the variables on PC-1. This may indicate that the mortality recovery ratio is more influenced by disease magnitude in the major 80-country cluster, meaning that healthcare systems in these countries are beginning to be inadequately accommodative to the increase in disease magnitude, or that these countries need to augment their capacity in order to regain the independence of PC-2 from PC-1 and, subsequently, more disease control. The multivariate analysis methodology utilized in this study represents a powerful tool for describing and visualizing data at given time points, allowing the disease burden to be studied in terms of disease magnitude and outcome in each country using readily available data in the light of the dynamic disease attributes. The formed PCs are convenient and informative when properly utilized as dependent variables in further predictive regression models. Using this methodology will enable both the scientific and policy-making communities to better organize, analyze, and visualize these growing data.
Strengths and limitations
The presented multivariate data analysis approach was quite powerful for storing the information within the daily reported COVID-19 statistics in a lower number of dimensions/variables, resulting in better visualization and enhanced comparative inference. However, the variance pattern of the original variables changes from day to day. Hence, the quality and appropriateness of these multivariate procedures should be tested for each day separately. The correlations between the original variables should be strong enough to justify the dimension reduction procedures.
Conclusion
Using multivariate analysis techniques, we were able to develop models and simple data visualization tools that help interpret the status of a given country or cluster of countries. COVID-19 daily published statistics were summarized by two scores, the disease magnitude score and the mortality recovery ratio score, and these reduced dimensions efficiently stored the information within the original datasets. The significant correlations detected between the two scores in some countries are a warning sign of saturation of healthcare systems.
Declarations
Author contribution statement Ahmed Ramadan: Conceived and designed the experiments; Analyzed and interpreted the data.
Ahmed Kamel, Alaa Taha: Analyzed and interpreted the data; Wrote the paper.
Abdelhamid El-Shabrawy, Noura Anwar Abdel-Fatah: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Funding statement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Urbanization and poverty in Sub-Saharan Africa: evidence from dynamic panel data analysis of selected urbanizing countries
Abstract Urbanization in Sub-Saharan Africa (SSA) is generally highlighted as a puzzle that deviates from the stylized facts in the literature. Using data from a panel of 29 urbanizing countries in SSA from 1985 to 2019, the study employs the two-step system generalized methods of moments to investigate the effect of urbanization on the Poverty Headcount ratio and Poverty Gap. The estimated urbanization elasticities of poverty indicate that at growth rates, a 1 percentage point increase in urbanization rate induces 0.04 and 0.05 (0.07 and 0.09) percentage points decrease in the Poverty Headcount ratio and Poverty Gap in the short-run (long-run), respectively. Similarly, at levels, a 1 percent increase in urbanization level induces 0.22 and 0.32 (0.60 and 0.68) percent decrease in the Poverty Headcount ratio and Poverty Gap in the short-run (long-run), respectively. Consistently, these results show stronger effect of urbanization on the depth of poverty relative to the incidence of poverty. These findings reappraise the literature on the urbanization of poverty in SSA as well as provide a nuanced understanding of the effect of urbanization on the different class of poverty measures. Notwithstanding, the poverty reduction potential of urbanization is not automatic and requires enormous investment in public infrastructure to achieve.
Introduction
The first target of Sustainable Development Goal 1 (SDG1) is to end all forms of extreme poverty worldwide by 2030 (United Nations, 2015c). The potential of the urbanization process towards attaining this foremost SDG is widely recognized (Christiaensen & Weerdt, 2017; Glaeser, 2013; World Bank, 2009). Over the past century, urbanization has been acknowledged as one of the most important demographic mega-trends and the primary determinant of the spatial distribution of the global population. Sustainable urbanization (SDG11) is also closely connected to the economic, social, political and environmental dimensions of sustainable development (Rudd et al., 2018; United Nations, 2015c).
The stylized fact in the urban economics and development literature is that the urbanization process, through agglomeration economies and scale economies, induces significant increases in income and/or consumption for a large number of both rural and urban inhabitants through the creation of relatively higher-productivity and correspondingly higher-paying non-farm employment opportunities in both urban and rural areas (Collier, 2017; Collier & Venables, 2017; Gollin, 2018; World Bank, 2009). This was the experience of the old urbanizations of Europe and North America and the new urbanization of Asia, which were associated with the industrial revolution and the agricultural green revolution respectively, leading to rapid economic growth, reduced inequality and poverty reduction (Gollin et al., 2016, 2021; Henderson & Kriticos, 2017).
However, the urbanization process in SSA is largely seen to deviate from the stylized facts in the literature due to its association with growing inequality and worsening poverty (Castells-Quintana & Wenban-Smith, 2020;Collier, 2006;Glaeser & Henderson, 2017). For instance, SSA is the only region in the world which experienced substantial growth in the number of extreme poor from 277.5 million in 1990 to 413.3 million in 2015 (World Bank, 2018). Particularly, 3 out of the top 5 countries that accounted for 50% of the World's extreme poor in 2015 are in SSA, namely Nigeria, Democratic Republic of Congo, and Ethiopia, and are forecasted to be the top 3 countries by 2030 (Ibid). Further, extreme poverty is projected to increase due to COVID-19 pandemic induced income losses in the large informal sector in many SSA countries (UN-Habitat, 2020;World Bank, 2022).
As extreme poverty continues to become increasingly an SSA burden, it is rightly recognized that it is in this same region that the battle for reducing global extreme poverty to less than 3% by 2030 will be won or lost (World Bank, 2018, 2019). Therefore, in piecing together the poverty puzzle, the potential of the urbanization process for poverty reduction in SSA has become a key research focus and policy priority (Rudd et al., 2018; UN-Habitat, 2016).
Generally, the urbanization-poverty nexus in SSA has been described largely as a puzzle and highlighted variously as urbanization without growth (Fay & Opal, 2000), urbanization of poverty (Ravallion et al., 2007), pathological urbanization (Annez & Buckley, 2009), poor country urbanization (Glaeser, 2013) and dysfunctional urbanization (Collier & Venables, 2017). However, these popular perceptions which are extrapolated through a comparison of the urbanization experience in SSA with Europe, North America and Asia show that an understanding of the urbanization process and its economic ramifications in SSA is nascent (Glaeser & Henderson, 2017;Turok & McGranahan, 2013).
Furthermore, the paucity of literature on the poverty reduction effect of urbanization in SSA is evidenced by the relatively limited number of studies on the subject. To our knowledge, only a few recent studies on this subject matter (Castells-Quintana & Wenban-Smith, 2020; Christiaensen & Weerdt, 2017; Mahumane & Mulder, 2022) focus exclusively on the region and/or countries within SSA. Moreover, these studies mainly focus on a single measure of poverty and do not compare the effect of urbanization on different poverty measures.
This study contributes to the literature in three ways. First, it aims to address the knowledge gap on the urbanization-poverty nexus in SSA. Second, it reappraises the urbanization-poverty puzzle in SSA. Third, it provides a nuanced understanding of the effect of urbanization on the different classes of poverty measures, namely the Poverty Headcount ratio (P0) and the Poverty Gap (P1), to aid policy focus in SSA. The study employs the system generalized methods of moments (SYS-GMM) methodology to estimate and compare the urbanization elasticities of the Poverty Headcount ratio (incidence of poverty) and the Poverty Gap (depth of poverty) at both levels and growth rates, to ascertain which effect is stronger in the short-run vis-à-vis the long-run, or both. The rest of the paper is organized as follows. The related literature is reviewed in Section 2. The data sources, definitions and empirical strategy employed are discussed in Section 3. The results of the study are presented and discussed in Section 4. The conclusions and recommendations for policy considerations are presented in Section 5.
Related literature
Generally, the spatial distribution of poverty worldwide shows two main distinctive patterns. Firstly, poverty is overwhelmingly a rural phenomenon (Nguyen, 2014;World Bank, 2011;World Bank & IMF, 2013). For instance, the global incidence of poverty in rural areas is 17.2% as compared to 5.3% in the urban areas and despite the increasing share of poverty in urban areas, caused mainly by the poor being the most rapidly urbanizing segment of the population, it will not be until the middle of the century that the rural and urban shares of poverty will converge (McGranahan, 2017;Ravallion et al., 2007).
Secondly, the incidence of poverty declines steadily from rural areas to smaller towns and cities to metropolitan areas (Ferre et al., 2012;Lanjouw & Marra, 2018;Tripathi, 2013b;World Bank & IMF, 2013). This poverty city-size gradient results from the lower per-capita provision of public infrastructure and basic services in smaller towns and cities relative to big cities and metropolitan areas (Castells-Quintana & Wenban-Smith, 2020). Also, the rural poor overwhelmingly migrate to nearby smaller towns and cities thereby resulting in declining per-capita access to basic public services (World Bank & IMF, 2013).
In general, the impact of urbanization on poverty can be categorized under two-rounds effects. The first-round effects occur in the urban areas and are manifested in several folds. One, is the provision of employment opportunities in urban areas for the usually abundant low and unskilled labour from rural areas at comparatively higher levels of productivity and remuneration (Christiaensen & Weerdt, 2017;Liddle, 2017;UN-Habitat, 2016). Two, the rural poor now living in urban areas are able to access the essential public services and infrastructure such as education, electricity, healthcare, portable water, sanitation, housing, transport, capital and others required to improve living standards which are not adequately and affordably supplied in the rural areas (Liddle, 2017;UN-Habitat, 2020;World Bank, 2009). Three, surrounding rural areas provide market for urban products (Da Mata et al., 2015) and a significant proportion of urban food needs and cooking fuel such as fuel wood and charcoal (Broto et al., 2020;Mahumane & Mulder, 2022).
The second-round impact of urbanization on poverty occur in the rural areas through several channels. One, improved urban-rural linkages result in increased urban market for rural products leading to increased rural income and agricultural productivity via specialization and scale economies (Emran & Shilpi, 2012;UN-Habitat, 2016, 2020. Two, urbanization induces increased rural non-farm employment opportunities which are associated with higher returns to labour and lower incidence of poverty as compared to rural agriculture (Deichmann et al., 2009;Fafchamps & Shilpi, 2005;Foster & Rosenzweig, 2004). Three, remittances from urban to rural areas increase rural income and consumption (Cali & Menon, 2013;UN-Habitat, 2016). Four, return migration by those who have acquired capital and skills in the urban areas increases the productivity of the rural economy (UN-Habitat, 2020;World Bank, 2009).
The results from several empirical studies confirm the poverty reduction effects of urbanization. In their study on the urbanization of poverty for 87 developing countries over the period 1993-2002, Ravallion et al. (2007) found that of the 5.2% decline in aggregate poverty during the period, urbanization accounted for 1.04%. The study by Nguyen (2014) in Vietnam over the period 2006-2008 showed that a 1% increase in urbanization raised both rural households' per-capita income and per-capita consumption expenditure, by 0.54% and 0.39% respectively, and reduced the rural household poverty rate by 0.17%. Also, the study by Datt and Ravallion (2009) in India from 1951 to 2006 showed that the poverty reduction potential of urbanization is unmatched by any productivity increase in the rural sector. The study found that the poverty reduction impact of urban economic growth far exceeded that of rural economic growth for all three of the FGT class of poverty measures at the national, urban and rural levels.
The study by Tripathi (2013a) for 52 large Indian cities with 750,000 or more inhabitants between 1950 and 2025 found that urban economic growth significantly reduces urban poverty headcount ratio growth. In a similar study using data from the 61st Round of the Indian National Sample Survey, Tripathi (2013b) found that large urban population and higher city economic growth each induces a reduction in all three FGT class of poverty measures.
In SSA, the findings from the longitudinal study by Christiaensen and Weerdt (2017) in Tanzania between 1991 and 2010 found extreme poverty to be virtually non-existent among city migrants, 16% for town migrants, 30% for off-farm migrants and 42% for non-migrant rural farmers. Altogether, the average income of migrants to cities increased by 206% as compared to 36% for non-migrant rural farmers.
Additionally, several recent studies indicate non-linear effect of urbanization on poverty. The study by Ha et al. (2021) in Vietnam using data from 2006 to 2016 showed a U-shaped effect of urbanization on the poverty headcount ratio, with the estimated urbanization thresholds being 43.68% and 40.19% in the static and dynamic models, respectively. Also, the study by Wang et al. (2022) on the effect of urbanization on rural and urban poverty using data from up to 19 provinces in China from 2000 to 2017 found a U-shaped relationship for the poverty headcount, poverty gap and poverty intensity for both rural and urban areas. Furthermore, the study by Mahumane and Mulder (2022) on the effect of urbanization on household energy poverty in Mozambique between 2003 and 2015 showed that the effect for energy consumption poverty is U-shaped and that for energy expenditure poverty is N-shaped.
Data
The data for the study are sourced from three main online databases, including Penn World Tables Version 10. Three main data sampling criteria are adopted. First, the study follows Henderson (2003a) and adopts the urbanization criterion, which restricts the sample to the 38 countries that urbanized positively throughout the study period. Next, in line with prior literature (Ferre et al., 2012; Henderson et al., 2013; UN-DESA, 2019a), a population criterion is employed which considers only the 34 countries with at least 300,000 inhabitants in 1960. The raison d'être for this criterion is that urban agglomeration economies are far less pronounced in countries with lower populations. Third, a data availability/quality criterion restricts the sample to 29 countries. In line with prior literature, the data are sub-divided into five-year intervals to purge the variables of wide short-term fluctuations and cyclical effects (Brülhart & Sbergami, 2009; Castells-Quintana, 2017; Chauvin et al., 2017; Fay & Opal, 2000; Henderson, 2000; Sulemana et al., 2019) as well as to capture sufficient variation (Henderson, 2003a, 2003b). The Foster et al. (1984) class of decomposable poverty measures (FGT), covering the Poverty Incidence (P0) and the Poverty Gap (P1), are used to measure, respectively, the breadth and depth of poverty. Table A presents the definitions, expected signs and the sources of data for the variables of the study.
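The five-year averaging described above can be sketched with pandas. The country name and poverty series below are simulated for illustration; the grouping logic is the part that matters.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
years = np.arange(1985, 2020)          # 35 annual observations, 1985-2019
df = pd.DataFrame({
    "country": "AAA",                  # hypothetical country code
    "year": years,
    # Declining trend plus annual noise, mimicking short-term fluctuations.
    "poverty_headcount": 60 - 0.5 * (years - 1985) + rng.normal(0, 2, len(years)),
})

# Assign each year to a five-year window and average within it, purging the
# series of short-term fluctuations as described in the text.
df["period"] = (df["year"] - 1985) // 5
panel = df.groupby(["country", "period"], as_index=False)["poverty_headcount"].mean()
print(panel)
```

The 35 annual observations collapse into 7 five-year observations per country, matching the T = 7 panel dimension used later in the paper.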
Descriptive statistics
The summary statistics of the key variables as presented in Table 1 show considerable variations within and among countries. Noteworthy, the Poverty Headcount ratio (Poverty Gap) ranges from a minimum of 3% (1%) to 95% (65%) with a mean value of 54% (24%). Also, urbanization level (rate) with a mean of 34% (2%) ranges from a minimum of 5% (0.03%) to a maximum of 89% (12%).
Empirical model
The study empirically investigates both the short-run and long-run effects of urbanization on poverty in SSA. The urban economics and new economic geography literature considers the existence of a large variety of agglomeration economies as the most important feature of the urban spatial economy (Fujita et al., 2003). Consequently, the study follows prior studies (Castells-Quintana, 2017;Fay & Opal, 2000;Henderson & Kriticos, 2017;Nguyen & Nguyen, 2018) and adopts urbanization variable as a proxy for urban agglomeration economies. Particularly, the proportion of a country's population living in areas described as cities by national statistics (urbanization level) and the changes in urbanization level (urbanization rate) are used exclusively of each other as the proxy measures of urban agglomeration economies.
In line with the standard approach in the literature where both initial conditions and interaction effects are considered (Bourguignon, 2003; Christiaensen et al., 2013; Fosu, 2009; Kalwij & Verschoor, 2007), a Cobb-Douglas expenditure function is specified of the form:

P_it = A · U_it^(β1) · K_it^(β2) · R_it^(β3) · e^(ω_it)    (1)

The hypothesized relationship in Equation 1 is that the poverty index of country i over period t, P_it, is a function of the urbanization rate (level) U_it, a vector of control variables K_it, and a set of interaction terms R_it. The initial levels of per-capita GDP and inequality and the changes in per-capita GDP and inequality are used as the set of control variables (Dollar et al., 2016; Dollar & Kraay, 2002; Fosu, 2017; Kanbur, 2005). For the interaction terms, the level of urbanization is interacted with the control variables (Wang et al., 2022). Figure 1 presents the analytical framework of the study.
The Cobb-Douglas functional specification of Equation 1 is to make it easier to log-transform it to obtain the urbanization elasticity parameters for estimation. The log-linearization provides additional estimation benefits. First, it transforms the non-linear equation into a linear model to enable the parameters to be estimated using linear regression methods for easy interpretation. Second, the log-transformation reduces the skewness in the data which may be caused by outliers that may bias the estimated results. Third, it eliminates any possible existence of heteroscedasticity to make the error terms homoscedastic, uncorrelated and normally distributed.
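The key property exploited here, that log-linearizing a Cobb-Douglas relation yields a linear model whose slope is the elasticity, can be checked numerically. The elasticity value of -0.3 below is illustrative only, not an estimate from the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated Cobb-Douglas relation P = A * U^beta with a known elasticity,
# so the log-log regression slope should recover beta.
beta_true = -0.3
U = rng.uniform(5, 90, size=500)                       # urbanization level (%)
P = 2.0 * U ** beta_true * rng.lognormal(0, 0.05, 500)  # multiplicative noise

# Taking logs turns the multiplicative model into a linear one.
slope, intercept = np.polyfit(np.log(U), np.log(P), 1)
print(f"estimated elasticity = {slope:.3f}")
```

The recovered slope is interpretable directly as "a 1% increase in U induces a slope% change in P", which is exactly how the urbanization elasticities are read in the abstract.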
Accordingly, the natural logarithm is taken on both sides of Equation 1, which is rewritten in dynamic form to yield a first-order autoregressive [AR(1)] model to be estimated as:

ln P_it = α ln P_i,t-1 + β1 ln U_it + β2 ln K_it + β3 ln R_it + ω_it    (2)

where i = 1, . . ., N and t = 1, . . ., T.
The random disturbance term ω_it in the dynamic panel data (DPD) model of Equation 2 is a one-way error component model of the form:

ω_it = υ_i + ε_it    (3)

where υ_i denotes the country-specific effects and ε_it is the usual stochastic error term. Equation 3 is a random model; the error terms υ_i ∼ IID(0, σ²_υ) and ε_it ∼ IID(0, σ²_ε) are independent, such that E(υ_i) = 0, E(ε_it) = 0 and E(υ_i ε_it) = 0. Also, the explanatory variables X_it in Equation 2 are all orthogonal to the error terms υ_i and ε_it for all i and t, such that E(X_it υ_i) = E(X_it ε_it) = 0. Since both the dependent and the main independent variables in Equation 2 are in natural logarithms, the coefficient of the main independent variable, β1, is the urbanization elasticity of poverty.
The case for generalized methods of moments
The application of the GMM methodology in this study is based on four principal reasons. First, the primary condition for the use of GMM holds, since the number of countries (N = 29) is considerably higher than the number of time periods in each cross section (T = 7); thus N > T. Second, the poverty indices are persistent. In particular, the correlation between the Poverty Headcount ratio (Poverty Gap) and its first lag is 0.8713 (0.8627), which is significant at the 1% level. These coefficients are above the threshold of 0.8000 required to establish the persistence of a variable (Asongu & Acha-Anyi, 2019; Tchamyou & Asongu, 2017). Third, the GMM preserves the cross-country variations in the panel data.
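The persistence check, a pooled correlation between a poverty index and its first lag, can be sketched on a simulated panel. The AR coefficient and noise levels below are assumptions chosen to produce a persistent series, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated persistent poverty series: 29 countries over 7 five-year periods.
n, t, alpha = 29, 7, 0.9
p = np.zeros((n, t))
p[:, 0] = rng.normal(50, 10, n)          # heterogeneous starting levels
for s in range(1, t):
    p[:, s] = alpha * p[:, s - 1] + rng.normal(0, 2, n)

# Pool (current, lagged) pairs across countries and periods, then correlate.
current, lagged = p[:, 1:].ravel(), p[:, :-1].ravel()
rho = np.corrcoef(current, lagged)[0, 1]
print(f"lag-1 correlation = {rho:.3f}")
```

A pooled lag-1 correlation above the 0.8 threshold is the condition the text uses to justify treating the poverty indices as persistent.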
Fourth, there is a problem of endogeneity in Equation 2: since p_it is a function of υ_i, p_i,t-1 is also related to υ_i, and therefore using p_i,t-1 as a separate regressor makes it correlated with the disturbance term ω_it. GMM addresses this endogeneity issue in several ways. It mitigates both the unmeasured, time-invariant country-specific effects and unobserved heterogeneity (Asongu et al., 2020). It also accounts for simultaneity in the explanatory variables via the use of lagged values of the dependent variable and the regressors as instruments, in differences or in both differences and levels (Bond & Windmeijer, 2002; Brülhart & Sbergami, 2009; Tchamyou et al., 2019). The GMM also uses the orthogonality conditions to obtain efficient and consistent estimates even when heteroskedasticity exists in an arbitrary form (Baum et al., 2003).
To illustrate the GMM procedure, consider Equation 2 in levels, given in general form as:

p_it = α0 p_i,t-1 + β0 x_it + υ_i + ε_it    (5)

where p_it represents the dependent variable and x_it the right-hand-side variables of Equation 2, with α0 and β0 being parameters. The difference GMM (DIF-GMM) involves taking the first difference of Equation (5) as:

p_it − p_i,t-1 = α0 (p_i,t-1 − p_i,t-2) + β0 (x_it − x_i,t-1) + (ε_it − ε_i,t-1)    (6)

which can be rewritten in the form:

Δp_it = α0 Δp_i,t-1 + β0 Δx_it + Δε_it    (7)

where Δ is the difference operator. The first differencing eliminates the country-specific effects term υ_i, which might otherwise result in incorrect model specification. However, Δp_i,t-1 is correlated with Δε_it. The system GMM (SYS-GMM) is proposed to address the weak instrumentation problem of the DIF-GMM by combining instruments in first differences and levels (Bowsher, 2002; Judson & Owen, 1999; Roodman, 2009a). The GMM procedure also addresses the serial correlation and endogeneity issues through the use of sufficient lags of the dependent variable and the first-differenced errors (Arellano & Bond, 1991; Arellano & Bover, 1995; Blundell & Bond, 1998).
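Why first differencing eliminates the country-specific effect υ_i can be demonstrated on a simulated static panel (the dynamic case additionally needs instruments, as discussed above). All parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n, t, beta = 200, 7, 0.5

country_effect = rng.normal(0, 5, size=(n, 1))       # upsilon_i, time-invariant
x = rng.normal(0, 1, size=(n, t))
y = beta * x + country_effect + rng.normal(0, 0.1, size=(n, t))

# First differencing within each country removes upsilon_i exactly,
# because a constant differences to zero.
dy, dx = np.diff(y, axis=1), np.diff(x, axis=1)
slope = np.polyfit(dx.ravel(), dy.ravel(), 1)[0]
print(f"beta from differenced data = {slope:.3f}")
```

Even though the level regression is contaminated by large country effects, the differenced regression recovers β, which is the logic behind Equations (6) and (7).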
Choosing between the difference and system GMM
In choosing between the DIF-GMM and SYS-GMM, the study follows the methodology outlined by Bond (2002). It involves estimating Equation 2 using pooled OLS, Fixed Effects (FE) and DIF-GMM and comparing the respective estimates of α. The OLS and FE estimates are considered, respectively, an upper-bound and a lower-bound estimate. Since the a priori expectation is that the lagged dependent variable is positively correlated with the composite error term ω_it, OLS biases the value of α upward whereas FE biases it downward, so the estimated value of the true parameter should lie in or close to this range (Bond, 2002; Roodman, 2009b).
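Bond's bracketing argument can be illustrated by Monte Carlo: in a simulated AR(1) panel with country effects, pooled OLS overestimates α while the within (FE) estimator underestimates it for small T (Nickell bias). All parameter values below are illustrative, not the study's.

```python
import numpy as np

rng = np.random.default_rng(7)
n, t, alpha = 2000, 7, 0.5

upsilon = rng.normal(0, 1, n)                 # country-specific effects
y = np.zeros((n, t))
y[:, 0] = upsilon + rng.normal(0, 1, n)
for s in range(1, t):
    y[:, s] = alpha * y[:, s - 1] + upsilon + rng.normal(0, 1, n)

lag, cur = y[:, :-1], y[:, 1:]

# Pooled OLS ignores upsilon_i, which is positively correlated with the lag,
# so the estimate of alpha is biased upward.
a_ols = np.polyfit(lag.ravel(), cur.ravel(), 1)[0]

# Within (FE) estimator: demeaning induces the downward Nickell bias when T is small.
lag_w = lag - lag.mean(axis=1, keepdims=True)
cur_w = cur - cur.mean(axis=1, keepdims=True)
a_fe = (lag_w.ravel() @ cur_w.ravel()) / (lag_w.ravel() @ lag_w.ravel())

print(f"OLS: {a_ols:.3f}  FE: {a_fe:.3f}  (true alpha = {alpha})")
```

A consistent estimator (such as a well-specified GMM) should therefore fall between the FE and OLS estimates, which is the diagnostic applied to Table 2.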
The results from the alternative estimations of Equation 2 for the Poverty Headcount ratio and Poverty Gap as the respective dependent variables for the rates and levels of urbanization are presented in Table 2. From the Table, the coefficients of the respective lagged dependent variables from the DIF-GMM1 estimations are closer to that of the FE estimations, implying that the DIF-GMM estimator is biased downward and hence the SYS-GMM estimator is preferable in all cases.
Furthermore, in line with the convention in most applied work using GMM estimations, this study estimates and interprets the results of Equation (2) using the two-step system GMM (SYS-GMM2). Several simulation studies have shown the efficiency gains from using SYS-GMM2, including controlling for heteroskedasticity and cross correlation (Bond & Windmeijer, 2002; Roodman, 2009a, 2009b; Windmeijer, 2005).
GMM specification tests
Following the convention in the literature, the study employs two main GMM standard specification tests. These are the serial correlation tests, namely the first order [AR(1)] and the second order [AR(2)], and the test of the validity of instruments (Arellano & Bond, 1991). Generally, using lagged variables as moment conditions can lead to bias due to the possibility of over-fitting the endogenous regressors (Baltagi, 2005). Consequently, the study follows Bond (2002) and Roodman (2009a, 2009b) and uses both the Sargan test and the Hansen test as complementary test statistics of full instrument validity as well as of the structural specification of the model. Additionally, the collapsed-instrument approach of Roodman (2009a, 2009b) is adopted to account for cross-sectional dependence (Asongu & Acha-Anyi, 2019; Tchamyou et al., 2019) and to prevent instrument proliferation, which weakens the Hansen J-test (Andersen & Sørensen, 1996; Bowsher, 2002).
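The instrument-proliferation concern can be made concrete with a stylized count for the lagged dependent variable alone (ignoring the additional level-equation instruments that SYS-GMM adds): the untransformed instrument set grows quadratically in the number of periods T, while the collapsed form grows only linearly.

```python
# Stylized instrument counts for the lagged dependent variable in DIF-GMM:
# uncollapsed GMM-style instruments use a separate column per (period, lag)
# pair, while the "collapsed" form keeps one column per lag depth.
def n_instruments(T, collapse=False):
    # differenced equations are usable for t = 3..T (each needs at least
    # p_{i,t-2} as an instrument)
    if collapse:
        return T - 2                       # one column per lag depth
    return (T - 1) * (T - 2) // 2          # one column per (t, lag) pair

for T in (7, 10, 15):
    print(T, n_instruments(T), n_instruments(T, collapse=True))
```

With 5-year intervals, e.g. T = 7, collapsing cuts the count from 15 columns to 5, which helps preserve the power of the Hansen J-test.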
GMM identification, simultaneity, and exclusion restrictions
Fundamental to the GMM strategy is the identification, simultaneity and exclusion restrictions. First, identification involves defining the three variable categories, namely the dependent variable, the endogenous explanatory variables and strictly exogenous variables (Asongu et al., 2020;Tchamyou et al., 2019). The outcome variables are the Poverty Headcount ratio and the Poverty Gap. The identified strictly exogenous variables are years whereas the explanatory variables, namely urbanization (rate and level) and the control variables are the endogenous variables (Asongu & Acha-Anyi, 2019;Tchamyou, 2019). Implicitly, the strictly exogenous variables are assumed to affect the outcome variable through the endogenous variables (Asongu et al., 2020).
Secondly, the issue of simultaneity, such as the inclusion of both the urbanization term (u_it) and the poverty term (p_it) in Equation 2, is addressed through the instrumentation process of the SYS-GMM (Arellano & Bover, 1995; Bond & Windmeijer, 2002; Roodman, 2009b). Third, the exclusion restrictions involve checking the validity of a subset of instruments (Roodman, 2009a, 2009b). Within the GMM strategy, the Difference in Hansen Test is applied, and the validity of the exclusion restrictions is confirmed when the null hypothesis in relation to the instrumental variables is not rejected (Asongu & Acha-Anyi, 2019; Asongu et al., 2020).
Correlations among key variables
The partial correlation matrix among the key variables is reported in Table 3. The correlation coefficient between the urbanization level and the Poverty Headcount ratio (Poverty Gap) is −0.543 (−0.431) and significant at the 1% level, indicating a poverty reduction effect of urbanization. Similarly, the correlation between GDP per-capita and the Poverty Headcount ratio (Poverty Gap) is −0.603 (−0.495) and significant at the 1% level. The correlation coefficients between the Gini indices and the poverty indices are all positive, albeit only that between the Gini Index and the Poverty Gap is significant at the 1% level.
Furthermore, the scatterplots in Figures 2 and 3 show strong negative correlations between the level of urbanization and the poverty indices in SSA, attesting to the poverty reduction effect of urbanization. However, owing to the absence of control mechanisms and diagnostic tests, not much inference is drawn from these graphical results.
Effects of urbanization on poverty
The results for the SYS-GMM2 estimations of Equation 2 are presented in Tables 4, 5 and 6. All estimations were conducted at the 95% confidence interval, and the maximum lag length of the variables and instruments is restricted to three (3), which according to the simulation studies of Bowsher (2002) maximizes the power of the Sargan test. Four criteria are used to evaluate the validity of each estimation (Asongu et al., 2020; Tchamyou et al., 2019). First is the autocorrelation test. The AR(1) has the preferred negative sign and, most importantly, the AR(2) is not significant in any estimation. This implies that the lag terms of the respective dependent variables used as instruments are exogenous and therefore valid instruments; it also confirms the appropriateness of the DPD models used for this study. Second is the test of full instrument validity. The p-values associated with the Sargan and Hansen tests for over-identification restrictions are all not significant, confirming the validity of the full instrument sets used in each estimation. In particular, the p-values for the J-statistics are within the generally acceptable range of 0.10-0.60, with that for the Poverty Gap being within the "Goldilocks range" of 0.10-0.25 (Roodman, 2009a). Third is the test of validity of the instrument subsets. The p-values for the Difference in Hansen Test are all not significant and confirm the exogeneity of the subsets of instruments used. Fourth, the overall significance of each regression, as indicated by the F-test statistic, is significant at the 1% level. The results from these standard diagnostic tests confirm the validity of the structural specifications and moment conditions used in estimating Equation (2).

Notes for Tables 4 and 5: the panel data cover the period 1985-2019 and the variables are calculated over 5-year intervals; (1) = short-run estimates; (2) = long-run estimates; */**/*** indicate significance at the 10%/5%/1% levels; standard errors for the estimated parameters are in parentheses; for the F, AR(1), AR(2), Sargan, Hansen and Difference in Hansen tests, p-values are in parentheses.
The estimated results from Tables 4 and 5 indicate the poverty reduction effect of urbanization in SSA. One, as indicated by the urbanization elasticities of poverty, the poverty reduction effect of urbanization is stronger in both magnitude and significance for the level of urbanization than for the rate of urbanization for the same poverty index. For example, for P1 in Table 5, the estimated short-run and long-run urbanization level (rate) elasticities are −0.32 (−0.05) and −0.68 (−0.09), at corresponding significance levels of 1% (5%) and 1% (5%). Two, the poverty reduction effect of urbanization is stronger in the long-run than in the short-run for the same poverty index. For instance, for P0 in Table 4, the long-run (short-run) magnitudes of the elasticity variables ln(Urbanization level) and ln(Urbanization rate) are, respectively, −0.60 (−0.22) and −0.07 (−0.04). A similar observation pertains to P1 in Table 5. These elasticities imply that the poverty reduction effect of urbanization amplifies with time.
Three, in general, both the growth rate and the initial level of per-capita GDP have significant poverty reduction effects. In particular, from Table 5, the coefficients of the variables ln(per-capita GDP growth) and ln(Initial per-capita GDP) are negative and significant in both the short-run and the long-run for P1. These results are in line with the literature and specifically support the findings of Bourguignon (2003), Dollar and Kraay (2002), Dollar et al. (2016) and Fosu (2009, 2017b) that a high level and/or growth rate of per-capita GDP is a boon to poverty reduction. Moreover, the general significance of the variable Squared(per-capita GDP growth) confirms the existence of a non-linear relationship between GDP per-capita and poverty. Further, the (absolute) magnitude of the growth elasticity of poverty, ln(per-capita GDP growth), increases with time: for P1 in Table 5, it increases from −1.48 (−1.13) in the short-run to −2.67 (−2.36) in the long-run for the urbanization rate (level).
Four, the results generally confirm the deleterious effect of income inequality on poverty. In particular, the variable ln(Inequality growth) is significant throughout for P1 in Table 5. However, the results for the initial level of inequality, although with the expected positive coefficients, are only significant for P0 in the long-run. On the whole, these results support the findings of Fosu (2009, 2017) and Kalwij and Verschoor (2007) that initial and/or growing inequality hurts poverty reduction efforts, and run contrary to the findings of Dollar and Kraay (2002) and Dollar et al. (2016) that growth in the incomes of the poor is uncorrelated with both the initial level of and the growth in income distribution. Furthermore, the variable Squared(Inequality growth), being generally significant for both P0 and P1 in Tables 4 and 5, confirms the non-linear relationship between inequality and poverty.
Five, the respective roles of GDP per-capita and Inequality levels in moderating the effect of urbanization on poverty are as expected. The significance of respective positive and negative coefficients of the interaction effects variables, namely (Urbanization level*per-capita GDP) and (Urbanization level*Inequality Level) in both Tables 4 and 5 show that the poverty reduction effect of urbanization is amplified by the level of GDP per-capita and attenuated by the level of Inequality. The former results confirm the synergistic complementary relationship between the spatial agglomeration of economic activities and economic growth.
Six, time effects are significant and increase in (absolute) magnitude for both poverty indices, a result that corroborates the generally increasing poverty reduction effects of the significant variables in the long-run. Table 6 presents a summary of the urbanization elasticities of poverty estimated from Equation 2. Estimations at growth rates indicate that a 1 percentage point increase in the urbanization rate induces 0.04 (0.05) and 0.07 (0.09) percentage point decreases in the Poverty Headcount (Poverty Gap) in the short-run and the long-run, respectively. Similarly, estimations at levels indicate that a 1 percent increase in the urbanization level induces 0.22 (0.32) and 0.60 (0.68) percent decreases in the Poverty Headcount (Poverty Gap) in the short-run and the long-run, respectively. Clearly, urbanization has a stronger effect in reducing the depth of poverty (P1) than the incidence of poverty (P0) in both the short-run and the long-run. Furthermore, the poverty reduction effects of urbanization at both growth rates and levels are far more pronounced in the long-run than in the short-run.
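As a consistency check on these short-run and long-run pairs: in a dynamic panel, the long-run elasticity is conventionally the short-run coefficient divided by one minus the coefficient on the lagged dependent variable. The persistence value below is implied by the reported pair for the urbanization level and P0, not taken directly from the tables.

```python
def long_run(short_run, persistence):
    # long-run effect in a dynamic panel: beta / (1 - alpha)
    return short_run / (1.0 - persistence)

# persistence implied by the reported P0 pair (-0.22 short-run, -0.60 long-run)
implied_alpha = 1.0 - (-0.22) / (-0.60)
print(round(implied_alpha, 3))                    # about 0.633
print(round(long_run(-0.22, implied_alpha), 2))   # recovers -0.60
```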
Summary and conclusions
The study investigated the poverty reduction effect of urbanization for a panel of 29 urbanizing countries in SSA from 1985 to 2019. The study employed the SYS-GMM2 to estimate the urbanization elasticities of poverty at both growth rates and levels. The results show that urbanization within the selected SSA countries has a significant effect in reducing both the incidence of poverty (Poverty Headcount ratio) and the depth of poverty (Poverty Gap), with the latter effect being consistently stronger than the former at both growth rates and levels in the short-run and long-run. Overall, the findings of this study reappraise the literature on the urbanization of poverty in SSA and provide a nuanced understanding of the effect of urbanization on the different classes of poverty measures.
The findings of this study have several policy implications. First, owing to its potential for poverty reduction, policy makers in SSA should fully embrace urbanization rather than adopt partial exclusionary measures to prevent it. Second, the full benefits of the urbanization process cannot be reaped automatically. This calls for long-term urban planning and substantial investment in the provision of urban public infrastructure and services such as roads, water, health, education and telecommunications, which are mostly lacking in the newly emerging and contiguous urban areas in SSA. Third, promoting (sustainable) urbanization must be made part and parcel of the process of nurturing economic growth and eradicating poverty in SSA. Fourth, to successfully manage urbanization and its economic consequences in SSA, there is a need for continuous policy coordination across national and sub-regional borders. Fifth, promoting sustainable urbanization in SSA requires the provision of legal and effective enforcement of private property rights over the land and buildings that constitute the urban built environment.
An obvious weakness of this study is its limited scope. For instance, the stylized facts of the spatial distribution of poverty worldwide show a declining incidence from rural areas to smaller towns and cities to metropolitan areas; however, urban poverty in many SSA countries is disproportionately concentrated in the largest cities (World Bank, 2011; World Bank & IMF, 2013). This phenomenon, which was not examined in this study, presents an avenue for future research.
Funding
The authors received no direct funding for this research.
A Heuristic Method for Measurement Site Selection in Sewer Systems
Although calibration of a hydrodynamic model depends on the availability of measurement data representing the system behavior, advice for the planning of necessary measurement campaigns for model calibration is scarce. This work tries to address this question of efficient measurement site selection on a network scale for the objective of calibrating a hydrodynamic model case study in Austria. For this, a model-based approach is chosen, as the method should be able to be used before measurement data is available. An existing model is assumed to represent the real system behavior. Based on this extended availability of “measurement data” in every point of the system, different approaches are established to heuristically assess the suitability of one or more pipes in combination as calibration point(s). These approaches intend to find suitable answers to the question of measurement site selection for this specific case study within a relatively short time and with a reasonable computational effort. As a result, the relevance of the spatial distribution of calibration points is highlighted. Furthermore, particular efficient calibration points are identified and further measurement sites in the underlying network are recommended.
Introduction
The rise in complexity of our models over recent decades also increases the difficulty in assessing their accuracy. This holds also true for calibrating and validating hydrodynamic sewer models. The need for calibration of these models, which are applied to predict the behavior of urban drainage systems, is undisputed in science. In modeling practice however, calibration is still often neglected. In particular, data availability and quality tend to be limiting factors [1,2] and the consequently required sampling campaigns for calibration can increase the economic costs of projects up to an unachievable level for many operators [3].
The model calibration process was subject to manifold studies, e.g., discussing underlying calibration algorithms [4,5], the choice of the calibration variable [6,7], the objective functions used for calibration [8,9], varying model input data [10,11] or uncertainties of various sources and their propagation throughout the model [12,13]. However, guidance regarding the question of measurement site selection for the calibration of hydrodynamic sewer models is still scarce. In other contexts, the question of optimal sensor locations has already been discussed in the early 1980s, for example by Walski [14] for the calibration of water distribution networks.
Regarding more practical aspects of sewer system management, the general conduct of measurements for modeling is discussed by several authors [15,16]. Multi-purpose advice for optimal sensor placement in any kind of urban water system network is presented in the PREPARED project [17], where detailed locations for pollutant measurements are also considered, e.g., the position within the cross-section depending on the flow distribution. The majority of these studies, however, concentrate on conceptual drainage models [1], pluvial flood models [16] or pollutant models [18][19][20], respectively. Focus is laid therein on the quantification of mass fluxes, either to the receiving water bodies via combined sewer overflows or to the wastewater treatment plants. Research activities having the calibration of hydrodynamic models to predict flow conditions (velocity, water levels, peak flows, etc.) as the objective for sensor placement are still limited. In particular, hydrodynamic drainage models require a more elaborate setup process compared to hydrological models. In return, they are able to provide an increased amount of possible output information.
Installing and operating measurement devices in a drainage system is a cost-intensive endeavor that requires careful planning to deal with different and sometimes contradictory requirements. For example, measurement sites in the system's periphery result in data that can differentiate the involved subareas accurately, but do not provide information about the behavior further downstream. In contrast, collecting data near outlets and overflows provides a summarizing signal for large parts of the system but obscures detailed spatial information due to compensation effects of the different substreams. A measurement campaign should therefore meet the challenge of providing a sufficient amount of data for a specified task while being economically viable by using the minimum number of measurement stations to fulfill this task. Only a few existing publications have so far addressed the question of optimal measurement site selection in sewer systems for the objective of calibration. A possible solution for finding optimal measuring locations was investigated by Clemens [21], who performed a mathematical analysis of the model parameterization and the information content of potential measuring locations. Heuristic algorithms to find close-to-optimal results have been established for the design of wastewater monitoring networks for water quality aspects [19,20]. General advice on measurement sites to use for calibration can be found in the PREPARED project [17], derived from other contexts of sensor placement in urban water systems.
The presented research contributes to the problem of identifying feasible measurement sites for the calibration of a hydrodynamic sewer model. As such, this resembles an "experimental design" problem, where the conducting of a measurement campaign represents the actual experiment. The question of how to design this experiment is addressed by regarding a model's reliability with testing different scenarios of underlying calibration data sets.
Because the objective function surface of a calibration depends on numerous inputs for extensive and even medium-sized sewer networks, the global optimum of this function cannot be determined definitively unless unlimited data availability and computational resources are given. Therefore, we used a heuristic model-based approach to enable this analysis prior to the execution of a measurement campaign and to keep the computational effort within reasonable margins.
Furthermore, depending on where calibration data is available, differences occur in the resulting sets of calibration parameters to fit the model results to those measurements. Therefore, the spatial distribution of the calibration measurements in the network topology introduces some amount of uncertainty for the model parameters and consequently results. In order to highlight this effect, different scenarios of data availability for calibration are simulated and sensitivity analyses are carried out. The method is applied to a real world case study in Tyrol, Austria, aiming to improve the planning of measurement campaigns for model calibration.
Materials and Methods
The developed approach requires a baseline system as benchmark to compare different measurement layouts and their effect on the calibration performance. However, as a measurement campaign is planned before a detailed model calibration can be executed, measured values are not available at this early stage and a model-based approach is necessary. For the sake of exemplifying the methodology, an existing model is assumed to represent the real system behavior. Based on this extensive availability of "measurement data" in every point of the system, different heuristic approaches are established to assess the suitability of one or more pipes in combination as calibration point(s). These approaches will be described in more detail in the following Section 2.3.
The existing model is created from the available network and surface data. It was calibrated with a one year measurement series of the water level at the catchment's outlet and tested for plausibility [22].
Case Study
The methodology is demonstrated using an existing hydrodynamic model of Telfs in Tyrol, Austria. Telfs is a municipality about 27 km west of Innsbruck. Located at an altitude of about 634 m above sea level in the valley of the river Inn, Telfs reaches from the river up to the foothills of the Karwendel mountain chain. It can be designated a typical Tyrolean urban settlement. As of September 2017, the population of Telfs was about 15,781 inhabitants [23]. The average annual rainfall is about 1000 mm [24].
Telfs has one remote parcel, called Mösern. This part is about 3 km away and connected in the east of the system. For a better depiction, Mösern is shown in a separate image section in all figures of this paper showing the drainage system.
During a previous measurement campaign in 2014, three rain gauges were installed spatially distributed all over the area of Telfs. Their recordings are now used for this study as model input data. A model has been established and calibrated on measured water levels using the genetic calibration tool of PCSWMM [25]. It has then been tested for plausibility by comparing its simulated discharged volumes to the wastewater treatment plant to measurement data of the plant, further by comparing the simulated to the measured total pumping durations, and also through considering the operator's assessment of the system's behavior. An area of approximately 73 ha in total is connected to the sewer system in the model, with an average imperviousness of 58% [22]. This model is used as a reference system.
The only wastewater treatment plant (WWTP) is located southeast of the town. This plant treats the wastewater of five nearby communities (including Telfs) and its capacity is designed for 40,000 population equivalents. Accordingly, the drainage network of Telfs also has to drain the wastewater of the other association members to the WWTP. Historically, the drainage system is a combined system, which was adapted over time by disconnecting several settlements (i.e., subcatchments) for alternative drainage options. Such options include the direct discharge of stormwater into the river Inn and decentralized methods such as local infiltration. This study focuses on sensor placement for calibrating the combined system. Only noticeably complex parts of the separate system are considered additionally; system parts are considered noticeably complex when modeling their hydraulic behavior requires elements other than conduits and junctions in the hydrodynamic model (e.g., storages or weirs).
A more detailed description of the model is provided by Tscheikner-Gratl et al. [10].
Implementation and Automation
For modeling and hydrodynamic simulation, the software SWMM [26,27] is used. The investigations of this study are based on the variation of calibration parameters controlling the subcatchments' runoff concentration and their imperviousness. In SWMM, these attributes are expressed for each subcatchment by a value for the subcatchment width and the imperviousness, respectively. SWMM represents the subcatchments in a rectangular shape, and therefore the width influences the flow time on the surface and, in consequence, the shape of the hydrograph of each subcatchment. In this work, we clustered subcatchments to assign them to the same calibration parameter, which reduces the total number of calibration parameters significantly. Subcatchments are clustered according to their deviation from a quadratic shape taken from GIS data (the ratio between the coextensive square side length and the width, see Figure 1) and their land-use (imperviousness, see Figure 2) in the uncalibrated model. The initial values of all subcatchments with the same color are then multiplied with the same respective parameter. We used four factors (parameters) for the width and three factors for the imperviousness. The group with the lowest imperviousness consists of only one subcatchment and is therefore assumed to have a negligible impact on calibration. A change in one parameter consequently has an impact on variously spread subcatchments simultaneously.
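The grouping step can be sketched as follows (attribute names and bin edges are hypothetical, not taken from the Telfs model): each subcatchment is binned by its deviation from a square shape and by its imperviousness, and all members of a bin later share one multiplicative calibration factor.

```python
import math

def shape_ratio(area, width):
    # ratio of the coextensive square side length to the SWMM width;
    # 1.0 corresponds to a perfectly square subcatchment
    return math.sqrt(area) / width

def assign_groups(subs, ratio_bins, imp_bins):
    # bin each subcatchment by shape ratio and imperviousness;
    # subcatchments in the same (shape, imperviousness) bin will share
    # the same multiplicative calibration factor
    groups = {}
    for s in subs:
        r = shape_ratio(s["area"], s["width"])
        rg = sum(r > b for b in ratio_bins)
        ig = sum(s["imperv"] > b for b in imp_bins)
        groups[s["name"]] = (rg, ig)
    return groups

subs = [{"name": "S1", "area": 400.0, "width": 20.0, "imperv": 70.0},
        {"name": "S2", "area": 900.0, "width": 10.0, "imperv": 30.0}]
print(assign_groups(subs, ratio_bins=[1.5, 2.5, 4.0], imp_bins=[40.0, 60.0]))
```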
Calibration parameter assignment as well as the subsequently performed calibrations and sensitivity analyses were automated by using the programming language R [28].
To calibrate the model and find a suitable set of parameters, an optimization algorithm based on a Nelder-Mead simplex [29] is used with the objective function of maximizing the Nash-Sutcliffe Efficiency in the regarded calibration point. This optimization algorithm is a derivative free numerical method for nonlinear optimization problems. This algorithm is applicable for the heuristic approach used here, as the calculation of derivatives for this multidimensional problem requires unreasonably large computational efforts and an analytical solution is unavailable.
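A minimal sketch of this calibration loop, assuming SciPy's Nelder-Mead implementation: the objective is the negative Nash-Sutcliffe Efficiency, and the toy "model" below is a stand-in for a SWMM run with the candidate width and imperviousness factors.

```python
import numpy as np
from scipy.optimize import minimize

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Toy stand-in for the hydrodynamic model: simulated water levels scale
# with two parameters. In the real workflow this wrapper would run SWMM
# with the candidate calibration factors instead.
obs = np.array([0.2, 0.5, 1.1, 0.8, 0.4])
base = np.array([0.1, 0.3, 0.6, 0.5, 0.2])

def neg_nse(theta):
    sim = theta[0] * base + theta[1]
    return -nse(obs, sim)

res = minimize(neg_nse, x0=[1.0, 0.0], method="Nelder-Mead")
print(res.x, -res.fun)    # best-fit factors and the achieved NSE
```

The derivative-free simplex only needs objective evaluations, which matches a setup where each evaluation is a full hydrodynamic simulation.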
Approaches to Assess the Suitability of Pipes as Measurement Sites
The identification of suitable locations to conduct measurements can be performed in various ways. This paper presents two main approaches to identify such suitable locations for measurements:
1. Calibration to specific measurement layouts: At first, a calibration for each possible measurement site is executed separately. In a next step, combinations of different measurement locations based on the established results are tested for calibration.
2. Sensitivity analyses: Local sensitivity analyses are carried out to assess which pipes are most sensitive to changes in the input parameters.
These two approaches are explained in more detail in the following subsections. They both rely on the same previously described parameter assignment as well as a precedent selection process for possible measurement sites. This selection is a decision process based on changes in the total inflow to the sewer network in order to restrict the possible measurement sites.
Although the total catchment area is relatively small, the model consists of over 3000 pipes, and each pipe can theoretically be used individually as a measurement site. A first restriction in the choice of potential measurement locations is set in order not to test every single pipe section of the model as a calibration point, and thereby to lower the necessary computational effort. The decision whether a pipe is considered a potential measurement location is based on changes in the total inflow. A pipe is specified as a potential measurement location if a subsequent change in the total inflow occurs, i.e., (exemplarily depicted by the encircled conduits in Figure 3):
• before a junction which is determined as an inflow node of a subcatchment
• before a junction which has more than one incoming or outgoing connected conduit
• in conduits leading to an outfall
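On a toy network, the screening rule can be sketched as below (pipe and node names are made up): a pipe is kept as a candidate when its downstream node is a subcatchment inflow node, has more than two connected conduits, or is an outfall.

```python
from collections import defaultdict

def candidate_pipes(pipes, inflow_nodes, outfall_nodes):
    # pipes: list of (name, from_node, to_node)
    degree = defaultdict(int)
    for _, a, b in pipes:
        degree[a] += 1
        degree[b] += 1
    keep = set()
    for name, _, to in pipes:
        # total inflow changes just downstream: subcatchment inflow node,
        # junction with >2 connected conduits, or an outfall
        if to in inflow_nodes or to in outfall_nodes or degree[to] > 2:
            keep.add(name)
    return keep

pipes = [("C0", "J0", "J1"), ("C1", "J1", "J2"), ("C2", "J2", "J3"),
         ("C3", "J4", "J3"), ("C4", "J3", "OUT")]
# C0 ends at a plain pass-through junction and is screened out
print(candidate_pipes(pipes, inflow_nodes={"J2"}, outfall_nodes={"OUT"}))
```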
Although the total catchment area is relatively small, the model consists of over 3000 pipes and all pipes can theoretically be used individually as a measurement site. A first restriction in the choice of potential measurement locations is set in order not to test every single pipe section of the model as a calibration point, and thereby lower the necessary computational effort. The decision, if a pipe is considered to be a potential measurement location or not, is based on changes in the total inflow. A pipe is specified as a potential measurement location, if a subsequent change in the total inflow occurs. This is exemplarily depicted by means of the encircled conduits in Figure 3: • before a junction, which is determined as an inflow node of a subcatchment • before a junction, which has more than one incoming or outgoing connected conduits • in conduits leading to an outfall This classification leads to a reduction from over 3000 available pipes to 1094 potential measurement locations. Still, this amount would demand unreasonably large computational efforts for a mathematical solution of optimum measurement site selection, as it is used in Clemens [21] with 295 potential measurement locations. Only the simulation results of these 1094 pipes are then This classification leads to a reduction from over 3000 available pipes to 1094 potential measurement locations. Still, this amount would demand unreasonably large computational efforts for a mathematical solution of optimum measurement site selection, as it is used in Clemens [21] with 295 potential measurement locations. Only the simulation results of these 1094 pipes are then compared to their reference values. This comparison is mainly based on the evaluation of the Nash-Sutcliffe-Efficiency (NSE) [30] of the water level time series in each pipe. Additionally, other performance indicators are evaluated in order to increase the significance of the results. 
These include the Index of Agreement (d) [31], the correlation coefficient (r), the root mean square error (rmse) as well as the sum of the squared residuals (ssq). Differences of these values as well as their advantages and disadvantages for calibration can be found in Krause et al. [8].
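For illustration, the five indicators named above can be computed from a simulated and a reference time series as follows. This is a minimal sketch using the standard definitions (NSE after Nash and Sutcliffe, d after Willmott), not code from the study:

```python
import numpy as np

def performance_indicators(sim, obs):
    """Compare a simulated water level time series against its reference.

    Returns the five indicators used in this study: NSE, index of
    agreement d, correlation coefficient r, rmse and ssq.
    """
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    res = sim - obs
    ssq = float(np.sum(res ** 2))                      # sum of squared residuals
    rmse = float(np.sqrt(ssq / len(obs)))              # root mean square error
    nse = 1.0 - ssq / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe efficiency
    # Willmott's index of agreement
    d = 1.0 - ssq / np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    r = float(np.corrcoef(sim, obs)[0, 1])             # Pearson correlation
    return {"NSE": float(nse), "d": float(d), "r": r, "rmse": rmse, "ssq": ssq}
```

A perfect fit yields NSE = d = r = 1 and rmse = ssq = 0, while an NSE of 0 means the simulation is no better than the mean of the observations.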
In order to verify the model behavior and the algorithm for automatic calibration, the procedure of calibration with one calibration point each and the sensitivity analyses are executed with two different rainfall inputs. At first, a consolidated rain series [32] consisting of three real measured rain events, with peaks of 5.4, 9.2 and 4.8 mm/5 min respectively, taken from the existing rain gauges is applied. Then, the procedures are repeated for a design storm event type Euler II with a return period of 5 years, prepared according to Austrian design guidelines [33] with a peak of 12.2 mm/5 min.
Calibration to Specific Measurement Layouts
As a first approach, each of the 1094 remaining potential measurement locations is used to simulate a separate calibration scenario. In each scenario, only the dataset from one measurement site is used for calibration. Figure 4 shows the general scheme of this approach, as well as the way reality is abstracted to the benchmark system used.
The method works as follows (enumerations corresponding to the numbering in Figure 4):
1. The basis is an existing hydrodynamic model of the case study's urban drainage network that has been tested for plausibility. Its simulation results are assumed to represent reality, i.e., to be measurement data of the system behavior. It serves as the reference system for all further investigations and benchmarks in this study.
2. A new uncalibrated model is created from the existing reference model by setting model parameters to typical values based on the analysis of available orthophotos, while applying the same clustering as described in Section 2.3. It is then used to test different scenarios of measurement layouts for calibration. For each scenario, a calibration is carried out using the reference model as measurement. Each scenario therefore results in a newly calibrated model.
3. Simulated water level time series from the newly calibrated models for each scenario are compared to their (assumed) measurements in the benchmark system. In order to ensure a multi-perspective view on the calibration results [9], different objective functions (NSE, d, r, rmse, ssq) are evaluated to compare the simulated with the reference water level time series and thus assess each calibration performance.
4. This approach results in assessing the calibration performance of each calibration point based on the comparisons between model results and assumed system behavior (which are the reference model's simulation results) in all pipes.
As a criterion for sufficient calibration, a threshold of 0.9 for the NSE at the respective measurement station is chosen (1.0 would represent a perfect fit). In a first run, over 1000 model calibrations are performed with one calibration point each. Then, the model performance of each calibrated model is evaluated.
The NSE values of all pipes in the network are evaluated statistically for each calibration scenario to assess the individual suitability of a pipe as a measurement station. These evaluations contain eleven values: the 10%, 25%, 50% (median), 75% and 90% quantiles, the minimum, maximum and mean of the NSEs, the standard deviation over the network, the mean absolute change in the NSEs, and the number of pipes with a NSE > 0.9. The median, the mean absolute change and the number of pipes with a NSE > 0.9 are regarded as the most meaningful values. These values are used to assign an individual prospect of success for an accurate calibration to each measurement station.
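The per-scenario summary could be assembled as in the following sketch, where `nse_after` and `nse_before` are hypothetical arrays holding the per-pipe NSE values of the calibrated and the uncalibrated model:

```python
import numpy as np

def scenario_statistics(nse_after, nse_before):
    """Summarize the NSE values over all pipes for one calibration scenario."""
    nse_after = np.asarray(nse_after, float)
    q = np.quantile(nse_after, [0.10, 0.25, 0.50, 0.75, 0.90])
    return {
        "q10": q[0], "q25": q[1], "median": q[2], "q75": q[3], "q90": q[4],
        "min": nse_after.min(), "max": nse_after.max(),
        "mean": nse_after.mean(), "std": nse_after.std(),
        # the two values regarded, together with the median, as most meaningful
        "mean_abs_change": float(np.mean(np.abs(nse_after - np.asarray(nse_before, float)))),
        "n_above_0.9": int(np.sum(nse_after > 0.9)),
    }
```

Ranking scenarios by the median, the mean absolute change and the count above 0.9 then reproduces the prospect-of-success assignment described above.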
To combine the advantages of different measurement sites, calibration to combined measurement stations is additionally performed, using the results of the first calibration procedure with single pipe measurements. Combinations of different calibration points are treated as a measurement campaign, and calibration proceeds until the NSE exceeds a value of 0.9 in all of these pipes.
With 1094 individual potential measurement locations, a total of ∑_{k=2}^{1094} 1094!/(1094−k)! = 3.12 × 10^2850 combinations is possible. As this represents an unrealistic computational effort, a second restriction in the selection of potential measurement campaigns is made in order to keep the computational effort within reasonable limits. Only systematically sampled measurement combinations are tested for calibration. The combinations are identified according to the following procedure:
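Sums of falling factorials of this size overflow ordinary arithmetic, but their magnitude can be evaluated in log space via `math.lgamma` (ln Γ(n+1) = ln n!). The following sketch, not from the paper, evaluates the base-10 logarithm of the sum exactly for small n and without overflow for n = 1094:

```python
import math

def log10_permutation_sum(n):
    """log10 of sum_{k=2}^{n} n!/(n-k)!, evaluated via log-factorials."""
    ln_n_fact = math.lgamma(n + 1)          # ln n!
    # each term n!/(n-k)! = exp(ln n! - ln (n-k)!); factor out ln n!
    total = sum(math.exp(-math.lgamma(n - k + 1)) for k in range(2, n + 1))
    return (ln_n_fact + math.log(total)) / math.log(10)
```

For n = 3 the sum is 3!/1! + 3!/0! = 12, which the function reproduces; for n = 1094 it confirms an astronomical number of combinations, far beyond any feasible enumeration.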
1. A first pipe a is chosen as a measurement point from the foregoing ranking of calibrations to single pipes. Thereby, calibration points with a resulting high calibration performance are taken into consideration.
2. A second pipe b is selected to enhance the model's agreement with the reference model after it was calibrated to pipe a. For this, attention is paid to pipes showing a poor fit after the calibration to pipe a. Out of the foregoing calibrations to single pipes, a scenario is looked up that results in a good fit for exactly those pipes. The underlying calibration point of this detected scenario is then selected as pipe b, the second measurement station.
3. An automated calibration to those two measurements is executed. The same threshold of NSE > 0.9 has to be fulfilled for both hydrographs.
4. A third suitable pipe c for an additional measurement is determined by regarding the NSE values calculated after the calibration to pipes a and b together. The selection of pipe c follows the same rules as the selection of pipe b.
5. Again, a calibration to pipes a, b and c is performed, until all three reach a NSE of at least 0.9.
6. This scheme can be applied repeatedly to add more calibration points. The stop criterion is the number of measurement sites planned. In this work, we sampled from two up to a maximum of six pipes, which represents the financial constraint on the number of measurement sites.
Figure 5 exemplifies the procedure explained above.
Step 1 (Figure 5a): As a first measurement point the pipe C211 (encircled) is chosen, due to a good fit to the overall network. This calibration result is highly rated, because 824 out of 1094 pipes result in a NSE > 0.9 with a median value of 0.944. However good the fit, there are still negative NSEs occurring in the northern part of the network.
Step 2 (Figure 5b): The choice of an additional measurement site is thus focused on a resulting good fit in those parts, regardless of the fit in the rest of the model. There, the best agreement can be reached by calibrating for pipe 206010.
Step 3 (Figure 5c): A new calibration for C211 as well as for 206010 is performed and results in NSEs shown in Figure 5c. Again, system parts with non-sufficient agreements are identified in the northern part.
Step 4 (Figure 5d): An additional measurement station 206155 (with original results according to Figure 5d) is determined in order to improve those links in particular.
Step 5: A calibration for C211, 206010 and 206155 is conducted and the resulting NSEs are evaluated.
Step 6: Another pipe can be chosen and added to the measurement campaign to improve specific system parts with occurring low NSEs. This procedure is continuously repeated until the wanted number of measurement sites is reached.
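The scheme in steps 1-6 is essentially a greedy set-selection loop. A minimal sketch is given below; `calibrate(points)` (returning per-pipe NSE values after calibrating to the given measurement points) and `single_results` (the per-pipe NSEs from the foregoing single-point calibrations) are hypothetical stand-ins for the study's simulation machinery:

```python
def build_campaign(candidate_pipes, calibrate, single_results,
                   max_sites=6, threshold=0.9):
    """Greedy assembly of a measurement campaign (steps 1-6 above).

    calibrate(points) -> {pipe: NSE}  fit per pipe after calibrating to `points`
    single_results: {point: {pipe: NSE}} fits from single-point calibrations
    """
    # Step 1: start with the best-performing single calibration point
    best_single = max(single_results,
                      key=lambda p: sum(v > threshold for v in single_results[p].values()))
    campaign = [best_single]
    nse = calibrate(campaign)
    while len(campaign) < max_sites:
        # pipes still fitted poorly after the current calibration
        poor = {p for p, v in nse.items() if v <= threshold}
        if not poor:
            break
        # Steps 2/4: pick the single-point scenario that best repairs those pipes
        nxt = max((p for p in candidate_pipes if p not in campaign),
                  key=lambda p: sum(single_results[p].get(q, 0.0) > threshold for q in poor))
        campaign.append(nxt)
        # Steps 3/5: recalibrate to the extended campaign
        nse = calibrate(campaign)
    return campaign, nse
```

On the toy level, if calibrating to one point fixes most pipes but leaves one branch poorly fitted, the loop adds exactly the point known to repair that branch, mirroring the choice of pipes C211, 206010 and 206155 above.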
Sensitivity Analyses
Independent of the straight-forward calibrations described above, sensitivity analyses to the same calibration parameters as in the previous approach are performed [34]. Therefore, 1000 models are created and simulated with random parameter sets complying with set boundary conditions. Then, the simulation results of the randomly created models are compared to the results of the reference model. For this comparison, each pipe is regarded individually, unrelated to the behavior of other pipes. The idea behind is the assumption that pipes that clearly respond to changes in the input parameters are good measurement locations to determine those parameters. Pipes with nearly steady values do not respond to changes in the model parameters and are not suitable for calibration. So the amount of information in a measurement is assumed to increase with the sensitivity of the pipe [35].
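This screening can be sketched as a small Monte Carlo loop. The code below is illustrative only; `simulate(params)` stands in for one run of the hydrodynamic model, and the NSE range per pipe over all random runs serves as the sensitivity measure:

```python
import numpy as np

def sensitivity_ranges(simulate, reference, bounds, n_runs=1000, seed=1):
    """Screen each pipe's sensitivity to random parameter variations.

    simulate(params) -> {pipe: np.ndarray}   stand-in for one model run
    reference: {pipe: np.ndarray}            results of the reference model
    bounds: {param: (lo, hi)}                calibration parameter ranges
    Returns {pipe: NSE range over all runs}; a wide range marks a sensitive
    pipe, i.e. a promising measurement location.
    """
    rng = np.random.default_rng(seed)
    nse_min, nse_max = {}, {}
    for _ in range(n_runs):
        params = {p: rng.uniform(lo, hi) for p, (lo, hi) in bounds.items()}
        for pipe, sim in simulate(params).items():
            obs = reference[pipe]
            nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
            nse_min[pipe] = min(nse_min.get(pipe, np.inf), nse)
            nse_max[pipe] = max(nse_max.get(pipe, -np.inf), nse)
    return {pipe: nse_max[pipe] - nse_min[pipe] for pipe in nse_min}
```

A pipe whose water level barely reacts to the parameter draws returns a range near zero and is discarded as a measurement location, matching the reasoning above.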
Results and Discussion
The following results are based on the evaluation of over 20,000 simulation runs. Each previously described approach will be discussed separately with its respective results. Further comparisons and integrated considerations are also given at the end of this chapter.
Calibration to Specific Measurement Layouts
To compare the results of calibrations with scenarios of varying measurement data samples (single as well as combined calibration points), different common statistical values for the NSEs in all pipes after calibration are evaluated to assess the calibration performance.
This computational run was the most CPU-intensive, as over 7500 simulations were executed during the optimization procedures. Even so, for the 1094 calibration scenarios to single pipe measurements with the measured rainfall, only 299 succeeded in a NSE > 0.9 at the regarded calibration point. For the calibration scenarios with the design storm, only 206 out of 1094 calibrations were successful. Figure 6 shows the network with each pipe colored according to the calibration performance after the model has been successfully calibrated to this pipe, using the measured rain events as input. To represent the calibration performance, the number of pipes that exceed a NSE of 0.9 after model calibration is the statistic presented in Figure 6.
Figure 6. Number of pipes with NSE > 0.9 after calibration to the respective pipe (1094 pipes are the maximum possible; pipes which already exceeded a NSE of 0.9 in the uncalibrated model are depicted as "cal. not necessary"; calibration runs that did not reach the threshold of 0.9 at the calibration point are depicted as "cal. not poss."; pipes colored according to "no calibration" are not considered as possible measurement locations and therefore not evaluated).
Figure 6 also shows where calibration is not necessary or not possible. The uncalibrated model was a rough estimation of the model parameters. It already showed a good agreement (NSE > 0.9) for those pipes depicted as "cal. not necessary".
Therefore, calibrations with the aim of improving the agreement in those pipes are not necessary. Further pipes are depicted as "cal. not poss.", meaning that a calibration to these pipes was not possible: when they were considered as calibration points, the optimization algorithm could not reach the determined threshold and thus could not meet the calibration criteria.
For a better understanding of Figure 6, two points of the network are discussed exemplarily in the following. The two outlets to the wastewater treatment plant are colored in light green and blue, respectively. Considering the northern (light green) pipe as a single measurement station, between 718 and 876 pipes result in a NSE > 0.9 when comparing the calibrated model to the reference model. The other (blue) outlet has not been considered as a calibration point, because this pipe already exceeded the threshold of NSE = 0.9 when comparing the uncalibrated model to the reference model. Consequently, an optimization aiming at maximizing the NSE at this point is not necessary. Figure 6 does not allow drawing significant conclusions about the general location of such sites. A slight tendency can be made out for efficient calibration points to be located downstream. Nevertheless, high performances are also indicated occasionally with calibration points at upstream ends of pipe branches.
In Figure 7a, the number of pipes with a NSE > 0.9 is plotted. Figure 7b shows the median of all evaluated NSE values after a calibration, depending on the connected impervious area to the applied calibration point. In Figure 7c, the averaged absolute change in the evaluated NSE compared to the uncalibrated model is shown.
Other evaluations of relationships in the calibration performance, e.g., dependencies on the diameter or the stream hierarchy of the measured pipe (the calibration point), show similarly scattered values.
The calibration approach is continued with the sampling of different pipes to a measurement campaign. The presented results of the calibrations using multiple calibration points simultaneously are restricted to the results of one of the established combinations. This measurement campaign consists of five calibration points and resulted in the best model performance compared to the other investigated combinations (11 in total). The resulting NSEs in all pipes are shown in Figure 8.
1025 out of 1094 evaluated links (93.7%) result in a NSE > 0.9, with a median value of 0.97. Only six pipes result in a NSE < 0, indicating that the mean value of their simulated water level time series would provide a higher NSE than the predicted values. These numbers represent a nearly perfect agreement between the model and the reference model after calibration. This measurement campaign is therefore highly rated for applying the calibration procedure used here.
Sensitivity Analyses
The sensitivity analyses provide recognizable tendencies, where data collection for calibration appears to be efficient. Figure 9 shows the network with each pipe colored according to the resulting ranges of five different performance indicators. The innermost color represents the range of the NSE, the outermost stands for the range of d. The closer to the red end of the spectrum that a pipe is colored, the more sensitive it is to changes in the calibration parameters. Pipes directly connected to high inflow rates (e.g., from large subcatchments) and pipes lying more downstream and/or connected to outfalls show high sensitivities to random parameter changes. Therefore, they are considered as recommended measurement locations. Their sensitivities indicate a high calibration performance and model performance if they are used as calibration points. The two pipes with the highest (206196) and lowest (205050) occurring range are highlighted in Figure 9.
For a more detailed depiction, Figure 10 shows the resulting water level time series of the reference model compared to the variations of the random models for these pipes. Periods with dry weather flow (values below the horizontal mark) are neglected when calculating the NSE (and all other objective functions) in order to prevent biased results caused by a good data fitting during quasi-static low flow periods.
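The masking of dry weather periods when computing the NSE can be sketched as follows; `dw_level` stands for the horizontal mark in Figure 10 (an assumed threshold name, not from the paper):

```python
import numpy as np

def nse_wet_weather(sim, obs, dw_level):
    """NSE of a water level series, ignoring dry weather flow periods.

    Time steps where the reference level stays below dw_level are masked
    out, so quasi-static low-flow periods cannot inflate the fit.
    """
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    wet = obs >= dw_level            # keep only wet weather time steps
    sim, obs = sim[wet], obs[wet]
    return float(1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2))
```

With this mask, two series that agree during storm peaks but differ during dry weather still score a perfect NSE, which is exactly the bias the masking is meant to prevent in the opposite direction.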
The highest range of the NSE due to a random parameter variation occurs in pipe 206196 ( Figure 10, upper graph). The accordance between random models and the reference model ranges from values of −8.52 to 0.21. Conversely, pipe 205050 ( Figure 10, lower graph) shows the lowest sensitivity to parameter changes. Thus, the resulting flow depth course in pipe 205050 is rather independent of the calibration parameters whereas results for pipe 206196 strongly depend on the parameter choice. Both pipes are located upstream at the very beginning of a branch. Pipe 206196 drains a large subcatchment with higher inflow rates while only a small area is connected to pipe 205050.
Final Recommendations for Measurement Sites
Finally, several efficient calibration points are identified for this specific case study. The final recommendations are shown in Figure 11, where the results from the different approaches (calibrations and sensitivity analyses) are combined. They are mainly located at the end of collector branches of the combined sewer system (MT20270, 161004-N, OA3010, C62, OA2310). In separated drainage networks, larger subcatchments should be monitored at locations prior to outlets discharging into receiving water bodies (512030, 194010). However, it is considered important not only to measure right before outlets, but also to collect calibration data in collector pipes spread over the entire network. This allows the identification of significant model parameters in terms of influential subcatchments on a more detailed level [36].
Apart from the results of the previously explained approaches, external inflows should be quantified for additional reasons of economy. This includes known discharges from industrial companies (226120) as well as inflows from external catchments, i.e., adjacent municipalities (here inflow_WM, inflow_Oberhofen) into the network. As the wastewater treatment plant is operated jointly, related expenses can be allocated to each of the municipalities depending on those measurements.
Additionally, the case study's operator roughly estimated favorable locations of measurement sites based on his empirical knowledge of the drainage network. There is a good agreement between the pipes theoretically recommended as measurement sites by means of the method described above and those suggested by the operator. This agreement confirms the plausibility of the approach. However, additional suggestions for measurements are locations with known operational problems (e.g., capacity overload).
All of the approaches use seven subcatchment-related parameters for calibration. This reduces the degrees of freedom of the automatic calibration algorithm to a reasonable extent. Consequently, the change of one parameter affects various subcatchments all over the catchment simultaneously and a differentiation between subcatchments lying upstream and downstream to the measurement site lapses. The results show a certain independence of the consequences of parameter changes from the actual location of the measurement. Thus, no correlation between characteristics of the measurement site (e.g., location, diameter, flow, etc.) and calibration performance could be found.
Furthermore, although the numerical optimization algorithm used for calibration may find suitable solutions efficiently, it is likely to converge to a local optimum and to miss the global optimum. It is therefore advisable to perform the calibration with different initial conditions. However, the performed sensitivity analyses allow the derivation of systematic trends. High sensitivities can be seen in collector sewers and pipes with high inflow rates, meaning that those pipes are good locations for calibration. This observation corresponds to the advice on sensor placement on a network scale given in Skjetne et al. [17].
Conclusions and Outlook
This paper presents a heuristic approach for an experimental design of measurement campaigns and allows the identification of measurement sites for an efficient model calibration. The testing of different measurement layouts, the evaluation of the calibration performance as well as the impacts of parameter variations on the water level time series in every pipe is enabled by a model-based methodology. This makes it especially useful for setting up a completely new measurement campaign with no or few existing previous measurement data.
Even though this case study is a small catchment, its model contains over 3000 pipes, and the computational effort had to be kept manageable by setting some restrictions. Firstly, focus was placed on the decisive network parts, i.e., most of the independent sewer branches without a connection to the WWTP (stormwater sewers) were neglected. Secondly, the calibration algorithm used here was implemented with seven calibration parameters for the whole model, all of which adapt subcatchment-related characteristics of multiple subcatchments simultaneously. Thirdly, neither every single pipe nor every possible combination of pipes was tested as a calibration scenario. Only systematically determined calibration points and combinations are considered in the evaluations.
The presented heuristic approach may not identify the optimal solution for a measurement campaign. Nevertheless, it is sufficient for comparing different possible measurement sites for calibration. This allows a suitable answer to the question of measurement site selection to be found in a relatively short time frame and with an appropriate computational effort.
As a result, 10 pipe sections are recommended as measurement sites. They not only meet the criteria of efficiency for calibration (i.e., the six measurement sites established with the methodology presented here), but also cover operational and economic aspects (four additional measurements). Thus, this study argues for calibration against strategically distributed measurements within the network topology, in contrast to favoring data collection in pipes where operational problems occur. The results indicate an increased model performance when calibration data are available for collector sewers and for sewers placed immediately after inlets with high inflow rates. This exemplifies the crucial role of high inflow rates in hydrodynamic model predictions (e.g., flooding incidents, combined sewer overflows).
As an outlook, further studies could enhance the methodology by increasing the number of calibration parameters by considering the subcatchment's location within the network (e.g., upstream or downstream of the regarded calibration point) or the inclusion of redundant measurements in order to cope with possible failures of sensors or errors in the measurements.
Research of the University of Innsbruck. Bmst. Ing. Martin Riedl, divisional head of urban water infrastructure of Telfs enabled the cooperation with the network operator of the case study.
Author Contributions: All authors substantially contributed in conceiving and designing of the approach and realizing this manuscript. Franz Tscheikner-Gratl prepared the input data. Tanja Vonach, Franz Tscheikner-Gratl and Manfred Kleidorfer conceived and designed the approaches; Tanja Vonach performed the simulations and analyzed the data during her master thesis. Wolfgang Rauch contributed analysis tools and technical resources. Manfred Kleidorfer and Wolfgang Rauch supervised the entire research. Tanja Vonach wrote the paper. All authors have read and approved the final manuscript.
Conflicts of Interest:
The founding sponsors had no role in the design of the study, in the collection, analyses, or interpretation of data, in the writing of the manuscript, and in the decision to publish the results.
Positive definite maps, representations and frames
We present a unified approach to the construction of representations and intertwining operators. We apply it to $C^*$-algebras, groups, Gabor type unitary systems and wavelets. We give an application of our method to the theory of frames, and we prove a general dilation theorem which is in turn applied to specific cases; in this way we obtain a dilation theorem for wavelets.
Introduction
Engineering problems in time-frequency analysis of coherent vector expansions, Gabor bases, wavelets based on scaling and integral translations, and multiresolution algorithms in signal processing are generally not thought to be related to operator algebras. In this paper, we show nonetheless that a fundamental idea of Kolmogorov adds clarity to known constructions in operator algebra theory, and moreover is the key to an extension of recent results in the more applied areas that we enumerated above. Of our original results (see section 5 below) we highlight a new algorithm for the construction of certain orthonormal frames of wavelet type. Our paper proposes a general method of construction of representations of various algebraic structures as operators on Hilbert spaces. Our goal is to show how some well known constructions of representations fit into the same framework and are consequences of a general result. Among the structures considered, we mention C * -algebras, groups, Gabor type unitary systems and wavelet representations.
In operator theory, the GNS construction producing representations of C * -algebras is a fundamental tool (see [BraRo]). In harmonic analysis unitary representations of groups can be constructed when a function of positive type is present (see [Fol]). Representations are ubiquitous also in the theory of wavelets and frames (see [HL], [Jor98]). We will see how these various results have in fact a common ground -a classical theorem of Kolmogorov (theorem 2.2), also known in the literature as the Kolmogorov decomposition of positive definite kernels. We follow here the ideas introduced in [EvLe]. It is shown there that the Kolmogorov theorem gives a unified treatment of several important dilation theorems such as the GNS-Stinespring construction for C * -algebras, the Naimark-Sz. Nagy unitary dilation of positive definite functions on groups, the construction of Fock spaces and the algebras of canonical commutation and anticommutation relations. Kolmogorov's result was used also by Sz. Nagy and C. Foias in dilation theory, for the commutant lifting theorem ( [SzF68], [SzF70]) which in turn was a key idea used by D. Sarason to obtain a solution to the Nevanlinna-Pick interpolation problem ( [Sar67]). For a more complete account of the history and applications of Kolmogorov's result, we refer to [C96].
We will indicate how this technique can be used also for construction of wavelet representations and Gabor type unitary systems.
More general constructions for Hermitian kernels are also possible and they are based on Krein spaces (see [C97]).
In section 2 we review the general result of Kolmogorov and we show how it can be used for the GNS construction and for positive definite maps on groups. Then we apply it to Gabor type unitary systems and we obtain unitary representations and for wavelets we get the cyclic representations introduced in [Jor98].
Section 3 concerns operators compatible with the representations defined in section 2, called intertwining operators. Again, the starting point is a general theorem (theorem 3.2). We consider some particular cases and study how the intertwining operators will be compatible with the additional structure that appears.
In section 4 we analyze some connections between representations and frames. We recall that a set {x_n | n ∈ N} of vectors in a Hilbert space H is called a frame for the Hilbert space H if there are some positive constants A and B such that
$$A\|f\|^2 \leq \sum_{n} |\langle f \mid x_n\rangle|^2 \leq B\|f\|^2, \qquad (f \in H).$$
When A = B = 1 we call it a normalized tight frame.
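As a concrete numerical illustration (a hypothetical example, not from the paper), the three "Mercedes-Benz" unit vectors in R^2 form a tight frame with A = B = 3/2, and rescaling by sqrt(2/3) produces a normalized tight frame:

```python
import numpy as np

# Three unit vectors at angles pi/2, pi/2 + 2pi/3, pi/2 + 4pi/3 in R^2
# (a hypothetical example): they form a tight frame with A = B = 3/2.
angles = np.array([np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3])
frame = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # rows are x_n

# The frame operator S f = sum_n <f, x_n> x_n has matrix frame.T @ frame.
S = frame.T @ frame
assert np.allclose(S, 1.5 * np.eye(2))          # tight frame, A = B = 3/2

ntf = np.sqrt(2.0 / 3.0) * frame                 # rescale the vectors
assert np.allclose(ntf.T @ ntf, np.eye(2))       # normalized tight, A = B = 1
```

The frame operator being a multiple of the identity is exactly the tightness condition; rescaling so that the multiple is 1 gives the normalized tight frame used throughout the paper.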
It is known that any normalized tight frame is the projection of an orthonormal basis of a bigger Hilbert space (see [HL]). We will prove that the normalized tight frames can be dilated to orthonormal bases in a way that is compatible with the representations defined in section 2. We will get as immediate consequences the dilation theorems for groups and Gabor type unitary systems introduced in [HL].
In the last section we consider the case of wavelets obtained from a multiresolution analysis. It is known (see [Dau92]) that, unless some restrictions are imposed on the low-pass filter that starts the MRA construction, the wavelets obtained do not form an orthonormal basis but a normalized tight frame. Since such frames can be dilated to orthonormal bases, a natural question is whether the dilation preserves the multiresolution structure. The answer is affirmative and it is given in theorem 5.2 and more concretely in theorem 5.3. In this way we obtain "wavelets" in a Hilbert space bigger than L^2(R).
Positive definite maps and representations
We begin this section with a general result of Kolmogorov ( [EvLe], [EvKa]). Then we consider several structures and show how to obtain representations from this general theorem.
Definition 2.1. Let X be a nonempty set. We say that a map K : X × X → C is positive definite, and denote this by 0 ≤ K, if
$$\sum_{i,j=1}^{n} \xi_i \overline{\xi_j} K(x_i, x_j) \geq 0$$
for all n ∈ N, x_1, ..., x_n ∈ X and ξ_1, ..., ξ_n ∈ C.

Theorem 2.2. [Kolmogorov] If K : X × X → C is positive definite, then there exist a Hilbert space H_K and a map v_K : X → H_K such that
$$\langle v_K(x) \mid v_K(y) \rangle = K(x, y), \qquad (x, y \in X),$$
and the linear span of {v_K(x) | x ∈ X} is dense in H_K. Moreover, H_K and v_K are unique up to unitary isomorphisms.
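For a finite set X, Kolmogorov's decomposition amounts to factoring the Gram matrix (K(x_i, x_j))_{i,j}; the following is a minimal numerical sketch (the kernel chosen is a hypothetical example):

```python
import numpy as np

def kolmogorov_vectors(K):
    """Given a positive semidefinite matrix K, return a matrix whose rows
    v[x] satisfy <v[x], v[y]> = K[x, y] (a finite Kolmogorov decomposition)."""
    w, U = np.linalg.eigh(K)            # K = U diag(w) U*
    w = np.clip(w, 0.0, None)           # guard against tiny negative round-off
    return U @ np.diag(np.sqrt(w))      # rows are the vectors v_K(x)

# A positive definite kernel on a 3-point set (hypothetical example;
# it is weakly diagonally dominant, hence positive semidefinite).
K = np.array([[2.0, 1.0, 0.5],
              [1.0, 2.0, 1.0],
              [0.5, 1.0, 2.0]])

V = kolmogorov_vectors(K)
# The inner products of the constructed vectors reproduce the kernel.
assert np.allclose(V @ V.conj().T, K)
```

The Hilbert space H_K is here simply the span of the rows; uniqueness up to unitaries corresponds to the freedom in choosing the factorization.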
Remark 2.3. Kolmogorov's theorem is valid also for operator-valued positive definite maps and in this form it can be applied for the Stinespring construction and the Naimark-Sz.Nagy dilation. For details consult [EvLe] and [EvKa]. In this paper, for the application to wavelets and Gabor frames, we will need only the more particular version of Kolmogorov's theorem that we mentioned before.
Definition 2.4. If K : X × X → C is positive definite then we call [H K , v K ] the representation associated to K.
We note that Kolmogorov's theorem is purely set theoretic; there is no structure on X. We expect that, if X has some additional structure on it and if we assume some compatibility between the positive definite map K and this structure, then the representation associated to K will also be in agreement with the structure of X. In the next examples we will see that this is indeed the case and we review the technique in the case of C * -algebras and groups.
Example 2.5. [C * -algebras and the GNS construction] We consider now the case when X = A is a C * -algebra and prove that we can obtain the well known GNS construction from Kolmogorov's theorem.
Theorem 2.6. [The GNS construction] If A is a C*-algebra and ϕ is a positive linear functional on A, then there exists a representation π of A on a Hilbert space H that has a cyclic vector ξ such that
$$\varphi(x) = \langle \pi(x)\xi \mid \xi \rangle, \qquad (x \in A).$$

Proof. The idea is to define K : A × A → C by K(x, y) = ϕ(y*x), (x, y ∈ A). We can use Kolmogorov's theorem to obtain the Hilbert space H_K and the map v_K : A → H_K. For a fixed x ∈ A, define the operator π(x) as follows:
$$\pi(x) v_K(y) = v_K(xy), \qquad (y \in A),$$
and extend by linearity. Then everything checks out.
Example 2.7. [Groups and unitary representations] Take X = G a group. We call K : G × G → C a group positive definite map if K is positive definite and K(x, y) = K(zx, zy) for all x, y, z ∈ G. We note that such a positive definite map K is uniquely determined by its restriction φ(x) = K(x, 1) and φ is a function of positive type (see [Fol]). The proof of theorem 2.8 will show how the well known correspondence between functions of positive type and unitary representations of groups can be regarded as a consequence of Kolmogorov's theorem.
Theorem 2.8. Let G be a group and K a group positive definite map on G. Then there exists a unitary representation π_K of G on a Hilbert space H_K with a cyclic vector ξ_0 ∈ H_K such that
$$K(x, 1) = \langle \pi_K(x)\xi_0 \mid \xi_0 \rangle, \qquad (x \in G).$$

Proof. The proof works exactly as in the case of C*-algebras: consider [H_K, v_K] the representation associated to K by Kolmogorov's theorem. Define the operators π_K(x) for x ∈ G as follows:
$$\pi_K(x) v_K(y) = v_K(xy), \qquad (y \in G),$$
take ξ_0 = v_K(1), and extend by linearity.
Remark 2.9. Note that in the proof of theorem 2.8 we used the representation associated to K and we see that, when K is a group positive definite map, this representation has the unitary representation π K attached to it. The same observation can be done for the GNS construction: the representation of the C * -algebra is attached to the representation v K . This confirms our expectation: when the positive definite map has some compatibility with the existent structure on X, this compatibility projects a nice structure on the associated representation [H K , v K ]. This is the idea that we use throughout this section.
Example 2.10. [Gabor type unitary systems] We recall that a Gabor system is associated to two positive constants a, b > 0 and a function g ∈ L^2(R) and is defined by
$$g_{m,n}(x) = e^{2\pi i m b x} g(x - na), \qquad (m, n \in \mathbb{Z}).$$
The Gabor systems are one of the major subjects in the study of frames and wavelet theory. If we define the unitary operators U, V on L^2(R),
$$(Uf)(x) = e^{2\pi i b x} f(x), \qquad (Vf)(x) = f(x - a), \qquad (f \in L^2(\mathbb{R})),$$
then g_{m,n} = U^m V^n g, (m, n ∈ Z), and U and V satisfy the relation
$$UV = e^{2\pi i a b}\, VU.$$
Following [HL], if U and V are unitary operators on a Hilbert space H that verify the relation UV = λVU for some unimodular scalar λ, we then call {U^m V^n | m, n ∈ Z} a Gabor type unitary system. We will prove that these systems fit into our general framework and we construct representations for them.
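A finite-dimensional analogue of such a system (a hypothetical illustration, not from the paper) is given by the clock and shift matrices on C^d, which satisfy UV = λVU with λ = e^{2πi/d}:

```python
import numpy as np

# Clock-and-shift pair on C^d (hypothetical finite-dimensional example):
# a Gabor type unitary system with commutation scalar lam = exp(2*pi*i/d).
d = 5
lam = np.exp(2j * np.pi / d)
U = np.diag(lam ** np.arange(d))        # clock: U e_k = lam^k e_k
V = np.roll(np.eye(d), 1, axis=0)       # shift: V e_k = e_{(k+1) mod d}

# Both are unitary and satisfy the Gabor type relation U V = lam V U.
assert np.allclose(U @ U.conj().T, np.eye(d))
assert np.allclose(V @ V.conj().T, np.eye(d))
assert np.allclose(U @ V, lam * (V @ U))
```

Any unit vector is then cyclic for {U^m V^n}, so this toy system carries the positive definite map of remark 2.12 in a finite-dimensional setting.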
Proof. Let v_K : Z^2 → H_K be the representation associated to K. Define the operators U and V on the vectors v_K(m, n) and then extend by linearity. We check that U, V are well defined and isometric. Take a_i ∈ C, (m_i, n_i) ∈ Z^2, (i ∈ {1, ..., p}).
a i a j λ −mi λ −mj K((m i , n i + 1), (m j , n j + 1)) A similar calculation shows that U is well defined and isometric. Since the linear span of the vectors v K (m, n) is dense in H K , we can extend U and V to unitaries on H K .
Remark 2.12. Any Gabor type unitary system U, V on a Hilbert space H, that has a vector ξ_0 ∈ H with the property that the linear span of {U^m V^n ξ_0 | m, n ∈ Z} is dense in H, gives rise to a positive definite map K on Z^2 that satisfies (2.1),(2.2) as follows:
$$K((m, n), (m', n')) = \langle U^m V^n \xi_0 \mid U^{m'} V^{n'} \xi_0 \rangle, \qquad (m, n, m', n' \in \mathbb{Z}).$$
(2.1),(2.2) are just immediate consequences of the fact that U and V are unitary and UV = λVU.
Example 2.13. [Wavelet representations] We recall briefly some facts about wavelet representations. Wavelet theory deals with two unitary operators U and T on L^2(R), corresponding to the integer N ≥ 2 called the scale:
$$(Uf)(x) = \frac{1}{\sqrt{N}} f\Big(\frac{x}{N}\Big), \qquad (Tf)(x) = f(x-1), \qquad (f \in L^2(\mathbb{R})).$$
A wavelet is a function ψ ∈ L^2(R) such that
$$\{U^m T^n \psi \mid m, n \in \mathbb{Z}\}$$
is an orthonormal basis for L^2(R). One way to construct wavelets is by multiresolutions and scaling functions (see [Dau92]). Scaling functions satisfy equations of the form
$$\varphi(x) = \sum_{k} a_k \varphi(Nx - k), \qquad (2.6)$$
where a_k are complex coefficients. The scaling equation can be reformulated using representations. There is a representation of L^∞(T) (T is the unit circle) on L^2(R) given by
$$\widehat{\pi(f)\xi} = f \hat{\xi}, \qquad (f \in L^\infty(\mathbb{T}),\ \xi \in L^2(\mathbb{R}))$$
($\hat{\xi}$ denotes the Fourier transform of ξ and functions on T are identified with 2π-periodic functions on R). Using this representation, (2.6) can be rewritten as
$$U\varphi = \pi(m_0)\varphi,$$
where m_0 ∈ L^∞(T) is called a low-pass filter. Also, the representation satisfies
$$U\pi(f)U^{-1} = \pi(f(z^N)), \qquad (f \in L^\infty(\mathbb{T})).$$
(U, π, L^2(R), ϕ) is called the wavelet representation with scaling function ϕ.
The wavelet theory has shown a strong interconnection between properties of the scaling function ϕ and spectral properties of the transfer operator associated to the low-pass filter m_0:
$$R_{m_0,m_0}h(z) = \frac{1}{N} \sum_{w^N = z} |m_0(w)|^2 h(w), \qquad (h \in L^1(\mathbb{T}),\ z \in \mathbb{T}),$$
where T is endowed with the normalized Haar measure. For more information on this we refer the reader to [BraJo]. In particular, functions that are harmonic with respect to R_{m_0,m_0}, i.e. R_{m_0,m_0}h = h, play an important role in the theory. We recall here a theorem from [Jor98] which establishes the link between functions which are harmonic with respect to R_{m_0,m_0} and wavelet representations, because it is another particularized instance of Kolmogorov's theorem.
Moreover, this is unique up to unitary equivalence.
Proof. We give here only a sketch of the proof that uses Kolmogorov's theorem; the rest consists of calculations which can be found in [Jor98].
Define the operators U and π(f) on these vectors and extend by linearity and density.
Everything can be checked out as the reader may see in [Jor98].
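The transfer operator R_{m0,m0} and its harmonic functions can be probed numerically; the following sketch (with the Haar filter m0(z) = (1+z)/sqrt(2) for N = 2 as a hypothetical choice) checks that the constant function h = 1 is harmonic:

```python
import numpy as np

# Transfer operator R_{m0,m0} h(z) = (1/N) * sum_{w^N = z} |m0(w)|^2 h(w)
# for the Haar low-pass filter (hypothetical choice), scale N = 2.
N = 2
m0 = lambda z: (1 + z) / np.sqrt(2)

def transfer(h, z):
    """Apply R_{m0,m0} to h at a point z on the unit circle."""
    w0 = z ** (1.0 / N)                                   # one N-th root of z
    roots = [w0 * np.exp(2j * np.pi * k / N) for k in range(N)]
    return sum(abs(m0(w)) ** 2 * h(w) for w in roots) / N

# h = 1 is harmonic: R_{m0,m0} 1 = 1 at sample points of the circle.
rng = np.random.default_rng(0)
for t in rng.uniform(0, 2 * np.pi, 5):
    z = np.exp(1j * t)
    assert np.isclose(transfer(lambda w: 1.0, z), 1.0)
```

The identity R_{m0,m0}1 = 1 here is exactly the quadrature mirror condition |m0(w)|^2 + |m0(-w)|^2 = 2 for the Haar filter.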
Intertwining operators
In the previous section we saw how positive definite maps induce representations on Hilbert spaces. Now we will show that intertwining operators can be constructed in a similar way from maps L : X × X → C which satisfy some boundedness condition. We will also see that, when X has some structure on it and L is compatible with this structure, then the intertwining operator induced by L will be compatible with the extra structure existent on the induced representations, i.e. the operator is indeed intertwining.
The format of this section is similar to the format of the previous one. We begin with a general, set theoretic result and then particularize it to various structures to obtain more information.
Definition 3.1. Consider two positive definite maps K, K' : X × X → C and L : X × X → C (not necessarily positive definite). We say that L is bounded with respect to K and K' if there is a constant c > 0 such that
$$\Big| \sum_{i=1}^{m}\sum_{j=1}^{n} \xi_i \overline{\eta_j} L(x_i, y_j) \Big|^2 \leq c \Big( \sum_{i,j=1}^{m} \xi_i \overline{\xi_j} K(x_i, x_j) \Big) \Big( \sum_{i,j=1}^{n} \eta_i \overline{\eta_j} K'(y_i, y_j) \Big)$$
for all x_i, y_j ∈ X, ξ_i, η_j ∈ C, i ∈ {1, ..., m}, j ∈ {1, ..., n}. We denote this by L^2 ≤ cKK'.

Theorem 3.2. Suppose X is a nonempty set and K, K' are positive definite maps on X. If L : X × X → C and L^2 ≤ cKK' for some c > 0, then there exists a unique bounded linear operator S : H_K → H_{K'} such that
$$\langle S v_K(x) \mid v_{K'}(y) \rangle = L(x, y), \qquad (x, y \in X), \qquad (3.2)$$
and ‖S‖ ≤ √c. Conversely, every bounded operator S : H_K → H_{K'} determines, via (3.2), a unique map L with L^2 ≤ ‖S‖^2 KK'.

Proof. Define the sesquilinear map B on the linear spans by
$$B\Big( \sum_i \xi_i v_K(x_i), \sum_j \eta_j v_{K'}(y_j) \Big) = \sum_{i,j} \xi_i \overline{\eta_j} L(x_i, y_j).$$
The boundedness assumption shows that B is a well defined bounded sesquilinear map which can be extended (by the density properties of v_K and v_{K'}) to a bounded sesquilinear map B : H_K × H_{K'} → C; S is the operator associated to B. In particular, one obtains (3.2). The uniqueness is clear because the spans of {v_K(x) | x ∈ X} and {v_{K'}(y) | y ∈ X} are dense. The converse is also easy: one needs to check that the map L defined by (3.2) satisfies L^2 ≤ ‖S‖^2 KK', but this is a consequence of Schwarz's inequality.
Definition 3.3. We call the operator S associated to L in theorem 3.2, the intertwining operator associated to L.
We will also be interested in subrepresentations and in the commutant of a representation. In these instances we will work with only one positive definite map K. We give here a definition which will be appropriate for these situations.
Definition 3.4. Consider K, K', two positive definite maps on a nonempty set X and a constant c > 0. We write K' ≤ cK if cK − K' is a positive definite map.

Proposition 3.5. If K and K' are positive definite maps and c > 0, then K' ≤ cK if and only if K'^2 ≤ c^2 KK.

Corollary 3.6. Suppose K is positive definite on X. Then, for every positive definite map K' with K' ≤ cK for some c > 0, there exists a unique positive operator S : H_K → H_K that satisfies
$$\langle S v_K(x) \mid v_K(y) \rangle = K'(x, y), \qquad (x, y \in X). \qquad (3.3)$$
Moreover, ‖S‖ ≤ c. Conversely, for every positive operator S on H_K there is a unique positive definite map on X that satisfies (3.3). In addition, K' ≤ ‖S‖K.
Proof. Using proposition 3.5 and theorem 3.2, we find an operator S on H_K that satisfies (3.3) and ‖S‖ ≤ c. S is positive because
$$\Big\langle S \sum_i \xi_i v_K(x_i) \;\Big|\; \sum_j \xi_j v_K(x_j) \Big\rangle = \sum_{i,j} \xi_i \overline{\xi_j} K'(x_i, x_j) \geq 0.$$
For the converse, when S is given, theorem 3.2 shows that there is a K' satisfying (3.3) and K'^2 ≤ ‖S‖^2 KK. K' is positive because S is, and proposition 3.5 implies K' ≤ ‖S‖K.
In the remainder of this section we apply theorem 3.2 to the situations when X has some additional structure on it and see how the intertwining operators are in compliance with the extra structure of the representations.
Example 3.7. [C*-algebras] Consider now X = A, a C*-algebra. We saw in example 2.5 that, when the positive definite map K : A × A → C is given by a positive functional ϕ : A → C, K(x, y) = ϕ(y*x), (x, y ∈ A), then the representation induced by K has the GNS construction attached to it. We want to see for what functions L : A × A → C the associated intertwining operator will intertwine the GNS representations.

Theorem 3.8. Let A be a C*-algebra and ϕ, ϕ' two positive functionals on A. Suppose that ϕ_0 : A → C is linear and ϕ_0^2 ≤ cϕϕ' for some c > 0 in the sense that
(3.4) |ϕ_0(y*x)|^2 ≤ cϕ(x*x)ϕ'(y*y), (x, y ∈ A).
Then there exists a unique bounded operator S : H_ϕ → H_{ϕ'} such that
(3.5) Sπ_ϕ(x) = π_{ϕ'}(x)S, (x ∈ A), and
(3.6) ⟨S v_ϕ(x) | v_{ϕ'}(y)⟩ = ϕ_0(y*x), (x, y ∈ A)
(here (H_ϕ, π_ϕ) and (H_{ϕ'}, π_{ϕ'}) are the GNS representations associated to ϕ and ϕ' respectively (see theorem 2.6)). Moreover ‖S‖ ≤ √c. Conversely, if S : H_ϕ → H_{ϕ'} is a bounded operator that satisfies (3.5), then there is a unique linear map ϕ_0 : A → C that satisfies (3.6). In addition (3.4) holds with c = ‖S‖^2.
As a corollary we deduce a basic fact about positive operators in the commutant of the GNS representation (see [BraRo]).
Corollary 3.9. Let ϕ, ϕ' be two positive functionals on a C*-algebra A, ϕ' ≤ cϕ for some c > 0 (i.e. ϕ'(x) ≤ cϕ(x) for all positive x ∈ A). There exists a unique positive linear operator S in the commutant of the GNS representation corresponding to ϕ such that
(3.7) ⟨S v_ϕ(x) | v_ϕ(y)⟩ = ϕ'(y*x), (x, y ∈ A).
Conversely, for any positive operator S in the commutant of π_ϕ(A), there is a unique positive functional ϕ' on A such that (3.7) holds and ϕ' ≤ ‖S‖ϕ.
Example 3.10. [Groups] Take now X = G a group. We know from theorem 2.8 that, if K : G × G → C is positive definite and satisfies K(x, y) = K(zx, zy), (x, y, z ∈ G), then K induces a unitary representation of G on H_K. In the next theorem we look at operators that intertwine these representations.
Theorem 3.11. Suppose G is a group and K, K' are positive definite maps on G satisfying K(x, y) = K(zx, zy) and K'(x, y) = K'(zx, zy), (x, y, z ∈ G). If L : G × G → C, L^2 ≤ cKK' for some c > 0, and
(3.8) L(x, y) = L(zx, zy), (x, y, z ∈ G),
then there is a unique operator S : H_K → H_{K'} such that
(3.9) Sπ_K(x) = π_{K'}(x)S, (x ∈ G), and
(3.10) ⟨S v_K(x) | v_{K'}(y)⟩ = L(x, y), (x, y ∈ G)
(here π_K and π_{K'} are the unitary representations of G associated to K and K' respectively (see theorem 2.8)). Moreover ‖S‖ ≤ √c. Conversely, if S : H_K → H_{K'} satisfies (3.9), then there is a unique L that satisfies (3.10). In addition L satisfies (3.8) and L^2 ≤ ‖S‖^2 KK'.
Proof. Recall that, if v_K : G → H_K and v_{K'} : G → H_{K'} are the representations associated to K and K' by Kolmogorov's theorem, then π_K(x)v_K(y) = v_K(xy) and π_{K'}(x)v_{K'}(y) = v_{K'}(xy), (x, y ∈ G) (see the proof of theorem 2.8).
Theorem 3.2 implies the existence of an operator S : H_K → H_{K'} satisfying (3.10). The rest follows.
Corollary 3.12. Let K, K' be two positive definite maps on the group G that satisfy K(x, y) = K(zx, zy) and K'(x, y) = K'(zx, zy), (x, y, z ∈ G), and K' ≤ cK for some c > 0. Then there exists a unique positive operator S on H_K in the commutant of the unitary representation π_K(G), such that
(3.11) ⟨S v_K(x) | v_K(y)⟩ = K'(x, y), (x, y ∈ G).
Conversely, for every positive operator S in the commutant of π_K(G), there is a unique positive definite map K' on G that satisfies (3.11) and K'(x, y) = K'(zx, zy), (x, y, z ∈ G).
Proof. It is an immediate consequence of theorem 3.11. It can also be proved from corollary 3.6.
Theorem 3.14. Let λ ∈ C, |λ| = 1 and K, K' positive definite maps on Z^2 satisfying the corresponding relations (3.12) and (3.13). Let L : Z^2 × Z^2 → C with the property that L^2 ≤ cKK' for some c > 0. If L satisfies the relations (3.12) and (3.13) (with K replaced by L, of course), then there is a unique operator S : H_K → H_{K'} such that
(3.14) SU_K = U_{K'}S and SV_K = V_{K'}S, and
(3.15) ⟨S v_K(m, n) | v_{K'}(m', n')⟩ = L((m, n), (m', n')), (m, n, m', n' ∈ Z)
(here (H_K, U_K, V_K) and (H_{K'}, U_{K'}, V_{K'}) are given by theorem 2.11). Moreover ‖S‖ ≤ √c. Conversely, if S : H_K → H_{K'} satisfies (3.14), then there exists a unique L : Z^2 × Z^2 → C that verifies (3.15); in addition L will verify (3.12) and (3.13) too, and L^2 ≤ ‖S‖^2 KK'.
Proof. The density of the linear spans of the vectors U_K^m V_K^n v_K(0, 0) in H_K (and of the corresponding vectors in H_{K'}) gives the uniqueness of S.
The converse follows from theorem 3.2: if L is defined by (3.15), the only thing that remains to be verified is that L satisfies (3.12) and (3.13), but this is a consequence of (3.14).

Corollary 3.15. If K, K' are positive definite maps on Z^2 satisfying the relations (3.12) and (3.13) and K' ≤ cK, then there is a unique positive definite operator S on H_K that commutes with U_K and V_K and
(3.16) ⟨S v_K(m, n) | v_K(m', n')⟩ = K'((m, n), (m', n')), (m, n, m', n' ∈ Z).
Conversely, if S is a positive operator that commutes with U_K and V_K, then K' defined by (3.16) satisfies (3.12) and (3.13).
Proof. The proof follows the same lines as before.
Remark 3.16. Theorem 3.2 gives us a general existence result for intertwining operators. The subsequent theorems answer the question: what conditions should be imposed on L so that its associated operator S intertwines the extra structure existing on H_K? We saw that for C*-algebras the necessary and sufficient condition is that L(x, y) = ϕ_0(y*x) for some linear ϕ_0; for groups we must have L(x, y) = L(zx, zy); and for Gabor type unitary systems, L must satisfy the relations (3.12) and (3.13).
Example 3.17. [Intertwiners of wavelet representations] We mentioned in example 2.13 and theorem 2.14 how wavelet representations can be associated to positive functions h ∈ L^1(T) with R_{m_0,m_0}h = h. In [Dut1] and [Dut2] we studied the operators that intertwine these representations. We indicate now how these can be connected to Kolmogorov's theorem. So we will recall the results from [Dut1] and we sketch the proof based on theorem 3.2. Given h as in theorem 2.14, call (U_h, π_h, H_h, ϕ_h) the cyclic representation of A_N associated to h. Also, define the transfer operator associated to a pair m_0, m'_0 ∈ L^∞(T) by
$$R_{m_0,m'_0}h(z) = \frac{1}{N}\sum_{w^N=z} m_0(w)\overline{m'_0(w)}\, h(w), \qquad (z \in \mathbb{T}).$$
Moreover ‖S‖ ≤ √c. Conversely, if S is an operator that satisfies (3.17), then there is a unique
Define X as in the proof of theorem 2.14. For all (f, n), (g, m) ∈ X, we want to obtain the corresponding equality; keeping the first and the last terms of the equality defines L. L will give rise to S by theorem 3.2. For the details of the required computations, see [Dut1]. The converse can also be obtained from theorem 3.2, but here the generality of theorem 3.2 isn't really needed.
Frames and dilations
Recall that a set {x_i | i ∈ I} of vectors in a Hilbert space H is called a frame if there are two constants A, B > 0 such that
$$A\|f\|^2 \leq \sum_{i \in I} |\langle f \mid x_i\rangle|^2 \leq B\|f\|^2, \qquad (f \in H).$$
Frames have been used extensively in applied mathematics for signal processing and data compression. They play a central role in wavelet theory and the analysis of Gabor systems.
In [HL] the normalized tight frames are interpreted as projections of orthonormal bases and it is proved there that Gabor type normalized tight frames can be dilated to Gabor type orthonormal bases, and normalized tight frames generated by groups can be dilated to orthonormal bases generated by the same group (see theorems 3.8 and 4.8 in [HL]). We will revisit these theorems and show that they are immediate consequences of a general result which proves that any normalized tight frame can be dilated to an orthonormal basis in such a way that the extra structure that may exist is preserved under the dilation.
We begin with a proposition that establishes what positive definite maps give rise to normalized tight frames when represented on a Hilbert space.
Proposition 4.1. Let K be a positive definite map on a set X. Then {v_K(x) | x ∈ X} is a normalized tight frame if and only if, for all x_i ∈ X, ξ_i ∈ C, (i ∈ {1, ..., n}):
$$\sum_{x \in X} \Big| \sum_{i=1}^{n} \xi_i K(x_i, x) \Big|^2 = \sum_{i,j=1}^{n} \xi_i \overline{\xi_j} K(x_i, x_j). \qquad (4.1)$$

Proof. For f in the linear span of {v_K(x) | x ∈ X}, the normalized tight frame identity
$$\sum_{x \in X} |\langle f \mid v_K(x)\rangle|^2 = \|f\|^2 \qquad (4.2)$$
translates into (4.1). For the converse, we only need to verify (4.2) for f in a dense subset of H_K (see [HeWe] lemma 1.10). Since the linear span of {v_K(x) | x ∈ X} is dense in H_K, we can take f = ∑_{i=1}^{n} ξ_i v_K(x_i) and (4.2) follows from (4.1).

Definition 4.2. A positive definite map K on X is called a NTF if and only if {v_K(x) | x ∈ X} is a normalized tight frame for H_K.
Before we prove our general result we note that, if δ : X × X → C is defined by δ(x, y) = 1 for x = y and δ(x, y) = 0 for x ≠ y, then δ is an NTF positive definite map and {v_δ(x) | x ∈ X} is an orthonormal basis for H_δ.

Proposition 4.3. If K is an NTF positive definite map on X, then K ≤ δ.

Proof. By definition, {v_K(x) | x ∈ X} is a normalized tight frame for H_K. Then, by [HL] proposition 1.1, there exists a Hilbert space H containing H_K as a subspace and an orthonormal basis {e_x | x ∈ X} of H such that v_K(x) = P e_x, where P is the projection onto H_K. Therefore K ≤ δ.
Theorem 4.4. If K, K' are NTF positive definite maps on a countable set X, K ≤ cK' for some c > 0, then there exists an isometry W : H_K → H_{K'} such that W v_K(x) = P v_{K'}(x), (x ∈ X), where P is the projection onto W H_K.

Proof. Since K ≤ cK', by corollary 3.6, there exists a positive operator S on H_{K'} such that
$$\langle S v_{K'}(x) \mid v_{K'}(y) \rangle = K(x, y), \qquad (x, y \in X).$$
Since S is positive, it has a positive square root S^{1/2}. Then
$$\langle S^{1/2} v_{K'}(x) \mid S^{1/2} v_{K'}(y) \rangle = K(x, y), \qquad (x, y \in X).$$
By the uniqueness part of Kolmogorov's theorem, there is a unitary W : H_K → H, where H is the closed linear span of {S^{1/2} v_{K'}(x) | x ∈ X}, with W v_K(x) = S^{1/2} v_{K'}(x). But then {S^{1/2} v_{K'}(x) | x ∈ X} is a normalized tight frame for H. Also, we know that {v_{K'}(x) | x ∈ X} is a normalized tight frame for H_{K'}. So S^{1/2} : H_{K'} → H maps a normalized tight frame to a normalized tight frame, therefore it must be a co-isometry (see [HL] proposition 1.9). It follows that S^{1/2} S^{1/2*} : H → H is the identity on H, so S is the identity on H.
We also know that range(S^{1/2}) = range(S). This implies that S(Sv) = Sv for all v ∈ H_{K'} and, as S ≥ 0, S is the projection onto H. Consequently, we also have S = S^{1/2}, and everything follows now by an easy computation: W v_K(x) = S^{1/2} v_{K'}(x) = P v_{K'}(x), (x ∈ X).

Remark 4.5. Recall some definitions from [HL]. If U is a countable set of unitaries on a Hilbert space H, then ξ ∈ H is called a complete wandering vector (complete normalized tight frame vector) if {Uξ | U ∈ U} is an orthonormal basis (normalized tight frame) for H. A dilation theorem will take the following form: if U is a unitary system on a Hilbert space H that has a complete normalized tight frame vector η, then there is a Hilbert space H_1 that contains H and a unitary system U_1 on H_1 such that U_1 has a complete wandering vector ξ and, if P is the projection onto H, then Pξ = η, P commutes with U_1, and U_1 ↦ U_1|_H is an isomorphism of U_1 onto U.
The proof will be guided by the following steps:
1. Construct K : U × U → C, K(x, y) = ⟨xη | yη⟩; then K is an NTF positive definite map, H_K = H, v_K(x) = xη, (x ∈ U), and U is the extra structure U_K induced by K.
2. Verify that δ : U × U → C satisfies the required compatibility conditions with U.
3. Construct H_δ, v_δ and the additional structure U_δ with cyclic vector ξ_δ, which is a complete wandering vector for U_δ.
4. Since K ≤ δ (proposition 4.3), according to theorem 4.4 there is an isometry W : H → H_δ which is induced by K; the projection P onto W H is also induced by K and Pξ_δ = η. As K is compatible with the structure U, W will intertwine U and U_δ and P commutes with U_δ. So W H is invariant for U_δ and W U W^{-1} = U_δ|_H for all U ∈ U (U_δ is the unitary in U_δ that corresponds to U in the representation).
5. Identify H with W H and everything will follow.
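The dilation of a normalized tight frame to an orthonormal basis can also be seen numerically in a hypothetical finite-dimensional example (not from the paper): projecting the rows of the unitary DFT matrix of C^m onto the first d coordinates yields a normalized tight frame for C^d.

```python
import numpy as np

# Hypothetical finite-dimensional dilation picture: an orthonormal basis of
# C^m (rows of the unitary DFT matrix), projected onto C^d, becomes a
# normalized tight frame of m vectors in C^d.
m, d = 5, 2
F = np.fft.fft(np.eye(m)) / np.sqrt(m)   # rows form an orthonormal basis of C^m
assert np.allclose(F @ F.conj().T, np.eye(m))

frame = F[:, :d]                          # project each basis vector onto C^d

# Frame operator equals the identity on C^d, i.e. A = B = 1.
assert np.allclose(frame.conj().T @ frame, np.eye(d))
```

This is the content of theorem 4.4 with K' = δ: the tight frame vectors are exactly P applied to an orthonormal basis of the bigger space.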
We will use the guidelines of remark 4.5 to show how one can obtain the dilation theorems 3.8 and 4.8 from [HL] for groups and Gabor type unitary systems.
Theorem 4.6. [HL] Suppose U is a unitary group on H with a complete normalized tight frame vector η. Then there is a Hilbert space H_1 containing H and a unitary group U_1 such that U_1 has a complete wandering vector ξ; if P is the projection onto H, then P commutes with U_1, Pξ = η and U_1 ↦ U_1|_H is an isomorphism of U_1 onto U. Consequently, P U_1 ξ = (U_1|_H) η for all U_1 ∈ U_1 (that is, the normalized tight frame {Uη | U ∈ U} can be dilated to the orthonormal basis {U_1 ξ | U_1 ∈ U_1}).
Proof. Define K : U × U → C, K(x, y) = xη | yη for x, y ∈ U. It is clear that K is an NTF positive definite map with (4.3) K(zx, zy) = K(x, y), (x, y, z ∈ U) and H K = H, v K (x) = xη and the representation π K given by theorem 2.8 is π K (x) = x for x ∈ U.
It is also clear that δ satisfies a relation of type (4.3) so it is compatible with the group structure and by theorem 2.8 it induces a cyclic representation (H δ , π δ , ξ δ ) of U with ξ δ = v δ (1) a complete wandering vector.
By proposition 4.3, K ≤ δ. By theorem 4.4 there is an isometry W : H → H_δ which is induced by K, the projection P onto W H is also induced by K, and Pξ_δ = η. Then, by theorem 3.11, W is intertwining, that is W x = π_δ(x)W, (x ∈ U), and P is in the commutant of π_δ(U). So W H is invariant for all π_δ(x), x ∈ U, and W x W^{-1} = π_δ(x) for x ∈ U.
Theorem 4.7. [HL] Let U = {U^m V^n | m, n ∈ Z} be a Gabor type unitary system associated to λ on a Hilbert space H. Suppose U has a complete normalized tight frame vector η ∈ H. Then there is a Gabor type unitary system U_1 (= {U_1^m V_1^n | m, n ∈ Z}) associated to λ on a Hilbert space H_1 containing H, such that U_1 has a complete wandering vector ξ and, if P is the projection onto H, then P commutes with U_1 and V_1 and Pξ = η.

Proof. The proof is analogous to the proof of theorem 4.6; the only difference is to verify that δ satisfies the compatibility relations (3.12) and (3.13), and this is trivial.
A dilation theorem for wavelets
Let us recall the algorithm for the construction of compactly supported wavelets. For details we refer the reader to [Dau92] for the scale N = 2 and to [BraJo97] for arbitrary scale N .
One starts with the low-pass filter m_0 ∈ L^2(T), which is a trigonometric polynomial that satisfies m_0(1) = √N and the quadrature mirror filter condition
$$\frac{1}{N} \sum_{w^N = z} |m_0(w)|^2 = 1, \qquad (z \in \mathbb{T}).$$
Then define the scaling function ϕ ∈ L^2(R) by taking the inverse Fourier transform of
$$\hat{\varphi}(x) = \prod_{k=1}^{\infty} \frac{m_0(x/N^k)}{\sqrt{N}}.$$
To construct wavelets one needs the high-pass filters m_1, ..., m_{N-1} ∈ L^∞(T) such that the matrix
$$\frac{1}{\sqrt{N}} \big( m_j(\rho^k w) \big)_{j,k=0}^{N-1}, \qquad (w^N = z,\ \rho = e^{2\pi i/N}),$$
is unitary for a.e. z ∈ T.
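For the scale N = 2, the Haar filters give a minimal numerical check of these conditions (a hypothetical illustration, not from the paper):

```python
import numpy as np

# Haar low-pass and high-pass filters for N = 2 (hypothetical example):
# m0(1) = sqrt(2), the quadrature mirror condition holds, and the 2x2
# filter matrix (1/sqrt(2)) [m_j(z), m_j(-z)] is unitary on the circle.
m0 = lambda z: (1 + z) / np.sqrt(2)
m1 = lambda z: (1 - z) / np.sqrt(2)

assert np.isclose(m0(1.0), np.sqrt(2))

rng = np.random.default_rng(1)
for t in rng.uniform(0, 2 * np.pi, 5):
    z = np.exp(1j * t)
    # quadrature mirror filter condition: |m0(z)|^2 + |m0(-z)|^2 = 2
    assert np.isclose(abs(m0(z)) ** 2 + abs(m0(-z)) ** 2, 2.0)
    # unitarity of the filter matrix at the two square roots z, -z
    M = np.array([[m0(z), m0(-z)],
                  [m1(z), m1(-z)]]) / np.sqrt(2)
    assert np.allclose(M @ M.conj().T, np.eye(2))
```

For the Haar pair the transfer operator has a one-dimensional space of continuous harmonic functions, so the construction actually yields an orthonormal wavelet basis; less regular filters only give normalized tight frames, which is the situation addressed by the dilation theorem below.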
The wavelets are defined by ψ̂ i (Nξ) = N −1/2 m i (e iξ )ϕ̂(ξ), (i ∈ {1, ..., N − 1}), or, in terms of the wavelet representation, by applying the dilation and the multiplication operator π(m i ) to ϕ. It is known that, in order to achieve orthogonality, extra conditions must be imposed on m 0 . If R m0,m0 has only one continuous fixed point (up to a multiplicative constant), the set {U m T n ψ i | m, n ∈ Z, i ∈ {1, ..., N − 1}} is an orthonormal basis for L 2 (R).
However, when this extra condition is not satisfied, one still gets good properties, namely, the fact that the above set is a normalized tight frame for L 2 (R). In the sequel, we show how one can dilate this normalized tight frame to an orthonormal basis in such a way that the multiresolution structure is preserved, so that "wavelets" in a space bigger than L 2 (R) are obtained.
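The dilation phenomenon is already visible in a finite-dimensional toy example (our own illustration, not from the paper): three equiangular vectors in R² form a normalized tight (Parseval) frame, and they arise by projecting an orthonormal basis of the bigger space R³.

```python
import math

# Three equiangular unit-norm directions, scaled so the frame is Parseval.
scale = math.sqrt(2.0 / 3.0)
frame = [
    (scale * math.cos(2 * math.pi * k / 3), scale * math.sin(2 * math.pi * k / 3))
    for k in range(3)
]

# Frame operator S = sum_k f_k f_k^T; for a normalized tight frame S = I.
S = [[sum(f[i] * f[j] for f in frame) for j in range(2)] for i in range(2)]
is_parseval = all(
    abs(S[i][j] - (1.0 if i == j else 0.0)) < 1e-12
    for i in range(2) for j in range(2)
)

# Analysis operator T: R^2 -> R^3, (Tx)_k = <x, f_k>.  T is an isometry,
# so its adjoint T* is a co-isometry mapping the standard orthonormal
# basis of R^3 onto the frame: the frame is the projection of an ONB
# living in the bigger space.
def analysis(x):
    return [x[0] * f[0] + x[1] * f[1] for f in frame]

x = (0.3, -0.8)
norm_sq = sum(t * t for t in analysis(x))
print(is_parseval, abs(norm_sq - (x[0] ** 2 + x[1] ** 2)) < 1e-12)
```

The last line checks the Parseval identity Σ|⟨x, f k⟩|² = ‖x‖², which is the finite-dimensional shadow of the tight-frame property dilated in the theorems above.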
We begin with a proposition that explains the multiresolution structure of the cyclic representations presented in example 2.13.
In the sequel we define m 0 ∈ L ∞ (T) to be non-singular if the set {z ∈ T | m 0 (z) = 0} has zero measure and |m 0 | is not constant 1 a.e.
Each f ∈ L ∞ (T) is the pointwise limit of a uniformly bounded sequence of trigonometric polynomials; hence, by lemma 2.8 in [Dut1], the relation holds for all f ∈ L ∞ (T) and n ∈ Z, and (5.8) follows by density. (5.9) is proved in theorem 5.6 from [Jor98].
It remains to prove (5.12) because (5.13) follows from this immediately. The argument is essentially the one in [BraJo97] theorem 10.1. We will include it here to make sure everything works.
Since m(ρz) is a circular permutation of m(z), it follows that we must have µ k (ρz) = µ k (z), that is, µ k (z) = λ k (z N ) for some λ k ∈ L ∞ (T). A computation then shows that the claim holds for {U m h T n h ψ h i | m, n ∈ Z, i ∈ {1, ..., N − 1}} by an argument similar to the one used in the beginning of the proof (now for ψ h i instead of ϕ h ). This completes the proof of (5.12).
Motivated by the discussion in the beginning of this section, we give a dilation theorem for wavelets. The theorem describes how one can dilate a normalized tight frame wavelet to an orthonormal wavelet in a bigger space.
The proof is similar to the one of theorem 4.4, but some additional arguments are needed. Since h ∈ L ∞ (T), we have |h| 2 ≤ ‖h‖ ∞ 1, so, by theorem 3.18, there is a positive operator S on H 1 that commutes with U 1 and π 1 , and S has a positive square root S 1/2 that commutes with U 1 and π 1 . Also the projection P onto the range H of S 1/2 must commute with U 1 and π 1 . Then we can restrict π 1 and U 1 to H. The uniqueness part of theorem 2.14 implies that there is a unitary W from H h to H with W ϕ h = S 1/2 ϕ 1 , W U h = U 1 W and W π h (f ) = π 1 (f )W for f ∈ L ∞ (T). From these commuting properties of W and P it follows that W (U m h T n h ψ h i ) = S 1/2 (U m 1 T n 1 ψ 1 i ), (m, n ∈ Z, i ∈ {1, ..., N − 1}). Hence, S 1/2 maps an orthonormal basis to a normalized tight frame, so it must be a co-isometry. Then, proceeding as in the proof of theorem 4.4, we get that S = S 1/2 = P and everything follows.
When m 0 is a regular filter (we will give the precise meaning of that in a moment), we can really get our hands on the abstract cyclic representation associated to the constant function 1 so that we obtain a very concrete dilation theorem for non-orthogonal wavelets in L 2 (R). The construction is given in the next theorem and it is based on the results presented in [Dut2].
Proof. The fact that the cyclic representation associated to the constant function 1 has this form is proved in [Dut2]; one needs only to take the inverse Fourier transform of the representation presented there to obtain the one described here. Then (5.18) and (5.19) follow trivially from the definition, (5.20) follows from the definition and the commuting properties of P 1 , (5.21) is included in the definition of the cyclic representation, (5.22) and (5.23) are consequences of proposition 5.1, and (5.24) (which is also well known, see [Dau92] or [BraJo97]) follows from the fact that the projection of an orthonormal basis is a normalized tight frame (see [HL]).
Comparison of Pentax-AWS®, Glidescope®, and King Vision® for difficult-airway intubation in manikins model by paramedics
Introduction: Prehospital tracheal intubation of a difficult airway is challenging for paramedics. Thus far, the potential role of video laryngoscopes for this purpose has not been confirmed. Therefore, this study aimed to determine the impact of different types of video laryngoscopes on the success rate and time to intubation by paramedics. Methods: This is a prospective, randomized, crossover manikin study involving 18 paramedics. Participants performed intubation on a difficult airway in a high-fidelity manikin using Pentax-AWS®, Glidescope®, and King Vision® (with two blade types). Time to intubation and success rate of intubation were determined. Participants also rated the best glottic view and reported their preferences among the devices. Results: In a difficult-airway scenario, the median time to intubation with Pentax-AWS® was 22.9 s (interquartile range, 19.5–24.9 s), which was significantly shorter than with the other devices. There were no significant differences in the time to maximal exposure of the vocal cords among the four devices (p = 0.156). The time to insert the endotracheal tube with Pentax-AWS® and King Vision® with a guide-channel blade was significantly shorter than that with the other two devices (all, p < 0.05). Pentax-AWS® and King Vision® with a guide-channel blade showed higher success rates than the other two devices (p = 0.04). With regard to device preference, 14 participants preferred Pentax-AWS® among all devices analyzed. Conclusion: Pentax-AWS® could be an appropriate device for paramedics in cases of difficult airways, with a high success rate.
Introduction
Prehospital endotracheal intubation of a difficult airway is challenging for paramedics. According to the current guidelines, endotracheal intubation is still regarded as the optimal method for maintaining a secure airway. 1 Thus far, the survival benefit of prehospital intubation by paramedics has not been confirmed. 2,3 Failure rates of up to 30% have been reported for tracheal intubation in cases where paramedics performed intubation using the Macintosh laryngoscope. 4 The Macintosh laryngoscope is regarded as the gold standard for endotracheal intubation. 5 In recent times, various types of video laryngoscopes (VL) have been developed. VLs have been shown to require shorter intubation times and to achieve higher success rates than other laryngoscopes in clinical studies that simulated a difficult airway. [6][7][8] Therefore, VLs could be an alternative to the Macintosh laryngoscope in in-hospital endotracheal intubations. 9,10 VLs are classified according to the presence of a guide channel and the curvature of the blade. VLs without a guide channel are of two types according to the curvature of the blade: Macintosh type and angulated blade. 11 Pentax-AWS ® (Pentax Corporation, Tokyo, Japan) (AWS) is a VL equipped with a blade and guide channel. The endotracheal tube is preloaded into the blade with a guide channel. The operator can insert the endotracheal tube by pushing it without additional manipulation after maximal exposure of the vocal cords. Glidescope ® (Verathon, Bothell, WA, USA) (GVL) is a commercial product comprising a VL equipped with an angulated blade. With this device, the endotracheal tube should be mounted on a pre-shaped angle stylet to match the curvature of the angulated blade. King Vision ® (King Systems, Noblesville, IN, USA) (KV) is a VL composed of a fixed 2.4-inch video screen, handle, and disposable blade. There are two types of blades in this device: one with a tube guide channel (KV guide) and one without a channel (KV guideless).
The type of VL used could influence the success rate and the time to intubation (TTI) by paramedics. Some studies have previously compared the direct laryngoscope and VLs in a difficult-airway situation. 9,12 However, to the best of our knowledge, no studies have examined the impact of different types of VL on the success rate and TTI by paramedics in difficult-airway situations. Therefore, this study aimed to determine whether the type of VL affected the results of intubation by paramedics.
Study design
We conducted a prospective, randomized, crossover manikin study at the simulation center of Hanyang University in March 2016. The local ethics committee approved this study in January 2014 (HYI-14-004-1). We registered the study protocol with Clinicaltrials.gov before study initiation (NCT02074072).
Equipment and materials
Participants intubated the airway with AWS, GVL, KV guide, and KV guideless using an endotracheal tube with an internal diameter of 7.0 mm (Portex, St. Paul, MN, USA) and the manufacturer's stylet for GVL (Figure 1). We used a high-fidelity manikin (Difficult Airway Management Simulator-Training Model, Kyoto Kagaku, Kyoto, Japan) to simulate a difficult airway with cervical spine immobilization and an intermediate degree of limited mouth opening. The manikin was placed on a bed (760 mm × 2110 mm, 228 kg; Transport stretcher, Stryker Co., Kalamazoo, MI, USA).
Participants
We recruited 18 paramedics who participated in an airway-management workshop in March 2016. We included healthy volunteers aged between 16 and 60 years. We excluded people who had wrist and low-back disease. All participants signed a written consent form before participation. The sample size was calculated on the basis of a pilot study on the time required for intubation with the AWS, GVL, KV guide, and KV guideless devices. The mean (standard deviation) TTI in seconds was 21.29 (2.47) for AWS, 64.68 (23.01) for GVL, 40.03 (22.09) for KV guide, and 76.21 (18.01) for KV guideless. The estimated sample size was calculated using G*Power 3.1.2 ® (Heinrich Heine University, Düsseldorf, Germany) and revealed that a sample of 16 participants was required for this study (effect size of 1.104, α error of 0.05, and power of 0.8); nonetheless, we enrolled 18 participants to account for a 10% drop-out rate.
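For readers without G*Power, a rough calculation of this kind can be sketched in code. The normal-approximation formula for a paired comparison below is our own simplified stand-in; it is not expected to reproduce G*Power's repeated-measures figure of 16, which depends on the chosen test family.

```python
from math import ceil
from statistics import NormalDist

def approx_sample_size(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a paired comparison:
    n = ((z_{1-alpha/2} + z_{power}) / d)^2, rounded up."""
    z = NormalDist()
    n = ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) / effect_size) ** 2
    return ceil(n)

# With the large pilot effect size (d = 1.104), a handful of subjects would
# already suffice under this crude approximation.
print(approx_sample_size(1.104))   # -> 7

# The familiar benchmark: a medium effect of d = 0.5 needs about 32 pairs.
print(approx_sample_size(0.5))     # -> 32
```

The gap between the approximation (7) and the G*Power result (16) illustrates how much the assumed test family and corrections matter in power analysis.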
Interventions
All participants completed a brief questionnaire consisting of demographic information (age, gender, body weight, and height) and prior experience of intubations using VLs in a clinical situation. Ten minutes prior to the start of the trials, participants were allowed to practice intubations with all laryngoscopes to familiarize themselves with the use of the Difficult Airway Management Simulator-Training Model in normal airway settings in neutral position. A total of 18 participants were enrolled and randomly allocated to four groups. After allocation, the participants were arranged in a random order by a computer-generated list of random numbers to minimize learning effects and were asked to perform intubation with the laryngoscopes. The intubations were performed under simulated normal and difficult-airway settings: Group A (n = 5) performed the first intubation with AWS; Group B (n = 5) performed the first intubation with GVL; Group C (n = 3) performed the first intubation with KV guide; and Group D (n = 5) performed the first intubation with KV guideless. For VLs, the manikin's head and neck were placed in the neutral position. The height of the bed was approximately 80 cm, which was approximately the height of the participant's mid-chest level. Participants had a 10-min break after each intubation in one simulated airway and a 30-min break before change to another airway scenario ( Figure 2).
Outcomes
The primary outcome was intubation time, which was recorded from the start point to the mid-point and from the mid-point to the endpoint. The person recording the time was informed about the method to record the intubation time and was blinded to the objective of this study. The start point was taken as the time when the participant inserted the blade between the teeth after the person recording the time asked him or her to start. The mid-point was when the participant exposed the vocal cord maximally and stated "I can see." The endpoint was at the first manual ventilation after intubation, regardless of success or failure of air inflation into the manikin's lungs. The time to visualize the glottic view (TTV) was measured from the start point to the midpoint, and the time to progress the endotracheal tube (TTP) was consecutively measured from the mid-point to the endpoint. The TTI was calculated from the start point to the endpoint (TTV + TTP). Intubation failure was considered to occur when the tip of the tube was not properly placed in the trachea but was placed in the esophagus or the oral cavity or when the TTI was ≥90 s. 13,14 Secondary outcomes were the success rate for intubation, attaining a glottic view using the percentage of glottic opening (POGO) scale, and the preference for laryngoscopes. The preference for laryngoscopes was recorded by asking the participants to choose the laryngoscope that would be most favorable in difficult-airway situations.
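The outcome definitions above can be stated compactly in code; the helper names in this sketch are our own.

```python
def classify_attempt(ttv_s, ttp_s, tube_in_trachea):
    """Apply the study's outcome definitions to one intubation attempt.

    ttv_s: time to visualize the glottic view (start point to mid-point).
    ttp_s: time to progress the tube (mid-point to endpoint).
    tube_in_trachea: False for esophageal or oral-cavity placement.
    """
    tti = ttv_s + ttp_s  # time to intubation (TTI = TTV + TTP)
    failed = (not tube_in_trachea) or tti >= 90.0
    return tti, ("failure" if failed else "success")

print(classify_attempt(10.0, 12.9, True))   # quick attempt: success
print(classify_attempt(30.0, 65.0, True))   # exceeds the 90-s cutoff: failure
```

Note that a correctly placed tube still counts as a failure once the 90-second cutoff is reached, matching the definition used in the study.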
Statistical analysis
Data were compiled using a standard spreadsheet application (Excel, Microsoft, Redmond, WA, USA) and were analyzed using the Statistical Package for the Social Sciences (SPSS) 18.0 KO for Windows (SPSS Inc., Chicago, IL, USA). We generated the descriptive statistics and presented them as frequencies and percentages for categorical data and medians with interquartile ranges for continuous data, because the data were not normally distributed. To compare the intubation time among the four laryngoscopes and POGO scale, the Friedman test was used for continuous variables. A post hoc analysis was performed using the Wilcoxon signed rank test and a Bonferroni correction. Values of p < 0.05 were considered significant.
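The test sequence (Friedman omnibus test, then pairwise Wilcoxon signed-rank tests with a Bonferroni correction) can be sketched with SciPy; the data below are simulated stand-ins whose means loosely follow the pilot TTI figures, not the study's recordings.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
n = 18  # one attempt per participant and device, as in the study

# Illustrative TTIs in seconds (simulated).
tti = {
    "AWS":          rng.normal(23, 3, n),
    "GVL":          rng.normal(60, 15, n),
    "KV guide":     rng.normal(40, 12, n),
    "KV guideless": rng.normal(70, 15, n),
}

# Omnibus Friedman test across the four related samples.
stat, p_omnibus = friedmanchisquare(*tti.values())
print(f"Friedman chi2 = {stat:.1f}, p = {p_omnibus:.2g}")

# Post hoc pairwise Wilcoxon signed-rank tests, Bonferroni-corrected for
# the 6 pairwise comparisons among 4 devices.
names = list(tti)
n_pairs = len(names) * (len(names) - 1) // 2
for i, a in enumerate(names):
    for b in names[i + 1:]:
        _, p_raw = wilcoxon(tti[a], tti[b])
        print(f"{a} vs {b}: adjusted p = {min(1.0, n_pairs * p_raw):.2g}")
```

Running the omnibus test before the corrected pairwise tests, as here, mirrors the analysis plan and limits the family-wise error rate.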
General characteristics
A total of 18 participants were enrolled, and none of them were excluded. The general characteristics of the participants are shown in Table 1.
Tracheal intubation in normal airway
TTI of the KV guide was the shortest, followed by AWS, GVL, and KV guideless (p = 0.026). There were no significant differences between the AWS and KV guides (p = 0.845). There was no significant difference among the VLs except for KV guideless (all p > 0.05). In terms of TTP, progression of the endotracheal tube using VLs with a guide channel (AWS and KV guide) was shorter than that using VLs without a guide channel (KV guideless and GVL) (all p < 0.05). However, there was no significant difference between AWS and KV guide (p = 0.744) ( Table 2). Intubation with AWS and KV guide showed the highest success rate, followed by GVL and KV guideless.
Tracheal intubation in a difficult airway
The TTI and TTV of the AWS were significantly shorter than those of the other VLs (all p < 0.05). However, there was no significant difference among the other VLs (all p > 0.05). TTP of the VLs with guide (AWS and KV guide) was faster than that of the VLs without guide (KV guideless and GVL) (p < 0.05), except for GVL and KV guide (p = 0.053). However, there was no significant difference between KV guideless and GVL (p = 0.102; Table 3).
Preference for laryngoscopes
A total of 14 participants (77.8%) preferred AWS, 2 participants (11.1%) preferred GVL, and the remaining participants preferred KV guide among the 4 laryngoscopes for use in difficult-airway situations.
Discussion
In this study, we demonstrated that VLs with a guide channel (AWS and KV guide) were more successful and faster than VLs without a guide (KV guideless and GVL) when used by paramedics in a manikin with a simulated difficult airway. Paramedics could expose the vocal cord well with all four types of VLs analyzed (POGO score > 80). There was no esophageal intubation in the failed cases. Among the four types, AWS was the most preferred by the participating paramedics. A number of patient and manikin studies have evaluated the use of VLs by paramedics in difficult-airway situations. [15][16][17][18] AWS and Airtraq ® showed shorter TTI than the Macintosh laryngoscope in a manikin study. 8 In another study, Glidescope ® Ranger and McGrath ® Series 5 showed longer TTI than the Macintosh laryngoscope in a simulated difficult airway. 5 In this study, all four VLs provided acceptable visualization of the glottis, and there were no significant differences in the TTV among the four VLs. However, the TTI was shorter with AWS and KV guide than with the other VLs. Furthermore, VLs with a guide channel (AWS and KV guide) showed shorter TTP than the other two VLs, and VLs without a guide channel made it difficult to insert the endotracheal tube into the trachea. In addition, paramedics could have difficulty in operating the endotracheal tube with the aid of the monitor of VLs. 12 The tip of the endotracheal tube should pass through an acute angle to enter the larynx and may risk coming in contact with the anterior tracheal wall. 5 The success rate of VLs with a guide channel (AWS and KV guide) was higher than that of VLs without a guide (KV guideless and GVL). There was no esophageal intubation, and the cause of failure to intubate was a TTI >90 s. Paramedics who failed to intubate found it difficult to insert the endotracheal tube, despite a good laryngeal view. 5 They were unfamiliar with operating the endotracheal tube while indirectly visualizing the airway anatomy on a video screen.
With the AWS device, the blade tip needs to be inserted posterior to the epiglottis (Miller-type approach), which elevates the epiglottis directly. On the other hand, the GVL is inserted anterior to the epiglottis in the vallecular fossa (Macintosh-type approach). 11 Some of the paramedics who failed to intubate had inserted the GVL using the Miller-type approach, which could make the tube insertion more difficult. The success rates for paramedics in endotracheal intubation using a VL for difficult airways are variable. 5,8 The success rate within 30 s was higher with AWS and Airtraq ® than with the Macintosh laryngoscope in a manikin study, 8 whereas the success rate for intubation was similar for McGrath ® , GVL, and Macintosh laryngoscopes in another study. 5 The reason for this variability may be attributed to different study settings. The participants rated AWS as the most-preferred airway device in difficult-airway situations. AWS has the target symbol on the liquid crystal display monitor, which indicates optimal alignment when centered on the glottis. In addition, the image is visible from almost all angles, and therefore, paramedics need not be positioned close to the patient's head in a difficult scenario. Moreover, the AWS has a blade-equipped guide channel. Therefore, paramedics can insert the tube by just pulling along the guide channel. The manufacturer of the KV recommends that the Macintosh- or Miller-type approach be used and that the midline of the blade be inserted perpendicular to the nose to avoid the chest in patients (lateral blade insertion). However, the AWS did not need lateral blade insertion due to its different handle design. In addition, some paramedics complained of difficulty in inserting the blade tip by the Macintosh-type approach. These characteristics of AWS could be the reason for its preference over the other VL devices.
Despite our important findings, this study had several limitations that need to be addressed. First, difficult airways created by an advanced simulator may not be equivalent to the actual situations encountered clinically. More sophisticated and standardized simulation models representing realistic difficult airways should be used in the future. Second, we compared only four different types of VLs. Various other types of VLs have been developed and are used in clinical settings, and their utility in the management of difficult airways should be investigated. Third, we examined only two intubation scenarios (normal airway, and difficult airway with limited mouth opening and immobilized cervical spine). The usefulness of these VLs should be determined in other situations as well, such as airway edema or the presence of blood or copious secretions in the oropharyngeal cavity. As differences in experience and skill level among paramedics could influence the results of intubation, future studies should use a larger sample size to confirm our findings.
Conclusion
VLs with a guide channel, such as AWS, could be an appropriate laryngoscope for paramedics to use in cases of difficult airways, as they are less time-consuming and have a higher success rate than the other VLs analyzed in this study.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Changes in women's dietary diversity before and during pregnancy in Southern Benin.
Dietary diversity before and during pregnancy is crucial to ensure optimal foetal health and development. We carried out a cohort study of women of reproductive age living in the Sô-Ava and Abomey-Calavi districts (Southern Benin) to investigate women's changes in dietary diversity and identify their determinants both before and during pregnancy. Nonpregnant women were enrolled (n = 1214) and followed up monthly until they became pregnant (n = 316), then every 3 months during pregnancy. One 24-hr dietary recall was administered before conception and during each trimester of pregnancy. Women's dietary diversity scores (WDDS) were computed, defined as the number of food groups out of a list of 10 consumed by the women during the past 24 hr. The analysis included 234 women who had complete data. Mixed-effects linear regression models were used to examine changes in the WDDS over the entire follow-up, while controlling for the season, subdistrict, socio-demographic, and economic factors. At preconception, the mean WDDS was low (4.3 ± 1.1 food groups), and the diet was mainly composed of cereals, oils, vegetables, and fish. The mean WDDS did not change during pregnancy and was equally low at all trimesters. Parity and household wealth index were positively associated with the WDDS before and during pregnancy in the multivariate analysis. Additional research is needed to better understand perceptions of food consumption among populations, and more importantly, efforts must be made to encourage women and communities in Benin to improve the diversity of their diets before and during pregnancy.
| INTRODUCTION
The transition from the Millennium Development Goals to the Sustainable Development Goals in 2015 placed the health and wellbeing of women and adolescents at the centre of the global agenda (De-Regil, Harding, & Roche, 2016; Mason et al., 2014). Many of the global nutrition efforts in recent years have focused on women during pregnancy and children during their first 2 years of life, the so-called "1,000 days" period that is considered a window of opportunity to improve both maternal and children's outcomes in a sustainable manner (Mason et al., 2014; Sharma et al., 2017). Even though dietary intake and nutritional status of women during preconception are essential determinants of a healthy pregnancy as well as optimal foetal growth and development, data regarding diet in the preconception period remain scarce (Dean, Lassi, Imam, & Bhutta, 2014). Studies on the continuum before and during pregnancy are even scarcer, in particular in low- and middle-income countries.
In the literature, randomised and observational studies related to pregnancy, and less frequently to the preconception period, focused primarily on women's micronutrient status and supplementation, in particular with regard to multivitamins, iron, and folic acid (Khan et al., 2011; Potdar et al., 2014; Salcedo-Bellido et al., 2017; Sengpiel et al., 2014; Zheng et al., 2015). For example, a randomised trial in Bangladesh showed that early multimicronutrient supplementation in pregnancy reduces the occurrence of stunting in boys during months 0-54, but not in girls (Khan et al., 2011). However, little attention has been paid to women's overall diet quality during preconception and gestation, particularly concerning dietary diversity, which has been shown to be associated with a greater probability of micronutrient adequacy (Martin-Prevel et al., 2017). Poor dietary diversity during pregnancy has been documented in many contexts, particularly in low- and middle-income countries (Lee, Talegawkar, Merialdi, & Caulfield, 2013). A lack of nutritious foods, as well as low socio-economic levels, are recognised as primary constraints in such contexts (Huybregts, Roberfroid, Kolsteren, & Van Camp, 2009). Dietary behaviours may also be responsible for changes in food consumption patterns during pregnancy. Several studies showed that beliefs about certain foods, cultural taboos, misinformation, lack of knowledge, personal aversion, and lack of appetite could affect women's diets during pregnancy (Huybregts et al., 2009; Kavle & Landry, 2018).
Increasing our knowledge of changes in women's dietary diversity and their determinants is necessary to design effective long-term nutrition strategies that would optimise pregnancy and foetal outcomes. In this research, we used an original preconceptional cohort design to examine changes in women's dietary diversity from preconception to pregnancy and to investigate their environmental, social, demographic, and economic determinants in Southern Benin.
| METHODS
This study was part of the Retard de croissance intra-utérin et paludisme (RECIPAL; intrauterine growth retardation and malaria) cohort study, which has been fully described elsewhere (Accrombessi et al., 2018). Nonpregnant women of reproductive age were recruited at the community level from Sô-Ava and Abomey-Calavi, two semiurban districts of Benin, and followed up monthly until they became pregnant; these women constituted the primary cohort (preconceptional follow-up). The subsample of women who became pregnant was then tracked monthly at the maternity clinic from early pregnancy to delivery; they constituted the secondary cohort (gestational follow-up). The present study collected dietary intakes of women from both the primary and secondary cohorts between November 2014 and December 2017.
The RECIPAL study was approved in Benin by the ethical committees of the Institute of Applied Biomedical Sciences and the Ministry of Public Health, and in France by the French National Research Institute for Sustainable Development (IRD). The study was conducted according to the Helsinki Declaration for medical research. Before data collection, written informed consent was obtained from each participant after ensuring their understanding of the purpose, objectives, confidentiality rules, benefits, and risks of taking part in the study.
The study took place in four subdistricts in Southern Benin as follows: So-Ava, Houedo-Aguekon, Vekky in the district of So-Ava, and Akassato in the district of Abomey-Calavi. Both districts are semiurban areas, but Sô-Ava has the distinction of being a lake area mainly occupied by natives, whereas Abomey-Calavi is more heterogeneous in terms of population. The climate is subequatorial and characterised by a long rainy season (April-July), a short dry season (August-September), a short rainy season (September-October), and a long dry season (November-March).
Women were enrolled in the primary cohort when they met the following criteria: being 18-45 years old, married, nonpregnant, apparently healthy, not known to be sterile, using no current contraception, having no travel plans of more than 2 months during the 18 months after inclusion, willing to become pregnant, and planning to deliver in either the Sô-Ava or Abomey-Calavi districts. These women were visited at home every month and tested for pregnancy. Women with positive pregnancy tests were enrolled in the secondary cohort.

Key Messages

• Dietary diversity scores of reproductive age women were low in semiurban areas of Southern Benin and less than 41% of women reached the minimum dietary diversity for women.

• Women's dietary diversity scores did not change during pregnancy compared with the preconception period, with small variations in the consumption of some food groups such as eggs, dairy products, fruits, and dark green leafy vegetables.

• The absence of change in women's dietary diversity scores was mainly due to socio-economic constraints and might be determined by dietary restrictions related to strong socio-cultural beliefs.
Women who did not conceive after 1 year of follow-up were invited to the district maternal care centre for a medical examination. In cases of genital infection, they received medical advice and were referred to a gynaecologist. Follow-up stopped after 2 years when women did not become pregnant.
Demographic and socio-economic characteristics of both women and their households were collected once upon inclusion in the primary cohort via a structured questionnaire. Data included household size, assets, housing type, women's ages, parity (number of children, alive or dead), type of union (polygamous/monogamous), ethnic group, education, and main activities. A multiple correspondence analysis (Sourial et al., 2010; Traissac & Martin-Prevel, 2012) using socioeconomic data was performed to compute a wealth index and to classify households into low, middle, and high wealth levels according to tertiles.
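The tertile classification can be sketched as follows; the numeric scores stand in for household coordinates on the first MCA axis (the MCA itself is omitted, and the values are invented for illustration).

```python
def wealth_tertiles(scores):
    """Split households into low/middle/high wealth by tertiles of a
    composite index (e.g. coordinates on the first MCA axis)."""
    ranked = sorted(scores)
    n = len(ranked)
    cut1, cut2 = ranked[n // 3], ranked[2 * n // 3]
    return ["low" if s < cut1 else "middle" if s < cut2 else "high"
            for s in scores]

# Nine hypothetical household scores split evenly into three levels.
scores = [0.2, -1.1, 0.9, 0.0, 1.5, -0.7, 0.4, -0.3, 1.1]
labels = wealth_tertiles(scores)
print(labels)
```

Cutting at tertiles guarantees roughly equal group sizes, which is what makes the resulting wealth levels comparable across analyses.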
Dietary assessments of women were conducted before conception and at each trimester of pregnancy. The minimum number of dietary assessments per woman considered in this analysis was two, and the maximum was four. One quantitative 24-hr dietary recall (Gibson, Charrondiere, & Bell, 2017; Gibson & Ferguson, 2008) was performed through face-to-face interviews. Women were asked to describe all foods, drinks, and snacks consumed over the last 24 hr, including a detailed description of the recipes. Food items consumed were classified into 10 food groups according to recommended classifications. Four additional food groups (red palm oil, other oils and fats, sugar and sugary drinks, and alcoholic beverages) were used for the purpose of describing women's dietary patterns. These groups were not used in the calculation of the WDDS or MDD-W.
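The score computation can be sketched in a few lines. The 10-group list below follows the FAO MDD-W classification, assumed here since the paper refers only to "recommended classifications" without enumerating the groups.

```python
# The 10 MDD-W food groups (FAO list; assumed, see lead-in above).
FOOD_GROUPS = [
    "grains, white roots and tubers, and plantains",
    "pulses (beans, peas and lentils)",
    "nuts and seeds",
    "dairy",
    "meat, poultry and fish",
    "eggs",
    "dark green leafy vegetables",
    "other vitamin A-rich fruits and vegetables",
    "other vegetables",
    "other fruits",
]

def wdds(consumed_groups):
    """Women's dietary diversity score: number of the 10 food groups
    consumed during the past 24 hr."""
    return len(set(consumed_groups) & set(FOOD_GROUPS))

def reaches_mdd_w(consumed_groups):
    """Minimum dietary diversity for women: at least 5 of the 10 groups."""
    return wdds(consumed_groups) >= 5

# A recall dominated by cereals, vegetables, and fish, as observed at
# preconception in this cohort (oils are tallied separately, so red palm
# oil does not count towards the score):
recall = ["grains, white roots and tubers, and plantains",
          "other vegetables", "meat, poultry and fish", "red palm oil"]
print(wdds(recall), reaches_mdd_w(recall))
```

This also makes the exclusion rule explicit: the four descriptive groups never enter the intersection with the 10-group list, so they cannot inflate the score.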
Women's height was measured at inclusion. Women's weight was measured twice during the preconceptional follow-up, then once a month during the gestational follow-up. Both weight and height were measured according to World Health Organization (WHO) standard procedures (Norgan, 1988). Height was measured to the nearest millimetre with a SECA 206 gauge (Hamburg, Germany). Weight was measured with calibrated electronic scales (Tefal, France) with a precision of 100 g. Body mass index (BMI) was calculated before pregnancy and women were classified as underweight (BMI < 18.5 kg/m 2 ), normal (18.5 ≤ BMI ≤ 24.9 kg/m 2 ), or overweight or obese (BMI ≥ 25 kg/m 2 ) based on WHO classification (WHO, 2018).
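The BMI classification can be expressed directly; a small sketch with illustrative values (the cutoffs are the WHO classes cited in the text).

```python
def bmi_class(weight_kg, height_m):
    """Prepregnancy BMI category per the WHO cutoffs used in the study:
    underweight < 18.5, normal 18.5-24.9, overweight or obese >= 25."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return bmi, "underweight"
    if bmi < 25.0:
        return bmi, "normal"
    return bmi, "overweight or obese"

print(bmi_class(45.0, 1.5))   # -> (20.0, 'normal')
```

Because height enters squared, small measurement errors in height shift the BMI more than equal errors in weight, which is why the protocol measures height to the nearest millimetre.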
Data were collected by seven enumerators (five nurses and two nutritionists) holding at least bachelor's degrees, with experience in field data collection. They were trained over 6 days on the 24-hr recall technique, the questionnaire and tools, and anthropometric measurements. The questionnaire was pretested by the enumerators during the training and was adjusted where needed. During data collection, the enumerators were supervised daily by an experienced nutritionist doubling as the principal investigator and supported by a team of experts in nutritional epidemiology. The supervisor checked the proper completion of the questionnaires daily as well as consistency of the answers. Data from dietary recalls were entered and cross-checked by repeated entry using the Epidata entry 3.1 software (Lauritsen & Bruus, 2004), whereas anthropometric, socio-economic, and demographic data were entered using ACCESS 2007.
Statistical analyses were performed using Stata 13 (College Station, TX, USA). We first described the basic characteristics of the sample from the primary cohort and examined whether women who became pregnant during the project (n = 316) differed from women who did not (n = 581). We presented means (SD) for continuous variables and frequencies (%) for categorical variables. The main analysis was restricted to women who had one assessment at preconception and at least one assessment during pregnancy (n = 234). The mean WDDS, the proportion of women who consumed different food groups, and the proportion of women reaching the MDD-W were compared over the entire follow-up using a linear mixed model (for continuous variables) or a logistic mixed model (for categorical variables) including a random intercept (the individual) to take into account repeated measurements for the same subject. In bivariate analyses, we examined factors associated with women's dietary diversity before pregnancy, using the WDDS as the continuous response variable in linear regression models. Factors tested included subdistricts (geographical factor); women's ages, household size, parity, type of union, ethnic groups, and women's and their husbands' education levels (socio-demographic factors); women's and their husbands' professional activities and the wealth index of the household (economic factors); and women's body mass index (nutritional factor). Variables associated with the WDDS at a level of statistical significance of 0.20 were considered for the multivariate analysis. Blocks of factors were constituted for conceptual reasons (factors belonging to the same dimension); these blocks were successively entered into the model using a manual ascending method. The final multivariate model was used to test whether the WDDS changed between the follow-up visits (preconception and trimesters 1, 2, and 3 of pregnancy).
Interaction terms factor*visit were also tested in the final model to examine whether any change in the WDDS differed according to the modality of the factors. Univariate and multivariate analyses were systematically controlled for the season because of its known effect on food availability and hence on dietary diversity. Statistical level of significance was set at p < .05.
| Limitations and strengths of the study
This study has some limitations. There was only one 24-hr recall administered at each time point; for this reason, we could not survey women's habitual dietary intake. We also focused on dietary diversity, a single dimension of diet quality, and did not take into account other dimensions or food quantities. Another limitation is the lack of a qualitative survey on the socio-cultural component for objective measurement of attitudes, behaviours, and beliefs regarding diets before and during pregnancy. Such data would have helped us gain more insight into the trends of our results. However, this was beyond the primary focus of our study, which was to investigate whether there are changes in women's dietary diversity before and during pregnancy.
Further research will focus on this purpose. Nevertheless, the cohort design of the study was a real strength and constituted a unique source of data in West Africa. As this study was part of a larger study for which biological samples were collected, we believe that the high rate of loss to follow-up was precisely due to very strong endogenous beliefs and mistrust towards the research team.
| RESULTS
A total of 897 women participated in the dietary intake study (Figure 1). Only 815 of those women were followed during the preconceptional period (primary cohort), because 82 women were already pregnant when the study started and hence entered the secondary cohort. From the primary cohort, 581 women did not become pregnant during the study and 234 women did.
At inclusion, women who became pregnant during follow-up were significantly younger than those who did not (26.8 vs. 28.2 years, p < .001; Table 1). A higher proportion of women who became pregnant lived in monogamous households and were employed in comparison to women who did not become pregnant. There were no differences in household size, parity, ethnic group, and education between the two groups. The prevalence of overweight and obesity was high in both groups, but mostly among nonpregnant women (37% vs. 26%, p = .035).
Before pregnancy, the WDDS ranged from 2 to 8 food groups, with an average of 4.3 ± 1.1 food groups (Figure 2). The WDDS also ranged from 2 to 8 food groups at trimesters 1 and 2 of pregnancy and ranged from 1 to 7 food groups at trimester 3. The mean WDDS was 4.2 ± 1.2, 4.3 ± 1.2, and 4.1 ± 1.2 food groups in the first, second, and third trimesters of pregnancy, respectively. No statistical difference was observed in the mean WDDS according to the visit. The MDD-W (5 food groups out of 10) was reached by 41.1% of women before conception. This proportion decreased to 37.5%, 36.9%, and 36.6% at trimesters 1, 2, and 3 of pregnancy, respectively, but none was statistically different from the preconception value.
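The WDDS and the MDD-W indicator reported here reduce to simple counting over the 10 food groups. A minimal sketch (helper names are ours; group labels are paraphrased from the text):

```python
def wdds(groups_consumed):
    """Women's Dietary Diversity Score: number of distinct food groups
    (out of 10) consumed during the 24-hr recall period."""
    return len(set(groups_consumed))

def reaches_mdd_w(groups_consumed, cutoff=5):
    """Minimum Dietary Diversity for Women: at least 5 of the 10 groups."""
    return wdds(groups_consumed) >= cutoff
```

A woman reporting grains, other vegetables, fish, and pulses scores 4 and does not reach the MDD-W; adding a fifth group such as nuts and seeds would meet the cut-off.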
Before pregnancy, as well as during pregnancy, women's diets were mainly composed of "grains, plantains, white roots, and tubers" (in particular maize, cassava, and their derivatives), "other vegetables" (mainly tomatoes, onions, and pepper), "other oils and fats," and "meat, poultry, and fish" (mainly fish and their derivatives; Figure 3).
Nevertheless, the percentage of women who consumed foods from the group of meat, poultry, and fish was slightly lower during the 3rd trimester of pregnancy compared with before pregnancy (p = .03).
Before pregnancy, the proportions of women who consumed "nuts and seeds" (mainly groundnuts, sesame seeds, and nere) and "pulses" (mainly cowpeas and Bambara nuts) were approximately 40% and 30%, respectively. These proportions did not differ statistically during pregnancy. "Other fruits" and "dark green leafy vegetables" were consumed by less than 30% of women before pregnancy and tended to increase at trimester 1 or 2 and to decrease back to the initial level at trimester 3 (p < .05). Overall, the proportion of women who consumed dairy products was very low, but it increased to 10.5% at trimester 3 (p = .036). Eggs were consumed by less than 5% of women, except at trimester 2 when the consumption reached 11.4% (p = .024). The consumption of foods from the "other vitamin A-rich fruits and vegetables" group was statistically lower during pregnancy as compared with the preconceptional period (p = .041). The proportion of women who consumed sugar-sweetened beverages was not significantly different before and during pregnancy (approximately 40%).
Before conception, the mean WDDS varied across the subdistricts, with the highest level observed in Akassato and the lowest level in Vekky (Table 2). Toffin women also had lower WDDS compared with the other ethnic groups (4.2 vs. 4.5 food groups, p = .001), as did women with zero or one child compared with women with two or more children. The mean WDDS gradually increased with the wealth index. Women's ages, professional activities, and education levels were not associated with dietary diversity. Household size and the professional activities and education levels of the husbands were likewise not associated with the WDDS.
In the multivariate analysis adjusted for the season, subdistrict, parity, ethnic group, and wealth index, the adjusted mean WDDS ± SEM equalled 4.3 ± 0.07 food groups at preconception, 4.2 ± 0.08 at trimester 1 of pregnancy, 4.3 ± 0.09 at trimester 2, and 4.2 ± 0.1 at trimester 3, and these differences remained nonsignificant (Table 3). Only the parity and the wealth index remained positively associated with the mean WDDS in this adjusted model. The subdistrict had no effect on the mean WDDS, but the interaction term subdistrict*visit was statistically significant, suggesting that the change in the mean WDDS across the four visits was different across the subdistricts. A stratified analysis by subdistrict revealed that the mean WDDS did not statistically change over the four visits in Akassato and in Vekky, whereas it decreased during pregnancy in Sô-Ava compared with the preconception visit, in particular at trimester 2 ( Figure 4).
| DISCUSSION
The present study used unique cohort data to describe the dietary diversity of women of reproductive age before conception and during pregnancy in two districts of Southern Benin. Our findings showed that dietary diversity scores of women before conception were low compared with the cut-off of five food groups in these semiurban areas, more so in the subdistricts of Sô-Ava and Vekky than in the subdistrict of Akassato. Barely 41% of women reached the minimum dietary diversity for women, and their diets were primarily composed of cereals, oils, vegetables, and fish. Before pregnancy, dietary diversity of women was strongly associated with the subdistrict in which they lived, the ethnic group to which they belonged, the number of children they had, and the wealth index of their household. Overall, the WDDS did not vary when women became pregnant, and the scores remained low at all trimesters of pregnancy. However, in the particular setting of Sô-Ava, the WDDS slightly decreased during pregnancy, in particular at trimester 2, compared with the preconception period. The nutritional needs of pregnant women are high, and diversified diets are necessary to meet those needs.
We observed no difference in dietary diversity between the preconception and pregnancy periods, thus suggesting pregnant women do not change their diet upon learning they are pregnant. The low dietary diversity observed in preconception could therefore persist during pregnancy and put pregnant women at risk of micronutrient deficiencies with associated consequences on the baby (Hjertholm et al., 2018;Young et al., 2015).
A study conducted in five resource-poor settings has documented that dietary diversity was associated with micronutrient adequacy among women (Arimond et al., 2010). The link between the mother's diet and the occurrence of malformations or pathologies such as hypertension and diabetes in their children as adults has been widely demonstrated (Ramakrishnan, Grant, Goldenberg, Zongrone, & Martorell, 2012;Weber, Ayoubi, & Picone, 2015).
In the multivariate analysis, parity and household wealth index were both associated with a higher WDDS at preconception and with a positive change in the WDDS throughout follow-up. The socio-economic level of the household is a well-known determinant of women's dietary diversity in African contexts (Doyle, Borrmann, Grosser, Razum, & Spallek, 2017; Huybregts et al., 2009; Krige et al., 2018; Rosen et al., 2018). Here, we showed that women with fewer economic constraints had better access to varied foods and were able to increase their dietary diversity during pregnancy. Women with several children were also more likely to cook every day for them and to benefit themselves at the same time. Our study did not show a significant relationship between a woman's education level and dietary diversity before or during pregnancy. This may be due to the small number of women with formal education (n = 25). Previous studies in some developing countries have often shown a link between the level of education of women and the quality of their diet (Huybregts et al., 2009; Mayén, Marques-Vidal, Paccaud, Bovet, & Stringhini, 2014; Rosen et al., 2018). In our study, we did observe small variations in some food group consumption. For example, the consumption of dairy products slightly increased at trimester 3 of pregnancy compared with the other visits. Extensive knowledge of the study area leads us to hypothesise that women drink large amounts of herbal teas, with or without milk, as well as other fresh dairy drinks in late pregnancy (Latham, 2001). It is likely that similar beliefs about eating eggs exist in Benin. These beliefs generally concern foods or food groups that are not very frequently consumed, so the impact on dietary diversity scores was low.
| CONCLUSION
Our study used a cohort design to show that dietary diversity scores of women living in two semiurban districts of Southern Benin were low before conception and did not change during pregnancy. This lack of change was mainly due to socio-economic constraints and possibly to negative behaviours related to socio-cultural beliefs. Although increasing food availability and accessibility in such a context is challenging, efforts can be made to increase awareness and inform women and the whole community of the importance of a diversified diet during pregnancy.
ACKNOWLEDGMENTS
We are sincerely grateful to all families who participated in the study, as well as to the midwives, nurses, and community-health workers who worked hard to recruit and follow participants. We also thank the whole RECIPAL and Nutripass (IRD) team. This study was
CONFLICTS OF INTEREST
The authors declare that they have no conflicts of interest.
Sign Language Recognition and Translation: A Multi-Modal Approach Using Computer Vision and Natural Language Processing
Sign-to-Text (S2T) is a hand gesture recognition program in the American Sign Language (ASL) domain. The primary objective of S2T is to classify standard ASL alphabets and custom signs and convert the classifications into a stream of text using neural networks. This paper addresses the shortcomings of pure Computer Vision techniques and applies Natural Language Processing (NLP) as an additional layer of complexity to increase S2T’s robustness.
Introduction
Globally, sign language is one of the main languages for those who cannot communicate verbally. Despite its global presence, not many people understand or use it. In 2020, 48 million people in the United States alone experienced some form of hearing loss, with fewer than 500,000 (about 1%) of them using sign language regularly (Lacke, 2020; NIDCD, 2021). The World Health Organization (WHO) estimates that hearing loss will affect nearly 2.5 billion individuals by 2050 (WHO, 2023). With these setbacks, signers may find it challenging to communicate with other individuals unfamiliar with their mode of communication.
While mild hearing loss can be remedied with hearing aids and rehabilitation, these solutions may often be too expensive. Individuals can alternatively learn sign language. Hand gestures are a form of non-verbal communication used by individuals in conjunction with speech to communicate. With the increasing use of technology, hand-gesture recognition is considered an essential aspect of Human-Machine Interaction (HMI), allowing the machine to capture and interpret the user's intent and respond accordingly. The ability to discriminate between human gestures can help in several applications that range from virtual and augmented reality to healthcare services (Ceolini et al., 2020).
As technology becomes easier to use and accessible, many people can likely perform simple commands with computer devices, such as typing text and video streaming. To address the problem statements, we propose S2T, a solution to close the sign language knowledge gap by translating simple hand gestures into text.
Sign-to-Text v1
The first Sign-to-Text (S2T) iteration was implemented using Computer Vision to classify the English alphabet and custom gestures for text, such as space and delete. Computer Vision allows for gesture learning and recognition through images or video by identifying repeated patterns. Specific key descriptors can be isolated in a given frame using preprocessing techniques to eliminate noise and allow the neural network to perform on the highest-quality data. While this process allows for the appropriate classification of newly introduced data, Computer Vision alone is not accurate enough to classify all ASL signs due to the limitations of Computer Vision and the nuances of ASL.
Classification accuracy in Computer Vision is dependent on the quality of the data. Two key factors that affect performance are image lighting, which affects how much detail can be seen, and image quality, which affects how much detail is retained. These can be seen within the data as qualities such as object luminosity, palm orientation, and hand shape.
The nuances of ASL are due to its limited range of signs. About 10,000 distinct ASL signs correspond to roughly 200,000 English words. Some signs differ from others by a slight hand rotation, while others are polysemous. Signs that vary slightly from one another and signs that have multiple meanings make it near-impossible for Computer Vision alone to classify the signer's entire message with 100% accuracy, especially when trying to sign long sentences. Here we introduce Natural Language Processing (NLP) in conjunction with Computer Vision to overcome ASL nuances and address the weaknesses of Computer Vision as a standalone solution (Klingler, 2021).
Natural Language Processing
NLP is the computer's ability to understand language in both verbal and written forms. NLP is used in various applications, such as Speech Recognition, Language Translation, and Image Interpretation. In recent scientific research, it is also used to investigate inter-species communication between humans and whales to understand and better aid them. S2T can improve output results by leveraging specific NLP techniques such as autocorrection and context awareness. S2T can also enhance accessibility by applying Machine Translation (MT).
Autocorrect
Autocorrect is a word processing task that identifies misspelled words and tries to resolve them by providing potentially intended words as a replacement. Autocorrect can be implemented in many ways depending on its use case, but all implementations share the same foundation: they rely on some form of corpus or dictionary (D'Agostino, 2021).
The first iteration of S2T can correctly classify hand gestures with 82.76% accuracy. S2T can benefit from autocorrect by identifying misclassified alphabet gestures and replacing them with candidate words. This may help improve S2T's accuracy in achieving the desired final output.
Context Awareness
Simple autocorrection may not fully capture the user's intent in their sentences. Simple algorithms such as the Levenshtein distance compare misspelled words to candidates based only on the number of edits between them. This type of algorithm may often alter and lose the original context, making it hardly usable for regular conversational language processing. Due to the complexity of languages, context awareness can be used to help retain the original context and convey user intent. Context awareness can be implemented in many ways, including part-of-speech tagging and attention mechanisms. The main idea behind context awareness is to analyze the sentence and extract key terms. These terms will then determine the best word to replace a target word (autocorrect), provide insight, and suggest the following word (autocomplete). When context awareness is used with autocorrect, it is more likely to retain the context of a given sentence and less likely to veer off (Wood, 2014).
Machine Translation
MT is an NLP technique that translates one language into another without the help of humans. There are four main types of MT techniques: Rule-Based Machine Translation (RBMT), Statistical Machine Translation (SMT), Neural Machine Translation (NMT), and Hybrid Machine Translation (HMT). Early iterations of MT use the rule-based approach to extrapolate grammatical rules as the basis for building sentences. However, this approach poses several limitations, such as the inability to process complex sentence structures and idioms. SMT is another approach where the system uses extensive bilingual data and statistical models to determine the most probable output. Like RBMT, SMT also cannot process complex sentences and idioms (Martin et al., 2011).
NMT is a more recent approach that utilizes deep learning models. NMT takes advantage of being trained over large amounts of data, enabling it to process complex sentences and idioms, as opposed to RBMT and SMT. Depending on how the data and model are prepared, these single-network approaches may not catch all translations. HMTs can be used to combat this by combining translation models to improve the output further (Brownlee, 2019; Torregrosa et al., 2019; Aulamo et al., 2021).
This paper is organized as follows: Section 2 reviews the previous related work. Section 3 details the proposed methodology. Section 4 outlines the experimental design. Section 5 describes and analyzes the experimental results, and finally, Section 6 concludes the paper and provides our future directions.
Related Work
This research will explore NLP techniques and apply them to S2T to enhance the translation quality after making prior classifications in Computer Vision. We know that the research field combining NLP and ASL is limited. However, it is noted that NLP can be applied to ASL applications when provided with some consumable input, such as text. In the field of NLP, immense research has been put into autocorrect, context awareness, and machine translation. Since S2T can be broken into two parts (autocorrect and machine translation), we treat each part as an individual entity.
Autocorrect algorithms can vary in performance depending on their use case. However, they all follow a similar pattern by cross-referencing an accurate corpus to identify misspelled words. TextBlob is a standard open-source library launched in 2013 and has been widely used as a standard autocorrect tool (TextBlob, 2013). A study on TextBlob shows that it can correct 54.6875% of the mistakes in a given prompt. This low score can be due to TextBlob's over-correcting behavior and lack of information to correct it to the target word (Popovic, 2023).
There are also many machine translation algorithms and architectures that each perform best depending on the specific application. Transformer models commonly show great success and have been a standard in many NLP tasks since Google introduced them in 2017 (Caswell and Liang, 2020).
Sign-to-Text v2
S2T is equipped with computer vision techniques to translate sign language into text. We propose NLP as a second layer of data processing to enhance translation accuracy and introduce an extra translation feature to make the program more accessible. This additional layer will address the main drawbacks of Computer Vision as a standalone solution.
Classification Improvement
One major flaw of S2T-v1 is its low classification accuracy of 82.76%. Given the letter-by-letter translation nature of S2T, a letter-by-letter classification will most likely result in typos in a given text. To reduce the number of typos based on gesture classification, autocorrect can be used to detect and fix them. Traditionally, autocorrect can identify misspelled words by comparing the target words against a known dictionary or corpus. Advanced autocorrect features, such as context awareness, must be utilized due to the nature of how misspellings are created. With context awareness, we can further analyze the text stream to provide a closer and more appropriate approximation of the user's intended sentence.
Language Translation
Another feature S2T can leverage is transforming the English output into another language. This additional feature does not directly affect the classification accuracy of the original S2T implementation. Instead, language translation makes it more accessible for users to communicate effectively with speakers of various languages. The primary challenge that S2T will face is retaining context through its text processing transitions. As machine translation is the final layer of S2T, it will face potential inaccuracies from the initial phase of computer vision classification and from the autocorrect technique. Our research explores and compares different autocorrect and machine translation methods to ensure the closest possible translation to what the user intends to convey.
Datasets
For autocorrect to perform well, it requires a dataset that contains correctly spelled words as the source of truth (GWICKS, 2018). Without this, the autocorrect would perform erroneous corrections, such as correcting correct words into incorrect ones. This dataset must be pruned of obscure words, as these words are infrequent in regular conversations; such sparse representations may otherwise negatively impact autocorrect accuracy.
The other dataset required for autocorrection is a dictionary of words and corresponding frequencies, on which the autocorrect will base its corrections. Additionally, from the prior dataset, we can create a second dataset with words and their corresponding probabilities of appearing in the English language (Tatman, 2017).
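A dictionary-plus-frequency corrector of the kind described can be sketched in a few lines. This is a Norvig-style single-edit candidate generator used to illustrate the approach, not the authors' exact algorithm:

```python
def edits1(word):
    """All strings one deletion, transposition, replacement, or insertion away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word, freq):
    """Return the most frequent dictionary word within one edit of `word`."""
    if word in freq:
        return word
    candidates = [w for w in edits1(word) if w in freq] or [word]
    return max(candidates, key=lambda w: freq[w])
```

With `freq = {"the": 100, "cat": 10, "hat": 5}`, the misspelling "teh" is corrected to "the" because it is the most frequent in-dictionary candidate one edit away.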
Our work serves ASL, which directly transcribes into English. Therefore, it is necessary for any dataset we use to have bilingual alignments with the English language. Tatoeba, an open-source collective for sentences and translations, is our selected source for the translation task (Tatoeba, 2006). Phrase pairs in the retrieved data consist of user-provided, collectively evaluated, and approved translations for many languages, including low-resource languages. As this work does not delve extensively into machine translation, our team found that the one-to-many translation mappings at the sentence level are well suited to our application.
In preparation for the NMT and SMT models observed in this work, given that we have chosen not to develop single-model multilingual support, all bilingual pairs are uniformly processed. All punctuation is stripped, and all characters are lowercased where applicable. For NMT specifically, all tokens are vectorized before model training. We have also limited the vocabulary size for all models to reduce complexity in this iteration.
Autocorrection
We propose an autocorrection algorithm (Algorithm 1). Our autocorrection algorithm follows a general structure; however, we wanted to experiment with which word distance algorithm would work best for our project domain. Our team considered researching the performance differences between the Minimum Edit Distance, Needleman-Wunsch, and Damerau-Levenshtein algorithms. As our baseline, the TextBlob library's correction function is used.
The Minimum Edit Distance algorithm, known formally as the Levenshtein distance algorithm, measures the minimum difference between two words, x and y. The algorithm's recurrence is commonly used in dynamic programming (Nam, 2019). The Minimum Edit Distance algorithm involves three cost variables: del_cost, ins_cost, and repl_cost, for each deletion, insertion, and replacement of a letter in word x at index i with the letter in word y at index j, respectively. These three variables can be set to whichever value the user wishes, but for our purposes, we set del_cost to 1, ins_cost to 1, and repl_cost to one of two values. Namely, if the letter of word x at index i is not equal to that of word y at index j, then repl_cost is set to a variable miss_cost, which is 2. Otherwise, repl_cost is set to another variable match_cost, which is 0.
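Under the stated costs (del_cost = 1, ins_cost = 1, miss_cost = 2, match_cost = 0), the recurrence fills a dynamic programming table. A sketch of that computation, with variable names following the description above:

```python
def min_edit_distance(x, y, del_cost=1, ins_cost=1, miss_cost=2, match_cost=0):
    """D[i][j] = minimum cost to convert x[:i] into y[:j]."""
    m, n = len(x), len(y)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * del_cost
    for j in range(1, n + 1):
        D[0][j] = j * ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            repl_cost = match_cost if x[i - 1] == y[j - 1] else miss_cost
            D[i][j] = min(D[i - 1][j] + del_cost,       # delete x[i-1]
                          D[i][j - 1] + ins_cost,       # insert y[j-1]
                          D[i - 1][j - 1] + repl_cost)  # replace or match
    return D[m][n]
```

Note that with miss_cost = 2, a substitution costs as much as a deletion plus an insertion, so the classic pair "intention" and "execution" scores a distance of 8.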
The Needleman-Wunsch algorithm generalizes the Levenshtein distance and considers global alignment (Kellis, 2021). It functions very similarly to the Minimum Edit Distance algorithm, filling in a similar table of values, but is used primarily in bioinformatics to align protein or nucleotide sequences. Because of this, gaps are punished and given a designated gap penalty in the algorithm's overall calculations. In the algorithm definition, g is the gap penalty, and s(x_i, y_j) is the similarity score between words x and y at indices i and j, respectively. Unlike Minimum Edit Distance, which minimizes the number of edits to convert some word x to another word y, Needleman-Wunsch maximizes the score that an alignment between two sequences can achieve.
In the algorithm definition defined in (3), g is the gap penalty, and s(x i , y j ) is the similarity score between words x and y at indices i and j, respectively.Unlike Minimum Edit Distance, which minimizes the number of edits to convert some word x to another word y, Needleman-Wunsch maximizes the score that an alignment between two sequences The Damerau-Levenshtein algorithm (4) calculates the Damerau-Levenshtein distance between two given strings by following the same process as the classical Levenshtein distance but differs from this by including transpositions in its operations calculations (Zhao and Sahni, 2019).This algorithm first determines the optimal string alignment distance and then calculates a distance with adjacent transpositions.The applications of this algorithm include DNA and fraud detection, and the U.S. government uses it in export control.
TextBlob is a Python library for processing textual data. In our project, we used its .correct() function to identify and correct misspelled words in a given string. This function utilizes a dictionary of English words to determine whether a word is correct. If incorrect, a list of possible words based on edit distances is generated, and the word with the least edit distance is selected.
To bolster the accuracy of our autocorrection algorithm, we also considered the implications of context awareness. The context awareness algorithm we used is part of the SpaCy module: the ContextualSpellCheck (Goel, 2020). This module is loaded into a SpaCy pipeline that can then operate on a given sentence string. ContextualSpellCheck will then analyze the entire input, identify misspelled words using an English dictionary, and suggest what each incorrect word should be based on the context of the words around it. The context of each of these words is trained through a model at word-by-word, sentence-by-sentence, and document (entirety) levels. These suggested words were then utilized in our minimum edit distance function to increase the priority of these context-based words being chosen as the ultimate correction. The SpaCy module ContextualSpellCheck was chosen over similar approaches, such as BERT (Bidirectional Encoder Representations from Transformers), due to its compatibility with our code. SpaCy allowed for quick evaluations and gave us the means to numerically increase priority for individually chosen words.
In our proposed autocorrect algorithm (1), we implement the SpaCy-ContextualSpellCheck pipeline as the assignment to sgt using the incorrect corpus src. We then skew the original autocorrect suggestions made by one of the algorithms above for each word from src, if context awareness is allowed and the word is recognized in sgt. This aims to take the contextual suggestions and boost the probabilities of choosing those words. As a result, the words chosen before or after contextual skewing can lead to different words being given as the top result in ac_sgt.
To process the corpus, the algorithm temporarily "removes" directly subsequent punctuation for each word seen. This punctuation is then "returned" once the word is processed. The reason for this particular step results from how each word is processed. The current algorithm can receive an input word with punctuation and output it without that punctuation, so the punctuation would get "eaten." If we allowed this to continue for an entire corpus, the corrected corpus could have a different contextual meaning from its original. As such, each word must be sub-processed so that if there is punctuation, that punctuation is saved and returned to its original place.
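The save-and-return punctuation step can be sketched as follows, handling only directly subsequent (trailing) punctuation as described; the function names are ours:

```python
import string

def split_trailing_punct(token):
    """Split a token into its core and any directly subsequent punctuation."""
    core = token.rstrip(string.punctuation)
    return core, token[len(core):]

def correct_token(token, correct_fn):
    """Correct only the core of the token, then reattach its punctuation."""
    core, punct = split_trailing_punct(token)
    return correct_fn(core) + punct
```

For example, correcting "helo." this way yields "hello." instead of silently dropping the period.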
Machine Translation
There are many approaches to performing MT, as mentioned in Section 1.2.3. Considering the use cases for our pipeline, we seek methods that can produce quality translations with low resource overhead and high speed. Initially, we decided to utilize large language models (LLMs) such as T5 or GPT for the end-to-end task. However, to better understand the modern machine translation task from its roots and assess methods built solely for translation, we have chosen to utilize NMT as the base approach, with SMT as a supplement to the outputs of the base model. Choosing these two presents an opportunity to explore an HMT approach, which will be further elaborated in Section 6 as future work.
The NMT model utilized in this framework is the ever-familiar Transformer, trained on bilingual pairs. The Transformer is known to be a significant improvement over previous neural architectures, like Recurrent Neural Networks (RNNs) and Gated Recurrent Units (GRUs), for sequence transduction (Vaswani et al., 2017). The key feature of the Transformer is its implementation of multi-head attention modules and, more generally, attention-based methods in artificial neural networks.
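As a reminder of the core operation, scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, can be written in a few lines of plain Python. This is a didactic single-head sketch on toy vectors, not the trained model:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    # Scaled dot-product attention, written with plain lists:
    # each query attends over all keys, then mixes the values.
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```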
Simple word-based SMT was selected to supplement NMT, namely the IBM model series. As an overview, the IBM models consist of several iterations, each aiming to resolve the deficiencies of the previous one, that utilize word-alignment probabilities to generate tokens. Selective features such as fertility and context are included, depending on the model version, to improve the model outputs.
In our work, we employed IBM Models 1 and 2 from Python's NLTK library, trained on the same bilingual pairs as the Transformer. These early iterations of the IBM series are outdated as standalone, well-performing translation models. Despite this, we have chosen these models as a preliminary mechanism for establishing confidence in the outputs of the NMT model. We ran each of our algorithms over fifty sentences with randomly distributed incorrect words. We compared these results to the corresponding correct sentence counterparts to determine the percentage of errors that were correctly fixed.
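A dependency-free sketch of IBM Model 1's EM training follows (a toy stand-in for NLTK's `IBMModel1`; the bitext and iteration count are illustrative):

```python
from collections import defaultdict

def ibm_model1(bitext, iterations=10):
    # Minimal IBM Model 1: EM over word-translation probabilities
    # t(f|e) for (foreign, english) sentence pairs.
    f_vocab = {f for fs, _ in bitext for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))  # uniform prior
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        for fs, es in bitext:
            for f in fs:
                z = sum(t[(f, e)] for e in es)   # normalisation
                for e in es:
                    c = t[(f, e)] / z            # expected count (E-step)
                    count[(f, e)] += c
                    total[e] += c
        for (f, e), c in count.items():
            t[(f, e)] = c / total[e]             # re-estimate (M-step)
    return t
```

On the classic two-pair example below, the shared word "la" correctly aligns to "the" after a few iterations.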
Algorithm
Our findings showed that the Minimum Edit Distance (Levenshtein) algorithm utilizing context awareness performed the best out of all tested algorithms. In contrast, the base Needleman-Wunsch without context awareness performed the poorest. Without context awareness, Damerau-Levenshtein performed the best.
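The gap between Levenshtein and Damerau-Levenshtein comes down to the transposition operation. A sketch of the optimal string alignment variant (one common reading of "Damerau-Levenshtein"; variable names are ours):

```python
def osa_distance(a, b):
    # Optimal string alignment ("restricted Damerau-Levenshtein"):
    # Levenshtein plus a unit-cost adjacent transposition.
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = a[i - 1] != b[j - 1]
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                # adjacent transposition, e.g. "thier" -> "their"
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[-1][-1]
```

A transposed pair such as "thier"/"their" costs 1 here but 2 under plain Levenshtein, which is why the Damerau variant can lead without context.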
Overall, context awareness improved each algorithm that we tested. Needleman-Wunsch received the most improvement, at ten percent, but did not outrank the other context-aware options. Damerau-Levenshtein benefited the least from context awareness, and Minimum Edit Distance's percentage of errors fixed increased enough to move it into first place in the algorithm rankings. All three models were trained and evaluated on over 200,000 English-French bilingual pairs provided by Tatoeba (Tatoeba, 2023).
Model
The Bilingual Evaluation Understudy (BLEU) metric is the prominent standard for supervised evaluation of the quality of machine-generated translations. As shown in Table 2, it is used to evaluate the Transformer model to verify that our implementation corresponds with other NMT standards. The IBM Models were not evaluated with BLEU, as we decided that the purpose of these selected SMT methods would be better suited to unigram overlaps. Hence, we also evaluated all models with ROUGE-1. Although not used as often as BLEU for judging translation quality, we selected this metric to determine each model's efficacy in generating relevant words for a desired translation. These observations drive future work on translation in our pipeline.
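ROUGE-1 as used here reduces to unigram overlap. A minimal sketch, assuming whitespace tokenization and F1 reporting (the text does not specify whether precision, recall, or F1 is reported):

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    # ROUGE-1: unigram overlap between a candidate translation and a
    # reference, reported here as an F1 score.
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())   # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```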
For comparison, training and evaluating the original Transformer on the WMT14 English-to-French dataset scored 38.1 BLEU. Using the same architecture on the Tatoeba dataset, we obtained a score of 31.8, a 6.3-point decrease.
Conclusions and Future Work
This paper proposes a multi-modal approach to improve sign language recognition and translation by combining computer vision and NLP techniques. By applying autocorrect as a fail-safe for computer vision classification, our team was able to fix 63.25% of the errors present in our dataset, beating the baseline model by 9.94%. This improvement in word correction allows the machine translation layer to perform better, as it can retain the context closest to the intended meaning. However, the NMT model implemented in this study performed slightly below the original Transformer for English-to-French translation, albeit on different datasets. The evaluations conducted for SMT also show poor performance on the selected database. More extensive tuning and training on another corpus, such as those from past WMT conferences or OPUS, would benefit all methods selected here. This may also bring the results of our implementation closer to those of related works utilizing the same architectures. As MT relies on the results of autocorrect, we plan to investigate improving the implementation of autocorrect further. Misclassifications primarily originate from the computer vision stage. While these misclassifications are due to the similarity between gestures, not all gestures are equally similar. This suggests that autocorrect could benefit from emphasizing weights for each classification group. By applying an additional bias per classification group, autocorrect can achieve increased correction accuracy overall.
Further improvements to autocorrect focus on an improved method of context awareness. The current implementation uses the SpaCy ContextualSpellCheck pipeline. While it already improves upon standard autocorrect algorithms, the overall performance is still not substantial enough to be reliably used. Our team researched using the Viterbi algorithm to improve SpaCy by better determining the best corrections using part-of-speech tagging and hidden Markov models. We can further enhance SpaCy by directly implementing a BERT model step into the pipeline, allowing for more accurate predictions. Although the MT results in this work underperformed, we are looking to merge the sequencing capabilities of the attention-based neural network with the purely linguistic nature of the statistical approach to improve translation quality. Our future work seeks to leverage these approaches into a confidence-driven hybrid approach that justifies NMT outputs and resolves tokens estimated to have high uncertainty through SMT (Wang et al., 2016).
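The Viterbi decoding mentioned above is standard. A compact sketch on a textbook-style HMM (the states and probabilities below are illustrative, not from our tagger):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # Standard Viterbi decoding: most probable hidden-state sequence
    # for an observation sequence under an HMM.
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 0.0), [s])
          for s in states}]
    for o in obs[1:]:
        prev = V[-1]
        cur = {}
        for s in states:
            # best predecessor for state s at this step
            p, path = max(
                (prev[r][0] * trans_p[r][s] * emit_p[s].get(o, 0.0),
                 prev[r][1]) for r in states)
            cur[s] = (p, path + [s])
        V.append(cur)
    prob, path = max(V[-1].values())
    return path, prob
```

On the classic two-state weather example, the decoder recovers the well-known best path.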
Table 2: Results of each algorithm by BLEU and ROUGE metrics on the Tatoeba EN-FR dataset. IBM Models were not evaluated on BLEU-4.
Optimal Policies for Deteriorating Items with Maximum Lifetime and Two-Level Trade Credits
The retailer's optimal policies are developed when the product has a fixed lifetime and the units in inventory are subject to deterioration at a constant rate. This study is mainly applicable to pharmaceuticals, drugs, beverages, dairy products, and so forth. To boost demand, offering a credit period is considered as a promotional tool. The retailer passes on to the buyers the credit period received from the supplier. The objective is to maximize the retailer's total profit per unit time with respect to the optimal retail price of an item and the purchase quantity during the optimal cycle time. The concavity of the total profit per unit time is exhibited using inventory parametric values. A sensitivity analysis is carried out to advise the decision maker to keep an eye on critical inventory parameters.
Introduction
In business transactions, the offer to settle dues against purchases without any interest charges from the supplier is attractive for the retailer. During this permissible delay period, the retailer can sell the item, generate revenue, and earn interest on it by depositing it in a bank or financial firm. Goyal [1] developed a mathematical model to compute the economic order quantity when delay in payments is permissible. The literature review by Shah et al. [2] gave up-to-date references on trade credit and inventory modeling. Sarkar et al. [3] developed an inventory model considering trade credit and a price discount offer.
Huang [4] established that the retailer benefits further if the credit period received from the supplier is passed on to the customers. The economic order quantity is computed when the supplier offers the retailer a credit period and the retailer passes a shorter credit period on to the customers. This scenario is known as two-level trade credit. Huang [5, 6] extended the above model with a floor constraint and a finite replenishment rate, respectively. Teng and Chang [7] analyzed the two-level trade credit scenario by relaxing this assumption. Pal et al. [8] analyzed a three-stage trade credit policy in a three-layer supply chain.
Another important parameter in inventory modeling is the deterioration of items, for example volatile and radioactive chemicals, medicines and drugs, fruits and vegetables, electronic gadgets, and so forth. Ghare and Schrader [9] gave the first inventory model for exponentially decaying items. Shah et al. [10], Goyal and Giri [11], and Bakker et al. [12] collected articles on deteriorating inventory modeling. Leśniewski and Bartoszewicz [13] applied the control-theoretic approach to design a new replenishment strategy for inventory systems with perishable stock. Sana [14] discussed optimal selling price and lot size with time-varying deterioration and partial backlogging. Most of the articles cited in these reviews considered infinite lifetime of the product.
In this paper, we analyze an EOQ model for the retailer under the following assumptions: (1) items in inventory are deteriorating continuously and have a maximum lifetime, and (2) the retailer follows two-level trade credit financing. The goal is to maximize the total profit per unit time for the retailer with respect to cycle time. Finally, we carry out a sensitivity analysis to study the effect of one inventory parameter at a time on the optimal solution. Based on it, managerial insights are discussed for the retailer. The paper is organized as follows. The introduction is given in Section 1. In Section 2, notations and assumptions are listed to formulate the proposed problem. Section 3 derives the profit function. In Section 4, numerical examples and sensitivity analyses are carried out to validate the mathematical model. The conclusion and the future scope of the developed model are exhibited in Section 5.
Notations and Assumptions
We will use the following notations and assumptions to develop the mathematical model of the problem under consideration, including the procurement quantity per cycle (a decision variable) and the retailer's total profit per unit time.
Assumptions
(1) The inventory system under study deals with deteriorating items having an expiry rate. The deterioration rate tends to 1 when time tends to the maximum lifetime.
Following Sarkar [20], Chen and Teng [21], and Wang et al. [22], the functional form for the deterioration rate is θ(t) = 1/(1 + m − t), where m denotes the maximum lifetime. There is no repair or replacement of deteriorated items during the cycle time.
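The cited works (Sarkar [20]; Chen and Teng [21]) use the maximum-lifetime form θ(t) = 1/(1 + m − t), which indeed tends to 1 as t approaches m. A quick numeric check of this assumed form:

```python
def deterioration_rate(t, m):
    # theta(t) = 1 / (1 + m - t): maximum-lifetime deterioration rate
    # (assumed form, following the cited works); valid for 0 <= t <= m.
    return 1.0 / (1.0 + m - t)
```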
Mathematical Model
The retailer's initial inventory of Q units depletes to zero at the end of the cycle due to the combined effect of demand and time-dependent deterioration. Hence, the rate of change of the inventory level at any instant of time is governed by the differential equation (2) with zero terminal inventory. The solution of differential equation (2) gives the inventory level and, consequently, the retailer's order quantity, the sales revenue, and the ordering cost OC.
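Under the assumed forms (constant demand R, deterioration rate θ(t) = 1/(1 + m − t), terminal condition I(T) = 0; symbol names are ours), the ODE dI/dt = −R − θ(t)I(t) has the closed form I(t) = R(1 + m − t) ln((1 + m − t)/(1 + m − T)), with order quantity Q = I(0). A numeric cross-check of this sketch:

```python
import math

def inventory_level(t, R, T, m):
    # Closed-form inventory level under the assumed demand and
    # deterioration forms; I(T) = 0 and Q = I(0).
    return R * (1 + m - t) * math.log((1 + m - t) / (1 + m - T))

def inventory_euler(t, R, T, m, steps=100_000):
    # Euler check: integrate the ODE backwards from T down to t.
    I, s = 0.0, T
    h = (T - t) / steps
    for _ in range(steps):
        # stepping backwards in time: dI = (R + theta(s) * I) ds
        I += (R + I / (1 + m - s)) * h
        s -= h
    return I
```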
The purchase cost of Q units is PC = CQ, and the holding cost is computed over the cycle. Next, we need to compute the interest earned and the interest charged for the retailer in the following two cases.
Case 1 ( ≥ + ). Here, the retailer has sold all the items before the permissible time, so the interest charged is zero; that is, IC1 = 0. The retailer generates revenue from the beginning of the cycle and settles the account at the permissible time, so the retailer earns interest on this revenue per cycle.

Case 2 ( ≤ + ). Here, the retailer lacks the funds to settle the account at the due date because the customer will settle the account later. So, the retailer pays interest charges on the unsold stock and earns interest on the revenue generated in the interim. Hence, the total profit per unit time for the retailer follows.

(B) Suppose ≤ . Here, the retailer does not generate any revenue from the customer, so the interest earned by the retailer is IE3 = 0. Interest is charged for all the items. The total profit per unit time for the retailer follows. The goal is to maximize the total profit per unit time with respect to the cycle time when items in inventory are deteriorating and have a maximum lifetime. The nonlinearity of the objective functions in (11)-(12) and (14) does not allow us to obtain a closed-form solution. We analyze the model with numerical values for the inventory parameters in the next section.
Numerical Examples
The necessary condition to optimize the profit function is to set its derivative with respect to the cycle time equal to zero and follow the steps given below to select the best solution for the retailer.
Step 1. Assign values to all inventory parameters.
Step 2. If ≥ + , then compute the total profit per unit time from (11); otherwise, compute it from (12). By knowing the optimum cycle time, the retailer can determine the order quantity using (4).

Step 3. For < , the retailer's replenishment time can be calculated by setting the derivative of the profit in (14) equal to zero. Obtain the total profit per unit time from (14) and the order quantity from (4).
We consider the following examples to validate the mathematical formulation.
Example 2. Take = 0.72 years; all other inventory parameters are as given in Example 1. The cycle time obtained is 0.1267 years. We have < + , so the corresponding profit per unit time for the retailer is $8559.39, obtained by purchasing 129.45 units. The concavity of the profit per unit time is exhibited in Figure 2.
Example 3. To demonstrate the scenario < , consider = 0.6 years and = 0.8 years. Setting the derivative of (14) to zero gives a cycle time of 0.2260 years. The profit is $7516.68 and the purchase quantity is 234.99 units. Figure 3 shows that the profit obtained is concave.
(1) (Figure 4) The retailer's cycle time is very sensitive to the credit period offered to the customer. (2) (Figure 5) The retailer's total profit per unit time increases sharply when the demand and the selling price of an item increase. Settling the account at a later date is also beneficial to the retailer. Though the maximum lifetime of the product is uncontrollable, it can be managed to increase the profit; the retailer can adopt advanced facilities to extend the life of the product. An increase in purchase cost decreases the profit drastically.
The retailer must maintain a balance between the two credit periods. By placing orders frequently, the retailer will increase the ordering cost, so a trade-off is also required between the ordering cost and the credit period.

Figure 6 shows how a smaller delay period encourages the decision maker to buy a larger quantity.
Conclusions
In this paper, an ordering strategy is studied for the retailer when the product has a fixed lifetime and is deteriorating in nature. It is established that the retailer should intelligently decide the payment times for the settlement of the accounts with the supplier and with the customer. This will reduce the risk of default by customers. A future study with stochastic or fuzzy demand would be more practical. Further research can analyze risk reduction using reliability theory.
2.1. Notations. The model uses: the constant demand rate; the ordering cost per order; the purchase cost per unit; the unit sale price (greater than the purchase cost); the inventory holding cost (excluding interest charges) per unit per unit time; the interest earned per $ per year; the interest charged per $ for unsold stock per annum by the supplier; the credit period offered by the supplier to the retailer; the credit period offered by the retailer to the customer; the time-varying deterioration rate (between 0 and 1); the maximum lifetime (in years) of the deteriorating item; the inventory level at any instant of time; and the cycle time (a decision variable).
Figure 1: Concavity of total profit with respect to cycle time for ≥ + .
(4) The credit period offered to the customers by the retailer results in revenue inflow during the corresponding interval [23]. (5) When ≤ + , the retailer pays interest at the stipulated rate for unsold stock. When > + , the retailer settles the account on time and does not incur any interest charges during the cycle.
Shortages are not allowed. Lead time is zero or negligible.
A ROBUST MULTI-OBJECTIVE MODEL FOR MANAGING THE DISTRIBUTION OF PERISHABLE PRODUCTS WITHIN A GREEN CLOSED-LOOP SUPPLY CHAIN
The required processes of supply chain management include optimal strategic, tactical, and operational decisions, all of which have important economic and environmental effects. In this regard, efficient supply chain planning for the production and distribution of perishable products is of particular importance due to its leading role in the human food pyramid. One of the main challenges facing this chain is the time at which products and goods are delivered to the customers; customer satisfaction increases through this. In this research, a bi-objective mixed-integer linear programming (MILP) model is proposed to design a multi-level, multi-period, multi-product closed-loop supply chain (CLSC) for timely production and distribution of perishable products, taking into account the uncertainty of demand. To face the model uncertainty, the robust optimization (RO) method is utilized. Moreover, to solve and validate the bi-objective model in small-size problems, the ε-constraint method (EC) is presented. On the other hand, a Non-dominated Sorting Genetic Algorithm (NSGA-II) is developed for solving large-size problems. First, the deterministic and robust models are compared by applying the suggested solution methods to a small-size problem, and then the proposed solution methods are compared on large-size problems in terms of different well-known metrics. According to the comparison, the proposed model has an acceptable performance in providing the optimal solutions, and the proposed algorithm obtains efficient solutions. Finally, managerial insights are proposed using sensitivity analysis of important parameters of the problem.
1. Introduction. Supply chain design and its related transportation and logistics systems are an important issue for all segments of society due to their effects on the main variables of the country's economy, such as production, employment, price, and the cost-of-living index [61]. In the past, each production center tried to increase its market share by paying attention to the number of products produced, but in today's competitive circumstances, it is clear that production centers and companies seek strategic and operational decisions to optimize and manage their logistics systems. Therefore, to gain more advantage in the market, they should look for solutions by which they can reduce costs and increase customer satisfaction continuously and simultaneously. Customer satisfaction increases if products and goods reach customers within a certain time. Especially in the field of perishable products, research has shown that shipping costs make up a large part of the cost of the products.
One of the vital operational decisions related to these challenges is the use of a multi-level system for the distribution of goods, which causes a large reduction in costs and improves service quality. In addition to the economic aspects, the use of this type of distribution system leads to reduced traffic, environmental pollution, and noise in city centers, because the vehicles of the last level are smaller and provide more satisfaction to the citizens [2].
Perishable products are items that may be damaged or spoiled over time by changes in temperature, pressure, humidity, or other environmental conditions; examples include food, dairy, vegetables, meat, and medicine. The perishable supply chain has always been one of the most significant and attractive issues in supply chain management. The challenge for companies in managing perishable food supply chains is that the value of the product is highly dependent on the environment over time. Shipping time, temperature, pressure, and humidity are the key elements in transporting perishable items. When carrying such materials, it is necessary to observe requirements under which the mentioned variables can be controlled. Any change in the mentioned elements can affect the quality of the shipped products.
Failure to comply with the required standards at any point in the supply chain of perishable products can cause irreparable damage to the customer's products and make them unusable. Therefore, choosing the shipping method is highly important. The distribution of perishable products throughout the supply chain with the highest possible quality is thus one of the most significant competitive processes in this field, and companies should give heed to this concept while designing the optimal supply chain.
In this research, we design a mathematical model for a green CLSC network for perishable products with uncertain demand, which tries to minimize costs and environmental pollution simultaneously. To cope with the uncertainty of the parameters, a robust optimization (RO) approach is used, and the efficiency of the model is evaluated according to the robust feasible solutions. To validate the proposed model, the bi-objective model is first transformed into a single-objective model by the ε-constraint approach and then solved on designed numerical instances in the CPLEX solver. Additionally, a Non-dominated Sorting Genetic Algorithm (NSGA-II) is then designed to solve the large-size problems, which are closer to real-world issues. Finally, results are presented to assess the efficiency of the algorithm and provide managerial insights at the chain level.
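On a finite candidate set, the ε-constraint idea reduces to: fix a bound ε on the second objective and minimize the first, then sweep ε to trace the trade-off. A toy sketch with (cost, emissions) pairs (the data are illustrative, not from the model):

```python
def epsilon_constraint(solutions, epsilons):
    # Toy epsilon-constraint sweep over an enumerated feasible set:
    # each solution is (cost, emissions); minimise cost subject to
    # emissions <= eps, collecting the resulting trade-off points.
    front = []
    for eps in epsilons:
        feasible = [s for s in solutions if s[1] <= eps]
        if feasible:
            best = min(feasible)          # lexicographic: cost first
            if best not in front:
                front.append(best)
    return front
```

Tightening ε forces higher-cost but lower-emission solutions into the front, which is exactly the Pareto trade-off the bi-objective model captures.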
In the following, Section 2 reviews the related literature. The proposed problem and its modeling are presented in Section 3. The robust counterpart of the proposed model is developed in Section 4. Section 5 describes the design of the solution methods. The validation of the suggested model and the computational results are presented in Section 6, and finally, Section 7 describes the conclusions and future suggestions.
2. Survey on the literature. In this section, the limited studies conducted in the field of the perishable supply chain are reviewed, along with literature on forward, reverse, and closed-loop supply chains under uncertainty.
Ozceylan et al. [37] modeled an integrated CLSC network and optimized the disassembly line balance. This paper considered strategic and tactical decisions simultaneously within a CLSC network. The objective was to minimize costs, including transportation, purchase, renovation, and dismantling station operations. Amin and Zhang [3] provided a multi-objective mathematical model including several commodities, factories, recycling technologies, demand markets, and collection centers. This model aimed at minimizing the costs of the supply chain, in addition to minimizing the waste rate and operation time in collection centers. Soleimani and Kannan [49] designed a large-scale multi-period, multi-level, multi-product CLSC network. They combined genetic and particle swarm optimization algorithms to improve the efficacy of the Genetic Algorithm (GA) by considering the positive aspects of the Particle Swarm Optimization (PSO) algorithm.
Rezapour et al. [46] proposed a competitive CLSC network in a price-dependent demand market. They developed a two-level model in which strategic network decisions are made at the higher level and CLSC tactical and operational planning is done at the lower level. There is competition between two supply chains to supply new products to similar markets and between supply chains offering a new or remanufactured product. Li and Jia [33] presented a coordinated supply chain model considering product quality and stochastic demand. Zahiri and Pishvaee [60] designed a blood supply chain network considering uncertainty. To this end, a bi-objective mixed-integer linear programming (MILP) model was suggested, which aims at the minimization of cost and the maximization of demand fulfillment. Due to the uncertain nature of the data, two probabilistic robust models were developed based on the credibility criterion. The results of the case study indicated the appropriate efficiency of the offered models. Keshavarz et al. [26] developed a multi-objective, multi-product, multi-period reverse supply chain model considering uncertainty. The objectives were to minimize the total cost and maximize the green points of the purchased raw materials. In this regard, multi-objective decision-making methods were employed.
Heydari et al. [24] studied a coordinated supply chain model with stochastic demand, considering changes in the reordering time. Pal and Mahapatra [38] presented a production-based supply chain model considering inspection errors and imperfect product quality under shortage and stochastic demand. Haddad Sisakht and Ryan [23] developed a CLSC network considering different modes of transportation under stochastic demand and uncertain carbon tax rates. Their goal was to minimize the total cost of the supply chain at three levels. Kavyanfar et al. [25] developed a stochastic multi-product, multi-level mathematical model to design the supply chain of small and medium industries in the clustering industry. Their suggested model aimed to minimize the total cost and was solved by Benders decomposition. They also presented a case study and a sensitivity analysis to evaluate its efficiency. Dai et al. [14] developed a nonlinear model with fuzzy constraints to solve the location-routing problem (LRP) using GA and harmony search in a three-level supply chain network of perishable products. Their objective was to minimize the total costs of the supply chain. They employed LINDO software to evaluate their proposed algorithms, and it was found that the proposed algorithms have a high ability to solve problems in a suitable operating time.
Tirkolaee et al. [50] developed a self-learning PSO algorithm to design a robust supply chain under uncertainty. They proposed an MILP model to deal with location, allocation, and inventory decisions. A novel MILP model was proposed by Goli et al. [19] to design a sustainable supply chain network for perishable product distribution. They considered lead times and customer satisfaction as two main criteria and implemented a hybrid meta-heuristic algorithm to tackle the complexity of the problem. Recently, Lotfi et al. [35] proposed an RO model to design a sustainable and resilient CLSC network. They addressed the conditional value at risk by developing a two-stage MILP model. Finally, the LP-metric method and the CPLEX solver were employed to find the optimal solution.
In Table 1, a summary of dominant and relevant research conducted in recent years is comprehensively reviewed. In this study, a novel MILP model is provided to configure a green CLSC for perishable products. After reviewing the literature, it is concluded that the research gap includes the following:
I. Designing a green CLSC network taking into account assumptions such as different production technologies and different modes of transportation specific to perishable products, which require special equipment and service within specific time windows. This is implemented by developing a bi-objective MILP model, minimizing both total costs and the total amount of emissions.
II. Making integrated optimal decisions for inventory management, location and allocation of facilities, and transportation planning.
III. Applying the RO technique proposed by Bertsimas and Sim [8] to the developed model and comparing deterministic and uncertain conditions in different cases.
IV. Developing the ε-constraint method and NSGA-II to validate and solve the proposed model.
V. Implementing sensitivity analysis on the key parameters of the problem to investigate the behavior of the objective functions and presenting managerial insights.
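The Bertsimas-Sim construction cited above bounds how many uncertain coefficients may deviate at once via a budget Γ. For a simple sum of uncertain demands, the worst case under budget Γ can be computed directly (a toy illustration, not the full robust counterpart of the MILP):

```python
def worst_case_total(nominal, deviation, gamma):
    # Bertsimas-Sim style budget of uncertainty: at most `gamma`
    # coefficients deviate to their worst value; the adversary picks
    # the largest deviations (gamma may be fractional).
    devs = sorted(deviation, reverse=True)
    full = int(gamma)
    worst = sum(devs[:full])
    if full < len(devs) and gamma > full:
        worst += (gamma - full) * devs[full]   # fractional last deviation
    return sum(nominal) + worst
```

Γ = 0 recovers the nominal (deterministic) case, while Γ equal to the number of coefficients recovers full worst-case (Soyster-style) conservatism; intermediate values trade robustness against cost.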
3. Problem description. In designing the supply chain of perishable products, customer satisfaction increases if products and goods reach customers within a certain period. Research has shown that a significant portion of the cost of products is related to shipping costs. In this regard, one of the vital operational decisions is the use of a multi-level system for the distribution of products, which leads to a large reduction in costs and improves service quality. In addition to the economic aspects, the use of this type of distribution system reduces traffic, environmental pollution, and noise in city centers, because the final-level vehicles are smaller and provide more satisfaction to the citizens. In this problem, a seven-level CLSC network is considered, including supply centers, production centers, distribution centers, and customers in the forward supply chain, and collection, disposal, and recovery centers in the reverse supply chain. Figure 1 shows the proposed seven-level supply chain network.
Based on the proposed network, the problem aims to:
• determine the optimal locations of facilities at the five levels of production, distribution, collection, disposal, and recovery;
• calculate the quantity of products in production centers and the quantity of raw materials supplied from supply centers;
• determine the level of inventory in distribution centers, the quantity of products sent from production centers to distribution centers and from distribution centers to customers, the quantity of products returned from customers to collection centers, and the quantity of products sent from collection centers to disposal and recovery centers.
As the main objective functions, the total cost of the supply chain network and the total volume of pollutant emissions should be minimized.
On the other hand, the demand of customers is uncertain and defined in an uncertain interval. Furthermore, at the production level, various production technologies along with various modes of transport between levels are considered. Also, an important feature of this supply chain is the timely supply and distribution of raw materials and products due to their perishable nature. In the following, the main assumptions of the model are presented.
I. The proposed supply chain network includes seven levels: 1) supply centers, 2) production centers, 3) distributors, 4) customers, 5) collection centers, 6) disposal centers, and 7) recovery centers.
II. Determining the optimal location is done at the five levels of manufacturer, distributor, collection, recovery, and disposal centers.
III. Customer demand is uncertain and takes values in an uncertainty interval specified by the RO approach.
IV. Several modes of transportation are considered in the supply chain network.
V. Several levels of production technology are considered.
VI. The capacity of the various facilities and centers is limited.
VII. Costs of facility location, transportation, inventory shortage, and maintenance are fixed.
VIII. The problem is planned for one period.
IX. In each planning period, a certain time is considered for the delivery of raw materials from supply centers to production centers and the delivery of products from distribution centers to customers.
X. Inventory shortages can occur in distribution centers.
XI. Several types of raw materials and several types of final products are considered.
XII. The volume of pollutant emissions depends on the amount of load and the distance traveled between different levels.
XIII. Each raw material is used in the production of the final product with a specific consumption coefficient.
Sets, indices, parameters, and variables of the proposed mathematical model are as follows.

Sets and indices
s : Set of supply centers (s ∈ S)
p : Set of production centers (p ∈ P)
d : Set of distribution centers (d ∈ D)
c : Set of customers (c ∈ C)
m : Set of collection centers (m ∈ M)
q : Set of disposal centers (q ∈ Q)
o : Set of recovery centers (o ∈ O)
t : Set of time periods (t ∈ T)
r : Set of products (r ∈ R)
a : Set of raw materials supplied from suppliers (a ∈ A)
e : Set of transportation modes from supply centers to production centers (e ∈ E)
f : Set of transportation modes from production centers to distribution centers (f ∈ F)
g : Set of transportation modes from distribution centers to customers (g ∈ G)
h : Set of transportation modes from customers to collection centers, from collection centers to recovery and disposal centers, and from there to production centers (h ∈ H)
w : Set of production technologies (w ∈ W)
Parameters
DE crt : Demand of customer c for product r in period t
DA art : Quantity of raw material type a required to produce one unit of product r in period t
RA ar : Consumption coefficient of raw material type a to produce one unit of product r
OA aro : Recovery coefficient of product r to raw material type a in recovery center o
CAS sa : Capacity of supply center s to supply raw material type a in each period
CAP prw : Capacity of production center p for product r with technology w in each period
CAD dr : Capacity of distribution center d for product r in each period
CAM mr : Capacity of collection center m for product r in each period
CAQ qr : Capacity of disposal center q for product r in each period
CAO or : Capacity of recovery center o for product r in each period
ESP spate : Cost of transporting raw material a from supplier s to producer p with transportation mode e in period t
GSP spae : Volume of CO2 emission (depending on the volume of load and the distance) to transport a unit of raw material a from supplier s to producer p with transportation mode e
TSP spae : Preparation and transportation time of raw material a from supplier s to producer p with transportation mode e
EPD pdrtf : Cost of transporting product r from manufacturer p to distributor d with transportation mode f in period t
GPD pdrf : Volume of CO2 emission to transport a unit of product r from manufacturer p to distributor d with transportation mode f
EDC dcrtg : Cost of transporting product r from distributor d to customer c with transportation mode g in period t
GDC dcrg : Volume of CO2 emission to transport a unit of product r from distributor d to customer c with transportation mode g
TDC dcg : Preparation and transportation time of product r from distributor d to customer c with transportation mode g in each period
ECM cmrth : Cost of transporting product r from customer c to collection center m with transportation mode h in period t
GCM cmrh : Volume of CO2 emission to transport a unit of product r from customer c to collection center m with transportation mode h
EMQ mqrth : Cost of transporting product r from collection center m to disposal center q with transportation mode h in period t
GMQ mqrh : Volume of CO2 emission to transport a unit of product r from collection center m to disposal center q with transportation mode h
EMO morth : Cost of transporting product r from collection center m to recovery center o with transportation mode h in period t
GMO morh : Volume of CO2 emission to transport a unit of product r from collection center m to recovery center o with transportation mode h
EOP oprth : Cost of transporting the returned processed product r from recovery center o to production center p with transportation mode h in period t
GOP oprh : Volume of CO2 emission to transport a unit of the returned processed product r from recovery center o to production center p with transportation mode h
UE e : Capacity of transportation mode e
UF f : Capacity of transportation mode f
UG g : Capacity of transportation mode g
UH h : Capacity of transportation mode h
DS sp : Distance between supply center s and producer p
DB pd : Distance between producer p and distributor d
DC dc : Distance between distributor d and customer c
DD cm : Distance between customer c and collection center m
DM mq : Distance between collection center m and disposal center q
DF mo : Distance between collection center m and recovery center o
DG op : Distance between recovery center o and producer p
α rc : Flow rate of returned product type r from customer c in each period
β r : Flow rate of disposable product type r transferable from collection centers to disposal centers
1 − β r : Flow rate of recoverable product type r transferable from collection centers to recovery centers
CP prtw : Production cost of product type r in period t by manufacturer p with technology w
CD drt : Processing cost of product type r in distribution center d in period t
CM mrt : Processing cost of product type r in collection center m in period t
CQ qrt : Processing cost of product type r in disposal center q in period t
CO ort : Processing cost of product type r in recovery center o in period t
FP ptw : Fixed cost of establishing production center p in period t with production technology w
FD dt : Fixed cost of establishing distribution center d in period t
FM mt : Fixed cost of establishing collection center m in period t
FQ qt : Fixed cost of establishing disposal center q in period t
FO ot : Fixed cost of establishing recovery center o in period t
HD drt : Unit inventory cost of product r in distribution center d in period t
BD drt : Unit shortage cost of product r in distribution center d in period t
(LP at , UP at ) : Time window for supplying raw material type a in period t to manufacturers
(LC crt , UC crt ) : Time window for delivering product r in period t to customer c
MM : A very large number
Variables
XP prtw : Quantity of product r produced by producer p in period t with production technology w
XA spate : Quantity of raw material type a transported from supply center s to production center p in period t with transportation mode e
XB pdrtf : Quantity of product r transported from production center p to distribution center d in period t with transportation mode f
XC dcrtg : Quantity of product r transported from distribution center d to customer c in period t with transportation mode g
XD cmrth : Quantity of product r returned from customer c to collection center m in period t with transportation mode h
XE mqrth : Quantity of product r transported from collection center m to disposal center q in period t with transportation mode h
XF morth : Quantity of product r transported from collection center m to recovery center o in period t with transportation mode h
XG oparth : Quantity of recovered raw material a from product r transported from recovery center o to production center p in period t with transportation mode h
ZA spate : 1 if raw material type a is transported from supply center s to production center p in period t with transportation mode e, otherwise 0
ZB dcrtg : 1 if product r is transported from distribution center d to customer c in period t with transportation mode g, otherwise 0
: Inventory of product r in distribution center d at the end of period t
BO drt : Shortage of product r in distribution center d at the end of period t
Deterministic mathematical model. Objective functions
Objective function (1) minimizes the total cost of the supply chain, developed based on the modeling of similar works such as Yavari and Geraeli (2019). Objective function (1) consists of four parts. The first part covers transportation costs at each stage of the forward and reverse supply chain and includes seven terms: the total cost of transportation between suppliers and producers, producers and distributors, distributors and customers, customers and collection centers, collection and disposal centers, collection and recovery centers, and finally between recovery centers and producers. The second part includes the location cost of all facilities at the different levels of the supply chain and consists of five terms: the total cost of locating manufacturers, distributors, collection centers, disposal centers, and recovery centers. In the third part, the inventory and shortage costs of the distribution centers are calculated. Finally, in the fourth part, the production costs and the operating costs in distribution, collection, disposal, and recovery centers are determined.
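The displayed equation for Objective (1) is not reproduced above, so the following is only a schematic sketch consistent with the four parts just described, using the nomenclature defined earlier; the binary location variables Y and the inventory variable I are assumed names, since the variable list does not give them:

```latex
\min Z_1 =
  \underbrace{\sum_{s,p,a,t,e} ESP_{spate}\,XA_{spate} + \cdots + \sum_{o,p,a,r,t,h} EOP_{oprth}\,XG_{oparth}}_{\text{transportation (seven terms)}}
+ \underbrace{\sum_{p,t,w} FP_{ptw}\,Y^{P}_{ptw} + \cdots + \sum_{o,t} FO_{ot}\,Y^{O}_{ot}}_{\text{facility location (five terms)}}
+ \underbrace{\sum_{d,r,t} \bigl(HD_{drt}\,I_{drt} + BD_{drt}\,BO_{drt}\bigr)}_{\text{inventory and shortage}}
+ \underbrace{\sum_{p,r,t,w} CP_{prtw}\,XP_{prtw} + \sum_{d,r,t} CD_{drt}\sum_{c,g} XC_{dcrtg} + \cdots}_{\text{production and processing}}
```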
Objective function (2) represents the minimization of the total volume of emissions across the different levels of the supply chain. This objective function includes seven terms: the total volume of pollutant emissions between supply centers and producers, producers and distribution centers, distribution centers and customers, customers and collection centers, collection centers and disposal centers, collection centers and recovery centers, and finally between recovery centers and producers.

Constraints

Constraint (3) indicates the capacity limitation of each production center according to the level of technology in each period.
Constraint (4) indicates that to build a production center, a level of technology must be chosen for it.
Constraint (5) shows the capacity limitation of supply centers for supplying raw materials in each period.
Constraint (6) indicates the capacity limitation of distribution centers for distributing the products to the customers.
Constraint (7) represents the capacity limitation of collection centers to collect returned products from the customers.
Constraint (8) indicates the capacity limitation of disposal centers for processing and disposal of products sent from collection centers.
Constraint (9) indicates the capacity limitation of recovery centers to process and recover products sent from collection centers.
Constraint (10) ensures that the demand of each customer for each product is met in each period. Constraint (11) indicates the minimum volume of raw materials required to produce the final products in each period.
Constraint (12) shows the balance of the volume of input materials to production centers, which should be equal to the volume of final products sent from that production center to distribution centers in each period and according to the coefficient of consumption of raw materials.
Constraint (13) shows the balance of the volume of input materials to distribution centers, which should be equal to the volume of final products sent from that distribution center to the customers in each period.
Constraint (14) indicates the balance of the volume of input materials to collection centers, which should be equal to the volume of returned products (percentage of products received by the customer) by customers in each period.
Constraint (15) indicates the balance of the volume of input materials to disposal centers, which should be equal to the volume of sent products (percentage of the products of the collection center) by collection centers in each period.
Constraint (16) indicates the balance of the volume of input materials to recovery centers, which should be equal to the volume of sent products (percentage of the products of the collection center) by collection centers in each period.
Constraint (17) indicates the balance of the volume of input materials to recovery centers, which should be equal to the certain volume of recovered products (percentage of the products of the recovery center) by recovery centers in each period.
Constraints (18)–(20) express the capacity limitations of transportation modes e, f, and g, respectively, and Constraint (21) expresses the capacity limitation of transportation mode h in each time period.
Constraint (22) determines the relationship between the allocations of production centers to supply centers with the volume of sent raw materials in each period.
Constraint (23) determines the relationship between the allocations of customers to distribution centers with the volume of sent products in each period.
Constraint (24) indicates the time window for receiving raw materials by production centers in each time period.
Constraint (25) indicates the time window for receiving products by distribution centers in each time period.
Constraint (26) indicates that the volume of the product sent by production centers to distribution centers should be less than or equal to its production volume in each period of each product.
Constraint (27) specifies the domain of the variables.
3.2. Robust mathematical model. RO methods offer a risk-averse approach to dealing with uncertainties in optimization problems and have attracted a lot of attention as an efficient tool to cope with real-world uncertainty [27,34]. Based on Pishvaee et al. [42], a solution is called robust if it is feasible and optimally robust simultaneously. Feasibility means that the proposed solution must remain feasible for (almost) all values of the uncertain parameters, and optimality robustness means that the value of the objective function remains close to the optimal value for (almost) all values of the uncertain parameters or, at least, deviates little from it. In this research, the RO approach of Bertsimas and Sim is employed because it yields a linear mathematical model and allows a controllable level of conservatism close to real-world conditions. Bertsimas and Sim's model is explained below for a linear optimization problem in which the objective function is a minimization and uncertain coefficients exist in both the objective function and the constraints.
Consider a linear optimization problem with uncertainty in its coefficients. Each of the constraint coefficients a ij , j ∈ N = {1, 2, ..., n}, is modelled as an independent random variable with a symmetric but unknown distribution, taking values in the interval [a ij − â ij , a ij + â ij ], where â ij stands for the deviation from the nominal coefficient a ij . Similarly, each of the objective coefficients c j , j ∈ N , takes values in the interval [c j − d j , c j + d j ], where d j is the deviation from the nominal coefficient c j . Since the objective function is a minimization and the goal of robust models is to guard against the maximum regret, only one side of the interval matters; accordingly, it is supposed that c j takes values in [c j , c j + d j ].
To model the robust counterpart of the problem, consider constraint i as a i T x ≤ b i , and let J i denote the set of uncertain coefficients in row i. For each constraint row i, a parameter Γ i (integer or non-integer) is defined such that Γ i ∈ [0, |J i |]. The role of Γ i is to adjust the robustness of the suggested approach and the conservatism level of the solution. It has been proven that it is unlikely that all coefficients become uncertain simultaneously [8]. Therefore, up to ⌊Γ i ⌋ coefficients are allowed to change, and one further coefficient a it may change by up to (Γ i − ⌊Γ i ⌋)â it ; in other words, only a subset of coefficients is allowed to affect the solution adversely. With this assumption, it is ensured that if this condition actually occurs, the optimally robust solution will certainly remain feasible. Also, due to the symmetric distribution of the variables, even if the number of changing coefficients exceeds Γ i , the optimal solution will still be feasible with a very high probability. Therefore, Γ i is considered as the protection level for constraint i.
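In symbols, the protection function induced by Γ i (the standard Bertsimas–Sim form from [8], with notation as in the paragraph above) and the resulting robust constraint are:

```latex
\beta_i(x,\Gamma_i) \;=\; \max_{\substack{S_i \subseteq J_i,\; |S_i| = \lfloor \Gamma_i \rfloor \\ t_i \in J_i \setminus S_i}} \left\{ \sum_{j \in S_i} \hat{a}_{ij}\,|x_j| \;+\; (\Gamma_i - \lfloor \Gamma_i \rfloor)\,\hat{a}_{i t_i}\,|x_{t_i}| \right\},
\qquad
\sum_{j} a_{ij}\, x_j \;+\; \beta_i(x,\Gamma_i) \;\le\; b_i .
```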
Here, Γ 0 controls the robustness level in the objective function. Therefore, it is intended to determine the optimal solutions when Γ 0 coefficients in the objective function change and have the most significant effect on the solution. Generally, higher values of Γ 0 raise the conservatism level at the higher cost that must be paid in the objective function. Here, Γ 0 is necessarily an integer number, but the rest of the Γ i can be integer or non-integer.
On this basis, the nominal linear robust counterpart of the problem is obtained as below [8]. To transform the model into a linear optimization model, Lemma 3.1 is needed.
which is equal to the optimal value of the objective function of Model (31). Lemma 3.1 has been proven in Bertsimas and Sim [8].
By inserting the dual of Model (32) into the robust counterpart model, it is formulated as follows. In the proposed mathematical model, the customer demand (DE crt ), the flow rate of returned product type r from customer c in each period (α rc ), the flow rate of disposable product type r transferable from collection centers to disposal centers (β r ), and the flow rate of recoverable product type r transferable from collection centers to recovery centers (1 − β r ) are the main parameters of the problem with an uncertain nature, defined over uncertainty intervals. Based on Bertsimas and Sim's approach, each uncertain DE crt lies in a symmetric and bounded interval centred at DE crt with half-width D̂E crt = ρ DE crt . In this equation, DE crt is the estimated value of customer demand, D̂E crt is the fluctuation of demand, and ρ > 0 is the uncertainty level. Likewise, α̂ rc = ρ ᾱ rc and β̂ r = ρ β̄ r , respectively.
In the proposed mathematical model, Constraints (10) and (14)–(16) involve uncertainty due to the existence of uncertain parameters. Thus, these constraints should be made robust based on the offered model of Bertsimas and Sim. As a result, the proposed robust model is presented as follows. Constraint (33) is presented as an alternative to Constraint (10) to provide robust conditions; the conservatism level (uncertainty budget) of Constraint (33) is Γ crt ∈ [0, 1], which has a similar definition to that in Bertsimas and Sim's model. Constraints (34)–(37) are presented to provide the robust condition for Constraint (14), and Constraints (38)–(41) provide the robust condition for Constraint (15), including dual constraints of the form z rt + P cmrth ≥ β̂ r E cmrth for all c ∈ C, m ∈ M, r ∈ R, t ∈ T, h ∈ H. Likewise, the conservatism level (uncertainty budget) of Constraint (38) is Γ rt ∈ [0, |R|·|T|], with a similar definition to that in Bertsimas and Sim's model. Finally, Constraints (42)–(45) are presented to provide the robust condition for Constraint (16) and replace it in the robust model.
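For intuition, with a single uncertain coefficient per row (as with the demand DE crt in Constraint (10)), the protection term collapses to Γ crt D̂E crt with Γ crt ∈ [0, 1]. A robust demand constraint in the spirit of (33) might then read as follows; this is only a sketch, since the displayed Constraint (33) is not reproduced, and the left-hand side stands in for whatever supply expression appears in the original Constraint (10):

```latex
\sum_{d \in D}\sum_{g \in G} XC_{dcrtg} \;\ge\; DE_{crt} + \Gamma_{crt}\,\widehat{DE}_{crt},
\qquad \widehat{DE}_{crt} = \rho\, DE_{crt},\quad \Gamma_{crt} \in [0,1].
```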
4. Exact solution method: ε-constraint.
The ε-constraint technique is one of the most applicable multi-objective decision-making (MODM) methods to cope with the multi-objectiveness of problems. The Pareto front, including non-dominated Pareto solutions, can be drawn by the ε-constraint method. By considering the values of ε for each sub-objective function, the problem can be solved. For the proposed problem, the ε-constraint method is employed through Model (46).
The main steps in the ε-constraint method are as follows:
i. Consider one of the objectives as the main objective function.
ii. Solve the problem separately with respect to each objective function to obtain the optimal values of the objective functions.
iii. Divide the difference between the two optimal values of the second objective function into several pre-determined parts, generating a table of values ε 2 , . . . , ε n .
iv. Solve the problem with the main objective function for each of ε 2 , . . . , ε n .
v. Report the Pareto solutions.
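The five steps above can be sketched on a toy bi-objective problem. The brute-force `solve` below is only a stand-in for the MILP solver used in the paper, and the feasible set and objectives are purely illustrative:

```python
def solve(objective, feasible, constraint=None):
    """Brute-force 'solver' over a finite candidate set (a stand-in for the
    real MILP solver)."""
    candidates = [x for x in feasible if constraint is None or constraint(x)]
    return min(candidates, key=objective) if candidates else None

def epsilon_constraint(f1, f2, feasible, n_points=5):
    # Step ii: optimize each objective separately to get the range of f2.
    lo = f2(solve(f2, feasible))
    hi = f2(solve(f1, feasible))
    # Step iii: divide the range [lo, hi] into a grid of epsilon values.
    eps_grid = [lo + k * (hi - lo) / (n_points - 1) for k in range(n_points)]
    front = []
    for eps in eps_grid:
        # Step iv: minimize the main objective f1 subject to f2(x) <= eps.
        x = solve(f1, feasible, constraint=lambda y: f2(y) <= eps + 1e-9)
        if x is not None and (f1(x), f2(x)) not in front:
            front.append((f1(x), f2(x)))  # Step v: collect Pareto points.
    return front

# Toy instance: efficient points lie on the line x1 + x2 = 1; the last two
# candidates are dominated and never enter the front.
feasible = [(i / 10, 1 - i / 10) for i in range(11)] + [(0.8, 0.8), (1.0, 1.0)]
front = epsilon_constraint(lambda p: p[0], lambda p: p[1], feasible)
```

Each returned pair trades one objective against the other along the toy front.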
5. Meta-heuristic algorithm: NSGA-II. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is one of the most popular and widely used algorithms for treating multi-objective optimization problems; it was introduced by Deb et al. [15]. Since then, many researchers have applied NSGA-II to different practical multi-objective optimization problems [44,51].
Beyond its practical effectiveness, NSGA-II can be considered a standard against which many other multi-objective optimization algorithms are measured. Its distinctive approach to multi-objective optimization has repeatedly served as the basis for novel algorithms. It is one of the most fundamental multi-objective evolutionary optimization algorithms, and for this reason it is employed in this research.
In this study, in addition to presenting a multi-objective MILP model, an NSGA-II is also developed. Therefore, in this section, the implementation of the algorithm, operators, and generation of solutions are explained. The flowchart of the proposed NSGA-II is given by Figure 2.
To transform an initial solution into a chromosome, the chromosome must represent the decisions separately for each level of the supply chain; for instance, facility location and the volume of distribution between different levels of the supply chain are considered in the chromosome. For this purpose, the chromosome consists of two parts. The first part displays the locations and the second part specifies the volume of distribution of products. In the first part, the values are between 0 and 1: in each cell, if the value is more than 0.5, the facility is established; otherwise, it is not. In the second part, the values are between 0 and 1, showing the percentage of products sent from one origin to a specific destination. It is noteworthy that the second part is interpreted based on the result obtained from the first part: according to the facilities located in the first part, the distribution percentage of products among the located facilities is determined in the second part. This structure is repeated for all levels of the supply chain and all periods. For example, for the level between the distribution centers and the customers in a particular period, the chromosome is shown in Table 2. In this example, there are 3 distribution centers and 5 customers.
According to Table 2, distribution centers 2 and 3 are established. The percentage of distribution from distribution centers 2 and 3 can be normalized to determine the amount of demand. Therefore, the interpretation of the chromosome is presented in Table 3.
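A decoding of this two-part random-key encoding for one level can be sketched as follows; the 0.5 threshold and the per-customer normalization are assumptions consistent with the description above, not the paper's exact procedure:

```python
import numpy as np

def decode_level(loc_keys, alloc_keys, threshold=0.5):
    """Decode one supply-chain level of the chromosome.

    loc_keys:   (n_facilities,) random keys in [0, 1]; a facility is opened
                when its key exceeds the threshold.
    alloc_keys: (n_facilities, n_customers) random keys in [0, 1]; for each
                customer, the keys of the opened facilities are normalized
                into distribution fractions.
    """
    opened = loc_keys > threshold
    shares = np.where(opened[:, None], alloc_keys, 0.0)  # zero out closed rows
    col_sums = shares.sum(axis=0)
    fractions = shares / np.where(col_sums > 0, col_sums, 1.0)
    return opened, fractions

# Example mirroring Table 2: 3 distribution centers, 5 customers.
loc = np.array([0.2, 0.7, 0.9])                 # centers 2 and 3 are opened
keys = np.random.default_rng(0).random((3, 5))  # illustrative allocation keys
opened, fractions = decode_level(loc, keys)
# Each customer's demand is split over centers 2 and 3; row 1 stays at zero.
```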
After generating the initial population, the fitness functions of its members are calculated. In this study, the fitness functions of the algorithm are the same as the objective functions of the problem. After calculating the fitness function, the solutions are categorized: among a set of points, a number of them are non-dominated with respect to the others. In this way, several levels or fronts can be formed and, if needed, some of these levels are selected for the next steps while the rest are removed. Crossover and mutation operators are then used to generate the next generation. The candidate values for the algorithm parameters are given in Table 4. Then, using the trial-and-error method based on the values of the objective function, the best value for each of them is obtained and reported in Table 5.

6.1. Validation of the proposed model and algorithm. In order to assess the validity of the NSGA-II, a number of problem instances are generated in small size to be solved by the proposed metaheuristic algorithm and the ε-constraint method. The number of each center is reported in Table 6. Also, the other parameters of the problem are randomly generated with a uniform distribution according to Table 7.
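The ranking into non-dominated fronts described above can be sketched naively as follows; Deb et al.'s NSGA-II uses a faster bookkeeping scheme, but the resulting fronts are the same:

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimization): a is no worse
    in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    """Partition objective vectors into successive non-dominated fronts."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # A point belongs to the current front if nothing left dominates it.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
fronts = non_dominated_fronts(pts)
# (1,5), (2,2), (5,1) form the first front; (3,3) and (4,4) fall behind it.
```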
Parameter : Value
Cost of transporting raw materials from the supply center to the production center : U(50, 150)
Cost of transporting products from the production center to the distribution center : U(50, 150)
Cost of transporting from the distribution center to the customer : U(50, 150)
Cost of transporting from the customer to the collection center : U(50, 150)
Cost of transporting from the collection center to the disposal center : U(50, 150)
Cost of transporting from the collection center to the recovery center : U(50, 150)
Cost of transporting from the recovery center to the production center : U(50, 150)
Volume of CO2 emission released to transport raw material from the supply center to the production center : U(50, 100)
Volume of CO2 emission released to transport products from the production center to the distribution center : U(50, 100)
Volume of CO2 emission released to transport from the distribution center to the customer : U(50, 100)
Volume of CO2 emission released to transport from the customer to the collection center : U(50, 100)
Volume of CO2 emission released to transport from the collection center to the disposal center : U(50, 100)
Volume of CO2 emission released to transport from the collection center to the recovery center : U(50, 100)
Volume of CO2 emission released to transport from the recovery center to the production center : U(50, 100)
Preparation time for transportation of raw material from the supply center to the production center : U(10, 20)
Preparation time for transportation of products from the distribution center to customers : U(10, 20)
Distance between supply center and production center : U(500, 1500)
Distance between production center and distribution center : U(500, 1500)
Distance between distribution center and customer : U(500, 1500)
Distance between customer and collection center : U(500, 1500)
Distance between collection center and disposal center : U(500, 1500)
Distance between collection center and recovery center : U(500, 1500)
Distance between recovery center and production center : U(500, 1500)
The results are reported in Table 8. In Figure 3, the red dots show the solutions obtained by the NSGA-II and the blue dots show the solutions gained by the ε-constraint method. Regarding Table 8 and Figure 3, the difference between the solutions of the two methods is very small, which indicates the appropriate performance of the proposed metaheuristic algorithm.
6.2. Deterministic model vs. robust model. Here, to investigate the impact of robustness, a comparison is made between the deterministic and robust models, and the obtained results are given in terms of the mean value of the objective functions and mean CPU time. Table 9 represents the results obtained by the proposed NSGA-II and the ε-constraint approach. From Table 9, it is revealed that the deterministic model yields better solutions without ensuring robustness. In other words, although the objective functions and CPU time take higher values in the proposed robust model, the robustness of the solutions under uncertainty is not guaranteed by the deterministic model. On the other hand, the proposed NSGA-II provides acceptable solutions in comparison with the ε-constraint method in both the deterministic and robust models.
6.3. Validation of the model and the proposed algorithm in large size. In this section, using Table 7 and Table 10, four other problems are generated in medium and large size to evaluate the performance and efficiency of the proposed algorithm in comparison with the exact method. For better validation of the proposed algorithm and its capability to identify the optimal Pareto front, four criteria specific to multi-objective algorithms are used: mean ideal distance (MID), diversity metric (DM), spacing metric (SM), and the number of Pareto solutions (NPS). Then, according to the values of these criteria, the performance of the offered algorithms is evaluated. Better performance of an algorithm is indicated by a higher value of DM, a lower value of MID, a lower value of SM, and a higher value of NPS [47]. The values of the criteria calculated for the problems at different uncertainty levels are presented in Tables 11-13. According to Tables 11-13, the proposed NSGA-II performs closely to the exact algorithm and, as a result, has high efficiency in finding near-optimal solutions. In this regard, the proposed NSGA-II can be employed to solve large-size problems.
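Two of these criteria are easy to state concretely. The definitions below are the ones commonly used in the literature (the paper's exact formulas for MID and SM are not reproduced here, so treat this as a sketch); NPS is simply the number of points on the front:

```python
import math

def mid(front, ideal=(0.0, 0.0)):
    """Mean Ideal Distance: average Euclidean distance of the Pareto points
    from the ideal point (lower is better)."""
    return sum(math.dist(p, ideal) for p in front) / len(front)

def spacing(front):
    """Spacing Metric: deviation of nearest-neighbour distances along the
    front; 0 means a perfectly uniform front (lower is better)."""
    d = [min(math.dist(p, q) for q in front if q is not p) for p in front]
    d_bar = sum(d) / len(d)
    return math.sqrt(sum((x - d_bar) ** 2 for x in d) / (len(d) - 1))

front = [(1.0, 4.0), (2.0, 3.0), (3.0, 2.0), (4.0, 1.0)]  # evenly spaced
nps = len(front)
# spacing(front) is 0 for this evenly spaced front.
```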
More specifically, the NSGA-II performs much better than the EC (ε-constraint) method in terms of the diversity criterion. This means that NSGA-II can generate Pareto fronts with a wider range of possible solutions.
In terms of MID, the NSGA-II also performs more efficiently than EC; that is, in this algorithm, the distance of any solution from the ideal point is less than in the EC method.
Additionally, in terms of SM, the performance of the NSGA-II has been much more appropriate, and it has produced a more uniform front than EC. Besides, the NSGA-II has always managed to find more Pareto solutions.
For further comparison of the algorithms, the simple additive weighting (SAW) method is employed. First, all the values of the criteria are normalized. Next, the average value of the criteria is calculated (with equal weights) for each algorithm in each problem, and this average is taken as the efficacy of that algorithm in that problem. The execution steps of SAW are given as follows:
Step 1: The nature of each index must be identified according to its positiveness or negativeness.
Step 3: Taking into account the importance coefficients (weights) of the measures and the normalized values of the decision matrix, the SAW score can be computed for each experiment according to Equation (49). These evaluations are presented in Figures 4-6 for the uncertainty levels of 0.2, 0.4, and 0.5. Accordingly, the proposed NSGA-II performs very close to the exact method. However, since the EC method fails to solve the large-size problem (Problem 4) exactly, the proposed NSGA-II is employed. Therefore, the proposed NSGA-II is the best tool to cope with large-size problems, while the EC method lacks the necessary efficiency. Figure 7 also represents the solution times for different categories of the problem. According to Figure 7, as the size of the problems rises, the exact solution time increases dramatically, until the EC method is unable to solve Problem 4 exactly within the proposed time limit. On the other hand, the proposed NSGA-II can solve the problems in a much shorter time. All in all, the NSGA-II needs less time to discover the Pareto solutions, which can be considered another advantage of this algorithm.
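The SAW procedure above (identify each criterion as benefit or cost, normalize, then take the weighted average) can be sketched as follows; the linear normalization (value/max for benefit criteria, min/value for cost criteria) is a common choice and assumed here, since Equation (49) is not reproduced:

```python
def saw_scores(matrix, benefit, weights=None):
    """Score each row (algorithm/experiment) by Simple Additive Weighting."""
    n_crit = len(matrix[0])
    weights = weights or [1.0 / n_crit] * n_crit   # equal weights by default
    norm_cols = []
    for j, col in enumerate(zip(*matrix)):
        if benefit[j]:                 # positive index: higher is better
            best = max(col)
            norm_cols.append([v / best for v in col])
        else:                          # negative index: lower is better
            best = min(col)
            norm_cols.append([best / v for v in col])
    return [sum(w * v for w, v in zip(weights, row))
            for row in zip(*norm_cols)]

# Two algorithms scored on the four criteria (illustrative numbers only):
#        DM    MID   SM    NPS
data = [[9.0,  2.0,  0.5,  20],   # e.g. NSGA-II
        [6.0,  3.0,  0.8,  12]]   # e.g. EC
scores = saw_scores(data, benefit=[True, False, False, True])
# The first row is better on every criterion, so it gets the higher score.
```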
According to the obtained results, it can be inferred that the proposed NSGA-II is an efficient tool to deal with the proposed network. Managers may consider this algorithm to tackle the complexity of the problem in the real world and even extend it to other problems developed on the given network. For example, if other assumptions or limitations are incorporated into the problem, the flexibility of the algorithm plays a key role in providing a feasible, robust, optimal solution.

7. Conclusion and future works. This study addressed the design of a green CLSC for the production and distribution of perishable products under uncertainty. The main contribution is the design of a backup decision system for planning the supply, production, and distribution of perishable products, taking into account major real-world assumptions such as different levels of technology and approved time windows for the distribution of products using vehicles with a suitable cooling system. An RO approach was utilized to cope with the problem uncertainty. The main objectives of the problem were to minimize the total cost of the network and the total volume of emissions at the chain levels. To tackle the complexity of the offered model, an NSGA-II was developed. Furthermore, the GAMS/CPLEX solver and the ε-constraint method were used as an exact method to solve the problem and validate the model. The comparisons between the two proposed solution methods showed that the proposed algorithm covers a wider range of solutions, yields higher-quality solutions, and produces a more uniform front. Also, the solution time of the proposed NSGA-II was significantly shorter compared with the exact method. One notable point was the increase of the objective functions with increasing level of uncertainty, which was observed in all categories.
Based on the main limitations of the study, the following suggestions are given for future research directions: i. Developing other multi-objective meta-heuristic algorithms to be compared with the proposed NSGA-II, such as the Multi-Objective Particle Swarm Optimization (MOPSO) algorithm [44], the Multi-Objective Stochastic Fractal Search (MOSFS) algorithm [29] and the Multi-Objective Grey Wolf Optimizer (MOGWO) algorithm [30]. ii. Applying other uncertainty techniques to be compared with the proposed RO approach, such as fuzzy programming [20,52] and stochastic optimal control [57]. iii. Studying sustainable development in the problem by addressing the social aspect of the proposed CLSC network [1,36].
|
2021-08-27T17:02:05.977Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "86544c833e51d53362dd49c674e79f0fa4cbf8b3",
"oa_license": null,
"oa_url": "https://www.aimsciences.org/article/exportPdf?id=e0c272f6-ffef-4a99-ae61-8f4d782a4a83",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d63e1f9c088dc34c2e68b9a591fff5c5b77360d7",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
231848858
|
pes2o/s2orc
|
v3-fos-license
|
Clinical Improvement After Treatment With IncobotulinumtoxinA (XEOMIN®) in Patients With Cervical Dystonia Resistant to Botulinum Toxin Preparations Containing Complexing Proteins
This study investigated the clinical long-term effect of incobotulinumtoxinA (incoBoNT/A) in 33 cervical dystonia (CD) patients who had developed partial secondary therapy failure (PSTF) under previous long-term botulinum toxin (BoNT) treatment. Patients were treated four times every 12 weeks with incoBoNT/A injections. Physicians assessed treatment efficacy using the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS) at the baseline visit and at weeks 12 and 48. Patients rated the quality of life with CD using the Craniocervical Dystonia Questionnaire (CDQ-24). Titres of neutralizing antibodies (NABs) were determined at the start of the study and after 48 weeks. All patients had experienced significant and progressive worsening of symptoms in the last 6 months of previous BoNT treatment. Repeated incoBoNT/A injections resulted in a significant reduction in mean TWSTRS at weeks 12 and 48. Patients' rating of quality of life was highly correlated with TWSTRS but did not change significantly over 48 weeks. During the 48-week period of incoBoNT/A treatment, NAB titres decreased in 32.2%, did not change in 45.2%, and only increased in 22.6% of the patients. Thus, repeated treatment with the low dose of 200 MU incoBoNT/A over 48 weeks provided a beneficial clinical long-term effect in PSTF and did not boost NAB titres.
INTRODUCTION
Intramuscular injections of botulinum neurotoxin (BoNT) have become the treatment of choice for patients with cervical dystonia (CD) (1,2). BoNT preparations are high molecular weight aggregates of the biologically active neurotoxin (a polypeptide with a 100 kDa heavy chain and a 50 kDa light chain) and complexing proteins (hemagglutinating and non-hemagglutinating) (3,4). The BoNT/A formulation onabotulinumtoxinA (onaBoNT/A; Botox®, Allergan Inc, Irvine, USA) is composed of a 900 kD complex (so-called LL complex), the abobotulinumtoxinA formulation (aboBoNT/A; Dysport®, Ipsen Ltd., Slough, UK) is probably a mixture of 600 kD L complex and 300 kD M complex, and the BoNT/B formulation rimabotulinumtoxinB (rimaBoNT/B; Myobloc™, Solstice Neurosciences Inc, San Francisco, USA, and NeuroBloc®, Eisai Ltd., Hertfordshire, UK) consists of a 700 kD complex (4,5).
Repeated BoNT injection therapy can lead to reduced responsiveness to treatment [partial secondary treatment failure (PSTF)] and to the development of neutralizing antibodies (NABs) against botulinum neurotoxin. It has been suggested that a higher content of bacterial proteins might contribute to this secondary treatment failure (6). Potential adjuvant activity of the complexing proteins is also discussed (7,8). For instance, neutralizing antibodies had been detected in more than 17% of CD patients following onaBoNT/A treatment (9)(10)(11) before the protein content in this preparation was altered in 1998. Following the reduction of protein content, NABs were only reported in 1.2% of the patients receiving onaBoNT/A (12). For aboBoNT/A, a secondary non-responder rate of ≤5% and a NAB rate of >2% was found (13). In more recent cross-sectional studies prevalence of NABs was found to be even larger than 10% in patients being long-term treated over more than 10 years (14,15).
However, additional factors to size and amount of complexing proteins must play a crucial role for the generation of resistance to a BoNT formulation, since high secondary non-responder rates of up to 44% have been observed in CD treatment with the 700 kD BoNT/B formulation already after a few injection cycles (16,17). Most likely, the percentage of biologically inactive, but still immunologically relevant fragments of the neurotoxin may play a crucial role in the antigenicity of a BoNT formulation (4,18).
Once complete secondary treatment failure (CSTF) has occurred and high titres of NABs have been induced, it is recommended to terminate treatment with the administered BoNT serotype (9,19). In patients with PSTF some clinical response at week 4 may still be observed when high doses of botulinum toxin are used (20). In most patients with NAB induced PSTF a clear decrease of duration of clinical response is the first clinical sign of PSTF, the response at week 4 may persist, although relevant NAB titers have already been induced (20)(21)(22). If treatment failure has occurred after treatment with ona-or aboBoNT/A or rimaBoNT/B, the use of an alternative BoNT preparation containing complexing proteins usually does not overcome non-responsiveness (21). Therapy with different BoNT serotypes such as type B and type F may initially be successful (21,23,24), but will also induce antibody formation after few applications (21). To overcome antibody-induced treatment failure, extraction of NABs by plasmapheresis and immunoadsorption was successfully applied (21, 25) but was found not clinically practicable (21). Nowadays, these patients are considered to be candidates for deep brain stimulation (26).
Since July 2005, incobotulinumtoxinA (incoBoNT/A; Xeomin®, Merz Pharmaceuticals GmbH, Frankfurt, Germany) has been available for the treatment of focal dystonias (27). Using an innovative purification procedure, all complexing proteins are removed, resulting in a preparation containing only the pure botulinum neurotoxin (150 kD) and the lowest protein load of all available BoNT/A formulations (18,28). Since a reduction in complexing proteins is thought to reduce the risk of NAB development and secondary non-responsiveness, this risk may be low under incoBoNT/A treatment (15,29).
After this new BoNT/A formulation had become available, the question arose whether patients with resistance to abo- or onaBoNT/A might recapture benefit when treated with incoBoNT/A.
One might argue that incoBoNT/A administration in the treatment of CD patients with previous PSTF will not have any clinical effect because the neurotoxin is completely unprotected against the attacks of neutralizing antibodies. On the other hand, incoBoNT/A is manufactured differently from the other BoNT/A preparations and may have a slightly different 3D structure, which is relevant for NAB binding (4,18). It is therefore theoretically possible that some NABs induced by abo- or onaBoNT/A do not detect and do not reduce the biological function of incoBoNT/A (30). If this were the case, at least some of the patients with PSTF would respond progressively and the NAB titres would decline after the switch to incoBoNT/A. We, therefore, designed the following open, prospective, non-interventional study to analyse the clinical efficacy and the development of antibody titres after four injections of the low dose of 200 MU incoBoNT/A in a cohort of 33 partial secondary non-responders to BoNT preparations containing complexing proteins.
On the basis of the current recommendations for the treatment management of CD patients with partial secondary treatment failure and antibody formation, we had to expect that patients would continue to worsen and that antibody titres would be boosted.
Compliance With Ethical Standards
This open, prospective, observational, non-interventional, single center study was carried out according to the Declaration of Helsinki and Good Clinical Practice.
Informed consent was obtained from all individual participants included in the study. Local ethics committee of the Heinrich-Heine-University Duesseldorf, Germany (#4085) approval was obtained allowing to take blood samples and to determine the antibody status and publish these data in combination with anonymous clinical data of patients having given informed consent.
Definition of Partial Secondary Treatment Failure
Criteria for partial secondary treatment failure (PSTF) were: (i) the patient had previously had a good response by at least 3 TSUI score points (31), (ii) the patient presents with a systematic worsening of CD despite dose increase and/or change of BoNT preparations containing complexing proteins. Systematic worsening was defined as an increase by at least two points over three consecutive TSUI scores each determined about 3 months after injection; (iii) the patient reports reduced efficacy for these last three consecutive injections in comparison to previous injections [for a detailed discussion of the definition of PSTF see Hefter et al. (32)].
Patients and Intervention
The charts of all CD patients attending our botulinum toxin outpatient clinic were screened for eligibility; 55 patients presented with PSTF according to our definition (see section Definition of Partial Secondary Treatment Failure) and were informed about 4 different therapy options: (1) to participate in the present study, (2) to continue BoNT/A therapy outside this study, (3) to cease BoNT therapy, and (4) to undergo deep brain stimulation. Thirty-three of these patients gave informed consent to participate and were consecutively recruited. The other 22 patients decided to undergo deep brain stimulation (n = 20) or to stop BoNT therapy (n = 2).
Most (n = 25) of the 33 recruited patients had already previously been included in a study on treatment of de novo CD-patients with 500 U aboBoNT/A and had clinically been characterized very well (33). At the time of recruitment 24 patients had a main rotational component, nine a main lateral component. None of the patients suffered from a pure antecollis or antecaput (34). In 10 patients a severe additional retrocomponent was present, in 15 an additional shoulder elevation and in seven patients a moderate to severe head tremor. As described previously patients with head tremor had responded quite well (33). A second worsening with head tremor was a sensitive and objective symptom for the development of PSTF. Since 2003 CD-patients in our institution are treated according to the cap/col-concept (35) which takes into account the differences between neck and head position and movements and of the underlying activity of muscles causing these different head positions and movements (35).
After recruitment demographical and treatment-related data [date of the last two injections (T-1, T-2), the preparation used, total dose, dose per muscle, and corresponding TSUI scores] were extracted from the charts.
Patients received intramuscular injections of 200 U incoBoNT/A without EMG guidance every 12 weeks (four injection cycles = 48 weeks) according to their previous BoNT injection protocols. If a given muscle M had been treated with a dose TM, it was treated with a Xeomin® dose XM (= 200 U × TM/T) after the switch to incoBoNT/A, where T is the total dose of the previous preparation.
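The dose-scaling rule above (each muscle keeps its share of the total dose) can be sketched as follows; the example muscle dose and previous total are hypothetical, not patient data:

```python
# XM = 200 U * TM / T: rescale a muscle's previous dose TM to the new
# 200 U incoBoNT/A total, preserving its fraction of the total dose T.

def xeomin_dose(previous_muscle_dose, previous_total_dose, new_total=200.0):
    return new_total * previous_muscle_dose / previous_total_dose

# e.g. a muscle that had received 150 U out of a 500 U aboBoNT/A total
print(xeomin_dose(150, 500))  # 60.0
```

The relative distribution of toxin across muscles is thus unchanged; only the absolute total is fixed at 200 U.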
Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS)
The severity of CD was assessed by the treating physicians (UK or MM) at the baseline visit T0, at week 12 (T1), and at week 48 (T4). When UK had scored a patient at one visit, MM scored that patient at the next visit, and vice versa; which physician scored a patient first in the study varied randomly. HH collected the data so that the scoring physician was not biased by the preceding investigation. The TWSTRS total score (range 0-85 points) (36) was used, which consists of the three scores for the subscales severity (range 0-35), disability (0-30), and pain (0-20). The subscales of disability and pain are based on the patients' subjective assessments.
Since the severity of CD had worsened before therapy was switched to incoBoNT/A, patients were considered treatment responders if their scores on the TWSTRS severity subscale at week 48 had improved from baseline by ≥3 points. Patients with an improvement of more than five points were classified as very good responders. Definite non-response was present when the previous worsening continued and a further increase of three points or more was found. A TWSTRS change from baseline of no more than ±2 points was regarded as no change. Our definition of treatment response was based on the results of a previous randomized, double-blind, comparator trial between onaBoNT/A, and incoBoNT/A (27) and will be discussed in detail in section "Is the Improvement of TWSTRS Severity Score Under incoBoNT/A clinically Relevant?".
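The response categories defined above can be sketched as a small classifier; the change is expressed as baseline score minus follow-up score, so positive values mean improvement, and the function name is ours, not the study's:

```python
# Thresholds from the text: >5 points = very good responder, >=3 = responder,
# <=-3 (further worsening) = definite non-responder, within +/-2 = no change.

def classify_response(improvement_points):
    if improvement_points > 5:
        return "very good responder"
    if improvement_points >= 3:
        return "definite responder"
    if improvement_points <= -3:
        return "definite non-responder"
    return "no clear-cut change"

print(classify_response(6), classify_response(3),
      classify_response(0), classify_response(-4), sep=" | ")
# very good responder | definite responder | no clear-cut change | definite non-responder
```

Since TWSTRS subscores are integers, no change value falls between the ±2 and ±3 boundaries.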
Craniocervical Dystonia Questionnaire (CDQ24)
Quality of life (QoL) was rated by the patients at baseline and after 12 and 48 treatment weeks using the craniocervical dystonia questionnaire (CDQ-24), a 24-item disease-specific instrument based on the five subscales: stigma, emotional well-being, pain, activities of daily living, and social/family life (37). Patients with more than 20% improvement of CDQ24 were classified as responders, those with a worsening of more than 20% were classified as non-responders.
Antibody Testing
Blood samples for BoNT antibody testing were collected at the start of the trial and after 48 weeks. Antibody titres were determined by an independent blinded contractor (Toxogen GmbH, Hannover, Germany) using the sensitive mouse hemidiaphragm assay (MHDA) for neutralizing antibodies (19). The upper and lower limit of neutralizing antibody detection were 10 and 0.1 mU/ml, respectively. All blood samples were analyzed at the same time following the collection of all clinical data to avoid any influence of knowing patients' antibody status according to clinical scoring procedure. One sample was lost in transport and one sample was spilled; 64 samples were analyzed.
Statistical Analysis
The primary outcome measures of the study were the changes from baseline (before the first incoBoNT/A injection) to week 12 and to the end of the 4th treatment cycle at week 48 in TWSTRS severity and total score and in CDQ-24. The Wilcoxon test was used to analyse these paired measurements non-parametrically. For correlations, the non-parametric Spearman's rho was used. All tests were part of the commercially available statistics package SPSS (version 23; Armonk, USA).
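For illustration, Spearman's rho used for the correlations can be sketched in pure Python (valid only for tie-free data); the paired scores below are made-up examples, not study data:

```python
# Spearman's rho for tie-free data: rank both variables, then apply
# rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference.

def spearman_rho(x, y):
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

twstrs = [22, 18, 25, 30, 17, 21]  # hypothetical severity scores
cdq24 = [40, 35, 55, 50, 28, 38]   # hypothetical quality-of-life scores
print(round(spearman_rho(twstrs, cdq24), 2))  # 0.94
```

Because only ranks enter the formula, the statistic is robust to the non-normal, bounded distributions typical of clinical rating scales.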
Significant Worsening of CD Severity Prior to incoBoNT/A Treatment
Thirty-three CD patients (17 females/16 males; mean age 56.4 ± 5.2 years) under long-term BoNT treatment presenting with partial secondary therapy failure were included in the study. At the last injection prior to study entry, six of the patients (18.2%) had received onaBoNT/A, 17 (51.5%) aboBoNT/A, and 10 (30.3%) had already been switched from a type A preparation to rimaBoNT/B. Dose ranges of the previous BoNT preparations during the last two injections were 200-300 MU onaBoNT/A, 800-1,200 MU aboBoNT/A, and 10,000-15,000 MU rimaBoNT/B. In line with our definition of PSTF, all patients experienced highly significant deterioration (p < 0.001) of CD severity during the last 6 months before onset of incoBoNT/A therapy (Figure 1). For each patient, TSUI-scores of the last two injections at T-2 and T-1 were normalized to the patient's baseline TSUI-score at T0 (=100%). Mean normalized TSUI-scores at T-2 and T-1 were highly significantly (p < 0.001) lower than the mean baseline TSUI-score. There was no difference in deterioration between patients previously treated with aboBoNT/A, onaBoNT/A, or rimaBoNT/B.

FIGURE 1 | Highly significant worsening of mean normalized TSUI score during the last 6 months under previous BoNT treatment before the first incoBoNT/A injection. ***p < 0.001. T-2 = mean normalized TSUI score before the second last BoNT injection prior to incoBoNT/A treatment; T-1 = mean normalized TSUI score before the last BoNT injection prior to first incoBoNT/A treatment; T0 = normalized baseline TSUI score (=100%) just before first incoBoNT/A treatment. TSUI-scores at T-2 and T-1 were normalized to baseline TSUI-score at T0.
Significant Improvement of CD Severity After incoBoNT/A Treatment
A significant reduction in TWSTRS severity subscore compared to baseline was observed at week 12 (p < 0.05) and at week 48 (after 4 incoBoNT/A injections; p < 0.01; Figure 2). Because of incomplete data for the TWSTRS score, a direct comparison between baseline and week-48 TWSTRS severity score was only possible in 25 patients. Compared to the baseline score, the TWSTRS severity subscore decreased (i.e., improved) in 20 patients (80%), did not change in one patient (4%), and increased in four patients (16%). Eleven of the patients were definite responders (44%) after 48 weeks, only three patients were definite non-responders (12%), and a further 11 patients (44%) were possible responders (no clear-cut change: ±2 points). Changes of more than five points were seen in five patients (very good responders: 20%). The individual development of TWSTRS severity subscores is illustrated for the entire cohort in the upper part of Figure 3 and for the three responder groups in the lower part of Figure 3. Improvements were also observed at week 48 for the TWSTRS subscale pain and for the total TWSTRS; however, they failed to reach significance (Table 1).
No correlation was found between clinical response to incoBoNT/A after 12 and 48 weeks and the previous total dose or the previous duration of treatment. For the sake of comparison and simplicity, inco- and onaBoNT/A doses were kept constant, aboBoNT/A doses were divided by 3 and rimaBoNT/B doses by 30, following a European consensus recommendation (38), well knowing that these conversion ratios may vary from study to study and from muscle to muscle [for an overview see (39,40)].

FIGURE 2 | Mean (+SD) TWSTRS severity subscore at baseline and following incoBoNT/A injections. *p < 0.05; **p < 0.01. T0 = mean baseline TWSTRS just before first incoBoNT/A treatment; T+1 = mean TWSTRS after 12 weeks just before second incoBoNT/A treatment; T+4 = mean TWSTRS 12 weeks after 4th incoBoNT/A treatment.
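For illustration, the conversion to onaBoNT/A-equivalent units using these consensus ratios can be sketched as follows; as the text notes, the ratios are approximate and vary between studies and muscles:

```python
# Consensus conversion ratios quoted in the text: ona- and incoBoNT/A 1:1,
# aboBoNT/A divided by 3, rimaBoNT/B divided by 30 (approximate values).

RATIOS = {"onaBoNT/A": 1.0, "incoBoNT/A": 1.0, "aboBoNT/A": 3.0, "rimaBoNT/B": 30.0}

def ona_equivalent(dose_mu, preparation):
    """Convert a dose in MU of the given preparation to onaBoNT/A-equivalent MU."""
    return dose_mu / RATIOS[preparation]

print(ona_equivalent(1200, "aboBoNT/A"))    # 400.0
print(ona_equivalent(12000, "rimaBoNT/B"))  # 400.0
```

On this scale, the previous dose ranges (200-300 MU ona, 800-1,200 MU abo, 10,000-15,000 MU rima) all correspond to roughly 200-500 onaBoNT/A-equivalent MU, against which the fixed 200 MU incoBoNT/A dose is comparatively low.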
Only Little Changes in Quality of Life During incoBoNT/A Treatment
Median CDQ-24 scores also improved from baseline to week 48 but failed to reach the significance level of 5% (Table 1). Eight of the 33 patients (24.2%) reported an improvement of at least 20%, whereas only four patients (12.1%) considered their QoL worsened by 20% or more; in all other patients (63.7%), changes were smaller than ±20%. At week 12, patients' subjective rating of quality of life (CDQ24) at baseline was just below significance for a correlation with the physicians' TWSTRS severity ratings (r = 0.3694; p = 0.058). However, there was a highly significant correlation between the CDQ24 and the total TWSTRS when the pain and disability subscores were added to the severity subscore (r = 0.5792; p < 0.001). At week 48 no correlation between CDQ24 and total TWSTRS was found (r = 0.2563; p > 0.05).
Development of Antibody Titres During incoBoNT/A Treatment
Information on neutralizing antibody titers at baseline was available for 32 of our 33 CD patients with PSTF: 25 patients tested positive for the presence of neutralizing antibodies, seven patients tested negative. One of the post-incoBoNT/A samples of the patients testing negative was spilled; therefore, 31 pre/post incoBoNT/A comparisons were available (Table 2).
All patients testing negative at baseline remained negative (6/31 = 19.4%; Table 2). Of the 25 CD patients testing positive at baseline, 10 patients (40%) had decreased antibody titers after 48 weeks (Table 2), and titers remained constant in eight patients (32%; Table 2); boostering occurred only in seven patients (28%; Table 2). Overall, an increase in NAB titres was only detected in seven of the 31 patients with available pre/post comparison data (22.6%); in all other 24 patients (77.4%) titres either declined or remained constant. Titers remained high in four patients who presented with an initially high titer. In four patients with a positive assay, the test was negative after four cycles of incoBoNT/A treatment (Table 2). No correlation was found between initial NAB titers and the changes of TWSTRS severity scores over 48 weeks of incoBoNT/A treatment and between changes of NAB titers and changes of TWSTRS severity scores after 48 weeks. There was a non-significant trend that higher scores were associated with higher NAB titers at baseline, but not at week 48. None of the six patients testing negative at baseline and during the study (Table 2) was classified a responder.
DISCUSSION
The present study shows that injection therapy with a standard dose of 200 MU incoBoNT/A over a 48-week period provides a significant, beneficial clinical long-term effect in a cohort of CD patients (Figure 2) who had continuously worsened under previous abo- or onaBoNT/A or rimaBoNT/B treatment.
Is the Improvement of TWSTRS Severity Score Under incoBoNT/A Clinically Relevant?
In contrast to other studies on the efficacy of single BoNT injections for CD treatment, where the primary efficacy analysis is assessed at week 4 following treatment (2,27,41), efficacy analysis in the present study was performed at the end of an injection cycle, just before the next injection cycle was started. This is important to keep in mind, since in patients with NAB-induced PSTF the 4-week effect may be preserved whereas the duration of the efficacy has already declined (21). By means of the method used here, improvement can only be detected when the effect of an incoBoNT/A injection lasts longer than 12 weeks. The significant improvement in the present study therefore does not reflect a transient 4-week effect, but indicates a permanent improvement during the entire injection cycle, lasting months. Thus, efficacy analysis in the present study is highly conservative.
Our data compare well to a randomized, double-blind comparator trial between onaBoNT/A and incoBoNT/A (27) in patients responding well to onaBoNT/A. Mean improvement of the TWSTRS severity subscore 16 weeks after either incoBoNT/A or onaBoNT/A injection was significant from baseline for both BoNT/A preparations, without difference between them. The mean change in TWSTRS severity score at week 4 was in the order of −6, which decreased to −1.8 at week 16. Using our responder criterion of ≥3 points improvement in TWSTRS severity score at week 12, the corresponding responder rates at week 12 for incoBoNT/A and onaBoNT/A injections in the comparator trial can be estimated to be close to 50% (assuming a linear decrease of efficacy from week 4 to 16). In the present study, incoBoNT/A treatment of patients with partial secondary treatment failure to BoNT treatment resulted in a responder rate around 44%. This also compares well to a double-blind, randomized, controlled trial on the efficacy and safety of treatment of de novo, well-responding CD patients with 500 units aboBoNT/A (42). Responder rates in the Dysport® arm were 30/35 (=86%) at week 4, 26/35 (=74%) at week 8, and 2/35 (=5.7%) at week 16 (42). Linear interpolation to estimate the responder rate at week 12 yields 14/35 (=40%). This is also close to responder rates observed when CD patients unresponsive to BoNT/A were switched to treatment with BoNT/B (24).
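The 14/35 (=40%) estimate can be reproduced by linearly interpolating between the observed week-8 and week-16 aboBoNT/A responder rates; the interpolated point lies at week 12, halfway between the two observations:

```python
# Linear interpolation between two observed (time, value) points.

def interpolate(t, t0, v0, t1, v1):
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# 26/35 responders at week 8, 2/35 at week 16 -> estimated rate at week 12
rate = interpolate(12, 8, 26 / 35, 16, 2 / 35)
print(round(rate, 2), round(rate * 35))  # 0.4 14
```

This simple linear assumption is the same one used above to estimate the week-12 responder rates of the comparator trial.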
When evaluating the relevance of clinical improvement with incoBoNT/A, we have to take into account that our patients experienced a highly significant deterioration during the last 6 months before the switch to incoBoNT/A treatment. Initiation of incoBoNT/A treatment not only stopped this deterioration but unexpectedly initiated a slowly progressive improvement. This improvement is not due to a large injection dose of incoBoNT/A, since the dose of 200 MU incoBoNT/A is low compared to the doses of BoNT/A or BoNT/B used before the switch (see Results section Significant Worsening of CD Severity Prior to incoBoNT/A Treatment).
Some of the patients who had previously been switched to BoNT/B after PSTF to BoNT/A had expected a larger incoBoNT/A effect and were disappointed. They tended to underestimate the effect of the switch to incoBoNT/A. On average, there was no significant change in quality of life as measured by the CDQ24. However, at least three patients reported that the effect of incoBoNT/A injections was close to the effect experienced when they had received BoNT/A injections for the first time. These patients probably overestimated the incoBoNT/A effect because of the preceding deterioration.
Similar to a large study in de novo CD patients correlating TSUI and CDQ24 scores (43), the correlation between TWSTRS severity subscore and CDQ24 in the present study was only weak. A much better correlation was found between total TWSTRS and CDQ24, since both scales contain items asking about pain and everyday life activities. The items in the CDQ24 addressing stigmatization correlated best with severity scores. This was also the case in the above-mentioned study (43).
Development of Neutralizing Antibodies Under incoBoNT/A Treatment
The present results indicate that incoBoNT/A injections may be clinically effective without boosting NAB levels. In most of the patients (>77%), NAB titres did not increase despite the injection of the pure type A neurotoxin. Furthermore, titers declined as rapidly under incoBoNT/A injections as after cessation of any BoNT/A treatment (30). Doses of 200 MU incoBoNT/A with a protein load of 0.8 ng thus seem to lie close to the detection limit of the human immune system (30).
Our results also support the view that there is no simple relationship between NAB titers and clinical outcome in CD patients. The comparison of TWSTRS severity score and NAB titers did not show any correlation before or after incoBoNT/A treatment and no correlation between changes in both parameters. This is in full agreement with Lange et al. (44), who suggested that there is little or no relation between clinical data and NAB titers. In our experience, the paralysis time, which is the direct outcome measure of the MHDA, yields better correlations with clinical data than the derived NAB titers (45).
So far it is not clear why in some patients with PSTF the MHDA does not detect neutralizing antibodies. Compared to other studies analyzing antibodies in patients with PSTF (13,44), the number of patients without detectable NABs in the MHDA was rather low (7/32; <22%). These patients did not respond better to incoBoNT/A than most of the other patients. On the other hand, NAB titers declined below the detection limit in four patients. These patients had developed PSTF under ona- or aboBoNT/A treatment with positive NAB testing but responded well to incoBoNT/A and became negative in the MHDA test after 48 weeks. This underlines how complex the problem of responsiveness to different BoNT/A preparations and the mechanisms of NAB induction in BoNT treatment of dystonias are.
Clinical changes may precede changes in NAB titers considerably. It has been shown that high antibody titers take years to decline and long-lasting treatment failure is common (20,21,30). Therefore, the development of neutralizing antibodies should be avoided from the very beginning. Animal experiments might help to carefully analyse the temporal course of antibody induction and changes of efficacy (46), but their interpretation is of limited value due to species differences.
Speculations on Possible Reasons for the incoBoNT/A Effect and Lack of Correlation With Antibody Titres
It has been reported that the application of higher doses and EMG guidance may lead to clinical improvement in patients with PSTF (47). Patients in the present study were treated with 200 MU incoBoNT/A which is fairly low compared to the doses patients received prior to this study. Thus, PSTF was not overcome in the present study by means of high incoBoNT/A doses. Furthermore, patients received incoBoNT/A injections without EMG guidance following the same injection protocols as used for their previous BoNT injections. There is no reason to assume that incoBoNT/A was administered more precisely than the other BoNT formulations.
To explain the effect of incoBoNT/A treatment, one has to take into account the differences in the various BoNT preparations regarding neutralizing antibody induction. IncoBoNT/A does not contain complexing proteins. It has been reported that components of the BoNT/B haemagglutinin complex stimulate interleukin 6 production and probably enhance antibody production against the neurotoxin (7). This may also be the case for BoNT/A complexing proteins (8). Haemagglutinins act as lectins with high specificity to galactose-containing glycoproteins and glycolipids (48). Lectins are known to function as immune adjuvants. The cell-binding subunit of ricin, for example, stimulates the antibody production against a virus antigen (49). An additional factor influencing the immune response could be flagellin, which was identified as a protein component of the abobotulinumtoxinA bulk toxin (50). Flagellin interacts with Toll-like Receptor 5 (TLR5), initiating an innate immune response (51), and is known to be an immunological adjuvant (52). Because of the reduced bacterial protein content of incoBoNT/A, there seems to be less sensitization of the human immune system with the pure neurotoxin than with the entire BoNT complex (4). Furthermore, because of a new purification process used for incoBoNT/A, the relation between intact (biologically active) and damaged (biologically inactive, but still immunizing) neurotoxin A is more favorable for incoBoNT/A than for the other BoNT/A preparations (18).
OnaBoNT/A, aboBoNT/A, and incoBoNT/A are manufactured quite differently. Differences in purification and vacuum extraction may have an impact on the 3D structure of the highly complex botulinum neurotoxin molecule. But it is this 3D structure which is relevant for antibody formation and binding. If therapy failure in a patient is mediated by a monoclonal antibody induced by abo- or onaBoNT/A which does not detect incoBoNT/A, this patient will respond as a de novo patient to incoBoNT/A. IncoBoNT/A injections may, therefore, be clinically effective without boosting NAB levels during long-term treatment. In clinical practice, the spectrum of NABs in a patient, whether it consists only of a monoclonal AB or contains polyclonal ABs, is usually not known. Furthermore, the efficacy of a human antibody in reducing the biological function of BoNT may be different in a human being compared to the MHDA, which may be a general reason why MHDA titres do not correlate well with the effect in clinical practice.
CONCLUSION
The present study provides evidence that incoBoNT/A is clinically effective in the long-term treatment of CD patients who had become poorly responsive to other BoNT preparations. The present clinical data in combination with NAB measurements, showing continuous improvements in CD severity as well as non-increase of NAB titers following repeated incoBoNT/A injections in the majority of CD patients with PSTF, indicate low antigenicity for incoBoNT/A. This should be further explored in a multicentre, prospective study monitoring clinical outcome as well as antibody titers before and during incoBoNT/A treatment. Confirmation of low incoBoNT/A antigenicity might lead to changes in the way CD patients are treated. Shorter intervals and higher doses could be used, a further major step in the improvement of botulinum toxin injection therapy.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Local ethics committee of the Heinrich-Heine-University Duesseldorf, Germany (#4085). The patients/participants provided their written informed consent to participate in this study.
|
2021-02-09T14:17:25.864Z
|
2021-02-09T00:00:00.000
|
{
"year": 2021,
"sha1": "d0429c95f0689db501c7522edf66cc5d73b347aa",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2021.636590/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d0429c95f0689db501c7522edf66cc5d73b347aa",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
254206885
|
pes2o/s2orc
|
v3-fos-license
|
A Latent Profile Analysis of PERMA: Associations with Physical Activity and Psychological Distress among Chinese Nursing College Students
Background: The wellbeing of college students is an important concern for public health, and may have associations with insufficient physical activity and psychological distress. This study aimed to identify the latent classes of wellbeing based on the PERMA (i.e., positive emotions, engagement, relationships, meaning, and accomplishments) wellbeing framework, and to explore their associations with levels of physical activity and psychological distress. Methods: A cross-sectional online survey was conducted. A latent profile analysis was performed to characterize the different classes of wellbeing of nursing college students. Results: A group of 1741 nursing college students in China completed the study. Three wellbeing classes were identified in the final model (i.e., low-level wellbeing, moderate-level wellbeing, and high-level wellbeing). Significant differences were found between the three classes in terms of gender (p = 0.002) and year of study (p = 0.038). Low levels of physical activity participation were significantly associated with lower odds of being in the high-level wellbeing class compared with the moderate-level wellbeing class (OR = 1.398, 95%CI [1.023, 1.910], p = 0.035). Lower levels of psychological distress were also associated with greater wellbeing among the three wellbeing classes (p < 0.05). Conclusions: Effective strategies are needed to increase college students’ physical activity participation and decrease the severity of psychological distress to improve their health and wellbeing in China.
Introduction
Nursing college students often experience high learning pressure, and such life patterns can influence their wellbeing (e.g., psychological distress and depression) [1,2]. Other potential factors, such as a lack of correct family education, increasingly fierce competition after graduation, and excessive internet use, also adversely affect college students' mental health [3,4]. In this respect, studies have reported that more than 30% of college students suffer from psychological distress [5]. Furthermore, the existing literature has found that negative wellbeing and mental health are associated with reduced academic function, frustration, poor physical health conditions, and even a high risk of suicidal tendencies [5,6]. One previous study has also described that nursing students experience a higher prevalence of wellbeing disorders than students in other subjects [7]. Therefore, it is important to understand the multidimensional wellbeing of nursing college students in order to develop well-targeted public health interventions and policies and improve their wellbeing. A growing body of empirical evidence suggests that wellbeing is mainly characterized by positive emotions, engagement, relationships, meaning, and accomplishments, constituting a theoretical framework referred to as PERMA [8,9]. Specifically, positive emotions refer to the affective component or feeling well, with engagement referring to a deep psychological connection to a particular activity, relationship referring to social connection, meaning referring to purpose, and accomplishments referring to personal goals [10,11]. Wellbeing and flourishing have been examined using the PERMA wellbeing framework in adult workers [12], cancer survivors [13], and artists [14] across countries such as Korea, China, Brazil, and India.
The PERMA wellbeing framework was also used in university contexts to examine the associations between wellbeing and the online learning environment [15], as well as health education [16]. However, these studies took a single-dimensional approach in interpreting the scores of the five PERMA wellbeing elements. Because the PERMA-Profiler calculates mean scores for the five wellbeing elements rather than categorical scores that classify different levels of wellbeing, few studies have used the PERMA to explore different classes of wellbeing in nursing college students. To address this limitation, latent profile analysis (LPA) can be used to examine how the five PERMA wellbeing elements interact and to classify nursing college students into different wellbeing classes.
Previous studies have suggested that decreasing the psychological distress of nursing college students is critical for prevention and intervention to enhance their overall wellbeing [17]. This is because psychological distress (e.g., anxiety and depression) may contribute to an increased rate of mental disorders that have implications for nursing college students' educational and career development [1]. In addition, protective factors, such as physical activity participation, are considered as effective approaches to improve nursing college students' wellbeing [18]. Individuals who engage in regular physical activity are more likely to have an increased sense of wellbeing and a reduction in psychological distress [19]. Given the importance of developing differential interventions to enhance college students' wellbeing, it is necessary to examine the associations between physical activity participation and psychological distress and different wellbeing classes in this population. Therefore, the first aim of this study was to use the LPA to identify classes of wellbeing based on the five-element PERMA wellbeing framework in Chinese college students. In addition, this study examined the impact of levels of physical activity participation and psychological distress on different classes of wellbeing in this population. This study hypothesized that there would be significant differences in physical activity participation and levels of psychological distress among different wellbeing classes.
Study Design
A cross-sectional study using an online survey design was performed in this study. Ethics approval for this study was received from the University Human Research Ethics Committee (2021-R-165). Completion and submission of the online survey implied consent to participate. This was declared to respondents at the commencement of the survey.
Participants and Recruitment
Using convenience sampling, nursing students from a large medical college in China were invited to participate. All nursing students who were willing to participate in this study were eligible to enroll. An email invitation with the help of the College Deputy Vice Administration Office was sent to all nursing students. The email invitation included the purpose of the study, inclusion and exclusion criteria, and the online survey link (https://www.wjx.cn (accessed on 10 October 2021)) to the questionnaire. A total of 1741 Chinese nursing college students were included for analysis in this study after removing those who reported more than 960 min of total physical activity or sedentary time per day (n = 184, 9.6%). Participants were aged between 17 and 24 years (M = 19.38; SD = 1.02). Among these participants, most were female (n = 1366, 78.5%), and were in their second year of study (n = 765, 43.9%).
Data Collection
Data collection took place between 10 October and 30 December 2021. The study survey included participants' demographic details (i.e., age, gender, and year of study), PERMA-Profiler, International Physical Activity Questionnaire-Short Form (IPAQ-SF), and an assessment of psychological distress.
The PERMA-Profiler Chinese version was used to evaluate individuals' multidimensional wellbeing [20]. This study used the 15 items of the PERMA-Profiler to assess the five elements of wellbeing (i.e., positive emotions, engagement, relationships, meaning, and accomplishments). Three items assessed each PERMA element, and composite scores were averaged across the three items per element. Each item was scored on a Likert-type scale ranging from 0 to 10 (0 = not at all, 10 = completely; 0 = never, 10 = always; 0 = terrible, 10 = excellent), with higher scores indicating greater wellbeing. This tool demonstrated good internal consistency among Chinese nursing college students in this study (Cronbach's α = 0.933).
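As described above, each PERMA element score is the mean of its three 0-10 items. A minimal sketch of that composite scoring (the item keys such as "P1" are hypothetical placeholders, not the PERMA-Profiler's actual item labels):

```python
# PERMA-Profiler composite scoring: three items per element, averaged.
# Item keys are hypothetical; real administrations map questionnaire items here.
ELEMENTS = {
    "positive_emotions": ["P1", "P2", "P3"],
    "engagement":        ["E1", "E2", "E3"],
    "relationships":     ["R1", "R2", "R3"],
    "meaning":           ["M1", "M2", "M3"],
    "accomplishments":   ["A1", "A2", "A3"],
}

def perma_scores(responses):
    """Average the three 0-10 item ratings for each PERMA element."""
    return {el: sum(responses[i] for i in items) / len(items)
            for el, items in ELEMENTS.items()}

example = {"P1": 7, "P2": 8, "P3": 6, "E1": 5, "E2": 6, "E3": 7,
           "R1": 9, "R2": 8, "R3": 7, "M1": 6, "M2": 6, "M3": 6,
           "A1": 4, "A2": 5, "A3": 6}
print(perma_scores(example)["positive_emotions"])  # 7.0
```

Higher composite scores indicate greater wellbeing on that element, as in the profiler itself.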
The IPAQ-SF Chinese version was used to assess participants' physical activity participation and average sitting time on weekdays and weekends during the past seven days [21]. There are two types of IPAQ scores for data processing and analysis: a categorical and a continuous score. The categorical score classified participants into three physical activity intensity levels (i.e., low, moderate, and high). The continuous score is expressed as the metabolic equivalent task (MET minutes per week) of energy expenditure. In addition, participants' sitting time (i.e., hours per day) was also recorded on the IPAQ-SF. High validity and reliability for the IPAQ-SF have been established among Chinese adults with intraclass correlation coefficients above 0.84 [21].
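The continuous IPAQ-SF score mentioned above can be sketched as follows. The MET weights (walking 3.3, moderate 4.0, vigorous 8.0) follow the standard IPAQ scoring protocol; the shape of the input dictionary is an assumption for illustration:

```python
# Continuous IPAQ-SF score: weekly energy expenditure in MET-minutes.
# MET weights per the standard IPAQ scoring protocol.
MET = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

def met_minutes_per_week(activity):
    """activity maps each intensity to (days per week, minutes per day)."""
    return sum(MET[k] * days * minutes for k, (days, minutes) in activity.items())

# e.g. 5 x 30 min walking, 3 x 40 min moderate, 2 x 20 min vigorous
week = {"walking": (5, 30), "moderate": (3, 40), "vigorous": (2, 20)}
print(round(met_minutes_per_week(week), 1))  # 1295.0
```

The categorical score (low/moderate/high) applies additional frequency and duration thresholds on top of these MET-minutes, which are omitted here.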
The 10-item Kessler Psychological Distress Scale (K10), Chinese version, was used to assess psychological distress in the past month [22]. The K10 is a self-reported questionnaire containing ten questions, each scored from 1 to 5, that assess the frequency of participants' nonspecific psychological distress across the past month based on questions related to symptoms of anxiety and depression. Participants chose how often they felt or thought in a certain way: 1 = almost never, 2 = sometimes, 3 = fairly often, 4 = very often, and 5 = all the time. The total score was obtained by summing all 10 items, giving a total score of 10-50. A score of 15 or less reflected no symptoms of distress, while low distress ranged from 16 to 21, moderate distress ranged from 22 to 29, and high distress ranged from 30 to 50. The K10 scale is a valid instrument with acceptable internal consistency, with a Cronbach's α of 0.954 in this study.
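The K10 scoring rule and the cut-offs given above translate directly into code:

```python
# K10 scoring: sum of ten 1-5 items (range 10-50), banded into the
# distress categories defined in the text above.
def k10_category(items):
    assert len(items) == 10 and all(1 <= x <= 5 for x in items)
    total = sum(items)
    if total <= 15:
        return total, "no distress"
    if total <= 21:
        return total, "low distress"
    if total <= 29:
        return total, "moderate distress"
    return total, "high distress"

print(k10_category([2, 2, 1, 3, 2, 2, 1, 2, 3, 2]))  # (20, 'low distress')
```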
Statistical Analysis
Data analysis was conducted using the Statistical Package for the Social Sciences (SPSS) version 27.0 and Mplus 8.3. Based on the data-cleaning rules for the IPAQ-SF, respondents who reported over 960 min of total physical activity or sedentary time per day were identified as over-reporting. The assumption is that individuals spend an average of 8 h per day asleep [23]. Descriptive statistics were calculated using frequencies (i.e., percentages) for categorical variables and means and standard deviations for continuous variables.
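The over-reporting rule described above (960 min = 16 waking hours, assuming 8 h of sleep) amounts to a simple filter; the field names below are hypothetical:

```python
# Drop respondents whose total physical activity or sitting time exceeds
# 960 min/day, per the IPAQ-SF data-cleaning rule described above.
def clean(respondents):
    return [r for r in respondents
            if r["activity_min_per_day"] <= 960 and r["sitting_min_per_day"] <= 960]

sample = [
    {"id": 1, "activity_min_per_day": 120, "sitting_min_per_day": 400},
    {"id": 2, "activity_min_per_day": 1000, "sitting_min_per_day": 300},  # over-reported
]
print([r["id"] for r in clean(sample)])  # [1]
```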
The LPA was used to determine the optimal number of wellbeing classes based on the PERMA wellbeing framework among Chinese nursing college students. Starting from the initial one-class model, the number of classes was gradually increased until the model fit the data optimally. The optimal model was determined by comprehensive consideration of fit indicators and theoretical interpretability. The model fit indicators were the log-likelihood (LL), Akaike's information criterion (AIC), Bayesian information criterion (BIC), sample-size-adjusted Bayesian information criterion (ssaBIC), entropy (>0.8 is acceptable), the Lo-Mendell-Rubin (LMR) test, and the bootstrapped likelihood ratio test (BLRT). Smaller AIC, BIC, and ssaBIC values are more desirable, indicating models that fit better and are more parsimonious. Additionally, a sample size requirement was applied in the LPA: each class had to contain more than 5% of the sample.
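The class-enumeration loop above can be illustrated in Python: an LPA with continuous indicators and local independence corresponds to a Gaussian mixture model with diagonal covariances. The study itself used Mplus; this scikit-learn sketch on simulated data only illustrates comparing information criteria across 1-5 class solutions (Mplus additionally provides the LMR and BLRT tests, which scikit-learn does not):

```python
# Approximate LPA class enumeration with a diagonal-covariance Gaussian
# mixture, comparing AIC/BIC across 1-5 class solutions on simulated data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated 5-indicator "PERMA" data with two well-separated latent groups
X = np.vstack([rng.normal(4, 1, (200, 5)), rng.normal(8, 1, (200, 5))])

fits = {}
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         n_init=5, random_state=0).fit(X)
    fits[k] = (gm.aic(X), gm.bic(X))
    print(f"{k} classes: AIC={fits[k][0]:.1f}  BIC={fits[k][1]:.1f}")

best_bic = min(fits, key=lambda k: fits[k][1])  # smaller BIC is better
print("BIC-preferred solution:", best_bic, "classes")
```

With two planted groups, the 2-class solution should minimize BIC; on real data the choice also weighs entropy, class sizes, and interpretability, as described above.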
Differences in demographic characteristics and the five PERMA elements among the wellbeing classes within the final model were compared using analysis of variance (ANOVA) (i.e., age, PERMA elements) and chi-square tests (i.e., gender and year of study). Furthermore, multinomial logistic regression analyses were conducted to assess the association of levels of physical activity participation and psychological distress with the latent classes. Covariates (i.e., gender, year of study) with p values of <0.05 were included in the multinomial logistic regression. The significance level was set at 0.05.
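The odds ratios and 95% confidence intervals reported from the multinomial logistic regression are obtained from the model coefficients as OR = exp(β) with a Wald interval of exp(β ± 1.96·SE). The β and SE values below are back-calculated illustrations (ln 1.398 ≈ 0.335), not the study's actual estimates:

```python
# Convert a multinomial-logit coefficient and its standard error into an
# odds ratio with a Wald 95% confidence interval: OR = exp(b), CI = exp(b +/- 1.96*SE).
import math

def odds_ratio_ci(beta, se, z=1.96):
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative values chosen to land near the scale of ORs reported in Table 4
or_, lo, hi = odds_ratio_ci(beta=0.335, se=0.159)
print(f"OR = {or_:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```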
Results
A majority of the participants had engaged in either moderate (n = 881, 50.6%) or high (n = 385, 22.1%) levels of physical activity in the past week. Almost half of the participants had reported no psychological distress in the past month (n = 769, 44.2%). The average scores of participants' psychological distress and sitting time per day during workdays and weekends were 18.67 (SD = 8.0), 6.56 (SD = 2.80), and 5.41 (SD = 3.08), respectively. Additionally, participants' average scores of the five PERMA wellbeing elements were more than 6.26 (SD range = 1.89-2.11) for each element (see Table 1). Table 2 displays the fit statistics of latent classes of the PERMA wellbeing of Chinese nursing college students. LL, AIC, BIC, and adjusted BIC continued to decrease with an increase in the number of latent classes. The LMR test was not significant for the 5-class model, indicating that the 5-class solution did not fit the data better than the 4-class solution. The model identification and entropy value indicated that the 3- or 4-class models were most suitable. This study selected a 3-class model considering the sample size of each class being >5%, a high entropy value, and higher model identification in the 3-class model than the 4-class model. Based on the parsimony and interpretability of the classes, a 3-class model was selected. (Table 2 notes: C = class; LL = log likelihood; AIC = Akaike's information criterion; BIC = Bayesian information criterion; ssaBIC = sample-size-adjusted Bayesian information criterion; LMR = Lo-Mendell-Rubin; BLRT = bootstrapped likelihood ratio test; bold indicates the best-fitting models.) There are significant differences in the five PERMA wellbeing elements across the three latent classes (p < 0.001) (see Table 3). Table 3 also describes differences in the three wellbeing classes on participants' characteristics. No significant differences were found between the three classes in terms of age. However, compared to participants with low-level wellbeing, participants with moderate- and high-level wellbeing were more likely to be female (78.13%).
Table 4 shows the associations between levels of physical activity participation and psychological distress with the three wellbeing classes after adjusting for covariates (i.e., gender and year of study). When comparing the high-level wellbeing class relative to the moderate-level wellbeing class, having no distress (OR = 0.109, 95%CI [0.075, 0.160], p < 0.001) or low distress (OR = 0.296, 95%CI [0.199, 0.440], p < 0.001) was significantly associated with higher odds of being in the high-level wellbeing class. Conversely, engaging in a low level of physical activity was significantly associated with lower odds of being in the high-level wellbeing class (OR = 1.398, 95%CI [1.023, 1.910], p = 0.035). When comparing the low-level wellbeing class relative to the high-level wellbeing class, physical activity participation did not differ significantly between the two classes. However, there was a trend that no distress (OR = 0.131, 95%CI [0.071, 0.242], p < 0.001), low distress (OR = 0.208, 95%CI [0.103, 0.419], p < 0.001), and moderate distress (OR = 0.408, 95%CI [0.188, 0.886], p = 0.024) were associated with increased odds of being in the high-level wellbeing class relative to the low-level wellbeing class.
Discussion
To our knowledge, this is the first study that has classified the latent classes of wellbeing using the PERMA wellbeing framework and measured associations between these wellbeing classes and levels of physical activity participation and psychological distress in Chinese nursing college students. Three latent classes were identified among nursing college students, including the low-level wellbeing class, moderate-level wellbeing class, and high-level wellbeing class.
This study demonstrated a significant gender difference among the three wellbeing classes, in line with recent studies reporting that the level of wellbeing among male students was lower than that among female students [24,25]. One potential reason suggested was that females search online health applications more frequently for health data and nutrition information, while males are more interested in smoking and online games. Importantly, a higher percentage of participants in their second and third year of study seemed to experience low-level wellbeing compared to moderate- and high-level wellbeing. This finding is not consistent with a similar study which indicated that senior students may accumulate more health information and enhanced health awareness than junior students, promoting their health and wellbeing [24]. These differences may be attributed to students' different major backgrounds and to the wellbeing assessment tools used. The implementation of clinical curriculum programs for nursing students in their second and third year of study may also contribute to the high prevalence of low-level wellbeing because of the risk of failure in a professional clinical course [26]. In addition, increasingly fierce competition after graduation may be a potential factor reducing second-year and third-year nursing college students' wellbeing [3]. Taken together, these findings suggest that the implementation of health education among nursing college students may be an effective strategy to improve their wellbeing.
Although studies have demonstrated the benefits of physical activity for the health and wellbeing of college students [19,27], considerable physical inactivity and sedentary time were found in this demographic in the current study. This study found that 27.3% of college students reported low levels of physical activity participation. Several potential factors may contribute to insufficient physical activity in college students, such as prolonged sitting time, low awareness of the beneficial effects of physical activity, and lack of time and motivation [19,27]. This study also found that the average sitting time of college students was more than 6 h per day during workdays. Similarly prolonged sedentary sitting time has also been found among individuals in university workplaces [28,29]. Previous studies have indicated that insufficient physical activity participation has the potential to decrease academic engagement and behavior among college students, and prolonged sitting time is also associated with high levels of psychological distress [30,31]. It is, therefore, important to increase levels of physical activity participation and decrease sedentary sitting time in the design of effective interventions for the health and wellbeing of nursing college students.
This study found that low levels of physical activity participation were associated with lower wellbeing when comparing the moderate- and high-level wellbeing classes of nursing college students. This association implies that insufficient physical activity participation may be a risk factor for poor wellbeing. Low physical activity participation may contribute to a low level of quality of life, academic performance, and mental health, which are positively associated with wellbeing [31,32]. Furthermore, a sedentary lifestyle is positively associated with students' inactivity and long screen time, which have been identified as major public health issues that increase the risk of mental health problems (e.g., depression and anxiety) among college students [33]. Knowledge of the association between physical activity participation and wellbeing could help target interventions and direct resources to college students [19]. Given the potential adverse effects of insufficient physical activity, more attention needs to be paid to those nursing college students who engage in low levels of physical activity.
This study also found that 31.4% of participants reported moderate-to-high psychological distress, which is broadly in accordance with previous studies finding that almost 24-35% of college students suffered from psychological issues, such as depression and anxiety [10,24]. A high level of psychological distress can result in a decline in college students' quality of life, which may contribute to low levels of wellbeing [19]. However, only 5.1% of the nursing students reported low-level wellbeing in this current study. These findings suggest that other beneficial factors may reduce the risk of psychological distress and thereby positively influence nursing students' wellbeing; for example, a high percentage of participants had moderate or high levels of physical activity participation in this current study (72.7%). Interestingly, compared with physical activity participation, the severity of psychological distress may be a more serious factor influencing college students' wellbeing, as significantly negative associations between levels of psychological distress and the three latent classes were found. The results on the relationship between psychological distress and wellbeing are consistent with recent studies which indicated that negative emotion had a direct influence on early adults' PERMA wellbeing elements [34,35]. These studies suggested that stress disorder symptoms were a potential predictor of poor sleep health and sleep behavior, which may in turn influence wellbeing and quality of life. Another study also indicated that psychological distress can influence the psychosocial functioning component of quality of life and further reduce wellbeing [19]. Hence, relevant programs to reduce college students' psychological distress are recommended to enhance their health and wellbeing in future studies.
Some limitations of this study should be considered. Firstly, some basic information about students, such as family background and education, was not investigated in this study, although students' family background may have the potential to influence their wellbeing, as indicated in the introduction section. Secondly, the gender distribution, with most respondents being female, also limits the generalizability of the study results. Thirdly, the use of self-reported measures may also be a limitation, as self-reported outcomes can lower the accuracy of the data and further affect the latent class results. Fourthly, this study used a convenience sample of nursing college students, which is not representative of college students in general. Fifthly, this study enrolled only nursing students, and group differences in the study variables between nursing students and students in other subjects remain unclear. Future studies could compare the wellbeing classes of nursing students and students in other subjects and examine their associations with psychological distress and physical activity.
Conclusions
The LPA identified three classes of wellbeing based on the five elements of the PERMA wellbeing framework among Chinese nursing college students: low-level wellbeing, moderate-level wellbeing, and high-level wellbeing. With gender and year of study as covariates, low levels of physical activity participation were associated with lower wellbeing when comparing the moderate-level and high-level wellbeing classes. Furthermore, there were negative correlations between levels of psychological distress and wellbeing among the three wellbeing classes of Chinese nursing students. These findings suggest that practical strategies to increase nursing college students' physical activity participation and reduce the severity of their psychological distress may be effective in promoting their health and wellbeing.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki, and approved by Shandong University Human Research Ethics Committee (2021-R-165).
Informed Consent Statement:
Completion and submission of the online survey implied consent to participate. This was declared to respondents at the commencement of the survey.
|
2022-12-04T16:07:20.895Z
|
2022-12-01T00:00:00.000
|
{
"year": 2022,
"sha1": "4c289fbfd325d5159190a2c8a82a82e48e229899",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/23/16098/pdf?version=1669902427",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "df01f3376f36e79ccd06664e34e2eaf894a16db6",
"s2fieldsofstudy": [
"Medicine",
"Psychology",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
267246508
|
pes2o/s2orc
|
v3-fos-license
|
Digitization of PT Pertamina MOR IV Semarang Archives
Record management plays an important role in the continuity of a company, serving as a source of information and a center for company history. This provides benefits for research, consideration in decision-making, preparation of good work programs, and guidance for carrying out work in the future. Records management activities in the office are increasingly dynamic, following the dynamics of organizational activities. Many office practitioners experience difficulties in managing records both conventionally and electronically. The purpose of this research is to examine the digitization of archives carried out by PT Pertamina MOR IV using computer technology. The research methods used are observation and interviews. The results of this research indicate that PT Pertamina MOR IV uses a combination of manual and digital archive handling. Theoretically, digitizing archives should be done through a document scanning process. However, the process of digitizing archives at PT Pertamina MOR IV is carried out by entering data into a computer via the MS Excel program without scanning the documents first. The writer suggests that PT Pertamina MOR IV digitize its archives through a document scanning process, to make the archives easier to manage and handle. Some of the conveniences this would provide are ease of operation, an attractive interface, document search facilities, data security, automatic retention, connection to a computer, and OCR facilities.
INTRODUCTION
The increasingly rapid development of science, technology, information, and communication has had an impact on progress in various fields. One of them is the office sector. The office is a unit tasked with providing information services to all parties who need it, both internal and external to the organization. To be able to provide good service, office activities must be carried out in a well-planned, organized, coordinated, and controlled manner (Sugiyarto & Wahyono, 2014: 8).
The processes and activities that occur in an office or company utilize technological developments, including handling letters, holding meetings, preparing financial reports, and handling archives. Archives are records or sources of information regarding the activities of an institution, organization, or individual. Archives can be in the form of charters, letters, proposals, activity reports, certificates, etc. According to Law Number 43 of 2009 concerning archives, archives are records of events in various forms and media, in accordance with developments in technology, communication, and information, created and received by state institutions, regional governments, educational institutions, companies, political organizations, community organizations, and individuals in the implementation of social, national, and state life. The use of technology in the office sector is intended to make work processes more effective and efficient.
Handling archives plays an important role in the running of a company, namely as a source of information and a center for company history. This provides benefits for research, consideration in decision-making, preparation of good work programs, and guidance for carrying out work in the future. According to Sugiyarto and Wahyono (2014: 15), archives play a role as a "memory center", a source of information, and a monitoring tool that is very necessary for every organization in planning, analyzing, developing, and formulating policies. According to Sedarmayanti (2015: 43), the role of archives as a source of information can help remind people to make quick and accurate decisions regarding a problem. Meanwhile, the general purpose of archives is to ensure the safety of accountability materials regarding the plans, implementation, and management of the life of an office institution.
Archive maintenance is a series of efforts aimed at protecting, preventing damage to, and taking steps to save archives, both physically and in terms of information (content), as well as ensuring the survival of the archives (Asriel, 2018). Archive maintenance/management can be done in two ways, namely conventional archives and electronic archives.
Electronic filing systems provide many benefits, especially in terms of convenience, speed, and efficiency, as explained by the author, but in practice, there are still many organizations that have not utilized them optimally (Asriel, 2018).
Electronic archives are archives that are stored and processed in a format using a computer, created and maintained as evidence of transactions, activities, and functions of institutions or individuals (Yusuf & Zulaikha, 2020).
Management of electronic records currently still has several challenges that organizations/institutions often cannot overcome (Asogwa, in Putranto, 2017: 5). Reed (in Putranto, 2017: 5) believes that organizations' reluctance to adopt electronic systems is caused by several obstacles, such as the costs of purchasing software, licensing, or maintenance, which are often considered too expensive.
The archiving stages are: preparing the manuscript, scanning, creating a folder on the computer as a storage area, creating a hyperlink to connect the archive list with the scanned archive, and creating complete media transfer administration (Muhidin & Hendri, 2022). From the definitions of electronic archives above, it can be concluded that electronic archives are information documents that are created, recorded, processed, or transferred using electronic equipment and can be stored in various electronic formats. Electronic archive formats can be divided into four categories, namely: text-based, image-based, audio-based, and audio-video-based.
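The post-scanning stages described above (storage folder, archive list, hyperlink to the scanned file) can be sketched as a small script. This is an illustration only: it uses a CSV index rather than the MS Excel list the company actually uses, and all paths and field names are hypothetical:

```python
# Sketch of the archiving stages after scanning: one folder per year,
# plus an index file whose "link" column connects the archive list to the
# scanned document. Paths and field names are illustrative.
import csv
import os
import tempfile

def index_scans(root, entries):
    """Create year folders, placeholder scan files, and an index.csv."""
    rows = []
    for e in entries:
        folder = os.path.join(root, str(e["year"]))
        os.makedirs(folder, exist_ok=True)
        path = os.path.join(folder, e["file"])
        open(path, "wb").close()  # stands in for the scanned document
        rows.append({"title": e["title"], "year": e["year"], "link": path})
    index = os.path.join(root, "index.csv")
    with open(index, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["title", "year", "link"])
        writer.writeheader()
        writer.writerows(rows)
    return index

root = tempfile.mkdtemp()
idx = index_scans(root, [{"title": "Letter 3", "year": 2020, "file": "letter3.pdf"}])
print(os.path.exists(idx))  # True
```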
Archive handling at PT Pertamina MOR IV Semarang uses a combination of manual archives and digital archives. As a large company in Indonesia, PT Pertamina requires many documents in its office activities. Archive storage systems that are still manual have clear disadvantages when used in large companies; for example, archives are very difficult to find when using a manual storage system. For this reason, it is necessary to digitize the handling of archives at PT Pertamina MOR IV as a whole, in the hope that the work of archive handlers will become more effective and efficient, so that time can be used well to be more productive in carrying out office work.
METHOD
The author used observation and interviews to collect data. Observation was used to examine activities in the archives section of PT Pertamina MOR IV, helping to identify problems and to complete the required data.
The author interviewed three archives employees at PT Pertamina MOR IV. Muhammad Joni and Aris manage all archive activities except entering data on the computer; the third employee, Dora, is in charge of entering data into the computer. The interview guidelines used by the author were: 1) Since when have you worked at PT Pertamina? 2) When you entered the archives field, was there any training? 3) Are there any obstacles during the archive management process? 4) Are the facilities at Gasini adequate? 5) What types of documents are archived? 6) How is the system used in archive management implemented? 7) Who is responsible for maintaining and depreciating records? 8) What is the average document retention time by document type? 9) Has there ever been a system failure? 10) What are the advantages of using the system?
RESULTS AND DISCUSSION
Today's technological advances require us to take advantage of them. Archives, too, are developing from being processed manually to being processed with computers, as one form of utilizing technology.
A. Archive Handling at PT Pertamina MOR IV
At PT Pertamina MOR IV, the author handles archives that have been provided by each function/user. The steps for handling archives at PT Pertamina MOR IV are: (1) Sorting documents from the various functions/users. Documents in the form of letters are sorted from the oldest year to the newest. Next, the letters are sorted by letter number, with the smallest at the top and the largest at the bottom. A document without a letter number, such as a note, is placed at the bottom.
(2) Binding the documents
After sorting the documents, the writer binds them. Binding requires office tools including a perforator, paper fasteners, a stapler, a hammer, and pliers. Binding keeps the documents neatly arranged.
The binding process involves punching holes in the cover paper and inserting a paper fastener through the holes, then folding the fastener perpendicular to the paper. The document stack is turned over, and about ten sheets at a time are punched and threaded onto the fastener until the entire document is inserted. Next, the cover is added, the fastener is locked with its lock and pulled tight with pliers so the document is held firmly, and finally it is hammered flat so the document is completely locked.
(3) Recording document indexes for internal purposes. Documents are indexed based on the entries made in Excel. This activity ensures that documents can be found more easily when needed.
The table used to index the archives is shown in Table 1 below. (5) Sorting documents. Documents are sorted into a single order. Any paper clips are removed and replaced with staples, which are then flattened with a hammer for neatness. Each tidied document receives a sticky note on the front that describes it in more detail: the type of document, its origin, and its date.
(6) Record a list of document details
Recording a list of archival details ensures that the documents can be stored for many years. The descriptions are also used to determine which documents have been placed in the archive storage area.
(7) Providing a storage notation on the document. A storage notation is given to documents in the form of agreements, reports, and proposals that have been recorded, and is filled in according to the type of document, document number, location, and function. The shelves, boxes, and books are then filled in by the archives employees in that room.
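The sorting rule in step (1), oldest year first, then ascending letter number, with unnumbered documents such as notes placed last, can be sketched in code (a hypothetical illustration; in practice the archives staff perform this step by hand):

```python
# Hypothetical sketch of the manual sorting rule in step (1):
# oldest year first, then ascending letter number, and documents
# without a letter number (e.g., notes) placed at the bottom.

def sort_documents(docs):
    """docs: list of dicts with a 'year' key and an optional 'number' key."""
    return sorted(
        docs,
        key=lambda d: (
            d["year"],                   # oldest year first
            d.get("number") is None,     # numbered documents before notes
            d.get("number") or 0,        # smallest letter number first
        ),
    )

letters = [
    {"title": "Note A", "year": 2021, "number": None},
    {"title": "Letter 12", "year": 2020, "number": 12},
    {"title": "Letter 3", "year": 2021, "number": 3},
    {"title": "Letter 5", "year": 2020, "number": 5},
]

for doc in sort_documents(letters):
    print(doc["title"])  # Letter 5, Letter 12, Letter 3, Note A
```

The tuple key mirrors the manual rule: year, then a flag pushing unnumbered documents down, then the number itself.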
The archives provided by each function/user and collected by the archives employees take the form of incoming letters, outgoing letters, reports, meeting minutes, archive-submission proposals, and so on. At PT Pertamina MOR IV, all functions deliver documents to the archives room except the finance function. The functions or users include Human Capital/Human Relations, BBM Retail, Engineering, HSSE (Health, Safety, Security, and Environment), S&D (Supply and Distribution), Medical, IT (Information Technology), PKBL (Partnership Program and Environmental Development), Legal, Marine, Aviation, and Assets.
(4) Recording document indexes for box purposes
The author is responsible for writing the index used for the boxes, filling in the provided table according to which documents will be included in one box and then attaching it to one side of the box. The index table for document requirements is shown below:
Table 2. Indexing for Archival Purposes
|
2024-01-26T16:09:12.410Z
|
2023-12-31T00:00:00.000
|
{
"year": 2023,
"sha1": "6f437ea5b74a93d5743f9ae18d4e0622bfd2f1b4",
"oa_license": "CCBYNCSA",
"oa_url": "https://ictmt.stiepari.org/index.php/journal/article/download/51/15",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "d5b603745b67705561b1d3f0564155d75e4c2490",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
}
|
270760411
|
pes2o/s2orc
|
v3-fos-license
|
Perioperative platelet reactivity over time in patients undergoing vascular surgery: An observational pilot study
Background Despite antiplatelet therapy (APT), cardiovascular patients undergoing revascularisation remain at high risk for thrombotic events. Individual response to APT varies substantially, resulting in insufficient protection from thrombotic events due to high on-treatment platelet reactivity (HTPR) in up to 40% of patients. Individual variation in platelet response impairs APT guidance at the single-patient level. Unfortunately, little is known about individual platelet response to APT over time, the timing of accurate residual platelet reactivity measurement, or the optimal test to monitor residual platelet reactivity. Aims To investigate the variability of residual platelet reactivity over time in individual patients undergoing carotid endarterectomy (CEA) treated with clopidogrel. Methods Platelet reactivity was determined in patients undergoing CEA in a prospective, single-centre, observational study using the VerifyNow (change in turbidity from ADP-induced binding to fibrinogen-coated beads), the VASP assay (quantification of phosphorylation of vasodilator-stimulated phosphoprotein), and a flow-cytometry-based assay (PACT) at four perioperative time points. Genotyping identified slow (CYP2C19*2 and CYP2C19*3) and fast (CYP2C19*17) metabolisers. Results Between December 2017 and November 2019, 50 patients undergoing CEA were included. Platelet reactivity measured with the VerifyNow (p < .001) and VASP (p = .029) changed over time, while the PACT did not. The VerifyNow identified patients changing HTPR status after surgery. The VASP identified patients changing HTPR status after eight weeks (p = .018). CYP2C19 genotyping identified 13 slow metabolisers. Conclusion In patients undergoing CEA, perioperative platelet reactivity measurements fluctuate over time with little agreement between platelet reactivity assays. Consequently, the HTPR status of individual patients measured with the VerifyNow and VASP assay changed over time.
Therefore, generally used perioperative platelet reactivity measurements seem unreliable for adjusting perioperative APT strategy.
Introduction
An estimated 40% of patients with coronary artery, cerebrovascular, or peripheral arterial disease treated with clopidogrel have inadequate platelet inhibition. This phenomenon is called high on-treatment platelet reactivity (HTPR). Patients with HTPR have a 2.8-fold higher risk of cardiovascular events (CVE) compared to patients with adequate platelet inhibition [1]. Strategies to counter the thrombotic risk added by HTPR vary from changing the dosage and switching antiplatelet agents to combining antithrombotic drugs for a synergistic effect. Early identification of patients with HTPR may allow for timely medical intervention and improve the treatment decision process. Long-term platelet reactivity would preferably be equal to or lower than pre-operative levels.
Platelet function tests (PFTs) can be used both to identify HTPR and to monitor thrombotic risk. The therapeutic platelet activity window is bounded by HTPR at one end of the spectrum and low platelet reactivity (LPR) at the other. Hypothetically, this allows antiplatelet therapy to be tailored (e.g., a higher dose or a different drug) to lower residual platelet activity and reduce thrombotic risk. However, definitive conclusions cannot yet be drawn.
Polymorphisms of the CYP2C19 gene have been shown to influence the efficacy of several drugs, including clopidogrel [2]. CYP2C19 enzymes convert clopidogrel into its active metabolite. Common variants of the CYP2C19 gene result in slow-metaboliser status, contributing to HTPR; these patients are therefore prone to a higher thrombotic risk. Both PFTs and genotyping have been proposed as tools to guide antiplatelet therapy. Personalized antiplatelet therapy based on platelet function testing in patients undergoing percutaneous coronary intervention proved non-inferior to genotype-guided APT [3][4][5].
A broad range of PFT assays is available, but their reproducibility, predictive value, and ease of use vary greatly [6]. Moreover, the optimal time for testing residual platelet activity is still unknown. Fundamental knowledge of the pharmacodynamics of antiplatelet therapy over time is needed to interpret PFT results. We therefore conducted the first study to assess individual perioperative platelet reactivity with various PFTs at multiple time points in patients undergoing carotid endarterectomy (CEA), and the influence of CYP2C19 polymorphisms on platelet activity.
Methods
This pilot study was conducted at the University Medical Center Utrecht, The Netherlands, to investigate perioperative platelet reactivity over time. This prospective, single-centre, observational cohort study explored the variability of platelet activity in patients undergoing CEA at four time points using the VerifyNow, the VAsodilator-Stimulated Phosphoprotein (VASP) phosphorylation assay, and the flow-cytometry-based platelet activation assay (PACT). Patients (≥18 years) were eligible if they were scheduled for an elective CEA, admitted to the hospital at least one day before and after surgery, and treated with the P2Y12 inhibitor clopidogrel. Patients were excluded if the surgical procedure exceeded 4 hours or required blood transfusion, if postoperative intensive care admission was indicated, or if they were treated with vitamin K antagonists or direct-acting oral anticoagulants (DOACs) before inclusion. The local ethics committee approved this study, and written informed consent was obtained from all patients in accordance with the Declaration of Helsinki.
All patients had started clopidogrel treatment (75 mg q.d.) at least 24 hours before the intervention. Patients using acetylsalicylic acid were switched to clopidogrel (without a loading dose). At the discretion of a stroke neurologist, patients not yet receiving antiplatelet therapy at first presentation received a loading dose of 300 mg clopidogrel at least 48 hours before the first measurement. All patients received clopidogrel 75 mg q.d. during follow-up. Baseline characteristics were obtained from the electronic patient file. All surgeries were performed by a vascular surgeon or a supervised vascular surgery fellow under general anaesthesia using longitudinal incisions. A fixed dose of 5000 IU unfractionated heparin was administered peroperatively without reversal. Selective shunting was based on intraoperative neuromonitoring. All bedridden postoperative patients received nadroparin (Fraxiparine) injections.
The primary goal of this pilot study was to investigate individual fluctuation of perioperative platelet reactivity measured with platelet function tests.The secondary objective was to examine the influence of CYP2C19 polymorphisms on HTPR status.
Sample collection & sample size
Platelet function was assessed at four time points: 1) before induction of anaesthesia, 2) one day after surgery, 3) five days after surgery, and 4) at least eight weeks and up to one year after surgery. The blood draws at each time point were considered to reflect 1) pre-operative platelet reactivity, 2) post-operative platelet reactivity (after tissue manipulation), 3) platelet reactivity in the postoperative (declining) phase, and 4) long-term stable on-treatment platelet reactivity. A vacutainer containing 3.2% tri-sodium citrate was used for all venous blood draws.
We calculated the sample size for this pilot study by extrapolating data from the only study investigating platelet reactivity variation in patients using P2Y12 inhibitors, an analysis of the ELEVATE-TIMI 56 trial [7]. Of all patients treated with 75 mg clopidogrel, 41.4% (34.7-48.4%) showed a change in platelet reactivity of >40 PRU and 28.6% (22.6-35.2%) showed a change of >60 PRU measured with the VerifyNow. The standard deviation was 68 PRU with a mean of 163.6 (SD 80.2). We calculated the sample size to assess the variation between two separate measurements. Assuming an intra-patient PRU variation of 10% between test moments, we calculated a sample size of fifty patients using the two-sample equivalence technique. The p-value was set at 0.05 and the power at 0.8, allowing statistical analysis of intra-test fluctuation between the time points but not of inter-test agreement. We provide insight into the effect of polymorphisms by showing trends within responder and non-responder groups.
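As a rough illustration of this kind of calculation, a normal-approximation sketch of a paired equivalence (TOST) sample size is shown below. This is a generic textbook formula, not necessarily the authors' exact procedure, and with these inputs it will not necessarily reproduce their n = 50.

```python
# Generic normal-approximation sample size for a paired TOST
# equivalence test, assuming a true mean difference of zero.
# Hypothetical illustration only; not the authors' exact method.
from statistics import NormalDist

def n_equivalence_paired(sd, margin, alpha=0.05, power=0.8):
    """Number of paired measurements for an equivalence margin `margin`."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha)              # one-sided alpha for each TOST bound
    z_beta = z(1 - (1 - power) / 2)
    return (z_alpha + z_beta) ** 2 * (sd / margin) ** 2

# Quantities reported in the text: SD 68 PRU, margin = 10% of the mean 163.6 PRU.
n = n_equivalence_paired(sd=68, margin=0.1 * 163.6)
print(round(n))  # required n under these generic assumptions
```

The result depends strongly on the chosen margin and on design assumptions (paired vs. two-sample, assumed true difference), which the text does not fully specify.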
Platelet function testing
Platelet reactivity was measured with the VerifyNow P2Y12 assay according to the manufacturer's instructions (Accumetrics, Inc.). A cut-off value of >40% inhibition or PRU <208 was regarded as adequate P2Y12 inhibition. HTPR status measured with the VerifyNow assay was defined with both the >208 PRU and the >230 PRU cut-offs [7].
The VASP assay was performed according to the manufacturer's instructions (Diagnostica Stago S.A.S.) [8]. Results were obtained by flow cytometry and expressed as the platelet reactivity index (PRI). A cut-off value of PRI ≥50% was used to define HTPR [9]. Blood was left for one hour before processing for the PACT assay. Five μL of whole blood was diluted 1:10 (v:v) in HEPES-buffered saline (HBS; 10 mM HEPES, 150 mM NaCl, 1 mM MgSO4, 5 mM KCl, pH 7.4) containing 62.5 mM ADP and in-house developed fluorophore-conjugated nanobodies against GPIbα (clone 17, APC) and fibrinogen (clone C3, Alexa488), or isotype control nanobody R2-Alexa488. Baseline platelet activation was assessed in the absence of a platelet agonist. Whole blood was incubated for 10 minutes at 37°C, after which samples were fixed with 0.148% formaldehyde, 137 mM NaCl, 2.7 mM KCl, 1.12 mM NaH2PO4, 10.2 mM Na2HPO4, 1.15 mM KH2PO4, 4 mM EDTA, pH 6.8 for 20 minutes at room temperature and analysed on a BD FACSCanto II (BD Biosciences). Platelets were identified based on forward and sideward scatter and GPIbα expression. Platelet fibrinogen binding was used as a marker of integrin αIIbβ3 activation. The flow cytometer was calibrated every week to maintain stable fluorescence intensity. All flow-cytometry data were corrected for baseline activation measured after stimulation with the isotype control antibody (R2). Data are expressed as median fluorescence intensity (MFI). No HTPR cut-off value has been established for the PACT assay.
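For reference, the cut-offs used in this section can be summarised in a short sketch (an illustrative restatement of the thresholds stated in the text, not code from the study):

```python
# Illustrative restatement of the HTPR cut-offs stated in the text
# (not code used in the study).

def htpr_verifynow(pru, stringent=False):
    """HTPR by VerifyNow: PRU > 208, or > 230 with the stringent cut-off."""
    return pru > (230 if stringent else 208)

def htpr_vasp(pri_percent, stringent=False):
    """HTPR by VASP: PRI >= 50%, or >= 60% with the proposed stricter cut-off."""
    return pri_percent >= (60 if stringent else 50)

print(htpr_verifynow(215))                  # True  (215 > 208)
print(htpr_verifynow(215, stringent=True))  # False (215 <= 230)
print(htpr_vasp(55))                        # True  (55 >= 50)
```

No equivalent function is given for the PACT assay, since the text notes that no HTPR cut-off has been established for it.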
Statistical analysis
A paired t-test was used for normally distributed continuous variables and a Wilcoxon signed-rank test for non-normally distributed paired variables at all time points. We used a linear mixed model to analyse the predictive value of each test and time point (i.e., VerifyNow, PACT, and VASP). We used a mixed-effects model to investigate the interaction between CYP2C19 polymorphisms and the PFT results. CYP2C19 polymorphisms (heterozygous and homozygous versus wildtype) were analysed by including them as the determinant of interest in the model, and we tested the time-by-CYP2C19-polymorphism interaction. Model assumptions (i.e., distributional assumptions, homoscedasticity) were assessed with residual plots. The SAS statistical analysis system v9.4 was used for all analyses.
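The paired comparisons described above can be sketched on synthetic data (illustration only; the study used SAS, this uses SciPy, and the synthetic values and the linear mixed model are not the study's):

```python
# Minimal sketch of the paired comparisons described above, run on
# synthetic PRU-like data (illustration only; not the study data,
# and the linear mixed model itself is not reproduced here).
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

rng = np.random.default_rng(0)
baseline = rng.normal(163.6, 68, size=50)      # pre-operative PRU values
day1 = baseline + rng.normal(30, 40, size=50)  # elevated one day after surgery

t_stat, p_t = ttest_rel(baseline, day1)   # paired t-test (normal case)
w_stat, p_w = wilcoxon(baseline, day1)    # Wilcoxon signed-rank (non-normal case)

print(f"paired t-test p = {p_t:.4f}; Wilcoxon p = {p_w:.4f}")
```

Both tests respect the paired structure of repeated measurements within the same patient, which is why unpaired two-sample tests would be inappropriate here.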
Results
Between December 2017 and November 2019, 50 patients undergoing carotid endarterectomy were included. During the study period, two patients were prescribed DOACs by a cardiologist without consultation of the study team and were consequently lost to follow-up. The mean age of participants was 71.8 years, and 73% were male. The average BMI was 26.0 kg/m². Lipid-lowering agents were used by 81% of patients and beta-blockers by 35%. Diabetes mellitus was present in 25% of the population, hypertension in 71%, and kidney failure (defined as eGFR <30 ml/min/1.73 m²) in 17%. [Table 1] Clopidogrel was prescribed to all patients. Seven patients (14%) received a clopidogrel loading dose of 300 mg. Owing to postoperative recovery time and logistical constraints, many patients could not visit the hospital after five days; therefore, only nine patients are included at time point three.
Platelet reactivity
Platelet reactivity measured with the VerifyNow (p < .001) and VASP (p = .029) changed over time, while platelet reactivity measured with the PACT did not. [Fig 1] Platelet reactivity measured with the VerifyNow was higher one day after surgery than at baseline or eight weeks after the procedure. This difference persisted when the patients who received a loading dose were excluded from the analysis. Baseline measurement results were similar to those eight weeks after the procedure.
In contrast, the procedure did not influence platelet reactivity measured with the VASP assay. However, platelet reactivity decreased eight weeks after surgery compared to baseline measurements. This difference persisted when the patients who received a loading dose were excluded from the analysis (p = .017).
HTPR
HTPR status was defined for the VerifyNow and the VASP assay. [Table 2] Sixteen patients were identified with HTPR by the VerifyNow and the VASP at any time point (not necessarily the same one).
According to the VerifyNow, only two patients (4%) had HTPR at time point 4. One of these two patients had HTPR at all time points and was considered a "true non-responder"; the other was also diagnosed with HTPR at time point 2. All others were considered "temporary HTPR patients". Analysis using the more stringent cut-off value of >230 PRU resulted in 3 patients (6%) with HTPR before surgery, 10 (20%) with HTPR after surgery (p = .007), and two (4%) with HTPR at least eight weeks after surgery. One of the patients identified with HTPR at time point 4 had HTPR at all time points; the other did not.
Based on platelet function measured with the VASP assay, fewer patients had HTPR (PRI ≥50) eight weeks after surgery than before surgery (p = .031) and than one day after surgery (p = .018). Eleven patients had HTPR at all time points. During the study period, there were no drug modifications in those with HTPR while on APT; the test results were blinded and did not lead to changes in medication use. Of all patients identified with HTPR at eight weeks, eight were CYP2C19 wildtype carriers, two were heterozygous carriers, and one was homozygous for a loss-of-function SNP (*2). A more stringent cut-off value for HTPR defined by the VASP, PRI ≥60%, has been proposed [12].
Genotyping
CYP2C19 genotype analysis indicated that 13 patients were homozygous or heterozygous carriers of the CYP2C19*2 SNP and were considered slow metabolisers. Twenty-four patients were homozygous or heterozygous carriers of the CYP2C19*17 SNP and were considered fast metabolisers. [Table 3] Six patients carried both a CYP2C19*2 and a CYP2C19*17 allele and were considered slow metabolisers. Two of the 13 slow-metaboliser SNP carriers had HTPR according to the VerifyNow before surgery (≥208 PRU), which decreased to 1 of 13 with the ≥230 PRU cut-off value. Four of 13 patients had HTPR one day after the procedure, and three of 13 with the ≥230 PRU cut-off value. Regardless of the cut-off value, only one patient had HTPR at all time points; this patient was homozygous for CYP2C19*2.
Eleven of the 13 slow-metaboliser SNP carriers had HTPR before surgery according to the VASP (PRI ≥50). Nine of 13 patients had HTPR one day after the procedure, and four patients had HTPR 8 weeks after surgery.
According to the VerifyNow (≥208 PRU), four (16.6%) of the 24 fast metabolisers had HTPR before surgery, seven (29.2%) had HTPR one day after the procedure, and none had HTPR eight weeks after surgery. When the more stringent cut-off was applied (≥230 PRU), two (8.3%) had HTPR before surgery, four (16.6%) had HTPR one day after surgery, and none had HTPR after eight weeks. None of them received a loading dose of clopidogrel.
Discussion
This pilot study investigated perioperative fluctuation of individual platelet reactivity at four time points using three assays in patients undergoing CEA. We measured platelet reactivity before surgery, one day after surgery, five days after surgery, and at least eight weeks up to one year after surgery (reflecting long-term on-treatment platelet reactivity in clinically stable carotid artery disease). Our study has three notable findings. First, individual perioperative platelet reactivity measured with the VerifyNow and VASP assays fluctuated. Second, HTPR status measured with the VerifyNow and VASP assays changed over time. Third, although this study was not powered to analyse the effects of genotype on platelet reactivity, we found that at eight weeks after surgery, fast-metaboliser genotype carriers were less likely to be identified with HTPR according to the VASP assay.
To our knowledge, no data are available on the optimal moment to assess platelet reactivity and determine HTPR status. Most studies exploring the potential benefit of personalised APT have used a single platelet reactivity test for treatment stratification, with test results often obtained 12 to 24 hours after surgery [4]. Although studies have demonstrated that mean platelet activation across a population is consistent, individual fluctuation of platelet reactivity over time has been shown [13][14][15][16]. Thus far, only one post hoc analysis has assessed platelet reactivity over time in individual patients. A sub-study of the ELEVATE-TIMI 56 (Escalating Clopidogrel by Involving a Genetic Strategy-Thrombolysis in Myocardial Infarction 56) trial evaluated platelet reactivity in patients using clopidogrel (75 or 150 mg) before and after 14 days of treatment [7]. Approximately one in every five patients changed responder status (ΔPRU >40, p < 0.001) in individual analysis, indicating fluctuation in platelet reactivity even in patients with clinically stable cardiovascular disease.
Our study also showed fluctuation of perioperative platelet reactivity. The surgical procedure influenced VerifyNow measurements: platelet reactivity was higher one day after surgery than before surgery, and the number of patients identified with HTPR changed directly after the procedure. Platelet reactivity and HTPR status measured with the VASP also fluctuated; the VASP showed lower platelet reactivity after eight weeks than before surgery and one day after. Postoperative platelet reactivity measurements with the VerifyNow assay carry a high risk of false-positive HTPR diagnoses, so previous studies that switched APT based on postoperative PFT measurements may be less reliable. Our data suggest that immediate postoperative assessment of platelet reactivity with the VerifyNow reflects a temporary elevation of platelet reactivity. As most studies investigating platelet function testing to guide antiplatelet therapy used postoperative VerifyNow measurements, the outcomes of these historical studies should be interpreted with caution.
Several studies in healthy subjects who did not receive APT have shown that platelet reactivity can vary significantly over time [17,18]. This suggests that besides drug therapy and (patho)physiology, the patient's diet, stress level, and pre-analytic variables such as concomitant medication and anaesthesia also affect platelet function over time [19]. In line with our findings, this supports the hypothesis that periprocedural platelet reactivity measurements are not unconditionally reliable. Beyond the scope of this pilot investigation, future studies should consider incorporating PPI usage data to elucidate potential interactions and provide a more nuanced analysis of perioperative platelet function in patients undergoing carotid endarterectomy.
In addition, diversity in baseline platelet reactivity resulting from the various CYP2C19 genotypes can influence individual thromboembolic risk. This does not obviate the need to find the optimal time point for PFT assessment, but it might prove useful for pre-selecting patients who would benefit most from early identification of non- or low-responsiveness to the prescribed platelet inhibitors.
Finally, this pilot study has not addressed drug-drug interactions such as that between P2Y12 inhibitors and proton pump inhibitors [20]. Future, focused studies are needed to investigate the influence of the abovementioned factors on platelet function.
The VerifyNow assay showed fluctuation of platelet reactivity after surgery and after at least eight weeks compared to pre-operative and postoperative measurements, respectively. This substantiates the hypothesis that the surgical procedure influences platelet reactivity, as do the lower platelet reactivity and the smaller proportion of patients identified with HTPR after eight weeks shown by the VASP assay. The fluctuation of platelet reactivity might be caused by a prothrombotic state resulting from the surgical procedure itself; consequently, intensifying the perioperative antiplatelet regimen may be considered. Current guidelines suggest dual antiplatelet therapy (DAPT) for at least one month after stenting [21]. Measurements should therefore be taken either before or sufficiently long after the procedure to ensure that the intervention does not affect the measurement. Although the exact timing is yet to be determined, platelet function testing might be a valuable tool for individualising (D)APT duration.
Implementing a period of intensive thrombotic-risk reduction from the moment of diagnosis of carotid stenosis (e.g., after a CVA) could alleviate the added thrombotic risk of HTPR [22]. Potent agents are feasible, such as ticagrelor, DAPT, or direct-acting oral anticoagulants. The CAPRIE and the recent COMPASS and VOYAGER trials have suggested intensifying antithrombotic therapy using either agents such as clopidogrel (CAPRIE) or adding low-dose direct-acting oral anticoagulants twice daily to platelet function inhibitors (acetylsalicylic acid plus rivaroxaban) [23][24][25]. The added risk of major bleeding appears low with short-term treatment, though cost-effectiveness needs to be proven. After surgery, repeated PFTs at a regular interval could guide phased down-scaling of APT to an acceptable level within the therapeutic window between thrombotic risk and bleeding risk. Platelet function assays must overcome their limitations to be applicable in daily practice. Further research focusing on patients with carotid artery disease is needed to investigate the effect of this strategy on outcomes.
Conclusion
In patients undergoing CEA, perioperative platelet reactivity measurements fluctuate over time. Additionally, the HTPR status of individual patients, as measured with the VerifyNow and VASP assays, changed over time. Therefore, commonly available perioperative platelet reactivity measurements are unreliable for perioperative APT strategy adjustments.
|
2024-06-28T05:17:36.940Z
|
2024-06-26T00:00:00.000
|
{
"year": 2024,
"sha1": "f3fe5c2794a14b1501f2671ab267af35738dba40",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0304800",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f3fe5c2794a14b1501f2671ab267af35738dba40",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
262731395
|
pes2o/s2orc
|
v3-fos-license
|
Self‐help application for obsessive‐compulsive disorder based on exposure and response prevention technique with prototype design and usability evaluation: A cross‐sectional study
Abstract Background and Aims Obsessive‐compulsive disorder (OCD) is a relatively common disorder that, due to its debilitating nature, significantly affects personal abilities, job performance, social adjustment, and interpersonal relationships. There are significant barriers to accessing evidence‐based cognitive‐behavioral therapy as a first‐line treatment for obsessive‐compulsive disorder. Mobile health applications (apps) offer a promising way to improve access to evidence‐based therapies while overcoming these barriers. The present study aimed to design and evaluate a prototype of a self‐help application for people with contamination OCD (the most common pattern of OCD) based on the exposure and response prevention (ERP) technique. Methods This work was developed in four phases. (1) Needs assessment: a thorough literature review, a review of existing related programs and apps, and interviews with patients and psychiatrists. (2) Creating a paper prototype: incorporating the functional features identified in the previous phase using WireframeSketcher software. (3) Creating a digital prototype: developing an actual prototype using Axure RP software based on the information obtained from an expert panel's evaluation of the paper prototype. (4) Prototype usability evaluation: through a heuristic evaluation with experts and usability testing with patients using the SUS questionnaire. Results After the requirement analysis, requirements were defined in the areas of informational and educational elements and functional capabilities. The prototypes designed from the identified requirements include capabilities such as in‐app online self‐help groups, assessment of symptom severity, psychological training, supportive treatment strategies, personalized treatment plans, treatment-progress tracking through weekly reports, anxiety assessment, and reminders.
Conclusion The results of the heuristic evaluation with experts made it possible to identify how to provide information and implement the capabilities in a way that is more appropriate and easier for the user.
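The SUS questionnaire used in the usability testing of phase (4) is scored with a standard formula; as a sketch (this is the standard SUS scoring rule, not code from the study):

```python
# Standard scoring of the 10-item System Usability Scale (SUS):
# odd-numbered items contribute (response - 1), even-numbered items
# contribute (5 - response); the sum is multiplied by 2.5 to give
# a score on a 0-100 scale.

def sus_score(responses):
    """responses: 10 integers from 1 (strongly disagree) to 5 (strongly agree)."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible: 100.0
print(sus_score([3] * 10))                        # all-neutral: 50.0
```

The alternating scoring reflects the SUS design, in which odd items are positively worded and even items negatively worded, so agreement does not always mean better usability.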
Obsessive-compulsive disorder (OCD) is a common and disabling anxiety disorder characterized by recurrent, intrusive, unwanted thoughts, images, or urges that cause anxiety or distress and are usually followed by repetitive behaviors, rituals, or mental acts used to decrease anxiety. [1] It is the fourth most common psychiatric disorder after phobias, addictions, and depression [2] and the 10th leading cause of disability, according to the World Health Organization. [3] The estimated prevalence of OCD is about 1%-3% globally, with a higher rate of 2%-3% in the developed world, [4][5][6] and it is associated with substantial reductions in health-related quality of life [7,8] as well as impairments in education, social relations, and family functioning. [9] According to a recent study of significant anxiety disorders in Iran, the 12-month prevalence of OCD in Iran is 5.1%, making it the second most common anxiety disorder. [10] Contamination OCD is the most common pattern of obsessive-compulsive disorder, involving excessive or ritualistic washing or severe avoidance of things that are assumed to be contaminated; [11,12] it accounts for approximately a quarter of all obsessive themes in the United States and is the most common OCD concern worldwide. [13] Despite the disability associated with symptoms, most individuals with OCD delay seeking treatment. [14] If untreated, OCD usually follows a recurrent course and becomes chronic. [15] Therefore, early diagnosis and treatment are essential for long-term outcomes and the prevention of prolonged suffering. [16]

The first-line nonpharmacological treatment for this type of OCD is a form of cognitive-behavioral therapy (CBT) called "exposure and response prevention (ERP)". Studies documenting the benefits of ERP treatment have found that over 75% of patients experience improved OCD symptoms during treatment, and the majority show long-term improvement 2-3 years after treatment. Several meta-analyses and clinical significance analyses indicate that 60%-80% of patients who complete treatment with ERP, particularly those who engage in treatment with compliance and motivation, get significantly better. [17] Despite strong empirical support for ERP and its effectiveness in treating OCD patients, [14,18] many affected patients do not have access to this treatment. [19,20][21][22][23][24] Consequently, it is essential to innovate approaches to improve access to such evidence-based psychotherapy.
Remote treatment via technology-based interventions (TBIs), including computer-based and Internet-based interventions (CBIs), has attracted considerable attention as a way to improve access to evidence-based psychotherapies. 25 TBIs deliver treatment to patients who otherwise may not have access to help and improve healthcare for those seeking treatment by providing immediate access to evidence-based interventions. 26 A growing body of literature supports using technology to implement evidence-based treatments (CBT and ERP) for OCD through self-help interventions with minimal therapist contact. [31][32][33][34][35][36] Still, there are limitations, such as a lack of portability or access across the wide variety of contexts in which OCD symptoms occur (e.g., in the car, at work, while shopping) and limited access to techniques that may influence adherence to ERP, such as practice reminders. The widespread adoption of mobile phones, their ease of use, their mobility, and the multitude of functions they perform have made mobile technology a powerful tool for delivering health interventions. 37 Mobile health applications may overcome the limitations of TBIs while improving access to evidence-based psychotherapy.
Despite recent efforts by researchers to use mobile applications for the dissemination and implementation of evidence-based psychotherapy, [38][39][40] research on mobile applications for patients with obsessive-compulsive disorder is still in its infancy.
Given that treatment success for these patients depends largely on the patient's own efforts to change behavior, mobile applications that can support them on an ongoing basis are useful. Therefore, this study aimed to design and evaluate the usability of a prototype of a self-help application based on the "exposure and response prevention" technique for people with contamination OCD.
| METHODS
The research framework was planned based on a prototyping model, one of the software development life cycle (SDLC) models, to meet the project goals. This methodology uses prototyping as a mechanism for creating high-quality apps in a collaborative atmosphere where users participate actively in prototyping. 41 The phases developed within this methodology were as follows:
▪ Needs assessment
▪ Building a paper prototype
▪ Building a digital prototype
▪ Usability evaluation
| Needs assessment
Initially, a review of previous related research in this area was conducted. Free Android apps and web-based remote health programs focusing on self-help education and treatment of obsessive-compulsive disorder using the CBT technique were also reviewed (to identify problems that should be avoided and useful features that could be included in the design). To determine user needs, a survey was conducted with 15 patients with contamination OCD who were referred to clinics and psychological centers at Shiraz University of Medical Sciences, selected by convenience sampling. The patients were interviewed in a semi-structured format based on a series of fixed questions to determine their specific needs and requirements for the app, as well as their ideas and expectations. In addition, an expert survey was conducted using a researcher-made checklist with five psychiatrists, members of the psychiatry department of Shiraz University of Medical Sciences (psychiatrists or doctorates in clinical psychology) specializing in behavioral-cognitive therapies, to identify and determine app features and capabilities.
The checklist was prepared based on the results of the literature review and the evaluation of existing systems (inclusion criteria for the psychiatric specialists and the patients are listed at the end of this article). The total requirements extracted were discussed in a joint meeting with the expert panel (including two psychiatrists specializing in CBT, one medical education specialist, and one medical informatics specialist with at least 5 years of experience in health education) and examined against the behavioral-cognitive therapy guidelines for obsessive-compulsive disorder and the remote treatment guidelines. Finally, a list of requirements was prepared, including the functional capabilities and the informational and educational elements that should be included in the application.
| Paper prototype design
In the second phase, the overall, simple design of the final application was created from the requirements identified in the first phase using the WireframeSketcher tool. The wireframe includes user interface components, menus, and links; in addition to the components' locations, their behavior and interactions were also specified.
At this phase, the educational content of the app was determined based on a self-help treatment approach and protocols of cognitive-behavioral therapy for obsessive-compulsive disorder, with an emphasis on the principles of "exposure and response prevention." The wireframe, along with the educational content for all parts, was provided as a paper prototype to an expert panel (including two psychiatrists specializing in CBT, one medical education specialist, one medical informatics specialist with at least 5 years of experience in health education, and one specialist in user interface [UI] and user experience [UX] design) for evaluation: to review and approve the training content, identify problems and points for improvement, and note comments to be addressed during the digital prototype design phase.
| Digital prototype design
In the third phase, the problems identified in the paper prototype were fixed, the necessary refinements were made, and a digital prototype was developed using the Axure RP 9 software for Android OS.
| Testing
Finally, in this phase, two types of usability evaluation were conducted: (1) a heuristic evaluation of the app prototype by informaticists with experience in interface design and/or human-computer interaction; and (2) end-user usability testing.
| Heuristic evaluation
The evaluators were five medical informatics specialists, each with at least a Master's degree in medical informatics, training in human-computer interaction, and a published article in medical informatics. Each expert independently examined the prototype user interface against the heuristic principles and recorded the problems found in a data collection form, a standard form based on the heuristic method proposed by Nielsen. 42 Three medical informatics specialists confirmed the content validity of this form. The form consists of a table with columns for problem description, location of the problem, the heuristic principle violated, the severity rating of the problem, comments, and suggestions. For each problem identified, a degree of severity was assigned according to Nielsen's severity rating scale, ranging from 0 (no problem at all) to 4 (usability catastrophe). Finally, the problems related to each heuristic principle were classified into one of five categories based on the average severity: 0-0.5 no problem, 0.6-1.5 minor problem, 1.6-2.5 small problem, 2.6-3.5 big problem, and 3.6-4 serious problem. 43
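As a minimal sketch (not the study's actual analysis code), the per-heuristic averaging and five-band classification described above can be expressed as:

```python
# Sketch of the severity classification described above: average the
# evaluators' Nielsen severity ratings (0-4) for a heuristic and map
# the mean onto the five bands quoted in the text.
from statistics import mean

def severity_category(avg: float) -> str:
    """Map an average severity in [0, 4] onto the five bands above."""
    if avg <= 0.5:
        return "no problem"
    if avg <= 1.5:
        return "minor problem"
    if avg <= 2.5:
        return "small problem"
    if avg <= 3.5:
        return "big problem"
    return "serious problem"

def classify(ratings) -> str:
    """Classify one heuristic from its list of 0-4 severity ratings."""
    return severity_category(mean(ratings))
```

For example, ratings of (3, 3, 4) average about 3.3 and fall in the "big problem" band.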
| Usability testing with end-user
Ten patients (with an initial diagnosis of mild to moderate contamination OCD based on DSM-5, confirmed by a psychiatrist, who had not undergone cognitive-behavioral therapy or "exposure and response prevention" therapy) who did not participate in the design process were asked to rate the prototype's usability using the System Usability Scale (SUS) questionnaire. Ten participants were selected because past research has shown that when the number of test users is increased from 5 to 10, the minimum percentage of problems identified rises from 55% to 82% and the mean percentage rises from 85% to 95%. 44 The validity and reliability of this questionnaire were evaluated by Diyanat et al. 45
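The 5-to-10-user figures cited above echo the classic problem-discovery model of Nielsen and Landauer; a sketch, taking the often-quoted average detection probability p = 0.31 as an assumption rather than a value from this study:

```python
# Problem-discovery model: the expected share of usability problems
# found by n test users is 1 - (1 - p)**n, where p is the chance that
# a single user encounters a given problem (p = 0.31 is a commonly
# quoted average, assumed here for illustration).
def share_of_problems_found(n_users: int, p: float = 0.31) -> float:
    return 1.0 - (1.0 - p) ** n_users
```

With p = 0.31 the model predicts roughly 84% discovery with 5 users and 98% with 10, in the same ballpark as the mean figures of 85% and 95% cited above.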
| Data analysis
Data were analyzed using SPSS 21 software and descriptive statistics, including frequency, frequency percentage, and standard deviation.
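As an illustrative sketch of the descriptive statistics named here (frequency, frequency percentage, and standard deviation), computed with the Python standard library rather than SPSS, and using made-up sample values:

```python
# Descriptive statistics used in the analysis: frequency, frequency
# percentage, and sample standard deviation. Sample values are
# illustrative only, not the study's data.
from collections import Counter
from statistics import stdev

def frequencies(values):
    """Return value -> (count, percent of sample)."""
    counts = Counter(values)
    total = len(values)
    return {v: (n, 100.0 * n / total) for v, n in counts.items()}

ratings = [3, 3, 4, 2, 3, 4]      # illustrative item ratings
freq = frequencies(ratings)       # e.g. rating 3 occurs 3 times (50%)
sd = stdev(ratings)               # sample standard deviation
```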
| RESULTS
The results of this study are presented below, following the same sequence of methodological phases described previously.
| Requirement analysis
Following the requirements analysis performed by the expert panel, the approved requirements were classified into two areas according to Table 1:
| Description of the prototype
This app prototype was designed for the remote education and treatment of people with contamination OCD. Educational and psychological information and the intended ERP-based capabilities were provided as follows. At the start, the app introduces itself and its capabilities. After creating an account and logging in, the home page offers an online self-help group in which people with contamination OCD can exchange experiences and ideas with others who have similar experiences, gather information, help each other, solve their problems more easily, and find solidarity beyond their loved ones (Figure 1).
In the treatment tab (Figure 2), after entering their exposures, rating their anxiety, and creating their personal exposure hierarchy, users are instructed where to begin and when to move on to more anxiety-provoking exposures. For each exposure practice, they can assess anxiety, set reminders, set timers, and schedule practice sessions. In the treatment aid toolbox tab, supportive strategies are provided to help a person when practicing exposure or experiencing anxiety: tips for success in therapy, motivational messages, meditation, relaxation, and inspirational quotes (Figure 3).
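A hypothetical sketch of the exposure-hierarchy step (the field names are assumptions, not the app's actual data model): each exposure is given a SUDS anxiety rating, and the hierarchy orders practice from least to most disturbing:

```python
# Hypothetical model of the personal exposure hierarchy: exposures are
# rated on the SUDS scale and practiced from least to most disturbing.
from dataclasses import dataclass

@dataclass
class Exposure:
    description: str
    suds: int  # Subjective Units of Distress Scale rating, 0-100

def build_hierarchy(exposures):
    """Order exposures from least to most anxiety-provoking."""
    return sorted(exposures, key=lambda e: e.suds)

hierarchy = build_hierarchy([
    Exposure("touch a doorknob without washing afterwards", 70),
    Exposure("hold a coin taken from my pocket", 40),
    Exposure("use a public restroom tap", 90),
])
# practice begins with hierarchy[0], the lowest-rated exposure
```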
In the treatment progress tracking tab, progress tracking is done; the progress track resets every week. For the weekly practices, information is provided on the total number of times each practice was performed, the average duration of exposure, the average anxiety rating before and after the practices, changes in method or duration, and the frequency of compulsions. The person can also record notes on their experiences during practice and email weekly progress reports to their therapist (Figures 4 and 5). In addition, periodic assessments of OCD severity can be scheduled via reminders throughout the treatment period, based on the app's recommended schedule.
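The weekly report described above amounts to a simple aggregation over practice logs; a sketch with hypothetical field names (not the app's real data model):

```python
# Hypothetical weekly-report aggregation: per exposure practice, count
# repetitions and average duration and pre/post anxiety ratings.
from statistics import mean

def weekly_report(logs):
    """logs: dicts with practice, duration_min, suds_before, suds_after."""
    by_practice = {}
    for log in logs:
        by_practice.setdefault(log["practice"], []).append(log)
    return {
        practice: {
            "times_performed": len(entries),
            "avg_duration_min": mean(e["duration_min"] for e in entries),
            "avg_suds_before": mean(e["suds_before"] for e in entries),
            "avg_suds_after": mean(e["suds_after"] for e in entries),
        }
        for practice, entries in by_practice.items()
    }
```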
| Usability testing with end-user
The characteristics of the 10 patients with OCD who evaluated the prototype as potential app users were as follows: eight were women and two were men, and their mean age was 34 years. The 70% of patients with experience using health- or treatment-related mobile apps had a university education, and the 30% without such experience had a diploma or lower.
To analyze the usability evaluation results, the mean score given by each evaluator to the SUS questionnaire items was calculated using IBM SPSS Statistics 22.0. The mean score was 76.75; the usability of the app prototype was therefore rated "good" from the end-user's perspective. 46 The lowest score given by the evaluators was 65, and the highest was 92.5.
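The SUS score itself follows Brooke's standard scoring rule, sketched here (this is the published SUS formula, not code from the study): odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 onto a 0-100 range.

```python
# Standard SUS scoring (Brooke, 1996): odd items score (r - 1), even
# items score (5 - r); the 0-40 total is scaled by 2.5 to 0-100.
def sus_score(responses):
    """responses: the 10 item ratings (1-5), in questionnaire order."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1 (odd)
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

A best-possible response pattern (5 on every odd item, 1 on every even item) scores 100, and a uniform 3 on every item scores 50.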
Details of the patients' responses to the SUS statements in this usability evaluation are provided in Table 2. Items such as "frequent use of the system," "ease of use," "having good capabilities," "fast learning of the system," and "confidence in using the system" received high scores, whereas "complexity," "the need for an expert to use it," "inconsistency in the app prototype," "difficulty of use," and "the need to learn a lot before starting" received low scores, which together indicate that the usability of the prototype app is good.
| Heuristic evaluation
In the heuristic evaluation of the prototype using the components presented by Nielsen, a total of 148 usability problems were initially identified; the five evaluators identified 13, 23, 25, 31, and 40 problems, respectively. Duplicate, similar, and unintentional problems, and problems caused by prototype restrictions, were removed, and disagreements about the violated heuristic principle for each problem were resolved. Finally, 39 unique problems remained, some of which had been identified by several evaluators. Table 3 presents the frequency of the identified usability problems based on their severity and the violated heuristics.
The evaluation showed that, of all the identified problems, most related to the aesthetic and minimalist design heuristic (22.9%), while the fewest related to visibility of system status (1.4%), flexibility and efficiency of use (1.4%), and error prevention (0%).
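Tallying the deduplicated problems per violated heuristic, as in Table 3, amounts to a frequency count; a sketch with illustrative data (not the study's 39 actual problems):

```python
# Sketch of per-heuristic shares of the deduplicated problem list
# (illustrative data only).
from collections import Counter

def heuristic_shares(violated):
    """violated: one heuristic name per identified problem."""
    counts = Counter(violated)
    total = len(violated)
    return {h: round(100.0 * n / total, 1) for h, n in counts.items()}
```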
The average severity rating of the identified problems ranged from 1 (minor problem), for visibility of system status, to 3.1 (big problem), for aesthetic and minimalist design.
Among the main problems identified that can be fixed in the final application are those related to the "aesthetic and minimalist design" principle: the content of the psychological tutorials and the treatment toolbox is monotonous, lengthy, and boring, and images, sounds, and multimedia could be used to make the training more attractive and effective. The tutorials could also be more concise and case-by-case, with the more important points highlighted and presented in a variety of colors and fonts; evaluators assigned a severity rating of 4 to this problem. Some of the most common problems for each principle, along with the evaluators' comments and suggestions for solving them, are listed in Table 4.
Figure 5: Treatment progress tracking page (compulsion).

Table 2. Usability testing with end-users (SUS item: mean, SD):
- I think that I would like to use this system frequently: 3.2 (0.36)
- I found the system unnecessarily complex: 0.9 (0.29)
- I thought the system was easy to use: 3.2 (0.16)
- I think that I would need the support of a technical person to be able to use this system: 0.9 (0.69)
- I found the various functions in this system were well integrated: 2.9 (0.09)

Table 4. The most common problems per heuristic, with severity ratings and the evaluators' suggestions:
1. Visibility of system status. Suggestion: provide a brief explanation at the beginning of the slide indicating that these capabilities are in the application.
2. Match between the system and the real world: inappropriate icons for the app guide character, patient assessment, self-help group guidelines, start of exposure practice, details of exposure practice, and review of exercises (severity 2); no "undo" and "redo" functions on some pages of the app (severity 3). Suggestions: use appropriate icons or show a tooltip; insert undo and redo buttons.
3. Help and documentation: no instructions for using the application or for setting up a treatment plan (severity 4). Suggestions: show a short tutorial video when the app is first installed; embed a help link in the application that the user can consult at any time.
4. Consistency and standards: improper use of « and » on the buttons for the exit, end, and Yale-Brown test operations (severity 3). Suggestion: delete the « and » icons or use icons appropriate to the operation.
5. Help users recognize, diagnose, and recover from errors: the page for setting the exposure hierarchy lacks an explanation of why exposures should be ordered from the least disturbing to the most worrisome (severity 3). Suggestion: adequately explain why practice should start with the least disturbing exposure.
6. Error prevention: no problem was identified.
7. Recognition rather than recall: a user who has difficulty setting up a treatment plan and the details of ERP practice must rely on memory and may not remember where the information came from (severity 3). Suggestion: embed a help link in the application that the user can consult at any time.
8. Flexibility and efficiency of use: how compulsions are performed must be entered manually on the rituals page, which may be difficult for many users in terms of concept and writing and may cause them to leave this part incomplete (severity 2). Suggestions: show examples of compulsions as a tooltip; allow recording how compulsions are performed by voice.
9. Aesthetic and minimalist design: the texts are intertwined and very uniform (severity 3; rated major). Suggestion: use colors, tables, shapes, drop-down lists, or other design features to make the text attractive.
Many current health interventions are designed around existing structures in the healthcare system and may not be as effective as those that involve end users in the design process. 47 Recently, user-centered design approaches have been used to build mobile health applications focusing on chronic illness, lifestyle and mental health interventions, and remote patient monitoring. Studies have shown that a user-centered approach to designing and evaluating a mobile application allows for a more useful design while increasing usability and user satisfaction.
Working with potential users of the final app makes it possible to identify the features and information users need and to implement them in a way that is easier and more understandable to the user. 48 In this study, user-centered design principles were applied in the design and development of the prototype application (through participatory design, with end users cooperating in the needs assessment, design, and evaluation).
In the needs assessment phase, interviews with patients showed that, because of problems such as difficulty accessing a qualified therapist, the time and money required for face-to-face visits, social stigma, and sometimes the adverse effects of drugs, they prefer more active participation in, and greater responsibility for, their treatment process, both remotely and through psychotherapy. Wootton showed that different types of remote treatment for obsessive-compulsive disorder (including low-intensity and high-intensity, self-guided, and therapist-guided therapies) are effective in reducing symptoms, with results not significantly different from face-to-face treatment. 49 Based on the findings of a study by Hogg, self-help, especially when provided through computer software or the Internet, is effective in treating anxiety disorders and improving treatment outcomes through self-care, with results similar to those of general psychological therapies for anxiety disorders; self-help should therefore be a standard treatment for patients seeking help from public health services. 50 Pearce, in a review of self-help treatment interventions for obsessive-compulsive disorder, likewise found that self-help programs, while improving access to treatment, significantly reduced symptom severity and treatment dropout rates, although self-help interventions with minimal therapist contact improved clinical outcomes more than purely self-guided interventions. 28
This study attempts to use the widespread availability and portability of mobile devices to improve access to treatment materials in all situations in which OCD symptoms may occur, creating a self-help program for mobile devices that improves access to treatment for people who have difficulty reaching it and enables patients to participate more actively in their own treatment process.
The application prototype has four main stages (assessment, psychological training, personalized treatment planning, and treatment progress tracking) and capabilities such as an in-app self-help group, reminders for exercises, periodic assessment of disorder severity via the Yale-Brown test, a treatment aid toolbox (supportive strategies to motivate patients and help them during exercises or when experiencing general anxiety), and treatment progress tracking through a weekly progress report. The main stages are designed to follow the components required for implementing cognitive-behavioral therapy with the exposure and response prevention technique for obsessive-compulsive disorder described in the study by Redi. 51 Some of these capabilities have been used in the therapeutic interventions designed in the studies by Boisseau et al., 38 Lenhard et al., 52 and Greshkovic et al., 53 which were shown to be effective in improving obsessive-compulsive symptoms in the samples studied. In mobile health interventions, combining design features such as practical and easy-to-use content, program personalization, reminders, self-monitoring, and feedback that helps individuals chart their progress may enhance user experience and participation and, in turn, reduce the likelihood of dropout from self-guided therapies. 54
Finally, to evaluate the usability of the application prototype, a heuristic evaluation based on Nielsen's principles was performed with experts to identify usability problems, together with a usability evaluation with end users. One strength of the heuristic evaluation was the expertise and experience of the evaluators: studies have shown that expertise in both evaluation and the system under study helps identify problems better and increases the validity of evaluation results. 55,56 All assessors had medical informatics expertise, healthcare experience, experience in human-computer interaction, heuristic assessment skills, and experience designing mobile health interventions. Patients gave positive feedback about the application in response to open-ended questions about their view of it and the extent to which it matched their needs, goals, and skills, and they found it useful. However, some patients stated that the application lacked some of the information, capabilities, and functions they expected; these expectations were beyond the scope of the study.
This study was one of the first feasibility studies of OCD applications in Iran. The participation of both specialists and patients in the design and development increases its reproducibility. Studying other areas of OCD is also suggested.
In future research, we intend to develop the prototype into the final Android mobile app, add features such as online communication with the therapist, cover all types of obsessive-compulsive disorder, evaluate the effectiveness of the final application and patients' satisfaction with it, and compare the results with traditional face-to-face therapy in clinical trial studies.
| Limitations
In the requirements identification phase, the lack of domestic electronic treatment programs and limited access to the available external electronic treatment programs for obsessive-compulsive disorder (such as iOS-based applications and applications that are not free) left few design patterns to draw on.
On the other hand, the treatment steps in this application are designed based on exposure and response prevention techniques.
Exposure and response prevention is recommended as the first line of treatment for patients who are not severely depressed, anxious, or ill, or who prefer psychotherapy to medication. Therefore, this application does not cover remote treatment for people with severe symptoms of obsessive-compulsive disorder or depression.
| CONCLUSION
The results of this study led to the design of a prototype of a self-help application for patients with obsessive-compulsive disorder based on exposure therapy and response prevention by a multidisciplinary team.
The requirements identification and analysis phase identified components and guidelines for designing a therapeutic application using the exposure and response prevention technique. We hope that, by following these components and guidelines in the application design, patients can use the application for self-treatment with exposure and response prevention, with an effective reduction in the symptoms and severity of the disorder. The four main stages (assessing the severity of OCD symptoms, psychoeducation, setting a personalized treatment plan based on the exposure and response prevention technique, and tracking treatment progress) can support the self-treatment of people with contamination OCD.
By creating an application prototype and evaluating whether it meets patients' basic needs, a better and more practical understanding was gained of the problems patients may face when working with the final application. Ideas and feedback from the patient evaluations and the experts' heuristic evaluation were obtained that can be used to develop the final application. The usability evaluation with the application's potential users showed that they found it usable, and the heuristic evaluation with experts identified problems and suggestions for solving them. In the final version, in addition to resolving the identified problems so that information is presented and capabilities are implemented in a way that is more practical, appropriate, and convenient for the user, the problems caused by the limitations of the prototyping tool will be eliminated and the user interface improved. This application therefore appears promising as an intervention to help patients with contamination OCD treat themselves with exposure and response prevention. In future research, alongside developing the digital prototype into the final Android mobile app, the usability and effectiveness of the final application, and patients' satisfaction with it, should be evaluated and compared with traditional face-to-face treatment in a clinical trial. We also intend to cover other types of obsessive-compulsive disorder and implement features such as online communication with the therapist for remote consultation, virtual meetings, and recording personal and therapeutic information in a patient profile to present to the therapist. The findings of this study and future research could contribute to the emerging literature on using mobile applications to deliver
evidence-based psychotherapy.
Inclusion criteria for psychiatric specialists:
1. Member of the Faculty of Psychiatry, Shiraz University of Medical Sciences (neuropsychologist or doctorate in clinical psychology) with at least 5 years of teaching experience in the field of health.
2. Work experience in the field of obsessive-compulsive disorder.
3. Expertise in behavioral-cognitive therapies.
Patient inclusion criteria:
1. People with a primary diagnosis of obsessive-compulsive disorder based on DSM-5, with the approval of a psychiatric specialist.
2. Patients between the ages of 18 and 60 years.
Exclusion criteria:
1. Unwillingness to participate in the study.
Permissions and letters of recommendation were received from the Director of the Research Deputy of the School of Management and Medical Information Sciences, Shiraz University of Medical Sciences (SUMS) (IR.SUMS.REC.1398.449). Confirmations were also received from the security office of the university.
Table 1: Educational requirements and functional capabilities.
Educational and informational elements:
- Definition of obsessive-compulsive disorder (especially contamination OCD): the causes, features, and symptoms of the disease
- Available therapies and their effectiveness
- The components and principles of exposure and response prevention and its role in the treatment of obsessive-compulsive disorder
- App features and how to use them: the purpose and mission of the program; a description of the app and how to use it as a self-help tool; how to set up a personalized treatment plan (setting up a hierarchy of exposures, assessing anxiety, doing exposure exercises and avoiding the response); useful tips for success in the treatment process
Functional capabilities:
- Creating a personalized treatment plan: making a list of obsessive thoughts, actions, and stimuli; assessing anxiety on the SUDS scale; setting the hierarchy of exposures; scheduling exercises
- Providing support strategies: reasons for fighting OCD; meditation and relaxation; motivational messages (recorded by the user or provided by the expert); setting goals and rewards for completing exercises; writing notes
- Setting reminders: for exposure exercises and for periodic evaluation of disease severity (based on the Y-BOCS)
- Setting a timer for the duration of exposure exercises or the time spent performing compulsions
- Tracking treatment progress: periodic evaluation of symptom severity (Y-BOCS); weekly progress reports; a log book (number of exposure exercises, last exposure time, last time spent on a compulsion, last anxiety rating on the SUDS)
- Online self-help group

In the treatment tab (Figure 2), after an initial assessment of the patient with the Yale-Brown Obsessive-Compulsive Scale to identify OCD severity, and after psychological training on contamination OCD and its causes and symptoms, the cognitive-behavioral therapies for this disorder and their effectiveness, the components and principles of ERP and how it works on OCD, how to set an exposure hierarchy, and how to use the exposure practice tool, the person can select the personalized treatment program. Since symptoms (obsessions, compulsions, and triggers) vary from person to person, a personalized treatment and management plan is needed. As a self-help tool, the guides in each step of setting up a treatment plan help users create the most effective exposures for their OCD symptoms.
Figure 1: Home page. Figure 2: Treatment page.
REZAEE ET AL. | 5 of 12
Figure 3: Treatment aid toolbox page. Figure 4: Treatment progress tracking page (exposure).
presents the frequency of identified usability problems based on their severity and the violated heuristics.Out of the problems extracted, 2.6% (n = 1) were identified by five of our evaluators; 2.6% (n = 1) were identified by four evaluators; 10.25% (n = 4) by three evaluators; 20.5% (n = 8) by two evaluators; and 64.10% (n = 25) problems were identified by one evaluator.
Table 3: Heuristic evaluation results (the most common problems identified).
The RAS-24: Development and validation of an adherence-to-medication scale for severe mental illness patients
Introduction: Several studies have found that most patients with severe mental illness (SMI) and comorbid (physical) conditions are partially or wholly nonadherent to their medication regimens. Nonadherence to treatment is a serious concern, affecting the successful management of patients with SMIs. Psychiatric disorders tend to worsen and persist in nonadherent patients, worsening their overall health. The study described herein aimed to develop and validate a scale (the Ralat Adherence Scale) to measure nonadherence behaviors in a culturally sensitive way. Materials and Methods: Guided by a previous study that explored the primary reasons for nonadherence in Puerto Rican patients, we developed a pool of 147 items linked to the concept of adherence. Nine experts reviewed the meaning, content, clarity, and relevance of the individual items, and a content validity ratio was calculated for each one. Forty items remained in the scale’s first version. This version was administered to 160 patients (21–60 years old). All the participants had a diagnosis of bipolar disorder, major depressive disorder, or schizoaffective disorder. The STROBE checklist was used as the reporting guideline. Results: The scale had very good internal consistency (Cronbach’s alpha = 0.812). After a factor analysis, the scale was reduced to 24 items; the new scale had a Cronbach’s alpha of 0.900. Conclusions: This adherence scale is a self-administered instrument with very good psychometric properties; it has yielded important information about nonadherence behaviors. The scale can help health professionals and researchers to assess patient adherence or nonadherence to a medication regimen.
Introduction
Nonadherence to treatment is a serious concern that affects the positive management and prognosis of patients with severe mental illness (SMI). Pharmacotherapy is essential for the successful management of these patients. For a person with a psychiatric illness, keeping to a prescribed regimen of medication is critical. The illness of a patient who fails to adhere will inevitably worsen and persist, which will lead to a deterioration of the patient's overall health; such nonadherence carries a substantial economic burden, as well [1,2]. This is a major problem for patients around the world. The World Health Organization (2020) has reported that the proportion of medication nonadherence is 50% in patients with chronic diseases [3]. Although researchers have reported a nonadherence rate of 20% to 60%, several studies have found that 40% to 60% of psychiatric patients are partially or wholly nonadherent to their prescription medications [4][5][6]. Lack of medication adherence is a common and potent (though modifiable) risk factor for poor outcomes [7]. Hispanics are more likely to be nonadherent to psychiatric medication and other treatments [8][9][10].
Individuals with SMIs are also susceptible to a variety of physical conditions considered to be risk factors for cardiovascular disease (CVD), such as hypertension, metabolic syndrome, hyperlipidemia, type 2 diabetes, and abdominal obesity, among others [11,12]. These conditions tend to be poorly treated, also exerting detrimental effects on SMI patients [13]. Not only CVD prevalence rates but also the prevalence rates for its risk factors are about twice as high in SMI patients (for example, in those with affective disorders) as they are in the general population [14,15].
We could not find an instrument in Puerto Rico that assesses nonadherence or poor adherence to medication; while some surveys have been translated in other countries, none have been translated and validated for our population.
For that reason, we decided to develop a valid instrument for measuring treatment adherence/nonadherence in SMI patients with one or more of the following CVD risk factors: hypertension, obesity or abdominal obesity, and/or a generally unhealthy lifestyle (e.g., eats poorly, smokes cigarettes, is physically inactive). We included patients with these comorbidities because the leading cause of premature mortality in this population is related to CVD and CVD risk factors [16][17][18][19]. Up to now, no gold standard for measuring treatment adherence in Puerto Rican SMI patients with CVD-related comorbidities exists. Furthermore, the questionnaires that are available are Spanish translations of English-language instruments that have never been psychometrically validated for our population. There is a lack of appropriate and validated instruments that can be used to measure adherence-nonadherence behaviors, especially in Puerto Rico. We therefore designed and tested an instrument that measures these behaviors.
Instrument development requires a sample that has the attributes being assessed [20]. For that reason, we used a qualitative approach to provide social and cultural context in the construction of our instrument [21]. We used a psychometric approach to test the instrument for internal consistency and content validity. To create our instrument, we followed the eight-step process devised by DeVellis [16] for scale development. Our preliminary published data established that nonadherence behaviors are a complex phenomenon with a variety of patterns [21]. The categories of that study were used to create this scale. There are barriers to adherence to treatment that are related to the medications for a given individual's mental illness and barriers to adherence to treatment for those diseases and conditions that are themselves risk factors for CVD. Patients named stigma (toward those with a mental illness), patient- and medicine-related issues, poor family support, and factors related to the patient-provider relationship as barriers to adhering to the drug regimen prescribed for their psychiatric condition. For the category of barriers to adherence to medications intended to reduce CVD risk, the participants revealed having certain patient-related reasons, mentioned the fact that healthcare personnel do not always provide adequate follow-up care, and named stigma and the lack of support as additional factors.
The aim of this research was to develop and validate a scale to measure adherence and nonadherence behaviors in a culturally sensitive way, taking into consideration the barriers that prevent a patient from taking his or her medication for the treatment of a psychiatric disorder with a comorbid physical condition. This new scale for SMI patients captures the adherence barriers that were determined in our previous study [21] and can be used by healthcare professionals in targeting interventions that encourage treatment adherence by considering the needs and characteristics of the individual patient.
Scale Development and Validation
The Institutional Review Board of the University of Puerto Rico, Medical Sciences Campus, approved the study. Informed consent was obtained from all the subjects involved in the study. The STROBE checklist was used as the reporting guideline. (See Supplementary Table 1). This study consisted of two phases. The scale was in Spanish. The first phase consisted of applying the first five of the eight steps of scale development that are recommended by DeVellis [16]. To that end, we first determined what we wanted to measure, as was previously detailed in our earlier article [21].
Second, we generated a set of test items forming a pool of 147 items, all related to the concepts of adherence and nonadherence, as established by the literature [6,7,21]. The items were linked to the categories related to the barriers mentioned in the literature and explored by previous research done by the main author.
Third, we determined the format of the measurement (i.e., a Likert scale). The options offered in that scale consist of "totally disagree," "partially disagree," "partially agree," and "totally agree." Fourth, we had the items in the initial item pool reviewed (in terms of content) by a group of nine experts. Each expert rated these items according to meaning, content, clarity, and relevance. Three psychiatrists, one cardiologist, two clinical psychologists, and three clinical social workers made up the panel of experts. In addition to being acknowledged experts in their individual fields, the members of our panel had multiple years of experience working with the issue of adherence (Fig. 1). A standard review guide that included the definitions of adherence and nonadherence was developed. Our experts rated the relevance of each item in terms of what we intended to measure. They examined the content and face validity of the scale and then gave feedback about each item.
Fifth, we included in the preliminary scale the items selected by the experts after determining that each one's calculated content validity ratio (CVR) was acceptable [22]. The minimum acceptable CVR value depended on the number of persons making up the panel of experts; in this case, with nine experts, a given item needed a ratio of 0.78 to 1 for it to be retained. This procedure was essential to our maximizing the content validity of the scale. A content validity index (0.83) was calculated for the whole test after the items to be included in the first version had been identified; that index indicated that 83% of the items that were included in the instrument were acceptable. Our preliminary scale had 40 items; the CVRs of those items ranged from 0.78 to 1.0, which, according to Lawshe [22], is adequate.
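Lawshe's CVR has a closed form: CVR = (n_e − N/2) / (N/2), where n_e is the number of panelists rating an item essential and N is the panel size. A small sketch follows; the 10/30 split of the 40 retained items is purely illustrative (chosen only because it happens to reproduce the reported CVI of 0.83), not taken from the study's data.

```python
def cvr(n_essential: int, n_experts: int) -> float:
    """Lawshe's content validity ratio for a single item."""
    half = n_experts / 2
    return (n_essential - half) / half

# With a 9-member panel, an item clears the 0.78 cutoff only if at least
# 8 of the 9 experts rate it as essential:
print(round(cvr(9, 9), 2))  # 1.0
print(round(cvr(8, 9), 2))  # 0.78
print(round(cvr(7, 9), 2))  # 0.56, below the cutoff, so the item is dropped

# The content validity index (CVI) is the mean CVR over the retained items.
# A hypothetical 10/30 split of the 40 retained items:
retained = [cvr(9, 9)] * 10 + [cvr(8, 9)] * 30
cvi = sum(retained) / len(retained)
print(round(cvi, 2))  # 0.83
```

This makes explicit why the cutoff is 0.78 for a nine-member panel: it is simply the CVR produced when eight of nine experts rate an item essential.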
The sixth through eighth steps were part of the second phase.
In that second phase, we, sixth, administered the scale to a sample; seventh, evaluated the items using factor analysis; and, eighth, chose the items of the final scale and obtained a Cronbach's alpha for that scale.
Participants
The first version of the scale was administered to 160 patients who had been recruited from a clinical psychology practice associated with a private academic institution in San Juan and from the regional branches of an outpatient-serving governmental health agency in several cities in Puerto Rico. A social worker or clinical psychologist at each facility invited a given possible candidate to participate in the study. After the initial approach, the PI was notified that she should contact the candidate. The participants answered a questionnaire that elicited sociodemographic information and questions intended to gather mental and physical health data. Data collection was conducted from February 2017 to December 2019. The participants had bipolar disorder (BD), type I, type II, or unspecified; major depressive disorder (MDD); or schizoaffective disorder (SD). This was a convenience sample. All subjects signed written informed consent.
The inclusion criteria were the following: To participate, the individual 1) had to be taking medication for both physical and mental illnesses; 2) could have one or more of the following risk factors: hypertension, obesity or abdominal obesity (i.e., a body mass index of 30 kg/m² or more), diabetes, or high cholesterol or triglycerides; 3) could have high LDL levels; and 4) could have a generally unhealthy lifestyle that included having a poor diet, smoking cigarettes, and/or being physically inactive. People with substance abuse problems at the moment of the interview or who were in the midst of a suicidal crisis were excluded from the study. We used the Mini-Mental State Examination, version 2 (MMSE-2), to rule out dementia and severe cognitive deterioration as part of the exclusion criteria.
Reliability, Validity, and Statistical Analysis
Descriptive statistics were used to calculate the demographic characteristics of the sample. A one-way χ² test was used to analyze adherence/nonadherence through the scale. A two-way χ² test was used to compare scale scores by sex, BD (type I and type II), MDD, and schizoaffective disorder and to compare the scale scores with the participants' perceptions of their problems with taking medication. A one-way between-groups analysis of variance (ANOVA) was used to analyze the variable of education as it relates to adherence and nonadherence behaviors. We hypothesized that there would be differences between adherence and nonadherence scale scores and several patient variables related to sex, diagnosis, self-perception of adherence behaviors, and cognitive functioning. The scale was tested for internal consistency and content validity. We computed the Cronbach's alphas for the scale scores. An exploratory factor analysis was conducted to uncover the latent dimensions among the items. We calculated statistics using IBM SPSS version 21 software. Statistical significance was set at α = 0.05, two-sided.
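The internal-consistency statistic used throughout is Cronbach's alpha, α = k/(k − 1) · (1 − Σσᵢ²/σₜ²), where k is the number of items, σᵢ² the variance of each item, and σₜ² the variance of the total scores. A minimal pure-Python sketch with toy data (not the study's data):

```python
def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding every respondent's score."""
    k = len(item_scores)   # number of items
    n = len(item_scores[0])  # number of respondents

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(pvar(item) for item in item_scores)
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    return k / (k - 1) * (1 - item_var_sum / pvar(totals))

# Three perfectly correlated items give the ceiling value of 1.0:
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(round(cronbach_alpha(items), 3))  # 1.0
```

In practice one would use a statistics package (SPSS, as in the study, or a library function) rather than hand-rolled code, but the formula above is what those tools compute.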
Baseline Sample Characteristics
Forty-six percent of the participants had schizoaffective disorder; 32%, BD; and 22%, major depressive disorder. The mean age of the participants was 45.5 years (SD = 11.1; range, 21-60 years). Sixty-three percent were female. In regard to education, 36% had completed high school. The majority of the participants were single (66.3%). Fifty-nine percent of the sample was from an urban area. (See Table 1 for all the sociodemographic characteristics of the sample.) All the participants (100%) reported that they were currently using psychiatric medication, and 21% of them reported having problems taking their medication and not adhering to their medication during the week that they were recruited for the study. (See Table 2 for the psychiatric diagnoses, medical comorbidities, treatment details, and lifestyle characteristics of the participants by gender.) Using χ² analysis, statistically significant relationships were found between the following variables: obesity and gender (Supplementary Fig. 1), stress and gender (Supplementary Fig. 2), past drug use and gender (Supplementary Fig. 3), and exercise and gender (Supplementary Fig. 4). We found a statistical relationship between high levels of stress and having a depressive disorder (Supplementary Fig. 5). In addition, we found a statistical relationship between diagnosis and number of medications (Supplementary Fig. 6).
Factor Analysis
As part of the exploratory factor analysis, we used principal component analysis to extract the maximum variance and put it into the first factor so as to obtain the minimum number of factors. The analysis revealed that the scale had five components with eigenvalues greater than 1, accounting for 57.5% of the total variance explained (Table 3). The first component had an eigenvalue of 7.413, followed by 2.431 for the second, 1.506 for the third, 1.234 for the fourth, and 1.205 for the fifth, accounting for 30.89%, 10.13%, 6.28%, 5.14%, and 5.02%, respectively, of the variance. Supplementary Fig. 7 shows the scree plot. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.85 (good). Bartlett's test of sphericity was significant (p < 0.001). We included the distribution of answers for each question of the scale (Table 4). No bias was found in this sample in terms of the responses to the RAS-24.
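With the eigenvalues' decimal points restored (7.413, 2.431, 1.506, 1.234, 1.205), the reported variance percentages and the eigenvalue-greater-than-1 retention rule (the Kaiser criterion) can be checked by hand: for PCA on a correlation matrix, the eigenvalues sum to the number of variables, which here is consistent with the 24-item version of the scale.

```python
eigenvalues = [7.413, 2.431, 1.506, 1.234, 1.205]
n_items = 24  # trace of a 24-variable correlation matrix

pct = [100 * ev / n_items for ev in eigenvalues]
for ev, p in zip(eigenvalues, pct):
    print(f"eigenvalue {ev}: {p:.2f}% of variance")  # about 30.89, 10.13, 6.28, 5.14, 5.02
print(f"cumulative: {sum(pct):.1f}%")                # about 57.5%, as reported

# Kaiser criterion: retain components with eigenvalue > 1
retained = [ev for ev in eigenvalues if ev > 1]
```

The five eigenvalues above 1 and the roughly 57.5% cumulative variance both match the values reported in the text, which supports the restored decimal placement.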
Reliability
The original scale of 40 items had high internal consistency (Cronbach's alpha = 0.812), making it a very good index (according to Kline, 2005) [23]. However, after the factor analysis, the scale was reduced to 25 items to compact it and make it more flexible for future use. We chose the items that correlated between 0.509 and 0.718, avoiding values that were too high, because extreme multicollinearity and singularity would make it difficult to determine the unique contribution of the variables to a factor. The Cronbach's alphas of the deleted items fluctuated from 0.888 to 0.900. After removing one more item, the Cronbach's alpha of the final version of the scale was 0.900. The final scale thus consisted of 24 items. The principal component analysis for the 24 items that remained revealed five components related to adherence and nonadherence behaviors.
4 Ralat and Rodríguez-Gómez
Adherence/Nonadherence Behaviors
We analyzed the data using a one-way χ² test. In contrast to other studies, in which frequencies of 20% to 60% were observed [4][5][6], the frequencies observed in this sample showed that 10% of our sample population had positive adherence behaviors and 90% had nonadherence behaviors, χ²(1, N = 160) = 102.4, p < 0.001. Ten percent of men and 10% of women adhered to their medication regimens. Persons with BD had 12% adherence and 88% nonadherence. There was no statistically significant difference between males and females in the scale scores. There were no differences in the scale scores by diagnosis. However, we found significant differences between the scale scores and several patient variables, as follows.
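The reported one-way χ² for the 10%/90% adherence split can be reproduced by hand from the goodness-of-fit formula; a minimal sketch (a real analysis would typically use a statistics package such as SPSS, as in the study, or scipy.stats.chisquare):

```python
def chi2_goodness_of_fit(observed, expected):
    """Pearson's chi-square statistic for a goodness-of-fit test."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

n = 160
observed = [16, 144]       # 10% adherent, 90% nonadherent
expected = [n / 2, n / 2]  # null hypothesis: a 50/50 split

chi2 = chi2_goodness_of_fit(observed, expected)
print(chi2)  # 102.4, matching the reported chi-square(1, N = 160)
```

Each of the two cells contributes (64)²/80 = 51.2, so the statistic works out to exactly the 102.4 reported in the text.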
We used a one-way, between-groups ANOVA to analyze the variable of education in terms of adherence and nonadherence behaviors. Least-significant-difference post hoc comparisons examined differences between groups (p < 0.05). Participants were classified into four groups: individuals having 0 to 4 years of education, those having 5 to 8 years, those having 9 to 12 years, and those having an undergraduate or higher level of education. Data are presented as the mean ± standard deviation. Participants with an undergraduate or higher degree were found to be more adherent to medication (1.82 ± 0.384 [95% CI, 1.75-1.89]; p = 0.016) in comparison to the group of participants with 5-8 years of education (2.00 ± 0.000 [95% CI, 1.84-2.16]; p = 0.044) and that of the participants with 9-12 years of education (1.95 ± 0.229 [95% CI, 1.87-2.01]; p = 0.016). The differences between these groups were statistically significant, F(3, 156) = 2.80; p = 0.042; η² = 0.051. Having had more years of education contributes to positive adherence behaviors. Supplementary Fig. 8 shows the distribution of the different levels of education with their corresponding adherence/nonadherence behaviors. Using a question that explored the difficulties of taking medication, we examined how the scores of our adherence scale matched up (or did not match up) with the perceptions of the participants regarding their own adherence or lack thereof. Before working with the adherence scale, each participant was asked whether he or she had any difficulties taking his or her medication. This question was part of the sociodemographic questionnaire. Our intention in using this question was to assess each participant's scores regarding adherence in light of his or her own perceptions of the difficulties in adhering to a medication regimen. We analyzed our data using a 2 (problem taking medication) × 2 (adherence/nonadherence behaviors ascertained by the scale) χ² test.
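The education analysis pairs a one-way between-groups ANOVA with η² as the effect size; both follow from the between- and within-group sums of squares. A sketch with made-up scores (1 = adherent, 2 = nonadherent; the group sizes and values are illustrative only, not the study's data):

```python
def one_way_anova(groups):
    """Return (F, eta_squared) for a one-way between-groups ANOVA."""
    scores = [x for g in groups for x in g]
    grand_mean = sum(scores) / len(scores)

    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between = len(groups) - 1
    df_within = len(scores) - len(groups)

    f_stat = (ss_between / df_between) / (ss_within / df_within)
    eta_sq = ss_between / (ss_between + ss_within)
    return f_stat, eta_sq

# Four hypothetical education groups, lowest to highest:
groups = [[1, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 1], [1, 1, 2, 1]]
f_stat, eta_sq = one_way_anova(groups)
```

η² = SS_between / SS_total expresses the proportion of variance in the adherence scores attributable to education group membership, which is how the reported η² = 0.051 should be read.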
Though only 21% of the participants reported having difficulties taking their medication, the scale scores indicated that 90% of them had nonadherence behaviors (χ²(1, N = 160) = 4.80; p < 0.05; ϕ = 0.173).
Finally, there was a statistically significant relationship between cognitive impairment and nonadherence behaviors. We used a 2 (adherence/nonadherence behaviors) × 2 (normal/impaired cognitive functioning) χ² test. Fifty-one percent of nonadherent patients had cognitive impairment (χ²(1, N = 160) = 3.83; p < 0.05; ϕ = −0.155). A more detailed analysis of cognitive impairment by diagnosis with this sample was published by the main author [24].
Discussion and Conclusions
We developed and validated a culturally sensitive adherence-to-treatment scale. Compared with other studies [4][5][6], our sample showed a higher frequency of nonadherence behaviors. Unlike in other studies, education was one of the variables with a significant effect on adherence and nonadherence behaviors: adherence rates were higher in those participants with higher levels of education. Cognitive impairment is another variable that could influence nonadherence behaviors. Using the MMSE-2, we detected that 51% of the patients with nonadherence behaviors had cognitive impairment.
This study supports the notion that adherence to medication regimens can be estimated in SMI patients based on the education level of the individual as well as other variables. Unlike other studies, in which participants with high levels of education were found to be more adherent (though not statistically significantly so), our study found a high level of education to be a statistically significant variable associated with adherence and nonadherence behaviors [25,26].
Prior to their being interviewed, we asked the participants what difficulties, if any, they had in managing their own medication regimens. The goal was to later assess the differences between perceived and actual (as determined by our scale) adherence; the answers to this question revealed that nonadherence behaviors were more frequently practiced than the participants thought them to be. In the literature, several studies have indicated that self-reports (compared to other assessment methods) tend to overestimate adherence behavior [27,28]. Social desirability is one of the reasons that patients tend to overestimate their effectiveness in managing their own medication regimens. To help remedy this issue, social desirability must be addressed and a validated measure of adherence (one that focuses on the population of interest) used.
The psychometric properties of this particular scale were considered to be very good. To the best of our knowledge, this is the first scale in Puerto Rico to measure adherence and nonadherence to medication regimens in a population of patients with an SMI. Effectively identifying nonadherence behaviors is the first step in developing and, subsequently, promoting, psychosocial interventions that can enhance treatment adherence; such interventions would be an adjunct to pharmacotherapy [29]. Pharmacotherapy is the recommended first-line treatment for SMI patients, but medication adherence is frequently poor, causing relapses and worsening the psychiatric symptoms and general health of these patients [2,30]. Our scale identified adherence and nonadherence behaviors in a sample of SMI patients (Table 5). This scale will be known as the Ralat Adherence Scale (RAS-24) and will consist of 24 items aimed at assessing adherence and nonadherence in SMI patients. The RAS-24 will make it possible for healthcare professionals to explore adherence barriers in their patients.
Five principal components related to adherence and nonadherence behaviors were identified. The first component included four items related to adherence behaviors. We also identified the reasons for nonadherence, classifying these reasons as being patient-related, medication-related, stigma-related, or related to a lack of support from family members. These five components will help healthcare professionals to identify not only the nonadherence behaviors but also the reasons for their existence, thereby enabling these professionals to offer interventions that promote adherence behaviors. When it comes to medication adherence and nonadherence, tailored psychoeducation has been shown to be more effective than generalized education. The scale described in this article offers information about which barriers a given patient has and helps the health professional to create a tailored psychosocial intervention to better manage that patient's issues with medication; the purpose is to increase the efficaciousness of this kind of intervention by taking into account the individual's personal characteristics, needs, beliefs, and attitudes [31].
This study has several limitations. First, the participants were not randomly recruited and are not representative of all Puerto Rican SMI patients. Second, patients with other chronic mental illnesses (e.g., schizophrenia) were not included in the sample. Third, the reliability of the scale over time (using a test-retest strategy) was not evaluated. More studies with greater numbers of patients and in other populations with the same and other chronic illnesses are needed to test the RAS-24. In the future, measuring test-retest reliability and performing subsequent validation with confirmatory factor analysis are recommended. Despite the limitations mentioned, the results provide relevant information about the psychometric properties of the RAS-24.
In conclusion, the RAS-24 is a self-administered instrument with very good psychometric properties; it has already yielded important information about nonadherence-adherence behaviors. The scale can help health professionals and researchers to assess patient adherence or nonadherence to a medication regimen. Identifying nonadherence behaviors and their causes in patients with an SMI will aid in the provision of psychoeducational and psychosocial interventions to both these patients and, when applicable, their caretakers, thereby promoting improvements in the patients' psychiatric and comorbid conditions. The scale can be used for future research examining both preventive measures and potential treatments.

Data Availability Statement. The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments. Medical Sciences Campus, University of Puerto Rico. The authors would like to express their gratitude to the individuals who participated in this study. The research reported in this publication was supported by the National Institute on Minority Health and Health Disparities (award numbers R25MD007607 and U54MD007600) and the National Institute of General Medical Sciences (NIGMS) of the National Institutes of Health (award number U54GM133807). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Disclosures. The authors declare no conflict of interest.
Transparency Statement. We confirm that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned have been explained.
Quantitative study on Mannesmann effect in roll piercing of hollow shaft
The Mannesmann effect is studied using a hollow cylinder model, which is devised to analyze the hole expansion phenomenon in artificial roll piercing of a hollow cylindrical material by two barrel-type rolls without any mandrel or guiding tools. A rigid-thermoviscoplastic finite element method is employed with a special mesh generation scheme that can control the mesh density, especially on the small hole surface. No damage model is used to soften the material, and the hole expansion simulation is conducted without any additional assumptions about the material or process. Artificial roll piercing processes for a wide range of hole diameters, with the outer diameter fixed, are simulated with emphasis on hole expansion. It has been shown that the relative hole expansion ratio of the maximum hole diameter to the initial hole diameter increases as the initial hole diameter decreases, indicating that the hole expansion phenomenon is analogous to the Mannesmann effect occurring in actual roll piercing. It has also been shown that the hole expansion is related to the cavity formation occurring just after the material passes the mandrel nipple, which leads to the decrease in the pushing force exerted on the mandrel in an actual roll piercing process. © 2014 The Authors. Published by Elsevier Ltd. Selection and peer-review under responsibility of Nagoya University and Toyohashi University of Technology.
Introduction
In roll piercing, bars are pierced under the action of the tensile stress that develops at the center due to the so-called Mannesmann effect. Elongating and reduction rolling ordinarily follow to make the pipes thinner and longer, with their diameter reduced as required. The effect is a kind of useful defect phenomenon occurring in the workpiece under a tensile plastic stress state: it creates the cavity for the hole and refines the microstructure uniformly, leading to a well-known enhancement of quality in the final seamless product.
In manufacturing seamless pipes, therefore, the roll piercing process, which exerts a tremendous effect on the subsequent procedures, is of significant industrial interest. Widely known as Mannesmann piercing, the barrel-type roll piercing process, which uses barrel-shaped rolls, is the most representative (Mori et al., 1998). Much research has been directed toward the Mannesmann effect, especially at the central region of the workpiece under tension, and it remains an active research issue. Urbanski and Kazanecki (1994) analyzed the strain distribution in the piercing process by the 2-D finite element method, and Capoferri et al. (2002) studied the Mannesmann effect using the theory of maximum principal stress with 2-D finite element models as well, while Ceretti et al. (2004) extended the analysis to 3-D cases. On the other hand, many researchers have tried to capture the softening phenomenon in the central cavity area using damage models. Mori et al. (1998) performed a simplified 3-D simulation of roll piercing via a generalized plane deformation model, together with a damage model based on the mean stress and effective stress schemes. Using the 3-D rigid-plastic finite element method, Komori (2005) simulated the roll piercing process in its steady state; his analysis did not address the Mannesmann effect itself but instead assumed cavity creation in front of the mandrel. Meanwhile, Pater et al. (2006) applied a thermomechanical finite element method to the analysis of the non-steady state of the roll piercing process, again without consideration of the Mannesmann effect, while Chiluveru (2007) used a damage model for porous materials to study the Mannesmann effect. Ghiotti et al. (2009) modified the Lemaitre damage model (Lemaitre et al., 2005) and performed a numerical and experimental analysis of the Mannesmann effect.
To reflect the fact that the central portion of the material structure is relatively vulnerable to fracture, they endowed the workpiece with an artificial initial damage value corresponding to an early-stage cavity. Their results reveal that the analysis is severely dependent on these initial values. Shim et al. (2012) most recently simulated the roll piercing process using an intelligent remeshing scheme, without further consideration of the Mannesmann effect.
As the research to date indicates, analyses of the Mannesmann effect based on damage models depend heavily on the assumptions made, and no universally accepted results exist. In the present paper, the Mannesmann effect is quantified by elaborately tracing the geometric change of the inner diameter of a hollow workpiece; the controversial damage model is replaced with a small initial hole along the axial direction of the workpiece. This artificial hole is intended to capture the macroscopic behavior of the Mannesmann effect. Fig. 1 shows a typical example of the roll piercing process, in which two barrel-type rolls, a mandrel, and two guiding shoes are engaged (for details, see Shim et al., 2011).
Simulation of Mannesmann effect
To introduce a central microscopic failure, Ghiotti et al. (2009), in one of the typical damage-model studies, assumed a high initial damage value near the central line to predict the Mannesmann effect. An analysis based on Ghiotti's scheme, performed by the authors using a general-purpose thermo-viscoplastic finite element package, is presented in Fig. 2, where a Mannesmann hole is generated along the axial line of the workpiece. As the figure shows, an initial damage assumed a priori is the key ingredient of Ghiotti's method. It is, however, not easy to separate the purely mechanical part of such an analysis, in which metallurgical and mechanical features are mixed together. To circumvent this difficulty, the present analysis model is instead given an initial hollowness, so that the workpiece behaves like a thick tube. The current study then analyzes roll piercing with no mandrel engaged, tracing the changes of the inner radius in detail and thereby visualizing and quantifying the Mannesmann effect. The geometric and process conditions are the same as those employed by Pater et al. (2006) and Cho et al. (2011).
An intelligent tetrahedral element system (Lee et al., 2007) is employed for the analysis, with the total number of elements in the range of 50,000 to 80,000. The internal region of the hollow workpiece is densely discretized to better resolve the plastic deformation around the hole. Fig. 4 shows the change of the inner radius along the center line of the workpiece for r = 2.0 mm. The far right-hand side of the hollow workpiece, called the trough, deforms conspicuously in a concave manner, which stems from the edge effect. Looking closely at the change of the inner radius, one finds that it begins to change slowly from the entry cross section A0 to the section A1 where the roll gap is minimum. It then undergoes a sharp and drastic change, reaching its maximum value rmax at the section Amax, before proceeding to the trough area. Notably, the radius profile along the center line soon reaches a steady state, and the final inner radius at the exit is slightly smaller than rmax. Comparing this result with Fig. 2, one readily recognizes nearly the same deformation pattern and trend. Fig. 5 shows predictions of the inner diameter expansion for initial inner radii r from 2.0 mm to 10.0 mm. In every case the inner radius increases. The expanded region lies near the exit side of the process when the initial radius is small, whereas the cases with larger radii show a flatter profile along the axial direction.
Regarding the level of expansion of the hollow workpiece due to the Mannesmann effect for each initial inner radius, the relative expansion ratio of the maximum radius rmax to the initial value r0 is investigated. Note that the maximum radius was measured just after the workpiece reached the steady-state deformation region. The relative expansion ratio is plotted in Fig. 6, which vividly reveals that the ratio is remarkable when the initial inner radius is small, while it rapidly becomes negligible as the radius grows much larger.
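The relative expansion ratio just discussed can be computed directly. The sketch below is illustrative only: the (r0, rmax) pairs are placeholder values chosen to follow the reported trend (large ratio for small initial radii, ratio approaching 1 for large radii), not the paper's actual data.

```python
def relative_expansion_ratio(r0_mm: float, rmax_mm: float) -> float:
    """Ratio of the maximum inner radius (measured just after steady state)
    to the initial inner radius."""
    if r0_mm <= 0:
        raise ValueError("initial radius must be positive")
    return rmax_mm / r0_mm

# Hypothetical (r0, rmax) measurements following the reported trend.
samples = [(2.0, 3.6), (4.0, 5.2), (6.0, 6.9), (10.0, 10.4)]
for r0, rmax in samples:
    print(f"r0 = {r0:5.1f} mm -> rmax/r0 = {relative_expansion_ratio(r0, rmax):.2f}")
```

With such data, the monotonic decrease of the ratio with increasing r0 (the behavior shown in Fig. 6) can be checked numerically.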
In actual roll piercing the mandrel is essential for the final seamless pipe product, but it is purposely excluded from the present analysis. With a typical mandrel engaged, an intriguing result like that in Fig. 8 is obtained. The figure shows that only the tip of the mandrel nipple touches the material and that there is a distinct non-contact region at the end of the mandrel nipple, where coolant holes usually exist. Notably, the cavity created between the mandrel and the workpiece, arising from the chosen mandrel geometry and the characteristic expansion of the inner radius described above, decisively lessens the force exerted on the material-die contact surface. Both features, the mandrel geometry and the expansion of the hollow workpiece, together constitute the whole story of the Mannesmann effect.
Conclusions
For the purpose of describing and quantifying the Mannesmann effect, the current study predicts and analyzes, from a purely mechanical viewpoint, the inner radius expansion in the roll piercing process, replacing the damage model by an artificial hollow workpiece with the mandrel removed, here called the artificial hollow cylinder model. No other mechanical assumptions were made and no damage model was employed. It should be noted that predictions of the Mannesmann effect are strongly affected by the specific damage model used. The finite element predictions for various hole sizes, with the outer workpiece diameter fixed, revealed that the artificial hollow cylinder model can provide meaningful information about Mannesmann roll piercing more easily and clearly than the existing damage models. The results also show that the inner radius gradually starts to change from the entry cross section to the section of minimal roll gap, then expands sharply near the exit region before settling into steady-state deformation.
It is also noteworthy that the expansion ratio grows markedly as the initial inner radius decreases, which is expected to provide a clue to understanding the Mannesmann effect. It has further been shown that the hole expansion is related to the cavity that forms just after the material passes the mandrel nipple, which decreases the pushing force exerted on the mandrel in an actual roll piercing process.
Perceived Risk of Sexual Harassment and Protective Behaviours among Zambian Soldiers in Selected Military Camps
The aim of the thesis was to explore the link between perceived risk of being sexually harassed and the protective behaviours adopted by soldiers in selected military camps. In order to understand the lived experiences and interpretations related to the topic, the research used a qualitative inquiry grounded in a hermeneutic phenomenology design. Purposive sampling, based on the selection criteria of the Sexual Harassment Experience Questionnaire (SHEQ), produced the desired sample size of 95 (74 victims and 21 non-victims). The self-report response strategy was used at the individual participant level to ensure protection through anonymity and confidentiality. Data were collected using a semi-structured interview guide. The findings were categorized in line with the objectives of the study, which were to: (a) explore perceived risk perceptions of being sexually harassed; (b) determine the risk factors associated with sexual harassment; (c) assess the protective behaviours against the risk factors of sexual harassment; and (d) examine how perceived risk of being sexually harassed motivates protective behaviours. The findings on the perceived risk of being sexually harassed revealed a high magnitude of both unwelcome verbal and non-verbal actions without any punishment. Only ranks between private and corporal continued to be affected; ranks above sergeant had experienced these actions only while they held ranks between private and corporal. This situation gave rise to a high perceived risk of being sexually harassed among the currently affected category of victim and non-victim soldiers. The risk factors associated with sexual harassment were measured as (i) perpetrator characteristics, (ii) individual weaknesses and (iii) military characteristics, again from the perspective of both victim and non-victim soldiers. A male or female bully with discriminatory behaviour was among the recorded perpetrator characteristics.
Furthermore, non-reporting of perpetrators was recorded as an individual weakness of victims, which was not the case for non-victims. The lack of written sexual harassment mitigation measures was recorded as a military characteristic that also constituted a risk factor for the prevalence of sexual harassment. The findings on protective behaviours against the risk factors of sexual harassment deviated from existing theory: it was established that victims never adopted protective behaviours for fear of revenge, whereas non-victims protected themselves because they feared the health consequences. In both cases the new explanation of the phenomenon was motivation in the face of a high perceived risk, or awareness, of the risk factors of sexual harassment. This was taken from an initial measurement of risk at three levels (low, medium and high), calculated using a 3×3 risk analysis matrix in which risk had two components, possibility and severity. This led to the theory of Matakala (2021) and others. The findings from non-victims on how the high perceived risk of being sexually harassed motivated protective behaviours revealed three protective initiatives: escape and evasion, the stick principle, and avoidance of lone movement. Escape and evasion was based on the principle of seeing the known perpetrator first so as to avoid them easily. Avoidance of lone movement meant not coming into the presence of the known perpetrator while alone. Lastly, the stick principle ensured that movement towards the known perpetrator was done in a group of four, making harassment difficult. The research originality, a major contribution to the knowledge base, concluded the study with a scholarly demonstration of the non-victim protective initiatives against sexual harassment, which were later validated by selected victim soldiers who initially lacked the said initiatives but were aware of the risk factors.
While the findings for soldiers could also be extrapolated to officers, it is important that future studies consider that category.
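The 3×3 risk analysis matrix mentioned above, combining possibility and severity into low, medium and high risk levels, can be sketched as follows. Note that the thesis names the two components and the three output levels but does not reproduce the exact matrix, so the score-to-level thresholds below are an assumed convention for illustration.

```python
def risk_level(possibility: int, severity: int) -> str:
    """Map possibility and severity scores (each 1..3) to a risk level.
    The product of the two scores is bucketed into three levels; the
    bucket boundaries are an assumed convention, not the thesis's own."""
    if not (1 <= possibility <= 3 and 1 <= severity <= 3):
        raise ValueError("scores must be in the range 1..3")
    score = possibility * severity  # possible values: 1, 2, 3, 4, 6, 9
    if score <= 2:
        return "low"
    if score <= 4:
        return "medium"
    return "high"

# Both components maximal gives the highest risk level.
print(risk_level(3, 3))  # "high"
```

Such a mapping makes the study's measurement step reproducible: any participant's (possibility, severity) judgement yields one of the three levels used to classify perceived risk.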
Context of the Topic (Sexual Harassment and Zambian Soldiers)
Sexual harassment is a growing global gender concern with a gendered dimension that affects both males and females. It includes unwelcome physical, verbal and non-verbal actions and affects a number of organisations, including militaries in Africa (UNSC, 2008). According to UNSC (2008), 50% of female and 30% of male soldiers are sexually harassed in Southern Africa. Conformity, obedience, and the hierarchical gender power relations between the lower and higher ranks are major factors that have contributed to sexual harassment in military organizations (UNSC, 2008). The working morale and execution of military duties by the lower ranks is lowered by harassment from the higher ranks. Sexual harassment is also a public health issue linked to severe long-term health problems that can put one at risk of high blood pressure, anxiety, depression and insomnia (WHO, 2014), because it involves unwelcome physical, verbal and non-verbal actions on a victim.
Zambia is one of the Southern African countries that have tried to deal with the problem of sexual harassment, including within military organizations. This has been done by enacting section 137(A) through Statutory Instrument No. 15 of 2005. The statutory instrument sets out the types of sexual harassment and the punishment for perpetrators: unwelcome verbal, non-verbal and physical sexual harassment, with sentences ranging from 3 to 15 years (GRZ, 2005). The State has thereby criminalized sexual harassment in Zambia, including in the military. An extract from section 137, subsections 1, 2 and 3 of the penal code, with the relevant provisions on workplace sexual harassment, is given below:
137A
(1) Any person who practices sexual harassment in a work place, institution of learning or elsewhere on a child or an adult commits a felony and is liable, upon conviction, to imprisonment for a term of not less than three (3) years and not exceeding fifteen (15) years.
(2) A child who commits offence under subsection (1) is liable to community service or counseling as the court may determine in the best interests of the child.
(3) In this section, sexual harassment means (a) a seductive sexual advance being unsolicited sexual comment, physical contact or other gesture of sexual nature which one finds objectionable or offensive or which causes discomfort in one's studies or job and interferes with academic performance or a conducive working or study environment; (b) sexual bribery in the form of soliciting or attempting to solicit sexual activity by promise of reward; (c) sexual threat or coercion which includes procuring or attempting to procure sexual activity by threat of violence or victimization; or (d) sexual imposition using forceful behavior or assault in an attempt to gain physical sexual contact.
The implementation of Statutory Instrument No. 15 of 2005, section 137 of the penal code, chapter 87 of the laws of Zambia, was one of the legislative measures taken by the Zambian government in response to the admonition of the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW). All signatories, of which Zambia is one, agreed to enact legislation against the discrimination of women in relation to the social and cultural issues that give rise to sexual harassment and gender-based violence (GRZ and GIDD, 2007).
The Zambian government, through the Ministry of Gender, has also developed the National Gender Policy, which admonishes the headquarters of government ministries, such as the Ministry of Defence, to ensure that vulnerable groups such as women and girls are protected from sexual harassment (GRZ, 2014). Section 6.1.3.5 in particular admonishes the Ministry of Defence and its three security wings, the Zambia Air Force (ZAF), the Zambia National Service (ZNS) and the Zambia Army, to ensure that vulnerable and marginalised groups such as women and girls within and outside the security wings are protected from sexual harassment in the most effective way. Yet soldiers are still exposed to sexual harassment. This raises questions about the current perceived risk of being sexually harassed, about awareness of the risk factors of sexual harassment, and about whether or not soldiers who perceive the risk of sexual harassment are aware of and/or adopt protective behaviours. The answers to these questions lie in the exploration of lived experiences from the participant perspectives of the present study.
Although past studies have been conducted on sexual harassment, only a few have used the concept of perceived risk. The literature shows that perceived risk, measured against three awareness levels (no, low and high), is an important construct because it helps determine whether or not victims of a health or social threat (here, being sexually harassed) protect themselves, so that mitigation measures can be proposed (Ferrer et al., 2016; Fontalvo et al., 2019). It is therefore important to explore and understand the link between perceived risk of being sexually harassed and the protective behaviours adopted by soldiers if mitigation measures are to be proposed for the military camps, as this is an under-explored area.
Purpose of the Study
This phenomenological study therefore attempts to make a contribution to the knowledge base by showing the link between perceived risk of being sexually harassed and the protective behaviours adopted by soldiers.
Specific Objectives
(1)To explore perceived risk perceptions of being sexually harassed among Zambian soldiers in selected military camps.
(2) To determine the risk factors associated with sexual harassment among Zambian soldiers in selected military camps.
(3) To assess the protective behaviors against the risk factors of sexual harassment among Zambian soldiers in selected military camps.
(4) To examine how perceived risk of being sexually harassed motivates protective behaviors among Zambian soldiers in selected military camps.
Research Questions
(1) What is the situation regarding perceived risk perceptions of being sexually harassed among Zambian soldiers in selected military camps?
(2) What risk factors are associated with sexual harassment among Zambian soldiers in selected military camps?
(3) What protective behaviors do Zambian soldiers adopt against the risk factors of being sexually harassed in selected military camps?
(4) How do perceived risks of being sexually harassed motivate protective behaviors among Zambian soldiers in selected military camps?
Significance of the Study
The study is important because it gives an insight into the different protective behaviors that Zambian Soldiers who perceive risk of sexual harassment in selected military camps utilize. It also shows the awareness levels of risk factors associated with sexual harassment. It is also hoped that the study will act as a stepping stone for further research.
Ethical Consideration
Ethical clearance was obtained from the ethics committee of the University of Zambia. Having conducted a similar study in 2015, the researcher once again sought permission from the military authorities in order to gain access; permission was also sought from the actual participants. The Sexual Harassment Experience Questionnaire (SHEQ), used for the identification and accurate description of participants and requiring affirmative responses regarding the definitions of study concepts such as perceived risk and sexual harassment, was administered at the individual level using the self-report response strategy, so participants were assured of confidentiality. Because the actual interviews were also conducted at the individual level with the self-report response strategy, soldiers were further assured of anonymity, confidentiality and being untraceable before and after the study. Trust in the researcher was strengthened by informing participants that no actual names would be recorded, only pseudonyms. All efforts were taken to ensure that the rights of participants were protected and respected in line with research ethics. Participants were assured that they were free to ask for clarification at any point and to inform the researcher if they felt uncomfortable about any procedure. Against this background, the few female participants who were not comfortable revealing certain information to a researcher of a different sex were provided with a trained female research assistant, based on their verbal consent. Participants were also told that they were free to withdraw from the study at any time and did not have to respond to any question.
If a participant did not want to respond to a question, he or she was free to say so or to skip it and move to the next question. Interviews were held at a convenient place and time of the participants' choice, which made it easy for the soldiers to open up. This in turn led to an in-depth understanding of the phenomenon because trust was enhanced. Soldiers were further told that participation was voluntary and that they had the right to pull out whenever they felt uncomfortable. Verbal or written consent was then obtained.
Theoretical Frameworks
This study was guided by three theories, identified through a narrative literature review, as shown hereunder:
Tripartite Perceived Risk Model
It gives an insight into the perceived risk of a health threat and people's subjective judgements about whether or not they should develop protective behaviours, based on knowledge of possibility and severity.

Key for this study is that people with a low perceived risk are not likely to adopt protective behaviours, while those with a high perceived risk are likely to do so (Ferrer et al., 2016).
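The model's key prediction, as read directly from the text, can be encoded as a tiny mapping. The boolean encoding below is an illustrative simplification: the text states only that a low perceived risk makes protective behaviour unlikely and a high perceived risk makes it likely.

```python
def likely_to_protect(perceived_risk: str) -> bool:
    """Tripartite-model prediction as stated in the text:
    'low' perceived risk -> protective behaviour unlikely (False),
    'high' perceived risk -> protective behaviour likely (True)."""
    mapping = {"low": False, "high": True}
    if perceived_risk not in mapping:
        raise ValueError(f"unknown perceived-risk level: {perceived_risk!r}")
    return mapping[perceived_risk]

print(likely_to_protect("high"))  # True
```

This makes explicit the hypothesis the study tests against soldiers' lived experiences.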
The perceived risk model is relevant to sexual harassment, which falls within the social and health domain for which the model was developed. Sexual harassment is a public health issue linked to severe long-term health problems, including high blood pressure, anxiety, depression and insomnia, that may arise from unwelcome physical, verbal and non-verbal actions (WHO, 2014).
The justification for using this validated descriptive model lies in its ability to describe and classify the concept of perceived risk in terms of individuals' subjective protective behavioural responses to a health threat. Descriptive models and theories are important in areas where little or nothing is known, such as the understanding of victim and non-victim protective behaviours in relation to the perceived risk of being sexually harassed among Zambian soldiers (Carey, 2012).
In line with research principles, the present research can replicate the model by utilising the questions it used to draw its conclusions, with a view to confirming, rejecting or reforming the model (Moody, 1990).
The adapted risk model is further contextualized with gender power relations theory, so as to understand how conformity, obedience, and hierarchical power relations between the lower and higher ranks may contribute to sexual harassment, and how soldiers protect themselves when they perceive the risk.

It is also contextualized with token theory, so as to understand whether the minority status of females can lead to sexual harassment, and how they protect themselves when they perceive the risk.
Theory of gender power relations
Those who hold power and authority, from either sex, are likely to sexually harass their juniors owing to the differential hierarchical gender power relations existing within the rank structure. The observations are based on large military camps headed by field-rank officers, as these are potential sexual harassment areas (Foucault, 1975).
Theory of tokenism
The theory of tokenism refers to the discrimination and marginalization of members of a group in a minority position. It proposes that members of any social group, whether in political arenas, military organizations, schools or other settings, who constitute less than 15% of the whole are discriminated against. It further states that if these minority groups are not represented in the governance of the organization, their complaints will die a natural death. The theory of tokenism was developed using evidence from women's experiences of sexual harassment and marginality in male occupations (Kamir, 1998).
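The 15% threshold cited above lends itself to a simple check. The function below is an illustrative sketch: the threshold value comes from the text, but the function itself and its framing are assumptions for demonstration, not part of the theory's formal statement.

```python
def is_token_minority(group_count: int, total: int, threshold: float = 0.15) -> bool:
    """True if the group makes up less than `threshold` (default 15%,
    the proportion cited in the text) of the whole organization."""
    if total <= 0 or group_count < 0 or group_count > total:
        raise ValueError("invalid counts")
    return group_count / total < threshold

# e.g. 12 female soldiers in a unit of 100 fall under the token threshold.
print(is_token_minority(12, 100))  # True (12% < 15%)
```

Applied to the present study, such a check would indicate whether female soldiers in a given camp occupy a token position in Kanter's sense.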
In this study the theory of tokenism guides the inquiry, because its main thrust, that minority groups are more likely to be marginalized by the majority, is directly relevant: the study also looks at female soldiers, who are in the minority. A further justification is that the theory has been applied successfully in a study conducted by Zandonda (2010) in Ndola under similar conditions, and in other studies (Matakala, …). The present inquiry therefore uses the theory on the principle of replication, which holds that an inquiry can use a theory applied by another study under similar conditions (Bless and Achola, 1988). The findings of this study can then be compared with the theory in terms of whether or not they agree with it.
Conceptual Framework of the study
A conceptual framework in qualitative research synthesises concepts and constructs from the existing literature in relation to the research problem of the thesis, in order to arrive at research objectives and questions that best explain the natural progression of the phenomenon being studied, here sexual harassment in relation to the concept of perceived risk.
This framework is therefore based on the researcher's synthesis of the existing literature, drawing on the perceived risk model and other sexual harassment studies. Taken together, these act as a conceptual road map for the present inquiry.
The protective behaviours constitute the dependent variable or construct. The study further assumes that measuring low risk, medium risk and high risk attributes of soldiers' perceived risk perceptions, as a moderating independent variable or construct, will help strengthen the initial relationship between the independent and dependent variables above. Lastly, the study assumes that practically validated protective behaviours for soldiers may be the resultant research output.
Measurements or Observations of the major constructs in the relationships of the Study Conceptual Framework.
The constructs of the conceptual framework are based on validated, reliable theoretical evidence: the TriRisk model for perceived risk (Ferrer et al., 2016) and the tripartite model for sexual harassment (Fitzgerald et al., 1995). Sexual harassment measurements or observations in the present study are based on three validated categories of unwelcome actions on the harassed, namely physical, verbal and non-verbal, all of which were validated through confirmatory analysis taking into account gender harassment, sexual coercion and unwanted sexual attention. These form part of the thematic questions as observations in relation to the other concepts of perceived risk and risk factors. The source for replication of the measurements is the validated work on the tripartite model for sexual harassment (Fitzgerald et al., 1995).
The researcher identified the constructs that form the relationships explaining the natural progression of this study, based on the identified research problem and research questions, in line with qualitative conceptual framework principles (Miles and Huberman, 1999; Guba and Lincoln, 1994). Furthermore, this explanatory conceptual framework, which represents the researcher's perception, has been presented in narrative form, in accordance with research principles (Holloway and Wheeler, 1996).
Research Paradigm
This study uses phenomenology and ethnomethodology as paradigms to locate the study. Research questions one and two are studied using phenomenology.
Phenomenology Paradigm.
Phenomenology is interested in the meanings of the things that people do; lived experiences are simply the everyday encounters that people go through. Two variants are commonly described: the Husserlian, ascribed to Edmund Husserl, and the Heideggerian, ascribed to Martin Heidegger. Phenomenology was chosen for this study because the research seeks to appreciate how the concept of perceived risk of sexual harassment among Zambian soldiers is associated with protective behaviours in selected military camps, with respect to two sub-population groups, victims and non-victims, in relation to their experiences. This fits well with the assumptions of phenomenology according to Moustakas (1994). Phenomenology also helps in describing phenomena from the participants' lay verbal accounts:

(1) How is the situation of perceived risk of being sexually harassed among Zambian soldiers in selected military camps?
(2) What risk factors are associated with sexual harassment among Zambian soldiers in selected military camps?
Ethnomethodology, by contrast, is interested in what makes sense to people and the methods they use to protect themselves (Garfinkel, 1967); it was popularized by Harold Garfinkel. It is combined here with the tripartite model of risk perception, which distinguishes low and high risk perceptions, in order to understand how these are associated with protective behaviours against sexual harassment. Key for this work is that soldiers with a low perceived risk are less likely to take up protective behaviour against sexual harassment, while those with a high perceived risk are more likely to do so. This is the main focus of this study, based on the principles of the tripartite model of risk perception (Ferrer et al., 2016).
Hermeneutic Phenomenology Design
The hermeneutic phenomenology study design is justified because it was used to uncover the lived experiences regarding the link between perceived risk of being sexually harassed and the protective behaviours adopted by Zambian soldiers, and because it allowed the researcher to offer an interpretation of the study findings (Van Manen, 1990, in Carey, 2012). The design is qualitative in nature and uses semi-structured interview guides to gain knowledge. Perceived risk in this phenomenological study, based on lived experiences, was measured by examining the perspectives of participants for an explanation of whether they perceived no risk, low risk or high risk of sexual harassment. Lived experiences regarding awareness of the potential risk factors were also explored, as were experiences of whether those who perceived a risk of being sexually harassed were aware of and/or adopted protective behaviours. The self-report response strategy, based on semi-structured interviews, was used because it allowed both further probing and protection of the participants' lived experiences.
The lived experiences demanded physical interviews, because exploring the link between perceived risk of being sexually harassed and protective behaviours among soldiers required the insider's view.
The underpinning Ontology Philosophy supporting the Hermeneutic Phenomenology Study Design
The nominalist ontology philosophy provided a solid foundation for the hermeneutic phenomenology research design. Its philosophical implications are linked to the emic perspective, in which the social reality being pursued is seen as created in the minds of those researched. This supported the design because the information needed was not tangible; it was based on the latent thought processes of those researched (Burrell and Morgan, in Carey, 2012), which is consistent with lived experiences under the phenomenology design.
The philosophy of science in this research considered the emic perspective in order to support a research design centred on lived experiences.
The nominalist ontological question, showing the link between perceived risk of being sexually harassed and the protective behaviours adopted by soldiers, further required a philosophy that theorises the production of such knowledge. That philosophy of science is epistemology, which in this study was linked to the qualitative approach.
Importance of Qualitative Approach in relation to the Hermeneutic Phenomenology Study Design
The main thrust of the qualitative approach within the scientific research process lies in the production of knowledge from participants' subjective meanings about a particular phenomenon within their natural settings, using physical interviews. In this study the physical interviews were tied to the semi-structured interviews. Phenomenology falls under the umbrella of the qualitative approach.
Perceived risk of sexual harassment and protective behaviors can be studied qualitatively or quantitatively. The literature reviewed in chapter two showed how the perceived risk of sexual harassment was studied quantitatively by Fontalvo et al. (2019), and how the perceived risk of gender-based violence was studied qualitatively by Kaufman et al. (2019).
However, the perceived risks of being sexually harassed and the protective behaviors adopted by soldiers appear to be under-explored globally, Zambia included. An under-explored phenomenon can effectively be studied using the qualitative approach, which is also the main umbrella for phenomenology (Carey, 2012). The qualitative approach, which embraces exploratory motives, was therefore useful because the concepts of perceived risk of sexual harassment and the protective behaviors adopted by soldiers appear to lack prior knowledge. A further justification for the qualitative approach is that it allows follow-ups on verbal ideas and responses, as well as further investigation into motives and feelings about the phenomenon, all within the natural settings of the participants. The resultant observations or interviews lead to the identification of new emerging patterns and trends, whose summaries may help in the revision or development of a theory or proposed statements. These in turn helped to understand and explain what was discovered, which is important for an under-explored phenomenon.
The underpinning Epistemology Philosophy to the Qualitative Approach
The Interpretivist Epistemology Philosophy was used as a solid foundation for strengthening the qualitative research approach. This was because the Interpretivist Epistemology's philosophical implications supported the knowledge generation by suggesting that "there are no fixed truths", meaning the understandings and meanings participants held regarding the perceived risk of sexual harassment among soldiers and their adopted protective behaviors were different. Viewed from the perspective of the Interpretivist Epistemology Philosophy, this absence of fixed truths among participants is tied to the subjective experiences of participants under the qualitative approach (Burrell and Morgan, in Carey, 2012).
The philosophy of science in this research suggested that "there are no fixed truths" when considering the "emic perspective", in order to support the qualitative research approach, which also looked at participants' subjective meanings about the "lived experiences" of a particular phenomenon within their natural settings using physical interviews.
This is because the generation of such knowledge depends on the subjective, independent motives of how the participants felt about the phenomenon. The philosophy therefore supports the assertion that knowledge about the phenomenon can be studied in the natural settings of the participants, which in this case were the selected military areas. The philosophy further supported the qualitative approach by confirming that the exploratory motive and the probing of the problem can only be enhanced by assuring individual participants of protection, which was achieved through assurances of anonymity and confidentiality, enhanced by the use of pseudonyms.
Study Sites
The study was conducted in three selected military camps headed by field ranks, in Ndola, Lusaka and Chipata. Large camps headed by field ranks are potential sexual harassment areas, hence the choice of the areas where the camps are found (Foucault, 1975).
Target Population
The inclusion of victims and non-victims of sexual harassment was important for understanding the protective behaviours that soldiers use when they perceive risks of being sexually harassed. Non-victim perspectives are important as their lived experiences may be used to help both potential and actual victims at the individual level.
Sample Size
Ninety-six (96) participants, both victims and non-victims of sexual harassment, were initially meant to be selected for this research, with not more than seven participants in each of the seven non-commissioned rank categories that define a soldier (1. Private, 2. Lance Corporal, 3. Corporal, 4. Sergeant, 5. Staff Sergeant, 6. Warrant Officer Class Two and 7. Warrant Officer Class One). Although Patton (1990, in Holloway and Wheeler, 1994) argues that no sample size guidelines exist in qualitative research, not more than seven participants per rank category was sufficient, seven being the maximum number at which data saturation commences (Guba and Lincoln, 1994). After this point no new information came up, and an in-depth understanding of the phenomenon was achieved (Guba and Lincoln, 1994). In the present research data saturation was constant and was arrived at with three participants selected from each category, because the categories of ranks had homogeneous or similar characteristics (Carey, 2012). It was against that background that selection was stopped for each particular category of rank so that analysis of the different findings could take place (ibid, 2012). The principle of data saturation was first postulated by Barney Glaser and Anselm Strauss in 1967 when they introduced grounded theory. The duo argued that data saturation is a strategy used in qualitative research as a criterion for discontinuing data collection and/or analysis, which may be achieved between three and seven participants (Glaser and Strauss, 1967).
However, contemporary qualitative researchers such as Van Manen, a phenomenologist, argued that the principle had limitations because of the possibility of a backflow of data that this discontinuation might miss. He therefore suggested that it would be important to move forwards and backwards through the data to ensure that saturation had truly been achieved (Van Manen, 1990, in Carey, 2012).
This study therefore adopted both principles in order to ensure that data saturation had really been attained, going forwards and backwards through the data to confirm whether there was any backflow. The study confirmed that saturation was consistent at three participants.
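The stopping rule described above can be sketched as a small illustration. This is not the study's actual procedure, only a hedged model of it: interviewing within a homogeneous category stops once a new participant contributes no new codes, after a minimum of three participants (Glaser and Strauss, 1967); the code sets below are hypothetical.

```python
# Illustrative sketch (not the study's actual procedure) of the data
# saturation stopping rule: within one homogeneous rank category, stop
# once a new participant adds no new codes, after a minimum of three.
def saturation_point(interview_codes, minimum=3):
    """interview_codes: list of sets of codes, one set per participant,
    in interview order. Returns how many participants had been
    interviewed when saturation was judged to have been reached."""
    seen = set()
    for i, codes in enumerate(interview_codes, start=1):
        new = codes - seen
        seen |= codes
        if i >= minimum and not new:
            return i
    return len(interview_codes)  # saturation never confirmed

# Hypothetical category where the third participant adds nothing new,
# matching the study's observation of saturation at three participants:
category = [{"verbal", "touching"}, {"verbal", "gestures"}, {"gestures"}]
```

Going "forwards and backwards" (Van Manen) corresponds here to re-running the check as later interviews arrive, since a later participant with genuinely new codes would push the saturation point forward.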
Seventy-five (75) victims were to represent both the male and female soldiers. Thirty-five male victims were selected, with five participants representing each of the seven categories of ranks that define a soldier. Forty female victims were earmarked for selection, with six participants representing each category; however, the last category for the females had only four because few reach the last rank, bringing the total to 75 victims.
However, the findings from the victims showed that the senior categories of warrant officer class one and two and staff sergeant had suffered sexual harassment only while in the lower ranks; for males this also included the rank of sergeant. These ranks were therefore excluded when considering the non-victims.
Therefore, the junior categories of ranks that were included were private, lance corporal and corporal for both female and male non-victim soldiers. Additionally, the rank of sergeant was included for females, because this category continued to suffer sexual harassment in its present rank.
Following the precedent set by data saturation, three participants were earmarked for each of the categories of junior ranks mentioned, as shown hereunder: twenty-one (21) non-victims were then selected to add to the sample size, comprising 9 male non-victims and 12 female non-victims. Each category had not more than three participants, which was sufficient to attain data saturation, hence the justification.
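The non-victim sample arithmetic above can be tallied as a short sketch; the category lists simply restate the ranks named in the text, with three participants per category.

```python
# Sketch of the non-victim sample composition described above:
# three participants per junior-rank category (figures from the text).
male_categories = ["private", "lance corporal", "corporal"]
female_categories = ["private", "lance corporal", "corporal", "sergeant"]
per_category = 3  # the data-saturation precedent

male_nonvictims = per_category * len(male_categories)      # 3 x 3 = 9
female_nonvictims = per_category * len(female_categories)  # 3 x 4 = 12
total_nonvictims = male_nonvictims + female_nonvictims     # 21
```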
Sampling Procedures
All soldiers (victims and non-victims) were eligible. A Sexual Harassment Experience Questionnaire (SHEQ) was used to identify the participants based on their perceived risk situation, their understanding of sexual harassment and its forms, and their protective behavior situation. The instrument was supplemented by purposeful sampling. Both techniques were used because no sampling frame was available for either category of participants.
According to Fitzgerald et al. (1999), the Sexual Experiences Questionnaire (SEQ) was meant to provide measurements of sexual harassment and is also used to establish prevalence and frequency rates. The establishment of prevalence rates then leads to the identification of those people that have been harassed and those that have not, based on an understanding of what constitutes sexual harassment.
The Sexual Harassment Experience Questionnaire (SHEQ) therefore becomes a stepping stone to an in-depth understanding of how the problem occurs, based on the lived experiences of victim participants. This was to be achieved by interrogating subjective views using semi-structured interviews or focus group discussion interviews.
Based on these validated roles of the questionnaire, the present study's replication of it as the Sexual Harassment Experience Questionnaire (SHEQ) was justified.
The Sexual Harassment Experience Questionnaire (SHEQ) was therefore used to identify the participants based on their perceived risk situation, their understanding of sexual harassment and its forms, and their protective behavior situation. This was achieved through random departmental visitations of the selected military camps until the desired sample was attained. Understanding sexual harassment and its forms entailed that soldiers could perceive the risk of sexual harassment and either come up with protective behaviors or not, depending on their subjective judgments. Those that did not understand sexual harassment and its forms were excluded, as they were not able to perceive the risk. Purposeful sampling was used to select participants that met the criteria for this study based on the SHEQ. The participants selected from the camps came from the non-commissioned ranks that define a soldier, which are (1. Private, 2. Lance Corporal, 3. Corporal, 4. Sergeant, 5. Staff Sergeant, 6. Warrant Officer Class Two and 7. Warrant Officer Class One), in ascending order from the junior to the senior-most category of rank. Literature shows that this is the category that is affected (UNSC, 2008). Although the literature is silent on the commissioned ranks, it is possible that they can also be affected. However, ethically the researcher, being a soldier, cannot study commissioned officers because of differential power relations. The experiences of the present study can therefore also be extrapolated to the officers.
Data Collecting Procedure and Quality Assurance
In this study, information was collected from both secondary and primary sources of data.
Secondary data collection: In its quest to collect data related to the topic of this study, the researcher obtained information from research textbooks, dissertations, theses and journal articles. In order to identify the gaps leading to the choice of methodology and methods used in this research, a narrative review of literature was used; this is the traditionally acceptable type within qualitative research and the most relied upon, because it is a pragmatic and less structured approach (Hart, 2013).
The narrative review of literature not only helped in the discovery of new insights about the topic but also helped in designing the research methodology and methods, which is the third stage in a formal research process.
Primary data collection: The study used semi-structured interview guides to collect raw data in the field. Before they were used, however, they were subjected to a preliminary test in order to establish whether the questions would yield the required information.
Primary Data Collection and Quality Assurance
Pretesting was used as a preliminary test of baseline knowledge as well as preparedness for the research process, done to ensure quality data would be obtained (Creswell, 2012; Carey, 2012). Guba's four trustworthiness strategies, namely credibility, dependability, confirmability and transferability, were used to assure data quality as well as the study's academic acceptability (Guba and Lincoln, 1994).
This was achieved in four stages during the data collection procedure.
First stage of ensuring qualitative primary data collection is of good quality
The first stage in ensuring that data collection was of good quality began with the identification and accurate description of potential participants. In order to identify the potential participants in the absence of an available sampling frame, the study made use of a researcher/self-administered Sexual Harassment Experience Questionnaire (SHEQ). An affirmative response toward the concepts defined in the definition of concepts section, page (i), before the introduction, through the SHEQ (appendix 1, page 91), meant automatic purposeful selection. This was with regard to sexual harassment and perceived risk.
The potential participants were further classified into two groups based on their response to the question on protective behavioral status, also indicated in the definition of concepts section, page (i), before the introduction, through the SHEQ (appendix 1). They were drawn from the non-commissioned ranks that define a soldier, also indicated on the same pages. The accurate description of the participants was later facilitated by purposeful selection.
Based on qualitative principles, the researcher was assured of credible data through the use of a design suited to lived experiences, which according to those principles was Hermeneutic Phenomenology together with the semi-structured interview guide. The semi-structured interview guide was then subjected to a pretest in the second stage.
Second stage of ensuring qualitative primary data collection is of good quality
The semi-structured interview guide was subjected to a pretest so that it would have standardized questions that could easily be answered by the accurately defined participants.
During the preliminary test of the research instruments for the standardization of questions, however, evidence showed that the researcher needed a female research assistant. This was noted from a few female participants who said they could only participate in the study if they were interviewed by a fellow female. In order to ensure data quality and an in-depth understanding of the topic, a female researcher was identified and given research skills training.
This was to cater for the few female participants that required a female interviewer in order to open up. The research assistant therefore came from a military background, with at least a first degree, research experience, or better knowledge in a similar field. Further, to ensure data quality, an age of thirty-five was considered appropriate for research assistants as it sits between youth and adulthood; the ideal rank was staff sergeant, which fits the age category as well as length of service. Both old and young participants were able to express themselves freely in the few recorded cases. The duty of the female research assistant was to handle data that might be affected by the sex of the principal investigator, so that females could open up, thereby ensuring quality. Furthermore, to ensure data quality, the research assistant was guided on the prescribed interview process using the proposed research questions.
To further ensure data quality, participants were subjected to the self-report response strategy at the individual level. They were told that this would protect them by keeping them anonymous both before and after data collection, which involved only the researcher and the individual participant. They were also told they would be identified by a pseudonym. As a result, the participants were willing to open up.
This assured the researcher that he was going to collect dependable data during the actual data collection.
Third stage of ensuring qualitative primary data collection is of good quality
During the actual face-to-face interviews the researcher made sure that participants' data was secured by recording it in the way it was actually reported. This was achieved by not going into the discussion with preconceived ideas that would influence data quality, meaning the data could be confirmed by other researchers. The information that came from the participants was in accordance with their lived experiences, without interference from the researcher.
Therefore, the findings were authentic or valid to the extent that another researcher could obtain similar findings; hence confirmable data quality was assured (Guba and Lincoln, 1994).
Fourth stage of ensuring qualitative primary data collection is of good quality
Finally, having considered the three preliminary qualitative principles for ensuring data quality, the researcher assumed that it was now possible for the final accumulated data to be transferable.
This means that the findings can now be viewed as coming from a representative sample of the non-victims and victims of sexual harassment within the population of Zambian soldiers. It also means the work can be cited by other studies and contextualized against other academic studies, because it has satisfied the preliminary requirements above.
Data Collecting Instruments
Semi-structured interview guides were used to collect data. This is an important tool, and the most common form of interview in qualitative research, because it allows for pre-planned questions so that participants do not lose focus. It also allows the researcher to pose spontaneous questions or probe further into ideas, motives and feelings, which is useful for an under-explored phenomenon. Further, using the formal face-to-face interview guide it is easy to gain an in-depth understanding, because participants are assured of confidentiality, anonymity and being untraceable during and after the study, making it easier for them to open up (Carey, 2012).
Interview Process to the research questions for both Pretesting and Actual Data Collection
The researcher began each of the sessions with the individual victims and non-victims of sexual harassment, both male and female soldiers, with a greeting. He went on to introduce himself, telling the participants that he was a student at the University of Zambia and that he had chosen the topic under discussion, which had captured his interest, in fulfillment of the requirements for the award of the Doctor of Philosophy Degree in Gender Studies.

Before the interview session with each of the participants could commence, the researcher used the informed consent form to make the participants aware of the research and the procedures to be followed. He told the participants that the study was meant for academic purposes and that he wished to understand their individual experiences of sexual harassment from a gendered perspective.

He told the participants that he wished to uncover their lived experiences of how the perceived risk of sexual harassment is associated with protective behaviours, from the verbal accounts of both non-victim and victim soldiers in the selected camps of Ndola, Chipata and Lusaka in Zambia.

Participants were further assured of confidentiality and that they would remain anonymous and untraceable in the research process, being addressed by a number rather than by name or pseudonym. The participants were then told that they were free to pull out at any point if they felt uncomfortable, as this was their right before or during the session, which was voluntary.

They were then asked for verbal or written consent. Those that gave consent in the affirmative were then administered the semi-structured interview guide using formal face-to-face questions.

He began with female and male victims before engaging female and male non-victims, for all four questions.
Perceived Risk of Sexual Harassment and Protective Behaviours among Zambian Soldiers in Selected Military Camps
International Journal of Humanities Social Sciences and Education (IJHSSE) Page | 85
Data Analysis
The study used the Interpretive Phenomenology Analysis (IPA) alongside a Qualitative Risk Assessment or Analysis.
a) Interpretive Phenomenology Analysis (IPA)
The study used the Interpretive Phenomenology Analysis (IPA) because it is a methodological qualitative tool, influenced by phenomenology, whose purpose is to examine how people make sense of their major life or lived experiences. The Interpretive Phenomenology Analysis further draws its strength from hermeneutics, the study of interpretation (Smith et al., 2009).
The Interpretive Phenomenology Analysis is a powerful method for analyzing large amounts of phenomenological data collected from the experiences of participants through semi-structured interviews or focus group discussion interviews (Smith and Osborne, 2014). It also has the capacity to condense raw data into categories based on valid inference and interpretation.
Based on the principles that fitted the present study, the adoption of the Interpretive Phenomenology Analysis was justified, and its data presentation steps were replicated. Three steps needed to be utilized, since the primary data needed to be transcribed directly into verbal texts; these were adopted by the present research.
The first stage (Sorting Stage)
The first stage (Sorting Stage) began by arranging the interview protocols into the major themes by sorting them in the chronological order of the research questions. This led to the order shown hereunder: (1) How is the situation of perceived risk of being sexually harassed among Zambian soldiers in selected military camps?
(2) What risk factors are associated with sexual harassment among Zambian soldiers in selected military camps?
(3) What protective behaviors do Zambian soldiers adopt against the risk factors of being sexually harassed in selected military camps?
(4) How do perceived risks of being sexually harassed motivate protective behaviors among Zambian soldiers in selected military camps?
Having completed the sorting stage, the second stage involved reading the interview protocols for the purpose of editing.
The second stage (Editing Stage)
The second stage (Editing Stage) meant going through each of the interview protocols by reading, so as to edit the data. Having sorted the data from all the participants based on what had been spoken, the researcher proceeded to read through each transcript. The choice of content was justified by what the researcher desired to know, because the researcher needed the information that would be useful with regard to the objectives and research questions of the study. Having completed the editing stage, the last stage involved reading and transcribing the interview protocols.
The third stage (Coding Stage)
The third stage (Coding Stage) involved coding the important concepts and constructs in order to come up with understandable themes and sub-themes, by reading and transcribing the data into easily understood verbatims based on what the participants had said. This was achieved by exploring the relationship between patterns and trends through reading.
The researcher then transcribed the data from the desired form into verbatims, which were reported in the main thesis. This is the last stage in the research process.
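The three stages above (sorting, editing, coding) can be sketched as a minimal pipeline. The excerpts and the keyword-to-theme mapping below are entirely hypothetical, and real IPA coding is interpretive rather than keyword-driven; the sketch only mirrors the order of operations described in the text.

```python
# Minimal illustration of the three IPA stages described above.
# The excerpts and the keyword-to-theme mapping are hypothetical.
excerpts = [
    {"question": 2, "text": "  Lack of counseling is a risk factor.  "},
    {"question": 1, "text": "I perceive a high risk of verbal harassment."},
    {"question": 1, "text": "The risk I perceive is low."},
]
themes = {"high risk": "high perceived risk",
          "low": "low perceived risk",
          "counseling": "institutional risk factor"}

# Stage 1 (sorting): order protocols by research question.
sorted_excerpts = sorted(excerpts, key=lambda e: e["question"])

# Stage 2 (editing): tidy each transcript (here, just whitespace).
edited = [{**e, "text": e["text"].strip()} for e in sorted_excerpts]

# Stage 3 (coding): tag each verbatim with themes whose keywords it contains.
coded = [{**e, "themes": [t for k, t in themes.items()
                          if k in e["text"].lower()]} for e in edited]
```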
b) Qualitative Risk Assessment or Analysis
As already stated in the preamble, the study also used a Qualitative Risk Assessment or Analysis alongside the Interpretive Phenomenology Analysis.
First Step (Risk Identification)
A Qualitative Risk Assessment or Analysis was used to describe the overall process or method used to identify the threat or risk, which was sexual harassment, as well as the potential risk factors that have a possibility of occurring (Risk Identification).
Second Step (Risk Analysis)
The overall risk of sexual harassment for both non-victims and victims was calculated by multiplying the possibility by the severity, based on a balanced qualitative scorecard with generic predetermined ratings/levels: low (1), medium (2) and high (3).
Third Step (3 by 3 Risk Analysis Matrix for sexual harassment).
The risk matrix used two major components, (a) possibility and (b) severity, in assessing and analyzing the risk of sexual harassment. Both components used three standard colour codes, red (high), orange (medium) and green (low), in arriving at the overall qualitative risk analysis (Graves, 2000). The red colour was centered on the top right corner of the risk matrix and represented high severity and high possibility for the occurrence of sexual harassment among soldiers. For example, if the possibility on the left side of the boxes, increasing upward, is high (3) and the severity below the boxes, increasing to the right, is high (3), then the overall risk is (9). This means a high risk scale with a red colour, which is very likely to be given priority in terms of the adoption of protective behavior against the sexual harassment risk factors. Additionally, if the possibility is high (3) and the severity is medium (2), then the overall risk is (6). This also means a high risk scale with a red colour, likely to be given priority in terms of the adoption of protective behavior against the risk factors.
The green colour was centered on the bottom left corner of the risk matrix and represented low severity and low possibility for the occurrence of sexual harassment among soldiers.
The orange colour (medium) was centred between the red colour (high) and the green colour (low).
The principle in all cases is the same: Risk = Possibility × Severity, as indicated above.
Both the vertical (Y) axis (possibility) and the horizontal (X) axis (severity) begin with 1 (low), followed by 2 (medium) and 3 (high), increasing upward and to the right respectively.
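The scoring rule above can be sketched as a short illustration. The band thresholds below are assumptions consistent with the text's examples (products of 6 and 9 are red/high, the bottom-left corner of the matrix is green/low); the text does not state exact cut-offs for every cell.

```python
# Sketch of the 3x3 qualitative risk matrix described above.
# Band thresholds are assumptions consistent with the text's examples:
# products of 6 or 9 are red (high), 1 or 2 green (low), the rest orange.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(possibility, severity):
    """Risk = Possibility x Severity on the 1-3 scale."""
    return LEVELS[possibility] * LEVELS[severity]

def colour_band(score):
    if score >= 6:
        return "red"     # high: prioritise protective behaviours
    if score <= 2:
        return "green"   # low
    return "orange"      # medium
```

The text's two worked examples correspond to risk_score("high", "high") = 9 and risk_score("high", "medium") = 6, both falling in the red band.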
The underpinning Methodology Research Philosophy supporting the qualitative methods of collection and analysis of data
The Idiographic Methodology Philosophy was used in this research as a solid foundation for strengthening the qualitative methods, largely because its main thrust is to focus on understanding individual behavior from a small sample. The resultant emerging themes were established from the perceived risk of being sexually harassed and the protective behaviours adopted by soldiers. The small samples used in the practice of this philosophy allowed the easy analysis of the emerging themes.
The small sample aimed at understanding individual behavior, as advocated by the Idiographic Methodology Philosophy, supports the qualitative data saturation issues within the sample size, which are tied to the non-random selection methods under qualitative research. For the qualitative approach, the methods were further tied to physical interviews conducted through semi-structured interviews and the Interpretive Phenomenology Analysis, as shown under the methods (Burrell and Morgan, 1979, in Carey, 2012).
Research Questions
(1) How is the situation of perceived risk of being sexually harassed among Zambian soldiers in selected military camps?
(2) What risk factors are associated with sexual harassment among Zambian soldiers in selected military camps?
(3) What protective behaviors do Zambian soldiers adopt against the risk factors of being sexually harassed in selected military camps?
(4) How do perceived risks of being sexually harassed motivate protective behaviors among Zambian soldiers in selected military camps?
The participants were identified and accurately described through the researcher/self-administered Sexual Harassment Experience Questionnaire (SHEQ). An affirmative response toward the concepts defined in the definition of concepts section, page (i), before the introduction, through the SHEQ (appendix 1), meant automatic purposeful selection. Since the participants accepted the definition, the researcher was confident that they would further give a credible, in-depth understanding of the concepts of sexual harassment (physical, verbal and non-verbal unwelcome actions) and the perceived risk (low, medium or high) awareness situation in question one.
Further, since the participants were classified into two groups, non-victims and victims, based on their response to the question on protective behavioral status, an in-depth understanding was also yielded through questions three and four. The accurate description of the participants was later facilitated by purposeful selection; hence, credible data was assured through the semi-structured interview guide that made use of the prescribed interview process.
Interview Process to the Research Questions
The interview process followed for each session was the same as that prescribed under primary data collection above: greeting and introduction, informed consent, assurances of confidentiality, anonymity and the right to withdraw, and then the administration of the semi-structured interview guide to female and male victims, followed by female and male non-victims, for all four questions.
Research Question One
(1) How is the situation of perceived risk of being sexually harassed among Zambian soldiers in selected military camps?
Research Findings to the Research Question One (General summary of findings).
The study combined responses from both the victim and non-victim participants with regard to the situation of perceived risk of being sexually harassed among Zambian soldiers in selected military camps. The study established that most female victims of the ranks of private, lance corporal, corporal and sergeant suffered verbal and non-verbal sexual harassment, and that the perpetrators were male soldiers within the same range of ranks but senior either by date of promotion or by one step. It was also established that male victims of the ranks of private, lance corporal and corporal suffered verbal sexual harassment, the perpetrators being female soldiers within the same range of ranks but likewise senior by date of promotion or by one step. Female non-victim soldiers within the same category also affirmed the existence of verbal and non-verbal sexual harassment; while confirming that they themselves did not suffer it, the non-victims affirmed its existence, and male non-victim soldiers in this category likewise affirmed the verbal sexual harassment. There was sufficient combined individual evidence from both victims and non-victims, male and female, of a high perceived risk of, or awareness towards, being sexually harassed. It must also be noted that the remaining categories of ranks, for both male and female victims, reported physical as well as verbal and non-verbal sexual harassment from senior ranks during their interactions. These categories added, however, that the physical experiences occurred not in their present ranks but while in the lower ranks; their verbatims have since been recorded.
Soldiers of the ranks between sergeant and warrant officer class one, both male and female, said they also perceived a high risk of sexual harassment, but not in their present ranks. Their verbatims are similar to those of the rank categories above. This means this category does not currently suffer sexual harassment, and it is against this background that the category was excluded when dealing with the non-victim category.
Sub-Theme Number
Further, since data saturation was consistently reached at three participants owing to the homogeneous characteristics of the groups, each non-victim category of rank was represented by three participants. One participant said: "Sir" (meaning the researcher) "I am aware that some of my friends are harassed because of lack of any counselling by military authorities so that the problem is stopped….. because of this I also perceive a high risk, or have a high awareness, of being verbally sexually harassed….. however, I manage to protect myself………………………………………………"
Studies outside Africa (Similarities).
A study conducted by Gallagher (2008) in the United States Army also found that the type of sexual harassment experienced by female officer cadets under training was verbal and non-verbal. These findings are similar to those of the present inquiry in the selected military camps of Zambia.
The likely reason for the similarity between the present inquiry and the American study is that the characteristics of military harassers and victims may be similar across armies.
Furthermore, another study in the United States Army found that sexual harassment among the interactions of its service personnel was high and that the type of sexual harassment was verbal. In the American study the victims of sexual harassment were female military personnel.
The similarity between the findings of the American study and the Zambian study may be attributed to the fact that the characteristics of the male harassers might be similar in one way or another, owing to their upbringing and the way they are trained.
Furthermore, Panday (2008), in the Indian Army, established that a hostile work environment referred to unwelcome sexual advances, vulgar language, requests for sexual favours, and other verbal and non-verbal conduct which interferes with work performance. The reason for the similarity in findings is that the female victims in both the Indian and the Zambian studies may share characteristics that expose them to harassment. Furthermore, the absence of recorded prevention strategies in both settings might be ascribed to the uniform way in which military commanders manage military personnel.
Furthermore, Buchanan (2014) in the United States Army established that the most common type of sexual harassment was the gender harassment which involved unwelcome verbal and non-verbal actions from senior military ranks.
The reason for the similarity in findings may be ascribed to the fact that both studies had female participants or victims, who may interact in similar ways with senior-ranked male personnel at different levels. It may further be stated that the findings on the females could reflect discriminatory male behaviour that prompts unwanted comments directed at the victims.
Another study, conducted by Pryor (2010), considered the dimensions and correlates of the psychosocial harm that women in the military faced and then conducted a retest. It concluded that the behavioural domain of physical, verbal and non-verbal unwelcome actions of sexual harassment fell into three major classes: sexual coercion, gender harassment and unwanted sexual attention in military communities among soldiers.
The reason for the similarity in findings is that the female victims in both the United States and the Zambian studies may share characteristics that expose them to harassment. Furthermore, the absence of recorded prevention strategies might be ascribed to the uniform way in which military commanders manage military personnel.
A study conducted by Fitzgerald et al. (1999) established that gender harassment refers to a broad range of verbal and non-verbal behaviours that are not aimed at sexual cooperation but convey insulting, hostile and degrading attitudes, and that these affected female soldiers. Elicited examples included threatening, intimidating or hostile acts, all of which are unwelcome.
The similarity in findings is ascribed to the fact that military culture is broadly similar across settings, with only minor variations.
Studies within Zambia (Similarities)
A landmark study by Matakala (2015), conducted in Ndola with female soldiers, found that the sexual harassment experienced by female soldiers was verbal and non-verbal, and that harassment among the interactions of its service personnel was also high. The verbal findings were similar to those of the present study. The reason for the similarity is that both studies targeted the Non-Commissioned Ranks for soldiers: Private, Lance Corporal, Corporal, Sergeant, Staff Sergeant, Warrant Officer Class Two and Warrant Officer Class One, in ascending order from the most junior to the highest rank in this category. These ranks have similar characteristics in terms of the way they interact.
It may also be assumed that the females in both studies operate within a highly patriarchal structure, so the characteristics of the male harassers may be similar. Furthermore, another study by Matakala et al. (2018), conducted in Zambia among soldiers, came up with similar findings: the sexual harassment was verbal and non-verbal for females, based on a gendered perspective of the interactions of the soldiers. The verbal findings were similar to those of the present study.
For example, on page 11 a female soldier said: "Senior male soldiers tell us in Nyanja that 'imwe bakazi kulibe buffalo ikazi mwabwela kuononga nchito', meaning in English that 'There is no female buffalo in our system; you females have come to destroy our work.'" Both Zambian studies conducted by the aforementioned scholar are similar to the present study, because the present inquiry also found, from both male and female soldiers, unwanted actions that were verbal and non-verbal.
The similarity in the findings is ascribed to cultural aspects: the behaviours of the participants in military interactions, for both males and females, are the same. The similarity may further be ascribed to a similar style of administration.
Discussion Of Research Findings To The Research Question One (Dissimilarities with other reviewed literature) (New Knowledge)
There was sufficient combined individual evidence from non-victims showing that they experienced a high perceived risk of being sexually harassed. This came from both male and female soldiers. This high perceived risk was ascribed to individual, sexist/perpetrator and military potential risk factors.
This constitutes new knowledge because engagement with the literature showed no other study that looked at non-victims with regard to a high perceived risk ascribed to individual, sexist/perpetrator and military potential risk factors.
Therefore the study, as indicated under its significance, is important because it has given insight into the different perceived risks of sexual harassment for soldiers in selected military camps.
Research Question Number Two (02).
(2) What risk factors are associated with sexual harassment among Zambian soldiers in selected military camps?
Thematic Sub Questions:
How do you think sexist/perpetrator/sexual harassers contribute to the potential risks that could increase sexual harassment among the interaction of soldiers?
What do you think are the major individual victim potential risk factors that are associated with the occurrences of sexual harassment?
In your own opinion describe the military potential risk factors that you think are associated with the occurrences of sexual harassment.
Research Question Number Two
(2) What risk factors are associated with sexual harassment among Zambian soldiers in selected military camps? The study combined responses from both the victim and non-victim participants.
This was with regard to the potential risk factors that are associated with sexual harassment among Zambian soldiers in selected military camps.
The study findings showed, from both the male and female victim/non-victim soldiers, that there were three major ways in which Sexist/Perpetrator/Sexual Harassers, as the first of the potential risk factors, increase sexual harassment among the interaction of soldiers.
Bully, Discriminatory and Stalking characteristics were established as the three major classifications in which senior male and senior female Sexist/Perpetrator/Sexual Harassers contributed to the potential risks that increased sexual harassment among the interaction of soldiers.
The study further established that female and male victims identified one major way in which individual potential risk factors increase sexual harassment among the interaction of soldiers: the non-reporting of senior male and female sexual harassers to the relevant authorities by victims, which was the major contributor to the occurrence of sexual harassment. However, the non-victims were not asked about this second sub-theme: since they knew the characteristics of the harassers and said they were not harassed, they were excluded from the question on how individual potential risk factors may increase sexual harassment.
Lastly, the study found from both the male and female victim/non-victim soldiers that there were three major ways in which military potential risk factors increase sexual harassment among the interaction of soldiers: Obedience and Discipline (military culture), lack of written military prevention measures against sexual harassment, and differential seniority in the rank structure.
Theme Number One: Sexist/Perpetrator/Sexual Harassers contributing to the potential risks that could increase sexual harassment among the interaction of soldiers: Male and Female Victim Verbatims.
Bully Senior Male Harasser Potential Risk Characteristics (Sub-Theme One).
According to individual Female "Victim" Soldiers of the Rank of Private by the names of Josephine Mtonga and Josephine Phiri, Lance Corporal by the names of Pascalina Chilongoshi and Amina Abili, Corporal by the names of Jeannie Shapa and Sheba Babie, and Sergeant by the names of Joyce Kalebwe, Brenda Kaloshi and Joyce Mulemwa (Pseudo "not real" Names), when asked on the Sexist/Perpetrator/Sexual Harassers contributing to the potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks, individually and severally had this to say: "Having chatted with you at individual level last time … Once again I am very happy to be given a chance to provide an answer on how Sexist/Perpetrator/Sexual Harassers contribute to the potential risks that could increase sexual harassment ………. This is when a senior man becomes a bully because they will keep on doing this verbally or non-verbally knowing that the junior will not do anything about it……………………………………….. It happens to me every time………………………."
Discriminatory Senior Male Harasser Potential Risk Characteristics (Sub-Theme Two).
Another Individual Female "Victim" Soldier of the Rank of Private by the name of Josephine Phiri, Lance Corporal by the names of Pascalina Chilongoshi and Amina Abili, Corporal by the names of Jeannie Shapa and Sheba Babie, and Sergeant by the names of Joyce Kalebwe and Joyce Mulemwa (Pseudo "not real" Names), when asked on the Sexist/Perpetrator/Sexual Harassers contributing to the potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks, individually and severally had this to say: "I wish to thank you for allowing me to provide an answer on the Sexist/Perpetrator/Sexual Harassers contributing to the potential risks that could increase sexual harassment among the interaction of soldiers………… as a victim it is important to state that the male seniors keep saying that there is no female buffalo, as can be seen from the Zambian Army symbol…….. They verbally say this to me and I can say this is discrimination of the highest level……………they normally say these words to me because they are senior………………..I now know that each time I meet him he will say the words to me……………………………….."
Stalking Senior Male Harassers Potential Risk Characteristics (Sub-Theme Three).
Furthermore, other individual Female "Victim" Soldiers of the Rank of Private by the names of Josephine Mtonga and Josephine Daka, Lance Corporal by the name of Amina Abili, and Corporal by the name of Janet Shapa (Pseudo "not real" Names), when asked on the Sexist/Perpetrator/Sexual Harassers contributing to the potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks, individually and severally commented that:
"Sexist/Perpetrator/Sexual Harassers contribute to the potential risk that could increase sexual harassment among the interaction of soldiers through catching me unaware verbally and nonverbally especially when I am alone………………………………………………………".
The next category looks at the male victims.
Bully Senior Female Harassers Characteristic Identification (Sub-Theme One).
According to Male (Victim) soldiers of the Rank of Private by the name of Thomson Kunda, Lance Corporal by the name of Stanley Chipowe and Corporal by the name of Abraham Mubita (Pseudo "not real" Names), when asked on the Sexist/Perpetrator/Sexual Harassers contributing to the potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks, individually and severally said that: "Sir (meaning the researcher), once again I am very happy to be given another chance to provide an answer on how Sexist/Perpetrator/Sexual Harassers contribute to the potential risks that could increase sexual harassment among the interaction with some senior female soldiers …………… I want to say that our senior females always want to take advantage of us junior ranks verbally because they are senior……………they are bullies in uniform sir……………………………………….. they just want to take advantage of me because of being senior"
Perceived Risk of Sexual Harassment and Protective Behaviours among Zambian Soldiers in Selected Military Camps
Inferiority Complex Senior Female Harassers Characteristic Identification (Sub-Theme Two).
Another Male (Victim) soldier of the Rank of Private by the names of Thomson Kunda, Rodrick Palula and Antony Phiri, Lance Corporal by the names of Alexander Msiska, Stanley Chipowe and Floyd Kamanga, and Corporal by the names of George Muyeba, James Banda and Abraham Mubita (Pseudo "not real" Names), when asked on Sexist/Perpetrator/Sexual Harassers contributing to the potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks, individually had this to say: "Sir (meaning the researcher), I am happy to once again be given another chance to speak ……. This time around on my personal view concerning how Sexist/Perpetrator/Sexual Harassers contribute to the potential risks that could increase sexual harassment…… I become aware of the threat of sexual harassment among the interaction with the females. I wish to say that most of the senior females have a feeling of being less important to me as a male…. And because of that they have different verbal actions that are not wanted by myself …………. When they say something they will not want to get an opinion from a man who is junior even if it is correct…………They will want to come up with faults as revenge because of this feeling on the junior men………………….Sir they suffer from inferiority complex………………………………………………………"
Theme Number Two: Individual victim potential risk factors that are associated with the occurrences of sexual harassment among the interaction of soldiers: Male and Female Victim Verbatims.
Non-Reporting of the senior male Sexual Harassers to relevant authorities by victims was said to be the major contributor to the occurrences of sexual harassment among the interaction of soldiers (Sub-Theme One).
Victim soldiers in the Non-Commissioned Ranks, individually and severally, had this to say:
"I am very happy to be given a chance to talk about military potential risk factors that could lead to the occurrence of sexual harassment…… I have a strong perception that the military culture, characterized by obedience and discipline, especially for me as a junior towards my seniors, leads to occurrences of sexual harassment…………. This is because……
We are not supposed to say anything against a senior person; whether they are right or wrong, a senior person is always right………………………………………………..ours is to obey……..because discipline is a culture……………………………………………….. "
Lack of written military prevention measures against sexual harassment (Sub-Theme Two).
Another individual Female "Victim" Soldier of the Rank of Private by the name of Josephine Phiri, Lance Corporal by the name of Pascalina Chilongoshi, , Corporal by the name of Jeannie Shapa, Sheba Babie, Janet Shapa Sergeant by the name of Joyce Kalebwe, Brenda Kaloshi and Joyce Mulemwa (Pseudo "not real" Names) when asked onthe military potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks individually and severally had this to say:
Differential seniority in rank structure (Sub-Theme Three).
Furthermore, another individual Female "Victim" Soldier of the Rank of Private by the name of Josephine Mtonga , Josephine Phiri, Josephine Phiri, Lance Corporal by the name of Pascalina Chilongoshi, Amina Abili, Amina Ability, Corporal by the name of Jeannie Shapa, Sheba Babie, Janet Shapa (Pseudo "not real" Names) when asked onthe military contributing to the potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks individually and severally commented that: "Strict differential seniority of ranks ……… is a tradition that results in the major occurrences of sexual harassment…….those in the higher ranks are the only ones that can be safe………………… This is because they hold higher ranks……….For me as a smaller rank it is very easy for me to be harassed" The next category is that of the male victims.
Obedience and Discipline as [Military Culture], (Sub-Theme One).
According to an individual Male "Victim" Soldier of the Rank of Private by the names of Thomson Kunda, Rodrick Palula and Antony Phiri, Lance Corporal by the names of Alexander Msiska, Stanley Chipowe and Floyd Kamanga, and Corporal by the names of George Muyeba, James Banda and Abraham Mubita (Pseudo "not real" Names), when asked on the military potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks, individually and severally had this to say: "I am very happy to be given a chance to talk about the military potential risks that are linked to being sexually harassed by some senior female verbal sexual harassers……Most of them have a tendency of bullying me …………. This is because…… We are not supposed to say anything against a senior person; whether they are right or wrong, a senior person is always right, ours is to obey, because discipline is a culture in our system………………….. "
Lack of written military prevention measures against sexual harassment (Sub-Theme Two).
Another individual male victim soldier of the rank of Lance Corporal, Floyd Kamanga (Pseudo "not real" Name), when asked on the military potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks, had this to say:
Bully Senior Male Harasser Potential Risk Characteristics (Sub-Theme One).
According to an individual Female "Non-Victim" Soldier of the Rank of Private by the name of Judy Musoko, Joyce Musenge, Tizzy Lemba, Lance Corporal by the name of Patricia Chosi, Chonya Amasi, Chama Abby, Corporal by the name of Chisoni Musonda,Chola Stella, Chipo Stella, Sergeant by the name of Aggie Musonda, Chilila Stella, Chilekwa Stella (Pseudo "not real" Names) when asked onthe Sexist/Perpetrator/Sexual Harassers that may contribute to the potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks individually and severally had this to say:
Discriminatory Senior Male Harasser Potential Risk Characteristics (Sub-Theme Two).
Another Individual Female "Non-Victim", Soldier of the Rank of Corporal by the name of Chisoni Musonda, Chola Stella, Chipo Stella, Sergeant by the name of Aggie Musonda, Chilila Stella, Chilekwa Stella (Pseudo "not real" Names) when asked on the Sexist/Perpetrator/Sexual Harassers may contribute to the potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks Individually and severally had this to say:
Stalking Senior Male Harassers Potential Risk Characteristics (Sub-Theme Three).
Furthermore, another individual Female "Non-Victim", Soldier of the Rank of Private by the name of Judy Musoko, Joyce Musenge, Tizzy Lemba, (Pseudo "not real" Names) when asked on the Sexist/Perpetrator/Sexual Harassers contributing to the potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks Individually and severally commented that: "Sexist/Perpetrator/Sexual Harassers contribute to the potential risk that could increase sexual harassment through unwanted actions were senior males are said to be targeting junior women soldiers that are alone with unwanted actions……………I have information from a friend of mine…………………………………………………….".
Bully Senior Female Harassers Characteristic Identification (Sub-Theme One).
According to Male (Non-Victim) soldiers of the Rank of Private by the name of Taiza Lembani and Corporal by the name of Chola Chitalu (Pseudo "not real" Names), when asked on the Sexist/Perpetrator/Sexual Harassers contributing to the potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks, individually and severally said that: "I have heard that there are some senior female soldiers who are in the habit of using their seniority to verbally sexually harass male juniors………… I know them and I also know their characteristics so that it is even easy for me to avoid them….. "
Inferiority Complex Senior Female Harassers Characteristic Identification (Sub-Theme Two).
Another Male (Non-Victim) soldier of the Ranks of Private by the names of James Muko, Joe Senge and Taiza Lembani, Lance Corporal by the names of Joseph Chisi, Chonya Mwenya and Chama Chanda, and Corporal by the names of Chewe Musonda, Chola Stanley and Chola Chitalu (Pseudo "not real" Names), when asked on Sexist/Perpetrator/Sexual Harassers contributing to the potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks, individually had this to say: "Sir (meaning the researcher), I must tell you that senior female soldiers always have a negative attitude towards me because I am a man……………They think I downplay them as women hence they end up harassing me verbally due to that reason………. I don't know where this inferiority complex comes from………………………………………. Because I am aware of these characteristics I am able to tactically avoid them………………….. "
Theme Number Two: Military potential risk factors that are associated with the occurrences of sexual harassment: Male and Female Non-victim Verbatims.
Obedience and Discipline as [Military Culture], (Sub-Theme One).
According to an individual Female "Non-Victim" Soldier of the Rank of Private by the names of Judy Musoko and Tizzy Lemba, and Lance Corporal by the name of Patricia Chosi (Pseudo "not real" Names), when asked on the military potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks, individually and severally had this to say:
Lack of written military prevention measures against sexual harassment (Sub-Theme Two).
Another individual Female "Non-Victim" Soldier of the Rank of Private by the names of Judy Musoko, Joyce Musenge and Tizzy Lemba, Lance Corporal by the names of Patricia Chosi, Chonya Amasi and Chama Abby, Corporal by the names of Chisoni Musonda, Chola Stella and Chipo Stella, and Sergeant by the names of Aggie Musonda, Chilila Stella and Chilekwa Stella (Pseudo "not real" Names), when asked on the military potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks, individually and severally had this to say: "Am happy to be asked once again …….. there are no written measures on prevention of sexual harassment hence some of my friends being harassed……….. For me luckily I know the characteristics of these people hence avoiding being a victim………………………."
Lack of written military prevention measures against sexual harassment (Sub-Theme One).
According to an individual male "Non-Victim" Soldier of the Rank of Private (Pseudo "not real" Name), when asked on the military potential risks that could increase sexual harassment among the interaction of soldiers in the Non-Commissioned Ranks, had this to say:
"Am happy to be asked once again on my view is that although it appears that there are no prevention measures I know how harassers behave and as such I avoid them………… "
The next sub-paragraph discusses the similarities with respect to question two.
Discussion of Research Findings to the Research Question Two (Similarities with other reviewed literature)
There was sufficient evidence to show that the verbal and non-verbal sexual harassment experienced by junior female victim soldiers was a result of bully perpetrator characteristics or behaviours exhibited by the senior male harassers. Additionally, there was overwhelming evidence that female non-victim soldiers, who initially said they understood sexual harassment, had also seen or heard of the bully characteristics or sexist behaviour exhibited by the senior male harassers. This type of risk later led to junior female soldiers becoming victims.
There was also sufficient evidence to show that the verbal and non-verbal sexual harassment experienced by junior female victim soldiers was a result of oppressive perpetrator characteristics or behaviours, also exhibited by the senior male harassers. This type of risk likewise led to junior female soldiers becoming victims. The non-victim soldiers who understand sexual harassment had also heard of or seen the same events.
These findings are consistent with other studies that identified bully or oppressive behaviour as a potential risk leading to sexual harassment of victims. This similarity was seen in other studies from global, African and Zambian perspectives.
For example, a study examining childhood maltreatment as a risk factor for sexual harassment in the United States Army also established perpetrator bully and oppressive behaviours as leading to the problem.
Additionally, another study by Rosen and Martin (2000), whose aim was to examine personality characteristics that had the capacity to increase sexual harassment, found that this occurred through the risk of male discriminatory behaviour against females in the United States Army.
Another study with similar findings was conducted by Leskinen and colleagues. Leskinen et al. (2011), whose aim was to broaden the understanding of gender harassment, found discrimination and oppressive behaviour towards females in the United States Army. This is believed to have led to the non-verbal and verbal sexual harassment seen through non-sexual cooperation between men and women.
The similarity with other studies regarding the perpetrator behaviours that lead to the risk of sexual harassment is largely due to similar environmental characteristics. These internal environmental characteristics may be seen in the differences between senior and junior ranks, which emphasize strict obedience and discipline. This researcher's perspective is also consistent with other studies. For example, conformity, obedience and the differing hierarchical gender power relations between those in the lower ranks and those in the higher ranks are a major factor that has contributed to the existence of sexual harassment in military organizations (UNSC, 2008).
There was also sufficient evidence that the verbal and non-verbal harassment experienced by victims was usually not reported, for fear of revenge from the senior soldiers. This individual risk was recorded from the victims. These findings were also consistent with other reviewed literature.
For example, a study examining childhood maltreatment history as a risk factor for sexual harassment in the American Army also found that victims were not reporting the senior perpetrators to the relevant authorities.
Another study, by Valerie and Cynthia (2016), aimed at understanding sexual harassment in the military by reviewing policy and trends in relation to why it is more pronounced than among civilians, also found that victims were not reporting the cases to the relevant authorities.
Additionally, Jana (2003) also found that victims were not reporting cases of being sexually harassed.
From the perspective of this study, these similarities with other studies may be assumed to result from fear of being punished through revenge by the perpetrator. This researcher's view is also consistent with Jana's (2003) study above.
There was also sufficient evidence that military risk factors, such as the lack of proper written and documented measures against sexual harassment, resulted in the occurrence of the problem. This finding was also consistent with other reviewed literature.
For example, a study examining the effects of three types of unwanted sexual harassment experiences on the psychological wellbeing of soldiers also found that a lack of written preventive measures from the organization led to the problem. Further, another study by Valerie and Cynthia (2016), reviewing policy and trends in relation to how they increased sexual harassment, also found that the United States Army did not have written preventive measures, hence the problem. Furthermore, another study by Harris et al. (2017) concluded that both individual and organizational climates and factors are important; however, the organizational climate or context had less to do with culture or unit cohesion and more to do with tolerance of sexism.
The research findings by Miller et al. (1997) in the United States Army showed that some military men believed that military women are the powerful gender. The research further recorded that unwelcome verbal and non-verbal actions were experienced from bully female harassers.
Finally, having discussed the findings in relation to the similarities with other studies, it is also important to look at the dissimilarities.
Discussion of Research Findings to the Research Question Two (Dissimilarities with other reviewed literature) (New Knowledge)
There was sufficient combined individual evidence from both male and female victim/non-victim soldiers showing that stalking Sexist/Perpetrator/Sexual Harassers act as a potential risk factor that increases sexual harassment among the interaction of soldiers.
The reason for the new knowledge is that, it is assumed that no other study has looked at both victims and non-victims showing empirical evidence of Stalking harassers as a potential risk factor that increases sexual harassment among the interaction of soldiers .
Therefore the study as indicated under the significance is important because it has given an insight into how stalking harassers as a potential risk factor increases of sexual harassment among the interaction of soldiers in selected military camps.
Discussion of Research Findings to the Research Question Two (Theoretical Frameworks from reviewed literature)
Based on the research findings of this study, there was sufficient evidence to show that male discriminatory harassers did so because the female victim soldiers are in the minority.
The study was anchored on the theory of tokenism (Kamir, 1998), whose main thrust is that groups in the minority are more likely to be discriminated against and marginalized by the group in the majority, and the theory was established to be in support of the findings.
These findings are also supported by Matakala (2015, p. 66), who also found that, due to being in the minority, female soldiers were likely to be sexually harassed; he also used the token theory by Kamir (1998). Furthermore, based on the research findings of this study, there was sufficient evidence to show that male discriminatory harassers victimized the female victim soldiers in large camps. Because of the difference in ranks and authority, it was easy for the senior-ranked males to harass the junior female soldiers.
The findings of the present study are supported by the theory of gender power relations by Michel Foucault (1975). This is because he postulated that the ability to have firm control over and to suppress others through sexual harassment lies in the differential hierarchical rank structure, based on who holds authority and power. The theory further states that anyone who holds a high military rank, in terms of authority and power, has the capacity to control others in large military camps headed by senior field ranks.
The strength of this theory is that it also examined the large Hessian military camps headed by senior field ranks, which carry the potential risk of sexual harassment occurring.
Discussion of Research Findings to the Research Question Two (Conceptual Framework of the study in relation to study findings)
There was sufficient evidence to show that discriminatory, oppression and inferiority-complex harassers, as perpetrator potential risk factors, are responsible for sexual harassment among the interaction of soldiers. There was also sufficient evidence to show that individual non-reporting, as a potential risk factor, was responsible for sexual harassment among the interaction of soldiers. Lastly, there was also sufficient evidence to show that the lack of written preventive measures against sexual harassment, as a military potential risk factor, is responsible for sexual harassment among the interaction of soldiers.
These findings have, as a result, validated the assumption of the study's explanatory conceptual framework with regard to the main relationship between the independent and dependent variables. This is because the explanatory conceptual framework assumed that military organization, sexist/perpetrator and individual potential risk factors (the independent variable or construct) may be responsible for sexual harassment (the dependent variable or construct).
Research Question Number Three (03).
(3) What protective behaviors do Zambian soldiers adopt against the risk factors of being sexually harassed in selected military camps?
Sub Thematic Questions:
Do you protect yourself against (low, medium or high) perceived or awareness levels of risk factors towards the possibility of being sexually harassed?
In your own opinion why do you adopt or not adopt protective behavior against the (low, medium or high) awareness level towards the possibility of being sexually harassed in relation to (low, medium or high) severity or unpleasantness?
What type of protective behaviors do you use against risk of sexual harassment?
Research Findings to the Research Question Three (General summary of findings).
(3) What protective behaviors do Zambian soldiers adopt against the risk factors of being sexually harassed in selected military camps?
The study combined responses from both the victim and non-victim participants with regard to the adoption of protective behaviors by Zambian soldiers against the risk factors of being sexually harassed in selected military camps.
The study findings showed that both the male and female victim soldiers did not adopt any protective behavior against the high perceived or awareness level of the possible risk factors of being sexually harassed.
A follow-up question to this prevailing situation showed that the female victims feared to protect themselves against the low severity of being sexually harassed because they would end up facing some form of revenge from the male perpetrators. Additionally, it was also established that the females lacked knowledge of useful protective initiatives. Furthermore, the follow-up question to the male victims, regarding why they did not adopt protective behaviors against the low severity of sexual harassment, also showed that they lacked knowledge of useful protective initiatives.
The study findings go on to show that both the male and female non-victim soldiers adopted protective behavior against the high perceived or awareness levels of the possible risk factors of being sexually harassed.
A follow-up question to this prevailing situation showed that the female non-victim soldiers' justification for adopting protective initiatives against the high severity of being sexually harassed was twofold: fear of their personal health being affected, and prior knowledge of the perpetrator characteristics. The protective initiatives for the female non-victim soldiers included the escape and evasion, avoidance of lone movement and stick friendship protective strategies respectively. These were used against male perpetrators of the ranks of private, lance corporal, corporal and sergeant who are senior in terms of promotion, or a step ahead in rank.
Protective behavioral initiative types used against the high perceived risk of being sexually harassed (Theme Three).
When asked about the protective behavioral initiative types used against the high perceived risk of being sexually harassed among the interaction of soldiers in the non-commissioned ranks, the individual male non-victim soldiers, namely Privates James Muko, Joe Senge and Taiza Lembani; Lance Corporals Joseph Chisi, Chonya Mwenya and Chama Chanda; and Corporals Chisoni Musonda, Chewe Musonda, Chola Stanley and Chola Chitalu (pseudonyms, not real names), individually and severally had this to say: "I use the escape and evasion protective initiative towards the high perceived risk of being sexually harassed… This is because I know the senior females that suffer from inferiority complex and how they behave towards junior male soldiers…" The next sub-paragraph discusses the above findings by contextualizing them with similar studies.
Discussion of Research Findings to the Research Question Three (Similarities with other reviewed literature)
There was sufficient evidence to show that the junior female non-victim soldiers feared that their personal health would be affected if they did not adopt protective behaviors against the severity of sexual harassment from the senior male sexual harassers. Additionally, there was also overwhelming evidence to show that female victim soldiers feared that male soldiers would take revenge, hence their fear of adopting protective behaviors against severe sexual harassment from the senior male sexual harassers.
These findings are consistent with other studies that similarly reported personal health being affected when sexual harassment takes place. This similarity was also seen in other studies from the global, African and Zambian perspectives.
For example, a study examining childhood maltreatment as a risk factor for sexual harassment also established perpetrator bullying and oppressive behaviors as leading to the problem in the United States Army. These would later lead to chronic headache, a personal health complication, in the affected victims.
Additionally, another study, by Rosen and Martin (2000), whose aim was to examine personality characteristics with the capacity to increase sexual harassment, found that this occurred through the risk of male discriminatory behavior against females in the United States Army. Some victims said they suffered hypertension, which can also be regarded as a personal health complication and which they attributed to being sexually harassed.
According to Kim et al. (2016), whose aim was to understand the influence of sexual harassment on mental health among female military personnel in the Korean Armed Forces, it was established that the victims had severe headaches, also a personal health complication.
Another study with similar findings was conducted by Leskinen et al. (2011), whose aim was to broaden the understanding of gender harassment. It found that discrimination and oppressive behavior towards females led to insomnia in the United States Army, another personal health complication.
The reason for the similarity in findings with other studies regarding the personal health complications that arise from the risk of sexual harassment is largely the similar military environmental characteristics, which are tied to the negative health effects of sexual harassment. This researcher's perspective is also consistent with other studies. For example, the health definition of sexual harassment considers it a "public health issue as it does not only affect the mental stability of an individual but it is also linked to other long term health problems that could put one at risk of high blood pressure, anxiety, depression and insomnia" (WHO, 2014). That study noted that workplace sexual harassment predisposed victims to a high risk of stroke and heart attacks compared to those who did not experience it (ibid, 2014).
There was also overwhelming evidence to show that female victim soldiers feared that male soldiers would take revenge if they reported the senior male sexual harassers to the authorities. These findings were also consistent with other reviewed literature.
For example, a study examining childhood maltreatment history as a risk factor for sexual harassment in the American Army also found that the victims were not reporting the senior perpetrators to the relevant authorities because they feared that the repercussions would not be good.
Another study, by Valerie and Cynthia (2016), aimed at understanding sexual harassment in the military by reviewing policy and trends in relation to why it is more pronounced than among civilians, also found that the victims were not reporting the cases to the relevant authorities because they feared revenge.
Additionally, Jana (2003) also found that the victims were not reporting cases of being sexually harassed because they feared a bad outcome once the perpetrator was called by the seniors.
The reason for these similarities with other studies may, from the perspective of this study, be assumed to be fear of being punished through revenge by the perpetrator. This researcher's view is also consistent with Jana's (2003) study above.
Finally, having discussed the findings in relation to the similarities with other studies, it is also important to look at the dissimilarities.
Discussion of Research Findings to the Research Question Three (dissimilarities with other reviewed literature) (New Knowledge)
There was sufficient individual evidence from both male and female victim soldiers showing that they do not adopt protective behavioral initiatives against the high perceived or awareness of possible risk factors of sexual harassment among the interaction of soldiers. This was largely because the severity of being sexually harassed was medium and they lacked knowledge of any protective initiatives.
There was also sufficient individual evidence from both male and female non-victim soldiers showing that they do adopt protective behavioral initiatives against the high perceived or awareness of possible risk factors of sexual harassment among the interaction of soldiers. This was largely because the severity of being sexually harassed was high and they had knowledge of the correct protective initiatives. The protective initiatives for the female non-victim soldiers included the escape and evasion, avoidance of lone movement and stick friendship protective strategies respectively. These were used against male perpetrators of the ranks of private, lance corporal, corporal and sergeant who are senior in terms of promotion, or a step ahead in rank. The protective initiative for the male non-victim soldiers was escape and evasion, used against female perpetrators of the ranks of private, lance corporal, corporal and sergeant who are senior in terms of promotion, or a step ahead in rank.
The reason this constitutes new knowledge is that, it is assumed, no other study has looked at both victims and non-victims regarding whether or not soldiers who perceive the risk of sexual harassment are aware of and/or adopt protective behaviours against sexual harassment.
Therefore, as indicated under the significance, the study is important because it has given an insight into the different protective behaviors that Zambian soldiers who perceive the risk of sexual harassment in selected military camps utilize.
Discussion of Research Findings to the Research Question Three (Theoretical Frameworks from reviewed literature)
Based on the research findings of this study, there was sufficient evidence to show that both male and female victim and non-victim soldiers had either protected themselves or not, depending on the level of their perceived or awareness of the risk factors of sexual harassment.
It was established that both male and female victim soldiers had a high perceived or awareness of possible risk factors of being sexually harassed with low severity, but never adopted any protective initiatives.
It was also established that both male and female non-victim soldiers who had a high perceived or awareness of possible risk factors of being sexually harassed with high severity adopted protective initiatives.
The study was anchored on the Tripartite Perceived Risk Model (Ferrer et al., 2016), which gives an insight into the perceived risks of a health threat and people's subjective judgements about whether or not they should develop protective behaviours based on knowledge of severity. Key for this study is that people with low perceived risk are not likely to come up with protective behaviors, while those with high perceived risk are likely to come up with protective behaviors.
It must be noted that the finding that male and female non-victim soldiers had a high perceived risk of being sexually harassed and adopted protective initiatives is supported by the Tripartite Perceived Risk Model. This is because, key for the model, people with high perceived risk are likely to come up with protective behaviors (Ferrer et al., 2016).
However, the finding that male and female victim soldiers had a high perceived risk of being sexually harassed but did not adopt any protective initiatives is not supported by the Tripartite Perceived Risk Model, because it deviates from it. This is because, key for the model, only people with low perceived risk are not likely to come up with protective behaviors (Ferrer et al., 2016).
The Tripartite Perceived Risk Model was designed to be used on health risks; in this case, however, it was used for the first time on sexual harassment. The model is relevant to sexual harassment because the phenomenon falls within the social and health definition for which the model was developed. Sexual harassment is a public health issue linked to other severe long-term health problems that could put one at risk of high blood pressure, anxiety, depression and insomnia (WHO, 2014).
It is against this background that, having deviated from the initial model, there will be no modification but rather the generation of a new theory, because sexual harassment is a new phenomenon within the perceived risk concept. This assertion is supported by the fact that, in science, research is one of the main processes by which data are collected to support, reject or modify theories, or to develop new ones; additionally, a theory is an interpretation of a phenomenon (Parahoo, 1997). Furthermore, since the model looked at health risks and was tested on cancer patients, I feel I have made a significant contribution because I used it for the first time on sexual harassment, which is also considered a public health condition (WHO, 2014).
I therefore feel my new way of explaining the phenomenon should be ascribed to my name.
This is because, in order to draw such conclusions, I have clarified academic arguments and given reasons for this final documentation of my thinking (Mouton, 2016 and Moody, 1990).
Therefore I wish to ascribe my new findings to myself as shown below: Matakala et al. (2021): Theory of Non-Victim and Victim High Perceived Risk of Being Sexually Harassed and Emerging Protective Behaviors among Zambian Soldiers. The new theory is also ascribed to University of Zambia dons, namely Dr Anne Namakando-Phiri, a retired army colonel, and Professor Mubiana Macwang'i.
Matakala (2021)'s Theory of Non-Victim/Victim High Perceived Risk Perception of Sexual Harassment among Zambian Soldiers in Relation to Motivation of Protective Behaviours.
Key for this theory is that the sub-population of both male and female victim soldiers that had an overall high perceived risk of being sexually harassed, calculated from their high perceived or awareness of possibility with a medium severity risk, never adopted any protective initiatives.
Individual available evidence from both male and female victim soldiers showed that they do not adopt protective behavioral initiatives against the high perceived risk factors of sexual harassment because of the fear of revenge and a general lack of knowledge about protective initiatives.
Key for this theory is also that the sub-population of both male and female non-victim soldiers that had an overall high perceived risk of being sexually harassed, calculated from their high perceived or awareness of possibility with a high severity in mind, adopted protective initiatives.
Individual available evidence from both male and female non-victim soldiers showed that they adopt protective behavioral initiatives against the high perceived risk factors of sexual harassment among the interaction of soldiers. This was largely because they have knowledge of the correct protective initiatives against the high severity of sexual harassment, a knowledge that arose from their fear of being affected by health risks such as insomnia and heart attacks if harassed. The protective initiatives for the female non-victim soldiers therefore included the escape and evasion, avoidance of lone movement and stick friendship protective strategies respectively. These were used against male perpetrators of the ranks of private, lance corporal, corporal and sergeant who are senior in terms of promotion, or a step ahead in rank. The protective initiative for the male non-victim soldiers was escape and evasion, used against female perpetrators of the ranks of private, lance corporal, corporal and sergeant who are senior in terms of promotion, or a step ahead in rank.
The new theory deviates from the existing theory, which states that only people with low perceived risk are not likely to protect themselves from a health threat (Ferrer et al., 2016). The new theory offers a new explanation, namely that victim soldiers with a high perceived risk did not protect themselves, with academic support from Moody (1990).
The new theory confirms and agrees with the existing theory that people with high perceived risk are likely to protect themselves from a health threat (Ferrer et al., 2016). However, the new explanation is based on a new phenomenon, sexual harassment, which is also a public health issue (WHO, 2014).
In line with risk analysis, a high risk of sexual harassment will always demand top priority in terms of adopting protective behaviors. A risk has two facets: possibility and severity. Both non-victims and victims of sexual harassment had a high perceived risk, calculated through the 3×3 risk matrix. The matrix shows that any cell in red represents a high risk of sexual harassment, hence top priority when it comes to protection. The overall risk is the product: risk = possibility × severity.
The horizontal axis runs 1 (low), 2 (medium), 3 (high), increasing to the right, and represents the severity level of being sexually harassed. The vertical axis runs 1 (low), 2 (medium), 3 (high), increasing upwards, and represents the possibility level of being sexually harassed.
Both overall risks, that is 6 and 9, fall in the red cells, hence they are high risks which demand top priority, as shown hereunder.
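As an illustrative sketch only (not part of the thesis data), the scoring rule of the 3×3 matrix, risk = possibility × severity, can be expressed as a short Python function. The cut-offs separating the red, orange and green bands are assumptions inferred from the matrix description above (products of 6 and 9 are red):

```python
def risk_category(possibility: int, severity: int) -> str:
    """Classify overall risk on a 3x3 matrix.

    possibility and severity each take 1 (low), 2 (medium) or 3 (high).
    Assumed bands: product >= 6 -> red (high), 3-4 -> orange (medium),
    1-2 -> green (low).
    """
    product = possibility * severity
    if product >= 6:
        return "high (red)"
    if product >= 3:
        return "medium (orange)"
    return "low (green)"

# Both sub-populations had high possibility (3); victims reported medium
# severity (2) and non-victims high severity (3), giving products 6 and 9,
# both in the red band.
print(risk_category(3, 2))  # high (red)
print(risk_category(3, 3))  # high (red)
```

Under these assumed bands, both groups land in the red cells and so, in risk-analysis terms, warrant top priority for protective behaviour.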
Figure 1. 3×3 Risk Analysis Matrix showing the risk status for both non-victims and victims of sexual harassment in relation to the generated theory (Matakala, 2021)
Source: Field Data
KEY/LEGEND: Red = High [3] (risk of sexual harassment); Orange = Medium [2] (risk of sexual harassment); Green = Low [1] (risk of sexual harassment)
Research Question Number Four (04).
(4) How do perceived risks of being sexually harassed motivate protective behaviors among Zambian soldiers in selected military camps?
Thematic Sub Questions:
In your own opinion, how does (non-victim high perceived risk) of sexual harassment motivate protective behaviors?
How effective are the protective behaviors you adopt against the risk of sexual harassment?
Research Findings to the Research Question Four (General summary of findings).
(4) How do perceived risks of being sexually harassed motivate protective behaviors among Zambian soldiers in selected military camps?
The study looked at responses from non-victim participants with regard to how perceived risks of being sexually harassed motivate protective behaviors among Zambian soldiers in selected military camps. The study findings showed that both the male and female non-victim soldiers with a high perceived risk of being sexually harassed were motivated to adopt protective behaviors. Awareness of male bully/discriminatory perpetrators motivated the adoption of the female non-victims' protective initiatives, which included the escape and evasion, avoidance of lone movement and stick friendship protective strategies respectively. It was established that the non-victim females did so through the principle of seeing without being seen, which gave rise to the escape and evasion protective initiative. It was also established that male non-victims acted on their knowledge of the perpetrators, as one put it: "…senior female sexual harassers have inferiority complex and I know them; I use the principle of seeing without being seen, which helps me to do the escape and evasion protective initiative…" The next sub-paragraph discusses the findings in relation to similar studies.
Similarities of Research Findings with Reviewed Literature
There was not sufficient evidence of similar findings in the reviewed literature.
The findings of the present study are therefore not consistent with other studies. This is because it appears there are no studies that have raised questions, as a research gap, on whether or not soldiers who perceive the risk of sexual harassment are aware of and/or adopt protective behaviours against sexual harassment.
Finally, having discussed the findings in relation to the similarities with other studies, it is also important to look at the dissimilarities.
Dissimilarities/Lessons Learnt from the Research Findings of the Research Question Four (New Knowledge)
There was sufficient individual evidence to show that female non-victim soldiers who had a high perceived risk of being sexually harassed, based on awareness of male bully/discriminatory perpetrators, were motivated to take up protective initiatives. It was established that they did so through the principle of seeing without being seen, which gave rise to the escape and evasion protective initiative. It was also established that, since they knew the characteristics of the senior male sexual harassers, they avoided lone movements as a protective initiative when moving in areas where such harassers are found. It was also established that they moved in groups of four when approaching such people, as a protective initiative.
There was also sufficient individual evidence to show that male non-victim soldiers who had a high perceived risk of being sexually harassed, based on awareness of female inferiority-complex perpetrators, were motivated to take up protective initiatives. It was established that they did so through the principle of seeing without being seen, which gave rise to the escape and evasion protective initiative.
The reason this constitutes new knowledge is that, it is assumed, no other study has looked at both female and male non-victims regarding how a non-victim high perceived or awareness of possible risks of being sexually harassed motivates protective behaviors among Zambian soldiers in selected military camps.
Therefore, as indicated under the significance, the study is important because it has given an insight into the different protective behaviors that Zambian soldiers who perceive the risk of sexual harassment in selected military camps utilize.
Theoretical Frameworks and Lessons Learnt from the Research Findings of the Research Question Four
Based on the research findings of this study, there was sufficient evidence to show that both male and female non-victim soldiers who had a high perceived risk of being sexually harassed, based on awareness of male bully/discriminatory perpetrators, were motivated to take up protective initiatives. It was established that they did so through the principle of seeing without being seen, which gave rise to the escape and evasion protective initiative. It was also established that, since they knew the characteristics of the senior male sexual harassers, they avoided lone movements as a protective initiative when moving in areas where such harassers are found. It was also established that they moved in groups of four when approaching such people, as a protective initiative.
There was also sufficient individual evidence to show that male non-victim soldiers who had a high perceived risk of being sexually harassed, based on awareness of female inferiority-complex perpetrators, were motivated to take up protective initiatives. It was established that they did so through the principle of seeing without being seen, which gave rise to the escape and evasion protective initiative.
Each of the interviewed individual non-victims felt that each time they adopted the protective initiatives in their interactions, they did not become victims of verbal and non-verbal sexual harassment. It is against this background that they felt this was workable and made great sense, because the protective initiatives are helpful.
These findings are supported by the ethnomethodology paradigm, popularized by Harold Garfinkel. This is because ethnomethodology is also interested in what makes sense for individuals or groups of people in a given community and the methods they use to cope or protect themselves (Garfinkel, 1967).
This chapter is in two parts.
Part A is the general presentation on the originality of the Thesis. Part B is the presentation of the non-victim protective initiatives against the high perceived risk of being sexually harassed and how they were practically validated by victims as a test of effectiveness. The chapter closes with a summary.
General Research Originality of the Thesis
The concept of originality in doctoral research tries to outline ways in which the contribution to the academic knowledge base can be demonstrated. It is the perspective of the Doctor of Philosophy candidate. Further, in this thesis the originality is marked as the final embodiment of the research project, based on the documentation of the researcher's thinking. It is a statement that is accompanied by the act of advancing and clarifying arguments, reasons and evidence for reaching certain conclusions, based on the principles of the logic of validation in every research thesis. Depending on the faculty or school of thought, the statement of research originality is situated as a separate chapter after the discussion of findings or after the conclusion of the study. This is because the key findings to the research questions of the study will have been achieved at this stage, enough to draw conclusions as well as to come up with specific validated recommendations. This study therefore situates the statement of research originality after the discussion of findings, as a separate chapter before the conclusion chapter. It is important that I address the issue of research originality in this thesis, not only because it is a criterion for assessing quality in doctoral research, but also because it ensures that the study made a significant contribution to the body of knowledge. It is therefore important for this thesis to demonstrate critically how and in what ways the significant contribution to the body of knowledge was achieved. The dual significant contribution to the body of knowledge was based on the identified research gap. This is because it appeared no other study had tried to understand the protective behaviours that soldiers who perceive the risk of being sexually harassed adopt, with respect to the two sub-population groups, victims and non-victims.
The researcher went on to produce new knowledge based on his own perspective using the existing ideas of other studies to back up his argument and clarified these with reasons in relation with what other studies had said on sexual harassment before using the resultant evidence for drawing of conclusions being stated as well as leading to a proposal of recommendations signified by the logic validation. (Creswell, 2009 "what we did not know that we now know" in relation to the initial identified research gap. The research gap was based on the "understanding of both the non-victim and victim protective behaviours in relation to their perceived risk for sexual harassment among Zambian soldiers". The research established that victim soldiers that had knowledge about the high perceived risk factors of being sexually harassed did not protect themselves because they lacked the knowledge of how to do so. However, it was also established that non-victim soldiers that had knowledge about the high perceived risk factors of being sexually harassed did protect themselves in three ways that are shown hereunder; 1. There was sufficient evidence to show that female non-victims that exhibited a high perceived risk of being sexually harassed adopted three ideal protective initiatives. These female non-victim protective initiatives among others included escape and evasion, avoidance of lone movement and stick friendship protective strategies respectively. It was established that they did so through the principle of see without being seen which gave rise to the escape and evasion protective initiative. It was also established since they knew the characteristics of the senior male sexual harassers they avoided lone movements. Which was a protective initative when moving in areas where they are found? It was also established that they moved in a group of four when approaching such people as a protective initiative.
This was to be done against male perpetrators in the ranks of private, lance corporal, corporal and sergeant who are senior in terms of promotion, or a step ahead in rank, in case of a suspected threat.
2. There was also sufficient evidence to show that male non-victims who exhibited a high perceived risk of being sexually harassed adopted one ideal protective initiative: the escape and evasion strategy. It was established that they did so through the principle of seeing without being seen.
This was done against female perpetrators in the ranks of private, lance corporal, corporal and sergeant who are senior in terms of promotion, or a step ahead in rank, in case of a suspected physical threat.
Equipped with the aforementioned non-victim protective initiatives, as what we did not know that is now known, I assumed that these initiatives might help both victims and non-victims of sexual harassment, subject to logical research validation.
The effectiveness of the resultant non-victim protective initiatives, proposed here as recommendations against the risk of sexual harassment among soldiers in Zambia, was later subjected to a validation test.
To test their effectiveness, I purposefully selected a few victim soldiers who knew about the high perceived risk factors of being sexually harassed but did not protect themselves because they lacked the knowledge of how to do so.
I then proceeded to brief victim soldiers about how I felt they could protect themselves based on what I had established from the non-victim perspectives.
At the end of each individual victim briefing, I asked whether they had any questions before asking them to go and practise what they had heard for a period of four weeks, after which I would call them for another interview. When the victims reported back, they testified that they had not suffered sexual harassment again.
With this demonstration of my research originality, I had therefore made a significant contribution to the body of knowledge. The demonstration was in two phases, shown hereunder:
Effectiveness (Test) of The Non-Victim Protective initiatives against the high perceived risk factors of being Sexually Harassed by Perpetrators among Zambian Soldiers: A Zambian Perspective Guide.
The non-victim protective initiatives against the high perceived risk of being sexually harassed by perpetrators among Zambian soldiers were subjected to an effectiveness test that was divided into two parts.
First part
Inclusion criteria for the victims of sexual harassment used in the practical validation of the non-victim protective initiatives/preliminary instructions
The first part was based on the information provided by the researcher to the few selected individual female and male victims who participated in the study. The selected participants represented the various affected ranks: privates, lance corporals, corporals and sergeants for the females, and privates, lance corporals and corporals for the males. This was with specific reference to research question number three, which looked at the protective behaviours Zambian soldiers adopt against the risk factors of being sexually harassed in selected military camps. This category was chosen because it was established that, despite their high perceived risk of being sexually harassed, they never adopted any protective initiatives, the reason being that they lacked knowledge of the correct protective initiatives. It is against this background that they were excluded from the fourth research question, in which only non-victims were included; it is from the non-victims that the proposed recommendations arose.
Therefore, the female and male victim individuals were asked to go and experiment with the protective initiatives proposed on the basis of the non-victim perspectives, so as to ascertain the workability of the proposed recommendations. The victims were told by the researcher that, since they knew the perpetrator characteristics, it would be in order to establish the effectiveness of the proposed recommendations. The selected victims were asked to use the protective initiatives or strategies listed below:
1) Use of the male/female escape and evasion protective initiative against perpetrator sexual harassers: once the high perceived risk bully and discriminatory characteristics, as well as the actual perpetrators, are identified, it becomes easy to use this initiative, because it is easy to apply the principle of "see without being seen". When the male or female sexual harasser is seen first, escape and evasion become easy.
2) Use of the female stick friendship protective initiative against senior male perpetrators of sexual harassment: once the high perceived risk bully and discriminatory characteristics, as well as the actual perpetrators, are identified, it becomes easy to use this initiative. Always moving in a group of four, the stick friendship initiative, makes it difficult for senior male sexual harassers to strike.
3) Use of the female avoidance of lone movement protective initiative against senior male perpetrators of sexual harassment: once the high perceived risk bully and discriminatory characteristics, as well as the actual perpetrators, are identified, it becomes easy to use this initiative. Never moving alone in the presence of known perpetrators, identified by the characteristics they possess, denies senior male harassers the opportunity to strike.
Revelations From Individual Female And Male Victim Soldiers
Chipowe, Floyd Kamanga, Corporal Abraham Mubita, George Muyeba and James Banda (pseudonyms), when asked about the effectiveness of the proposed formal proactive protective initiatives against the high perceived risk of sexual harassment in the interaction of soldiers in the non-commissioned ranks, individually and severally said that:
"Once again I thank you for the opportunity …. My contribution is that knowledge of perpetrator characteristics, as well as the existing military culture characterised by obedience and discipline, gives rise to a high perceived risk perception in me. Last time I said I could not protect myself, for fear of being indisciplined owing to the existing military culture, as well as for lack of knowledge of the correct protective initiatives. However, since you advised me on what to do, the last four weeks have not seen me sexually harassed, because I used the escape and evade protective behaviour, where I should be the first to see the enemy …"
In closing the session, the researcher thanked all the individual male and female victims for their valuable information and contributions. As a result of the aforementioned views, this thesis makes a considerable contribution to the body of knowledge, and to some extent to the literature, which is scant in military studies of sexual harassment. The conclusions drawn by this thesis are not imposed on the study but reflect the experiences of both non-victims and victims of sexual harassment in the interactions of soldiers in the non-commissioned ranks in selected military camps. This thesis has therefore also contributed to the awakening debate, in other countries as well as by the Zambian Government, on how to prevent sexual harassment not only in military communities but in countries as a whole. In Zambia this was done through the enactment of SI number 15 of 2005, section 137A, which criminalises sexual harassment, as well as the establishment of gender desks for sensitisation in the headquarters of various ministries, including the Ministry of Defence (MOD), its security wings and the Army.
To the best of my knowledge, some of the information and data gathered by this thesis have not been documented anywhere else, and hence can be said to be new knowledge. The justification lies in the fact that the answers of this thesis have been contextualised with other studies. The new knowledge on the formally adopted proactive recommendations against sexual harassment, based on the perspective of the non-victims, is the first of its kind among Zambian soldiers.
The next chapter presents the conclusion, recommendations and suggestions for future research.
Conclusion
This thesis is important because it bridges the gap in knowledge left by other studies that included only victims in sexual harassment investigations among soldiers. It was therefore important to explore and understand the link between the perceived risk of sexual harassment and the protective behaviours adopted by both victim and non-victim soldiers, if suitable interventions were to be proposed for the military camps, as this is an under-explored area. Four major themes were explored in order to draw the study conclusions, as shown hereunder:
(1) The situation on the perceived risk of being sexually harassed among Zambian soldiers.
The findings on perceptions of the risk of being sexually harassed revealed a high magnitude for both verbal and non-verbal actions. Ranks between private and corporal continued to be affected, while ranks above sergeant were also said to have experienced these actions within the same category. This situation gave rise to a high perceived awareness of the risk factors of being sexually harassed among the currently affected category of victim and non-victim soldiers.
(2) Risk factors associated with sexual harassment among Zambian soldiers.
The findings on the risk factors associated with sexual harassment were measured as (i) perpetrator characteristics, (ii) individual weaknesses and (iii) military characteristics, from the perspective of both victim and non-victim soldiers. Male/female bullying and discrimination were among the recorded perpetrator characteristics. Furthermore, non-reporting of perpetrators was recorded as an individual victim weakness, which was not the case for the non-victims. The lack of written sexual harassment mitigation measures was recorded as a military characteristic that was also a risk factor for the prevalence of sexual harassment.
(3) Protective behaviors Zambian soldiers adopt against the risk factors of being sexually harassed.
The new findings on protective behaviours against the risk factors of sexual harassment deviated from existing theory. It was established that victims never adopted protective behaviours, fearing revenge, whereas non-victims protected themselves because they feared being affected health-wise. In both cases, the new explanation of the phenomenon was motivation arising from a high perceived awareness of the risk factors of sexual harassment; hence Matakala's theory, among others.
(4) Perceived risks of being sexually harassed and how they motivate protective behaviors among Zambian soldiers.
The findings from the non-victims on how the high perceived awareness of the risk of being sexually harassed motivates protective behaviours revealed three protective initiatives: escape and evasion, the stick, and avoidance of lone movement. Escape and evasion was based on the principle of being able to see the known perpetrator first, so that it is easy to avoid them. The avoidance of lone movement principle was meant to avoid coming into the presence of the known perpetrator while alone. Lastly, the stick principle ensured that movement towards the known perpetrator was done in a group of four, making it difficult for harassment to take place.
Lastly, the resultant non-victim protective initiatives were later practically validated by the victim soldiers, as they also knew the bullying/discriminatory perpetrator characteristics. The non-victim protective initiatives also worked effectively for the victims, as shown by their testimonies.
Suggestions for Future Research Studies
Although the findings of this study can be extrapolated to the commissioned ranks, it is recommended that the next research study include that category, again with respect to the concepts of perceived risk and protective behaviour. The present study has acted as a stepping stone for further research.
Topic: Perceived Risks Of Sexual Harassment And Protective Behaviour Among Zambian Soldiers In Selected Military Camps.
Dear participants, This serves to give you an understanding of the research and procedures that will be followed.
Similar information in this form will be read to you alongside the questions with regard to each objective and its research instrument.
Further the implications for your participation are explained below, finally you are asked to sign this form to indicate that you have agreed to participate in this exercise.
Thanking you in advance.
Description
This is an educational research. The researcher is a student at the University of Zambia pursuing a Doctor of Philosophy Degree in Gender Studies.
This research is a major requirement for the researcher to complete this program. Therefore, this study is purely academic.
Purpose
The researcher"s topic is: Perceived Risks Of Sexual Harassment And Protective Behaviour Among Zambian Soldier In Selected Military Camps.
The researcher is interested in understanding how both the harassed (victims) and the non-harassed (non-victims) perceive the risks of sexual harassment, and in the resultant protective behaviours. The evidence could be used to prevent or reduce sexual harassment among Zambian soldiers in selected military camps.
Consent
Participation in the exercise is voluntary. You are free to decline to participate in this exercise.
Confidentiality/Sharing of Findings
All data collected from this research is treated with utmost confidentiality. Participants are assured that they will remain anonymous and untraceable in this research.
It is against this background that participants will not be identified by their actual names; a number or pseudonyms will be used instead.
Rights of Participants
All efforts will be taken to ensure that the rights of participants as per research ethics are protected and respected. Participants are assured that they are free to ask for clarification at any point of the exercise and to inform the researcher if they feel uncomfortable about any procedure in the research. Your consent to this request will be highly valued and appreciated.
Uses of Information
The information obtained from you may help in decision-making by the military authorities aimed at improving the situation on the topic under study. Most of all, it may also equip military individuals with formal skills for individual protective behaviours against sexual harassment.
Individual Risks
There are no risks, because, as per research ethics, anonymity and confidentiality will be assured both during and after the research. Participants will therefore remain untraceable.
Benefits to Participants
There will be no direct benefits, but your participation is likely to help us understand how both the harassed (victims) and the non-harassed (non-victims) perceive the risks of sexual harassment, and the resultant protective behaviours.
Reimbursements
The researcher will not provide any incentive to take part in the research. However, we will give you [provide a figure if money is involved] for time and travel expenses if and when applicable.
Duration/ Research Intervention
This research will take up to three months in total. During this time we shall visit you up to three or four times with each of the face to face oral interviews taking up to one and half hours each. This therefore will involve your physical participation both at individual or group level. You have been purposefully selected to help in understanding of the topic under discussion.
Computer Number 2016144605
Declaration of Consent
I have read or heard, and fully understand, this document concerning the research and its procedures. Therefore, I have voluntarily agreed to participate in this study.
Appendix 3: Semi-Structured Interview Guide For Soldiers That Include Both Male And Female Participants
You have been purposefully selected for this individual face-to-face formal conversation in order to understand the topic above. The researcher feels that your affirmative response from the Sexual Harassment Experience Questionnaire [SHEQ] on the concepts of perceived risk and sexual harassment will give rise to an in-depth understanding of your protective behavioural situation as a female or male soldier in the non-commissioned ranks.
In addition, information collected through this study is strictly for academic purposes only and therefore shall be kept confidential and no name or any identity shall be attributed to you. Furthermore, you are free to choose to participate in this research and you can also choose to pull out any time you feel uncomfortable.
I would appreciate it if you could spare some time to answer the questions in my interview guide, because your participation is highly valued.
During the interview further probing or follow up questions based on your answers will also be asked.
Question One:
(1) How is the situation of perceived risk of being sexually harassed among Zambian soldiers in selected military camps?
Question Items (for question 1)
Please tell me about yourself.
Kindly tell me about your actual sexual harassment experience.
Strong extensions for $q$-summing operators acting in $p$-convex Banach function spaces for $1 \le p \le q$
Let $1\le p\le q<\infty$ and let $X$ be a $p$-convex Banach function space over a $\sigma$-finite measure $\mu$. We combine the structure of the spaces $L^p(\mu)$ and $L^q(\xi)$ for constructing the new space $S_{X_p}^{\,q}(\xi)$, where $\xi$ is a probability Radon measure on a certain compact set associated to $X$. We show some of its properties, and the relevant fact that every $q$-summing operator $T$ defined on $X$ can be continuously (strongly) extended to $S_{X_p}^{\,q}(\xi)$. This result turns out to be a mixture of the Pietsch and Maurey-Rosenthal factorization theorems, which provide (strong) factorizations for $q$-summing operators through $L^q$-spaces when $1 \le q \le p$. Thus, our result completes the picture, showing what happens in the complementary case $1\le p\le q$, opening the door to the study of the multilinear versions of $q$-summing operators also in these cases.
Introduction
Fix 1 ≤ p ≤ q < ∞ and let T : X → E be a Banach space valued linear operator defined on a saturated order semi-continuous Banach function space X related to a σ-finite measure µ. In this paper we prove an extension theorem for T in the case when T is q-summing and X is p-convex. In order to do this, we first define and analyze a new class of Banach function spaces denoted by S^q_{X_p}(ξ), which have some good properties, mainly order continuity and p-convexity. The space S^q_{X_p}(ξ) is constructed by using the spaces L^p(µ) and L^q(ξ), where ξ is a finite positive Radon measure on a certain compact set associated to X.
Corollary 5.2 states the desired extension for T. Namely, if T is q-summing and X is p-convex then T can be strongly extended continuously to a space of the type S^q_{X_p}(ξ). Here we use the term "strongly" for this extension to remark that the map carrying X into S^q_{X_p}(ξ) is actually injective; as the reader will notice (Proposition 3.1), this is one of the goals of our result. In order to develop our arguments, we introduce a new geometric tool which we call the family of p-strongly q-concave operators. The inclusion of X into S^q_{X_p}(ξ) turns out to belong to this family; in particular, it is q-concave.
If T is q-summing then it is p-strongly q-concave (Proposition 5.1). Actually, in Theorem 4.4 we show that in the case when X is p-convex, T can be continuously extended to a space S^q_{X_p}(ξ) if and only if T is p-strongly q-concave. This result can be understood as an extension of some well-known relevant factorizations of operator theory:
(I) Maurey-Rosenthal factorization theorem: if T is q-concave and X is q-convex and order continuous, then T can be extended to a weighted L^q-space related to µ, see for instance [3, Corollary 5]. Several generalizations and applications of the ideas behind this fundamental factorization theorem have been recently obtained, see [1,2,4,5,9].
(II) Pietsch factorization theorem: if T is q-summing then it factors through a closed subspace of L^q(ξ), where ξ is a probability Radon measure on a certain compact set associated to X, see for instance [6, Theorem 2.13].
In Theorem 4.4, the extreme case p = q gives a Maurey-Rosenthal type factorization, while the other extreme case p = 1 gives a Pietsch type factorization. We must also say that our generalization will allow us to face the problem of the factorization of several p-summing types of multilinear operators from products of Banach function spaces, a topic of current interest, since it allows us to understand the factorization of q-summing operators from p-convex function lattices from a unified point of view, not depending on the order relation between p and q.
As a consequence of Theorem 4.4, we also prove a kind of Kakutani representation theorem (see for instance [7, Theorem 1.b.2]) through the spaces S^q_{X_p}(ξ) for p-convex Banach function spaces which are p-strongly q-concave (Corollary 4.5).
Preliminaries
Let (Ω, Σ, µ) be a σ-finite measure space and denote by L^0(µ) the space of all measurable real functions on Ω, where functions which are equal µ-a.e. are identified. By a Banach function space (briefly B.f.s.) we mean a Banach space X ⊂ L^0(µ) with norm ‖·‖_X, such that if f ∈ L^0(µ), g ∈ X and |f| ≤ |g| µ-a.e. then f ∈ X and ‖f‖_X ≤ ‖g‖_X. In particular, X is a Banach lattice with the µ-a.e. pointwise order, in which convergence in norm of a sequence implies convergence µ-a.e. of some subsequence. A B.f.s. X is said to be saturated if there exists no A ∈ Σ with µ(A) > 0 such that fχ_A = 0 µ-a.e. for all f ∈ X, or equivalently, if X has a weak unit (i.e. g ∈ X such that g > 0 µ-a.e.).
The Köthe dual of a B.f.s. X is the space X′ given by the functions h ∈ L^0(µ) such that ∫ |fh| dµ < ∞ for all f ∈ X, endowed with the norm ‖h‖_{X′} = sup_{f∈B_X} ∫ |fh| dµ. Here, as usual, B_X denotes the closed unit ball of X. Each function h ∈ X′ defines a functional ζ(h) on X by ⟨ζ(h), f⟩ = ∫ hf dµ for all f ∈ X. In fact, X′ is isometrically order isomorphic (via ζ) to a closed subspace of the topological dual X* of X.
From now on, a B.f.s. X will be assumed to be saturated. If for every f, f_n ∈ X such that 0 ≤ f_n ↑ f µ-a.e. it follows that ‖f_n‖_X ↑ ‖f‖_X, then X is said to be order semi-continuous. This is equivalent to ζ(X′) being a norming subspace of X*, i.e. ‖f‖_X = sup_{h∈B_{X′}} ∫ |fh| dµ for all f ∈ X. A B.f.s. X is order continuous if for every f, f_n ∈ X such that 0 ≤ f_n ↑ f µ-a.e., it follows that f_n → f in norm. In this case, X′ can be identified with X*.
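For orientation, a standard example on the Lebesgue scale (my own addition, not part of the paper) illustrates the two notions just defined:

```latex
\textbf{Example.} For $1 \le s < \infty$, $X = L^s(\mu)$ is order continuous:
if $0 \le f_n \uparrow f$ $\mu$-a.e.\ with $f \in L^s(\mu)$, then
$|f - f_n|^s \le |f|^s$ and $|f - f_n|^s \downarrow 0$ $\mu$-a.e., so by
dominated convergence $\|f - f_n\|_s \to 0$.
In contrast, $X = L^\infty[0,1]$ is order semi-continuous but not order
continuous: taking $f_n = \chi_{[1/n,\,1]}$ we have
$0 \le f_n \uparrow f = \chi_{(0,\,1]}$ a.e., yet
$\|f - f_n\|_\infty = 1$ for all $n$.
```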
For general issues related to B.f.s.' see [7], [8] and [10, Ch. 15]. A B.f.s. X is p-convex (1 ≤ p < ∞) if there is a constant C > 0 such that
‖(Σ_{i=1}^n |f_i|^p)^{1/p}‖_X ≤ C (Σ_{i=1}^n ‖f_i‖_X^p)^{1/p}
for every finite subset (f_i)_{i=1}^n ⊂ X. In this case, M_p(X) will denote the smallest constant C satisfying the above inequality. Note that M_p(X) ≥ 1. A relevant fact is that every p-convex B.f.s. X has an equivalent norm for which X is p-convex with constant M_p(X) = 1, see [7, Proposition 1.d.8].
The p-th power of a B.f.s. X is the space defined as X_p = {f ∈ L^0(µ) : |f|^{1/p} ∈ X}, endowed with the quasi-norm ‖f‖_{X_p} = ‖ |f|^{1/p} ‖_X^p for f ∈ X_p. Note that X_p is always complete, see the proof of [8, Proposition 2.22]. If X is p-convex with constant M_p(X) = 1, from [3, Lemma 3], ‖·‖_{X_p} is a norm and so X_p is a B.f.s. Note that X_p is saturated if and only if X is so. The same holds for the properties of being order continuous and order semi-continuous.
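A quick sanity check of the p-th power construction on the $L^s$ scale (my own example, not from the paper):

```latex
\textbf{Example.} Let $X = L^s(\mu)$ with $s \ge p \ge 1$. Then
\[
  X_p = \{ f \in L^0(\mu) : |f|^{1/p} \in L^s(\mu) \} = L^{s/p}(\mu),
\]
and for $f \in X_p$,
\[
  \|f\|_{X_p} = \big\| \, |f|^{1/p} \big\|_{L^s}^{\,p}
            = \Big( \int |f|^{s/p} \, d\mu \Big)^{p/s}
            = \|f\|_{L^{s/p}} .
\]
So the $p$-th power of $L^s$ is exactly $L^{s/p}$; it carries a genuine norm
(and is thus a B.f.s.) precisely because $L^s$ is $p$-convex with constant
$1$ when $s \ge p$.
```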
3. The space S^q_{X_p}(ξ)
Let 1 ≤ p ≤ q < ∞ and let X be a saturated p-convex B.f.s. We can assume without loss of generality that the p-convexity constant M_p(X) is equal to 1. Then, X_p and (X_p)′ are saturated B.f.s.'. Consider the topology σ((X_p)′, X_p) on (X_p)′ defined by the elements of X_p. Note that the subset B^+_{(X_p)′} of all positive elements of the closed unit ball of (X_p)′ is compact for this topology.
Let ξ be a finite positive Radon measure on B^+_{(X_p)′}. For f ∈ L^0(µ), consider the function φ_f : B^+_{(X_p)′} → [0, ∞] given by φ_f(h) = ∫_Ω |f|^p h dµ. In the case when f ∈ X, since |f|^p ∈ X_p, it follows that φ_f is continuous and so measurable. For a general f ∈ L^0(µ), by Lemma 2.1 we can take a sequence (f_n)_{n≥1} ⊂ X such that 0 ≤ f_n ↑ |f| µ-a.e. Applying the monotone convergence theorem, we have that φ_{f_n} ↑ φ_f pointwise and so φ_f is measurable. Then, we can consider the integral ∫ φ_f^{q/p} dξ and define the following space:
S^q_{X_p}(ξ) = { f ∈ L^0(µ) : ‖f‖_{S^q_{X_p}(ξ)} = ( ∫_{B^+_{(X_p)′}} ( ∫_Ω |f|^p h dµ )^{q/p} dξ(h) )^{1/q} < ∞ }.
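As a sanity check (my own computation, assuming the norm has the integral form $\|f\| = (\int \varphi_f^{q/p}\, d\xi)^{1/q}$ with $\varphi_f(h) = \int |f|^p h \, d\mu$), the inclusion of $X$ into this space is automatically norm-decreasing when $\xi$ is a probability measure:

```latex
\textbf{Sketch.} For $f \in X$ and $h \in B^+_{(X_p)'}$, the K\"othe-dual
H\"older inequality gives
\[
  \varphi_f(h) = \int_\Omega |f|^p h \, d\mu
  \le \big\| \, |f|^p \big\|_{X_p} \, \|h\|_{(X_p)'}
  \le \big\| \, |f|^p \big\|_{X_p} = \|f\|_X^p ,
\]
so $\varphi_f(h)^{q/p} \le \|f\|_X^q$ for every $h$, and integrating against
the probability measure $\xi$,
\[
  \|f\|_{S^q_{X_p}(\xi)}
  = \Big( \int_{B^+_{(X_p)'}} \varphi_f(h)^{q/p} \, d\xi(h) \Big)^{1/q}
  \le \|f\|_X .
\]
```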
In general, ‖·‖_{S^q_{X_p}(ξ)} is not a norm. For instance, if ξ is the Dirac measure at some h_0 ∈ B^+_{(X_p)′} such that A = {ω ∈ Ω : h_0(ω) = 0} satisfies µ(A) > 0, then taking f = gχ_A ∈ X with g being a weak unit of X, we have that ‖f‖_{S^q_{X_p}(ξ)} = 0 although f ≠ 0. Note that, for N fixed, (A^N_n)_{n≥1} increases. Taking the limit as n → ∞ and applying the monotone convergence theorem twice, it follows from (3.1) that Σ_{n≥1} f_n ∈ L^0(µ). Again applying the monotone convergence theorem, and fixing h ∈ B, we have that |f − f_n|^p h ↓ 0 µ-a.e. and |f − f_n|^p h ≤ |f|^p h µ-a.e. Then, applying the dominated convergence theorem, ∫_Ω |f(ω) − f_n(ω)|^p h(ω) dµ(ω) ↓ 0.
4. p-strongly q-concave operators
Let 1 ≤ p ≤ q < ∞ and let T : X → E be a linear operator from a saturated B.f.s. X into a Banach space E. Recall that T is said to be q-concave if there exists a constant C > 0 such that
(Σ_{i=1}^n ‖T f_i‖_E^q)^{1/q} ≤ C ‖(Σ_{i=1}^n |f_i|^q)^{1/q}‖_X
for every finite subset (f_i)_{i=1}^n ⊂ X. The smallest possible value of C will be denoted by M_q(T). For issues related to q-concavity see for instance [7, Ch. 1.d]. We introduce a slightly stronger notion than q-concavity: taking 1 < r ≤ ∞ with 1/r = 1/p − 1/q, T will be called p-strongly q-concave if there exists C > 0 such that
(Σ_{i=1}^n ‖T f_i‖_E^q)^{1/q} ≤ C sup_{(α_i)∈B_{ℓ^r}} ‖(Σ_{i=1}^n |α_i f_i|^p)^{1/p}‖_X
for every finite subset (f_i)_{i=1}^n ⊂ X. In this case, M_{p,q}(T) will denote the smallest constant C satisfying the above inequality. Noting that r/p and q/p are conjugate exponents, it is clear that every p-strongly q-concave operator is q-concave and so continuous, and moreover ‖T‖ ≤ M_q(T) ≤ M_{p,q}(T). As usual, we will say that X is p-strongly q-concave if the identity map I : X → X is so, and in this case we denote M_{p,q}(X) = M_{p,q}(I).
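The two extreme cases can be read off directly from the defining inequality. The computation below is my own sketch, assuming the standard formulation of the definition with $1/r = 1/p - 1/q$:

```latex
\textbf{Extreme cases (sketch).}
For $p = q$ we have $r = \infty$, and
\[
  \sup_{(\alpha_i) \in B_{\ell^\infty}}
    \Big\| \Big( \sum_{i=1}^n |\alpha_i f_i|^q \Big)^{1/q} \Big\|_X
  = \Big\| \Big( \sum_{i=1}^n |f_i|^q \Big)^{1/q} \Big\|_X ,
\]
so $q$-strong $q$-concavity is exactly $q$-concavity.
For general $p \le q$, the pointwise H\"older inequality with conjugate
exponents $q/p$ and $r/p$ gives, for $(\alpha_i) \in B_{\ell^r}$,
\[
  \sum_{i=1}^n |\alpha_i|^p |f_i|^p
  \le \Big( \sum_{i=1}^n |\alpha_i|^r \Big)^{p/r}
      \Big( \sum_{i=1}^n |f_i|^q \Big)^{p/q}
  \le \Big( \sum_{i=1}^n |f_i|^q \Big)^{p/q} ,
\]
which explains why every $p$-strongly $q$-concave operator is $q$-concave
with $M_q(T) \le M_{p,q}(T)$.
```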
Our goal is to get a continuous extension of T to a space of the type S^q_{X_p}(ξ) in the case when T is p-strongly q-concave and X is p-convex. To this end we will need to describe the supremum on the right-hand side of the p-strong q-concavity inequality in terms of the Köthe dual of X_p.
Lemma 4.1. If X is p-convex and order semi-continuous then
sup_{(α_i)∈B_{ℓ^r}} ‖(Σ_{i=1}^n |α_i f_i|^p)^{1/p}‖_X = sup_{h∈B^+_{(X_p)′}} (Σ_{i=1}^n (∫_Ω |f_i|^p h dµ)^{q/p})^{1/q}
for every (f_i)_{i=1}^n ⊂ X.
Proof. Given (f_i)_{i=1}^n ⊂ X, the equality follows since X_p is order semi-continuous, as X is so, and (ℓ^{q/p})* = ℓ^{r/p}, as r/p is the conjugate exponent of q/p.
In the following remark, from Lemma 4.1, we easily obtain an example of a p-strongly q-concave operator.
Remark 4.2. Suppose that X is p-convex and order semi-continuous. For every finite positive Radon measure ξ on B^+_{(X_p)′} satisfying (3.1), it follows that the inclusion map i : X → S^q_{X_p}(ξ) is p-strongly q-concave.
Theorem 4.3. If T is p-strongly q-concave and X is p-convex and order semi-continuous, then there exists a probability Radon measure ξ on B^+_{(X_p)′} satisfying (3.1) such that
‖T f‖_E ≤ M_{p,q}(T) ‖f‖_{S^q_{X_p}(ξ)} for all f ∈ X.   (4.1)
Proof. Recall that the stated topology on (X_p)′ is σ((X_p)′, X_p), the one defined by the elements of X_p. For each finite subset M ⊂ X (with possibly repeated elements), note that ψ_M attains its supremum, as it is continuous on a compact set, so there exists h_M ∈ B^+_{(X_p)′} where it is attained. Then, the p-strong q-concavity of T, together with Lemma 4.1, gives the required estimate. Consider now the continuous maps φ_M : B^+_{(X_p)′} → R and the associated sets A and B; by a Hahn-Banach separation argument there exist ξ ∈ C(B^+_{(X_p)′})* and α ∈ R such that ⟨ξ, φ⟩ < α ≤ ⟨ξ, φ_M⟩ for all φ ∈ A and φ_M ∈ B. Since every negative constant function is in A, it follows that 0 ≤ α. Even more, α = 0, as the constant function equal to 0 is just φ_{{0}} ∈ B. It is routine to see that ⟨ξ, φ⟩ ≥ 0 whenever φ ∈ C(B^+_{(X_p)′}) is such that φ(h) ≥ 0 for all h ∈ B^+_{(X_p)′}. Then, ξ is a positive linear functional on C(B^+_{(X_p)′}) and so it can be interpreted as a finite positive Radon measure on B^+_{(X_p)′}. Hence, ⟨ξ, φ_M⟩ ≥ 0 for every finite subset M ⊂ X. Dividing by ξ(B^+_{(X_p)′}), we can suppose that ξ is a probability measure. Then, for M = {f} with f ∈ X, we obtain the desired estimate and so (4.1) holds.
Actually, Theorem 4.3 says that we can find a probability Radon measure ξ on B^+_{(X_p)′} such that T : X → E is continuous when X is considered with the norm of the space S^q_{X_p}(ξ). In the next result we will see how to extend T continuously to S^q_{X_p}(ξ). Even more, we will show that this extension is possible if and only if T is p-strongly q-concave.
Theorem 4.4. Suppose that X is p-convex and order semi-continuous. The following statements are equivalent:
(a) T is p-strongly q-concave.
(b) There exists a probability Radon measure ξ on B^+_{(X_p)′} satisfying (3.1) such that T can be extended continuously to S^q_{X_p}(ξ), i.e. there is a factorization for T as T = T̃ ∘ i, where T̃ : S^q_{X_p}(ξ) → E is a continuous linear operator and i : X → S^q_{X_p}(ξ) is the inclusion map.
That is, from Lemma 4.1, T is p-strongly q-concave with M_{p,q}(T) ≤ ‖T̃‖.
A first application of Theorem 4.4 is the following Kakutani type representation theorem (see for instance [7, Theorem 1.b.2]) for B.f.s.' which are order semi-continuous, p-convex and p-strongly q-concave.
Corollary 4.5. Suppose that X is p-convex and order semi-continuous. The following statements are equivalent: (a) X is p-strongly q-concave. (b) There exists a probability Radon measure ξ on B^+_{(X_p)′} satisfying (3.1) such that X = S^q_{X_p}(ξ) with equivalent norms.
Proof. (a) ⇒ (b) The identity map I : X → X is p-strongly q-concave as X is so. Then, from Theorem 4.4, there exists a probability Radon measure ξ on B^+_{(X_p)′} satisfying (3.1) such that I factors as I = Ĩ ∘ i, where Ĩ is a continuous linear operator with ‖Ĩ‖ = M_{p,q}(X) and i is the inclusion map. Since ξ is a probability measure, we have that ‖f‖_{S^q_{X_p}(ξ)} ≤ ‖f‖_X for all f ∈ X, see the proof of Proposition 3.1. Let 0 ≤ f ∈ S^q_{X_p}(ξ). By Lemma 2.1, we can take (f_n)_{n≥1} ⊂ X such that 0 ≤ f_n ↑ f µ-a.e. Since S^q_{X_p}(ξ) is order continuous, it follows that f_n → f in S^q_{X_p}(ξ) and so f_n = Ĩ(f_n) → Ĩ(f) in X. Then, there is a subsequence of (f_n)_{n≥1} converging µ-a.e. to Ĩ(f) and hence f = Ĩ(f) ∈ X. For a general f ∈ S^q_{X_p}(ξ), writing f = f⁺ − f⁻, where f⁺ and f⁻ are the positive and negative parts of f respectively, we have that f = Ĩ(f⁺) − Ĩ(f⁻) = Ĩ(f) ∈ X. Therefore, X = S^q_{X_p}(ξ) and Ĩ is the identity map. Moreover, ‖f‖_{S^q_{X_p}(ξ)} ≤ ‖f‖_X ≤ M_{p,q}(X) ‖f‖_{S^q_{X_p}(ξ)} for all f ∈ X.
(b) ⇒ (a) From Remark 4.2 it follows that the identity map I : X → X is p-strongly q-concave.
Note that under the conditions of Corollary 4.5, if X is p-strongly q-concave with constant M_{p,q}(X) = 1, then X = S^q_{X_p}(ξ) with equal norms.
Recall that a linear operator T : X → E between Banach spaces is said to be q-summing (1 ≤ q < ∞) if there exists a constant C > 0 such that
(Σ_{i=1}^n ‖T x_i‖_E^q)^{1/q} ≤ C sup_{x*∈B_{X*}} (Σ_{i=1}^n |⟨x_i, x*⟩|^q)^{1/q}
for every finite subset (x_i)_{i=1}^n ⊂ X. Denote by π_q(T) the smallest possible value of C. Information about q-summing operators can be found in [6].
One of the main relations between summability and concavity for operators defined on a B.f.s. X is that every q-summing operator is q-concave. This is a consequence of a direct calculation which shows that for every (f_i)_{i=1}^n ⊂ X and x* ∈ X* it follows that
(Σ_{i=1}^n |⟨f_i, x*⟩|^q)^{1/q} ≤ ‖x*‖_{X*} ‖(Σ_{i=1}^n |f_i|^q)^{1/q}‖_X,   (5.1)
see for instance [7, Proposition 1.d.9] and the comments below it. However, this calculation can be slightly improved to obtain the following result.
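A short derivation of an inequality of the type (5.1), included as my own sketch (the lattice-duality argument is standard):

```latex
\textbf{Sketch.} Fix $x^* \in X^*$ and pick scalars
$(\beta_i)_{i=1}^n \in B_{\ell^{q'}}$ with $\beta_i \ge 0$ such that
\[
  \Big( \sum_{i=1}^n |\langle f_i, x^* \rangle|^q \Big)^{1/q}
  = \sum_{i=1}^n \beta_i \, |\langle f_i, x^* \rangle|
  \le \Big\langle \sum_{i=1}^n \beta_i |f_i| , \; |x^*| \Big\rangle ,
\]
using that $X^*$ is a Banach lattice. By the pointwise H\"older inequality,
$\sum_i \beta_i |f_i|
 \le (\sum_i \beta_i^{q'})^{1/q'} (\sum_i |f_i|^q)^{1/q}
 \le (\sum_i |f_i|^q)^{1/q}$, hence
\[
  \Big( \sum_{i=1}^n |\langle f_i, x^* \rangle|^q \Big)^{1/q}
  \le \|x^*\|_{X^*} \,
      \Big\| \Big( \sum_{i=1}^n |f_i|^q \Big)^{1/q} \Big\|_X .
\]
```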
Proposition 5.1. Let 1 ≤ p ≤ q < ∞. Every q-summing linear operator T : X → E from a B.f.s. X into a Banach space E is p-strongly q-concave, with M_{p,q}(T) ≤ π_q(T).
Proof. Let 1 < r ≤ ∞ be such that 1/r = 1/p − 1/q, and consider a finite subset (f_i)_{i=1}^n ⊂ X. We only have to prove that M_{p,q}(T) ≤ π_q(T). Fix x* ∈ B_{X*}. Noting that q/p and r/p are conjugate exponents and using the inequality (5.1), we obtain the required estimate; taking the supremum over x* ∈ B_{X*}, we get the conclusion.
From Proposition 5.1, Theorem 4.4 and Remark 4.2, we obtain the final result.
Corollary 5.2. Set 1 ≤ p ≤ q < ∞. Let X be a saturated order semi-continuous p-convex B.f.s. and consider a q-summing linear operator T : X → E with values in a Banach space E. Then there exists a probability Radon measure ξ on B^+_{(X_p)'} satisfying (3.1) such that T can be factored through S^q_{X_p}(ξ) as T = T̃ ∘ i, where T̃ is a continuous linear operator with ‖T̃‖ ≤ π_q(T) and i is the inclusion map, which turns out to be p-strongly q-concave, and so q-concave.
Observe that what we obtain in Corollary 5.2 is a proper extension for T, and not just a factorization like the one obtained in the Pietsch theorem for q-summing operators through a subspace of an L_q-space.
On the relative coexistence of fixed points and period-two solutions near border-collision bifurcations
At a border-collision bifurcation a fixed point of a piecewise-smooth map intersects a surface where the functional form of the map changes. Near a generic border-collision bifurcation there are two fixed points, each of which exists on one side of the bifurcation. A simple eigenvalue condition indicates whether the fixed points exist on different sides of the bifurcation (this case can be interpreted as the persistence of a single fixed point), or on the same side of the bifurcation (in which case the bifurcation is akin to a saddle-node bifurcation). A similar eigenvalue condition indicates whether or not there exists a period-two solution on one side of the bifurcation. Previously these conditions have been combined to obtain five distinct scenarios for the existence and relative coexistence of fixed points and period-two solutions near border-collision bifurcations. In this Letter, it is shown that one of these scenarios, namely that two fixed points exist on one side of the bifurcation and a period-two solution exists on the other side of the bifurcation, cannot occur. The remaining four scenarios are feasible. Therefore there are exactly four distinct scenarios for fixed points and period-two solutions near border-collision bifurcations.
Introduction
A piecewise-smooth map on M ⊂ R^N is a discrete-time dynamical system x ↦ F_j(x), x ∈ M_j, (1.1) where the regions M_j form a partition of the domain M, and each F_j : M_j → M is a smooth function. The boundaries of the M_j, termed switching manifolds, are assumed to be either smooth or piecewise-smooth surfaces. Piecewise-smooth maps are used to model oscillatory dynamics in systems involving abrupt events, such as mechanical systems with impacts [1], power electronics with switching events [2], and economic systems with non-negativity conditions or optimisation [3]. As parameters are varied, a fixed point of (1.1) may intersect a switching manifold. If, near the intersection, the switching manifold is smooth, (1.1) is continuous, and the derivatives D_X F_j are bounded, then the intersection is known as a border-collision bifurcation [4]. Dynamics near a border-collision bifurcation of (1.1) are well-approximated by a piecewise-linear, continuous map, which can be put in the form x ↦ A_L x + bµ for s ≤ 0, and x ↦ A_R x + bµ for s ≥ 0, (1.2) where, throughout this Letter, s = e_1^T x denotes the first component of x ∈ R^N. In (1.2), A_L and A_R are real-valued N × N matrices, b ∈ R^N, and µ ∈ R is the primary bifurcation parameter: the border-collision bifurcation occurs at x = 0 when µ = 0. The requirement that (1.2) is continuous implies A_R = A_L + ξ e_1^T, (1.3) for some ξ ∈ R^N. A fixed point of (1.2) must be a fixed point of one of the two half-maps of (1.2): f_L(x) = A_L x + bµ, f_R(x) = A_R x + bµ. (1.4) As long as 1 is not an eigenvalue of A_L and A_R, f_L and f_R have unique fixed points, x_L = (I − A_L)^{-1} bµ, x_R = (I − A_R)^{-1} bµ. (1.5) The point x_L is a fixed point of (1.2), and said to be admissible, if s_L ≤ 0. Similarly, x_R is admissible if s_R ≥ 0. Since x_L and x_R are linear functions of µ, generically x_L and x_R are each admissible for exactly one sign of µ. In general, for the purposes of characterising the behaviour of (1.2), it suffices to consider only the sign of µ, because the structure of the dynamics of (1.2) is independent of the magnitude of µ.
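As a concrete numerical companion to these formulas, a sketch only: the matrices A_L, A_R and the vector b below are hypothetical illustrative choices, not values from the Letter.

```python
import numpy as np

# Illustrative 2D instance of the border-collision normal form:
# x -> A_L x + b*mu for s <= 0, x -> A_R x + b*mu for s >= 0,
# where s is the first component of x.
A_L = np.array([[0.5, 1.0], [-0.2, 0.0]])
A_R = np.array([[-1.5, 1.0], [-0.2, 0.0]])   # differs from A_L only in its first column
b = np.array([1.0, 0.0])
I = np.eye(2)

def fixed_points(mu):
    """Return (x_L, x_R), the unique fixed points of the two half-maps."""
    x_L = np.linalg.solve(I - A_L, b * mu)
    x_R = np.linalg.solve(I - A_R, b * mu)
    return x_L, x_R

x_L, x_R = fixed_points(mu=1.0)
# x_L is admissible if its first component s_L <= 0; x_R if s_R >= 0.
print("s_L =", x_L[0], "admissible:", x_L[0] <= 0)
print("s_R =", x_R[0], "admissible:", x_R[0] >= 0)
```

For this choice of matrices and mu = 1 only x_R is admissible; flipping the sign of mu exchanges the admissibility, which is the generic "one sign each" behaviour described in the text.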
Other invariant sets may be created in border-collision bifurcations, such as periodic solutions, invariant circles, and chaotic sets [4,5,6,7,8,9], as well as exotic dynamics such as multidimensional attractors [10], and infinitely many coexisting attractors [11]. This Letter concerns only fixed points and period-two solutions. Period-two solutions were first explored by Mark Feigin in the 1970's [12,13], and were described more recently in [4,14]. The creation of a period-two solution in a border-collision bifurcation has different scaling properties than a period-doubling bifurcation, and such differences can have important physical interpretations [15].
In generic situations, (1.2) either has no period-two solution for either sign of µ, or has an LR-cycle (a period-two solution consisting of one point on each side of s = 0) for exactly one sign of µ [12]. In [13], Feigin showed that the relative coexistence of the fixed points x L and x R is determined by a simple condition on the eigenvalues of A L and A R , and that a similar condition indicates whether or not an LR-cycle exists for one sign of µ. This is one of the most far-reaching results in the bifurcation theory of nonsmooth dynamical systems, because it applies to maps of any number of dimensions. Centre manifold analysis, which is the key tool for dimension reduction, requires local differentiability and so usually cannot be applied to bifurcations specific to nonsmooth dynamical systems, such as border-collision bifurcations [16].
By directly combining the two generic cases for the nature of both fixed points and period-two solutions, it appears that border-collision bifurcations can be categorised into five basic scenarios. In the absence of an LR-cycle there are two scenarios: either x_L and x_R are admissible for different signs of µ, Fig. 1-A, or x_L and x_R are admissible for the same sign of µ, Fig. 1-B.
If there exists an LR-cycle, and x_L and x_R are admissible for different signs of µ, then, trivially, the LR-cycle coexists with exactly one fixed point, Fig. 1-C. Finally, if there exists an LR-cycle, and x_L and x_R are admissible for the same sign of µ, it appears that there are two scenarios. The LR-cycle could either coexist with x_L and x_R, as in Fig. 1-D, or coexist with neither x_L nor x_R. In [13], Feigin noted that the latter scenario is not possible in one dimension (N = 1) in view of Sharkovskii's theorem [17]. Feigin further stated that this scenario is not possible for N = 2 (but did not provide a proof), and conjectured that the scenario is not possible for any N ∈ Z^+. The purpose of this Letter is to prove this conjecture.
Each of the four scenarios of Fig. 1 is possible for (1.2) in any number of dimensions. In Fig. 1 the scenarios are illustrated for (1.2) with N = 1, for which (1.2) is written as x ↦ a_L x + bµ for x ≤ 0, and x ↦ a_R x + bµ for x ≥ 0. The remainder of this Letter is organised as follows. Calculations for fixed points and period-two solutions of (1.2) are given in §2 and §3, respectively. The basic border-collision bifurcation scenarios formed by considering all generic possibilities for fixed points and period-two solutions are described in §4. In §5 it is proved that a non-degenerate period-two solution of (1.2) must coexist with a fixed point. Finally, §6 presents a brief summary and outlook.
Fixed points
In order to compare the values of s_L and s_R (the first components of x_L and x_R (1.5)), we let ϱ^T = e_1^T adj(I − A_L), (2.1) where adj(A) denotes the adjugate of a square matrix A. Recall, if A is nonsingular, then adj(A) = det(A) A^{-1}, so that, by (1.5), s_L = ϱ^T bµ / det(I − A_L) and s_R = e_1^T adj(I − A_R) bµ / det(I − A_R). (2.2) Since A_L and A_R differ in only their first columns (1.3), adj(I − A_L) and adj(I − A_R) have the same first row [9,14], that is, e_1^T adj(I − A_R) = ϱ^T. (2.3) By (2.2) and (2.3), if ϱ^T b = 0, then s_L = s_R = 0 for all µ. In this instance the fixed points do not move away from the switching manifold as µ is varied from zero, which runs counter to our genericity assumptions, and so we suppose ϱ^T b ≠ 0. Let σ+_J denote the number of real eigenvalues of A_J that are greater than 1, J = L, R. If 1 is not an eigenvalue of A_J, then sgn(det(I − A_J)) = (−1)^{σ+_J}, J = L, R, (2.4) and hence sgn(s_J) = (−1)^{σ+_J} sgn(ϱ^T bµ), J = L, R. (2.5) Recall, x_L is admissible if s_L ≤ 0, and x_R is admissible if s_R ≥ 0. Therefore, if σ+_L + σ+_R is an even number, then (−1)^{σ+_L} = (−1)^{σ+_R}, and hence by (2.5), sgn(s_L) = sgn(s_R). Thus in this case x_L and x_R are admissible for different signs of µ (persistence of a fixed point). Alternatively, if σ+_L + σ+_R is an odd number, then x_L and x_R are admissible for the same sign of µ (a nonsmooth-fold).
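The first-row property of the adjugates, Eq. (2.3), is easy to check numerically; a sketch with randomly generated matrices (illustrative, not from the Letter):

```python
import numpy as np

# Check: if A_L and A_R differ only in their first columns, then
# adj(I - A_L) and adj(I - A_R) have the same first row.  This holds
# because the first row of the adjugate consists of cofactors obtained
# by deleting the first column, which is exactly where the matrices differ.
rng = np.random.default_rng(0)
N = 4
A_L = rng.standard_normal((N, N))
A_R = A_L.copy()
A_R[:, 0] = rng.standard_normal(N)   # change only the first column

def adjugate(A):
    """Adjugate of a nonsingular matrix via adj(A) = det(A) * inv(A)."""
    return np.linalg.det(A) * np.linalg.inv(A)

row_L = adjugate(np.eye(N) - A_L)[0]
row_R = adjugate(np.eye(N) - A_R)[0]
print(np.allclose(row_L, row_R))   # True
```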
Period-two solutions
Let us first consider a period-two solution of (1.2) consisting of two points with s < 0. Points of this solution are fixed points of f_L ∘ f_L, with (f_L ∘ f_L)(x) = A_L² x + (I + A_L) bµ. If A_L does not have an eigenvalue of 1 or −1, as is generically the case, this period-two solution is unique, and therefore must coincide with x_L. Hence the period-two solution is really a fixed point, and we do not need to consider it further. We can similarly dismiss period-two solutions of (1.2) consisting of two points with s > 0. Therefore it remains to consider an LR-cycle {x_LR, x_RL}, where x_RL = f_L(x_LR) and x_LR = f_R(x_RL). (3.1) Expressions for the first components of x_LR and x_RL are given by the following lemma. Lemma 3.1 is a special case of a result for general periodic solutions of (1.2) derived in [9,18], and the reader is referred to these sources for a proof.
As in [13], we let σ−_J denote the number of real eigenvalues of A_J that are less than −1. If −1 is not an eigenvalue of A_J, then sgn(det(I + A_J)) = (−1)^{σ−_J}, J = L, R. (3.2) Lemma 3.1. If 1 is not an eigenvalue of A_R A_L, then the LR-cycle is unique (but not necessarily admissible) and s_LR = det(I + A_R) ϱ^T bµ / det(I − A_R A_L), s_RL = det(I + A_L) ϱ^T bµ / det(I − A_R A_L). (3.3) By (3.2) and (3.3), sgn(s_LR) = (−1)^{σ−_R} sgn(ϱ^T bµ det(I − A_R A_L)) and sgn(s_RL) = (−1)^{σ−_L} sgn(ϱ^T bµ det(I − A_R A_L)). (3.4)
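The LR-cycle can also be located numerically as a fixed point of the composed map, without appealing to the lemma; a sketch (the matrices and µ below are hypothetical illustrative choices, not values from the Letter):

```python
import numpy as np

# Locate the LR-cycle of the normal form as a fixed point of f_R ∘ f_L,
# then verify it is a genuine period-two orbit with one point on each
# side of the switching manifold s = 0.
A_L = np.array([[-1.2, 1.0], [-0.3, 0.0]])
A_R = np.array([[-1.5, 1.0], [-0.3, 0.0]])   # differs from A_L only in column 1
b = np.array([1.0, 0.0])
mu = -1.0
I = np.eye(2)

# x_left solves x = A_R (A_L x + b mu) + b mu, a fixed point of f_R ∘ f_L.
x_left = np.linalg.solve(I - A_R @ A_L, (I + A_R) @ b * mu)
x_right = A_L @ x_left + b * mu              # its image under f_L

print("s components:", x_left[0], x_right[0])
# Admissible LR-cycle: x_left has s <= 0 and x_right has s >= 0.
```

For this choice the two first components have opposite signs, so the computed orbit is an admissible LR-cycle for this sign of µ; for µ of the opposite sign the same orbit becomes virtual.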
The LR-cycle is admissible if s_LR ≤ 0 and s_RL ≥ 0. Therefore, by (3.4), if σ−_L + σ−_R is even, then sgn(s_LR) = sgn(s_RL), and so the LR-cycle is not admissible for all µ ≠ 0. Alternatively, if σ−_L + σ−_R is odd, then the LR-cycle is admissible for one sign of µ.
Feigin's classification
If the LR-cycle is admissible for one sign of µ, we would like to determine which fixed points it coexists with. To this end, we let σ+_LL denote the number of real eigenvalues of A_L² that are greater than 1. If 1 is not an eigenvalue of A_L², then sgn(det(I − A_L²)) = (−1)^{σ+_LL}. (4.1) In view of the simple factorisation I − A_L² = (I − A_L)(I + A_L), by (2.4), (3.2) and (4.1) we have (−1)^{σ+_LL} = (−1)^{σ+_L} (−1)^{σ−_L}. (4.2) The following theorem summarises the main results of [13]. All aspects of Theorem 4.1 follow from the results of the previous two sections, except those relating to the quantity σ+_LL + σ+_LR, and for a complete proof the reader is referred to [4,13,14]. i) If σ+_L + σ+_R and σ−_L + σ−_R are even, then x_L and x_R are admissible for different signs of µ, and the LR-cycle is not admissible for all µ ≠ 0. ii) If σ+_L + σ+_R is odd and σ−_L + σ−_R is even, then x_L and x_R are admissible for the same sign of µ, and the LR-cycle is not admissible for all µ ≠ 0.
iii) If σ + L + σ + R is even and σ − L + σ − R is odd, then x L and x R are admissible for different signs of µ, and the LR-cycle is admissible for one sign of µ.
iv) If σ+_L + σ+_R and σ−_L + σ−_R are odd and σ+_LL + σ+_LR is odd, then x_L, x_R and the LR-cycle are admissible for the same sign of µ.
v) If σ + L + σ + R and σ − L + σ − R are odd and σ + LL + σ + LR is even, then x L and x R are admissible for one sign of µ, and the LR-cycle is admissible for the other sign of µ.
The LR-cycle coexists with at least one fixed point

As a consequence of the following theorem, which is the main result of this Letter, if σ+_L + σ+_R and σ−_L + σ−_R are odd, then σ+_LL + σ+_LR is also odd. Therefore, scenario (v) of Theorem 4.1 cannot occur.
Theorem 5.1. Suppose 1 is not an eigenvalue of A_L, A_R and A_R A_L, and −1 is not an eigenvalue of A_L and A_R. Suppose (−1)^{σ+_L + σ+_R} = (−1)^{σ−_L + σ−_R} = −1. Then (−1)^{σ+_LL + σ+_LR} = −1. The key feature of the proof of Theorem 5.1, given below, is that we look closely at (−1)^{σ+_LL} and (−1)^{σ+_LR}, as these quantities admit a convenient algebraic manipulation. We begin with the following lemma.
Next we recall the matrix determinant lemma [19]: det(A + p q^T) ≡ det(A) + q^T adj(A) p, for any N × N matrix A and p, q ∈ R^N. By applying this result to the right-hand side of (5.4), from (5.3) we obtain (5.1).
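The matrix determinant lemma itself can be spot-checked numerically; a minimal sketch with random data:

```python
import numpy as np

# Direct numerical check of the matrix determinant lemma used in the proof:
# det(A + p q^T) = det(A) + q^T adj(A) p, with adj(A) = det(A) inv(A).
rng = np.random.default_rng(2)
N = 4
A = rng.standard_normal((N, N))
p = rng.standard_normal(N)
q = rng.standard_normal(N)

adjA = np.linalg.det(A) * np.linalg.inv(A)
lhs = np.linalg.det(A + np.outer(p, q))
rhs = np.linalg.det(A) + q @ adjA @ p
print(np.isclose(lhs, rhs))   # True
```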
Discussion
By Theorems 4.1 and 5.1, the existence and relative coexistence of x L , x R and the LR-cycle near generic border-collision bifurcations is almost completely determined by the even/odd parity of σ + L + σ + R and σ − L + σ − R . We only need to evaluate σ + LL + σ + LR if we wish to identify which fixed point the LR-cycle coexists with in scenario (iii) of Theorem 4.1. Scenario (v) of Theorem 4.1 cannot occur in view of Theorem 5.1, which was proved by using algebraic arguments to demonstrate that the particular combination of eigenvalue conditions required for scenario (v) cannot be satisfied.
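The parity claim of Theorem 5.1 can also be probed numerically, as a Monte-Carlo sanity check rather than a proof; the random matrix pairs below are illustrative, with the counts σ± evaluated by direct eigenvalue computation:

```python
import numpy as np

# Sample random pairs (A_L, A_R) differing only in their first columns;
# whenever sigma+_L + sigma+_R and sigma-_L + sigma-_R are both odd,
# Theorem 5.1 asserts that sigma+_LL + sigma+_LR is odd as well.
rng = np.random.default_rng(1)
N = 3

def n_real_eigs(A, cond):
    """Count real eigenvalues of A satisfying cond (e.g. > 1 or < -1)."""
    lams = np.linalg.eigvals(A)
    return sum(1 for lam in lams if abs(lam.imag) < 1e-9 and cond(lam.real))

violations = cases = 0
for _ in range(2000):
    A_L = rng.standard_normal((N, N))
    A_R = A_L.copy()
    A_R[:, 0] = rng.standard_normal(N)       # continuity constraint (1.3)
    sp = n_real_eigs(A_L, lambda x: x > 1) + n_real_eigs(A_R, lambda x: x > 1)
    sm = n_real_eigs(A_L, lambda x: x < -1) + n_real_eigs(A_R, lambda x: x < -1)
    if sp % 2 == 1 and sm % 2 == 1:
        cases += 1
        sLL = n_real_eigs(A_L @ A_L, lambda x: x > 1)
        sLR = n_real_eigs(A_R @ A_L, lambda x: x > 1)
        if (sLL + sLR) % 2 == 0:
            violations += 1
print(cases, "cases sampled,", violations, "violations")
```

No violations are expected; such a sweep cannot replace the algebraic proof, but it makes the impossibility of scenario (v) tangible.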
The stability of x_L, x_R and the LR-cycle was not discussed here; refer to [4,13,14]. In brief, x_L, x_R and the LR-cycle are attracting if and only if all eigenvalues of A_L, A_R and A_R A_L, respectively, have modulus less than 1. Stability therefore relates directly to the various σ's defined above, and Theorem 4.1 can be used to show that for any µ ∈ R, at most one fixed point or period-two solution can be attracting.
The admissibility of periodic solutions of (1.2) with period greater than two cannot be characterised as simply as for fixed points and period-two solutions. For instance, a generic LLR-cycle is admissible for one sign of µ if and only if sgn (det(I + A R + A R A L )) = sgn (det(I + A L + A L A R )) = sgn det I + A L + A 2 L , (6.1) see [9,18], and it is not clear how to relate the quantities in (6.1) to the eigenvalues of A L and A R .
"Unusual"critical states in type-II superconductors
We give a theoretical description of the general critical states in which the critical currents in type-II superconductors are not perpendicular to the local magnetic induction. Such states frequently occur in real situations, e.g., when the sample shape is not sufficiently symmetric or the direction of the external magnetic field changes in some complex way. Our study is restricted to the states in which flux-line cutting does not occur. The properties of such general critical states can essentially differ from the well-known properties of the usual Bean critical states. To illustrate our approach, we analyze several examples. In particular, we consider the critical states in a slab placed in a uniform perpendicular magnetic field and to which two components of the in-plane magnetic field are then applied successively. We also analyze the critical states in a long thin strip placed in a perpendicular magnetic field which then is tilted towards the axis of the strip.
I. INTRODUCTION
The concept of the critical state introduced by Charles Bean 1 is widely used to describe various physical phenomena in the vortex phase of type-II superconductors, see, e.g., Refs. 2,3 and citations therein. According to Bean, in the critical state of type-II superconductors with flux-line pinning, the driving force of the currents flowing in this state is balanced by the pinning force acting on the vortices. The critical state is characterized by the component of the current density flowing perpendicular to the flux lines, j_c⊥, since only this component generates a driving force. It is assumed in the critical-state theory that this j_c⊥ is known, i.e., it is a given function of the magnetic induction B, j_c⊥ = j_c⊥(B), and the problem of this theory is to find the appropriate distribution of the magnetic fields and currents in the critical state. Below, for simplicity, we shall assume that the magnetic fields H in the superconductor considerably exceed the lower critical field H_c1, and so we put B = µ_0 H throughout the paper. Besides this, we deal only with bulk superconducting samples, assuming that all their dimensions noticeably exceed the London penetration depth, and we consider the critical state macroscopically, averaging vortex structures and the appropriate microscopic currents over a scale exceeding the intervortex spacing.
Hereafter we shall call the critical states "Bean critical states" if the current density j is perpendicular to the local magnetic field H at every point of a superconductor, j = j_⊥, and thus j = j_⊥ = j_c⊥. This definition imposes limitations on the direction of the currents in the critical state, but it does not imply constancy of j_c⊥, e.g., j_c⊥(H) can be as in the Kim model. 4 The Bean critical states can be found from the static Maxwell equations, rot H = j, div H = 0, (1) and the conditions on the current density, j_∥ = 0 and |j_⊥| = j_c⊥, where j_∥ is the component of the current density along the local magnetic field H. Such states usually occur when the shape of the superconductor is sufficiently symmetric and the external magnetic field H_a is applied along a symmetry axis, so that the direction of the currents is dictated by the symmetry of the problem. Most of the known solutions of the critical state problem describe just these Bean states. For example, this is the well-known solution for an infinite slab in an external magnetic field parallel to its surface, 1 and also the solution for an infinitely long cylinder with arbitrary cross-section in a magnetic field parallel to its axis since the currents flow perpendicular to this axis. 2 Bean critical states also occur in infinitely long and thin strips 5,6,7 and in thin disks 8 in a perpendicular magnetic field even if j_c⊥ depends on |B| ≡ B or on the angle between B and the normal to the sample plane. 9,10,11 If the applied magnetic field is tilted to the plane of an infinitely long strip 12,13,14,15 or slab 16 but remains perpendicular to the sample axis, the critical currents flow along this axis, and a Bean critical state occurs. Further examples of the Bean critical states in samples of a complex shape can be found in Refs. 17,18,19,20,21.
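As an aside, the first example mentioned above, Bean's solution for an infinite slab in a parallel field, is simple enough to sketch in a few lines (virgin magnetization, field-independent j_c; all values and units below are illustrative):

```python
import numpy as np

# Minimal sketch of Bean's critical-state profile in an infinite slab
# occupying |x| <= d, with H_a applied parallel to the surface.  Inside
# the penetrated layer the field falls off linearly with slope j_c;
# deeper in, the flux-free core has H = 0.
d = 1.0          # half-thickness (arbitrary units)
j_c = 2.0        # critical current density (arbitrary units)
H_a = 1.5        # applied field, below full penetration H_p = j_c * d

def H(x):
    """Field magnitude at depth |x| <= d for the virgin Bean state."""
    return np.maximum(H_a - j_c * (d - np.abs(x)), 0.0)

x = np.linspace(-d, d, 5)
print(H(x))      # linear ramps at both surfaces, zero in the flux-free core
```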
A characteristic feature of all these Bean critical states is that the perturbation of the current distribution caused by a change of the applied field propagates into the sample as a sharp front at which the direction of the currents changes abruptly.
In real samples of nonsymmetric shape, or when the applied magnetic field changes not only in amplitude but also in its direction, adjacent flux lines may be slightly rotated relative to each other in the critical state. This rotation generates a component of the current along the magnetic field, 22 j_∥. The rotation of flux lines can lead to their mutual cutting. 2,22 Flux-line cutting occurs when the component of the current density parallel to the magnetic field, j_∥, exceeds some longitudinal critical current density j_c∥. In this situation a vortex 23 or a vortex array 24 becomes unstable with respect to a helical distortion, and the growth of this distortion leads to flux-line cutting. When both j_∥ and j_⊥ are equal to their critical values j_c∥ and j_c⊥, respectively, the so-called double critical state 22,25 occurs in the superconductor. 26 For example, this state appears in some region of a superconducting sample 25,27 when a rotating magnetic field of constant magnitude is applied to a superconducting disk (or slab) in its plane. 28,29,30 The double critical state can be still described by Eqs. (1) and (2), but with the conditions |j_⊥| = j_c⊥ and |j_∥| = j_c∥ on the current density j = j_⊥ + j_∥. The concept of the critical state with flux-line cutting was further developed in Refs. 32,33,34 to explain the observed suppression of the magnetic moment of a superconducting slab under the action of an ac magnetic field. 34,35,36 However, in many real situations a change of the direction of the external magnetic field or a nonsymmetric shape of the sample does not lead to flux cutting in the superconductor, i.e., j_∥ does not reach j_c∥ in the critical state. In such situations there is no explicit condition on the magnitude of j_∥ except that j_∥ < j_c∥, and the static equations (1) and (2) with the only restriction |j_⊥| = j_c⊥ are not sufficient to find the distributions of the magnetic field H(r) and current density j(r) in the critical state.
This problem for the special case of a slab with an in-plane magnetic field was solved in Refs. 25,27. The full set of the critical-state equations for arbitrary shape of the sample and for any quasistatic evolution of the vector of the applied magnetic field H_a was obtained in Ref. 37, where it was also shown that in contrast to the common Bean critical states, a perturbation of the current distribution in such critical states propagates into the sample smoothly in a diffusive way. We emphasize that this class of critical states with j_∥ < j_c∥ corresponds to the general situation, while the common Bean critical states and the double critical states are only limiting cases occurring when j_∥ = 0 or j_∥ = j_c∥, respectively.
Such general critical states, which we shall call the T-critical states (T means transport), 38 occur even in simple experimental situations. In particular, they appear in a certain region of thin rectangular platelets in a perpendicular magnetic field (in platelets with thickness exceeding the London penetration depth this is the region which is not penetrated by the perpendicular component of the magnetic field). 39 Critical states of this type also appear at the vortex shaking in rectangular platelets 40 and even in strips if the ac field is along the axis of the strips. 41 They also occur in low-frequency ac experiments with a slab when a circularly polarized ac field is applied perpendicularly to the dc magnetic field H_a that is normal to the plane of the slab. 42,43 As was pointed out in Refs. 25,27, one more type of critical states can exist in superconductors. In these states j_⊥ < j_c⊥ and j_∥ = j_c∥, i.e., only flux cutting occurs without any transport of vortices. The description of such C-critical states (C means cutting) in samples of arbitrary shape can be obtained by an immediate generalization of the approach used in Refs. 25,27 for a superconducting slab. Below we shall not analyze such states in detail but only briefly outline this generalization.
In Sec. II of this paper we develop the approach of Ref. 37. In particular, we take into account the dependence of j c⊥ on j and anisotropy of flux-line pinning. We also discuss the relationship between the equations of Ref. 37 and the variational principle recently proposed. 44,45,46 In Sec. III we then analyze three examples of the general T-critical state.
A. Critical-state equations
The critical state is well established in a sample if the characteristic time of change of the applied magnetic field H_a, j_c⊥ d/|dH_a/dt|, considerably exceeds the time of flux flow across the sample, µ_0 d²/ρ_ff, where d is a characteristic size of the sample and ρ_ff is the flux-flow resistivity. In other words, the concept of the critical state can be used for a description of the magnetic-field and current distributions in superconductors if the generated eddy electric fields are relatively small, E ≪ ρ_ff j_c⊥. (5) The ideal critical state thus corresponds to the limit ρ_ff → ∞. Below we imply condition (5) to be fulfilled. The general T-critical states with j_∥ < j_c∥ can be described by the following approach: 37 the static equations (1) and (2) are supplemented by the quasistatic Maxwell equation rot E = −µ_0 Ḣ, (6) where Ḣ ≡ ∂H/∂t, and E is the electric field generated by a change of the applied field H_a. For the set of equations (1), (2), and (6) to be solvable, it has to be supplemented by the current-voltage law E(j, B). 47 This law is introduced from two well-known physical ideas: 1) At any given j and B, the direction of E follows from E = [B × v], (7) where v is the vortex velocity caused by the Lorentz force [j × B]. Here for simplicity we shall neglect the so-called Hall angle, 48 and so the directions of v and the Lorentz force coincide.
2) The magnitude of E is found from the condition that the relation |j_⊥| = j_c⊥ is maintained at every point where vortices move. In fact, this condition may be interpreted as a current-voltage dependence which just corresponds to the ideal critical state.
To proceed with our analysis, let us introduce the following notations for the magnetic field H(r) and the current density j(r) in the critical state: H(r) = H(r) ν(r), j(r) = j(r) n(r), where H and j are the absolute values of the magnetic field and the current density while the unit vectors ν and n define their directions. Then, the component of the current density perpendicular to the magnetic field is given by j_⊥ = j_c⊥ n_⊥. Here the unit vector n_⊥ defines the direction of j_⊥, n_⊥ = (n − ν(ν·n))/D, where D = [1 − (n·ν)²]^{1/2} is the normalizing factor that is equal to the sine of the angle between H and j, and we have taken into account the condition |j_⊥| = j_c⊥. These formulas also lead to the explicit expression for the magnitude j of the current density, j = j_c⊥(H)/D, (10) that is only another form of the condition |j_⊥| = j_c⊥. Let us now formulate condition (7). Let at a moment of time t the external magnetic field H_a(t) change infinitesimally by δH_a = Ḣ_a δt. Under the change of H_a, the critical currents locally shift the vortices in the direction of the Lorentz force [j × ν]; this shift generates an electric field directed along [ν × [j × ν]] = j_⊥, i.e., along the vector n_⊥. Thus, we can represent the electric field E(r) in the form E(r) = e(r) n_⊥(r), (11) where the scalar function e(r) is the modulus of the electric field. Note that the electric field in general is not parallel to the total current density j(r). With formulas (10) and (11), equations (1), (2), and (6) are sufficient to describe the T-critical states in a sample of arbitrary shape. It is important that the magnetic fields H(r) and currents j(r) in the critical state at the moment of time t + δt depend only on the field and current distributions in the previous critical state at the moment t and on the change of the external field δH_a = Ḣ_a δt, while the electric field e is proportional to the sweep rate Ḣ_a rather than to δH_a, and so it plays an auxiliary role in solving the critical-state problem.
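The decomposition behind Eqs. (10) and (11) can be sanity-checked with elementary vector algebra; a minimal sketch for an illustrative 45° geometry (all values are hypothetical):

```python
import numpy as np

# Given unit vectors nu (field direction) and n (current direction),
# build the unit vector n_perp and the factor D, the sine of the angle
# between H and j, then recover the current magnitude j = j_cperp / D.
nu = np.array([0.0, 0.0, 1.0])
n = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)   # 45 degrees from nu

D = np.sqrt(1.0 - np.dot(n, nu) ** 2)          # sine of the angle
n_perp = (n - nu * np.dot(n, nu)) / D          # unit vector along j_perp

j_c_perp = 1.0                                 # illustrative value
j = j_c_perp / D                               # magnitude from Eq. (10)

print(D, n_perp, j)
```

Here D = sin 45° ≈ 0.707, n_perp points along x, and the total current magnitude exceeds j_c⊥ by the factor 1/D, exactly as Eq. (10) states.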
We emphasize that e is now found as a solution of the set of equations (1), (2), (6), (10), (11) without using the specific current-voltage dependence (9). 37 The explicit equation for the scalar function e(r) has the form given in Ref. 37, where E is given by Eq. (11). Continuity of the magnetic field on the surface of the superconductor, S, yields the boundary condition to Eq. (12), where r_S is a point on the surface S, R ≡ r_S − r′, R = |R|, and the integration is carried out over the volume of the sample. The right-hand side of this boundary condition expresses µ_0 Ḣ on the surface of the superconductor (but reaching from outside) with the use of the Biot-Savart law. If in the critical state of the superconductor there are also boundaries at which the direction of the critical currents changes discontinuously or which separate regions with |j_⊥| = j_c⊥ from regions with j = 0, 49 the function e(r) has to vanish at these boundaries to provide continuity of the electric field e n_⊥ there.
In practical calculations of critical states developing in the process of changing H_a(t) it is convenient to rewrite Eqs. (1) and (6) in the form of a differential equation for the angles defining the direction of j, i.e., the unit vector n = j/j. Note that since the distributions of the magnetic fields and currents in the critical states of a superconductor are independent of the sweep rate Ḣ_a, their temporal dependence is only a parameterization of their dependence on H_a. Let us now write explicitly the applicability condition of the above theory. Since the projection of j on the local direction of H is j_c⊥ (n·ν)/D, the condition that flux-line cutting is absent leads to the following restriction on the angle between the local j and H: j_c⊥ (n·ν)/D < j_c∥, where j_c∥ is the longitudinal critical current density. Finally, we make several remarks on the electric field. It may turn out that the electric field e n_⊥ obtained with Eq. (12) does not satisfy the condition div(e n_⊥) = 0. To clarify this situation, it is necessary to remember that a moving vortex generates an electric dipolar moment, 48 and hence the moving vortex medium is characterized by the vector of polarization P which is the macroscopic density of this moment. It follows from the results of Ref. 48 that P = −e n_⊥, and a nonzero div(e n_⊥) means that in a type-II superconductor the electric-charge density −div P appears which generates a curl-free electric field E_p = −∇Φ described by the scalar potential Φ. This potential field is a part of the total electric field given by E = e n_⊥ inside the sample, and it obeys the equation div E_p = div(e n_⊥), i.e., ΔΦ = −div(e n_⊥), where Δ ≡ div∇. At the surface of the sample, S, the field E_p satisfies the same boundary conditions as in the electrostatics of dielectrics: 47 the tangential components of E_p and the normal component of E_p + P = E_p − e n_⊥ are continuous there.
Since P = 0 outside the sample, the latter condition means that τ·(E_p⁺ − E_p⁻) = −τ·e n_⊥, (17) where E_p⁺ and E_p⁻ are the surface potential fields calculated outside and inside S, respectively, and τ is the normal to S pointing outside. The right-hand side of Eq. (17) gives the surface-charge density induced by moving vortices in the sample. Note that the potential part of e n_⊥ does not influence the magnetic fields and currents in the critical state since rot E_p = 0. The appearance of this part is caused by condition (11), which dictates the direction of the electric field in the sample. Although both the inductive part of the electric field, e n_⊥ − E_p, which generates the critical states, and the potential part E_p can be measured in certain situations, 50 we shall not analyze electric fields in detail in this paper since these fields play only an auxiliary role in the critical-state problem. See also the recent book on electric fields. 51 Generally speaking, in the process of changing H_a a migration of the induced charges ρ = div(e n_⊥) occurs, which leads to a generation of currents satisfying div j = −(∂ρ/∂t) and violating Eq. (2). However, these nonstationary currents are proportional to the second power of the sweep rate Ḣ_a and are negligible under assumption (5).
B. Generalizations
We now point out some generalizations of the above results which may be useful in analyzing critical states in real situations.
j_c⊥ depends on j_∥
The current-voltage law used in Sec. II A, Eq. (9), means that flux creep is negligible in our approach. In this case the critical current density is found from the condition that the creep activation barrier U of a vortex bundle is equal to zero. It has been implied above that j_c⊥ may depend on B but is completely independent of the magnitude of j_∥. In other words, the form U = U(j_⊥, B) has been assumed for this U. However, the creep activation barrier U, generally speaking, may depend not only on j_⊥ and B but also on the j_∥ that characterizes flux-line misalignment in the bundle, i.e., in the general case one has U = U(j_⊥, j_∥, B). Then the critical current density j_c⊥ determined from U(j_⊥, j_∥, B) = 0 takes the form j_c⊥ = j_c⊥(B, j_∥). One may expect that this dependence of j_c⊥ on the longitudinal current component j_∥ is especially noticeable when j_∥ is close to its critical value j_c∥, and hence j_c⊥(B, j_c∥) in general differs from j_c⊥(B, 0). Similarly, the activation barrier U_cut for flux cutting is a function of both current-density components and of the magnetic induction, i.e., U_cut = U_cut(j_⊥, j_∥, B), and the condition U_cut = 0 gives j_c∥ = j_c∥(B, j_⊥). In Fig. 1a at a fixed B we schematically show the dependences of j_c⊥ on j_∥ and of j_c∥ on j_⊥ in the plane (j_⊥, j_∥). Note that these dependences cross when the equations U(j_⊥, j_∥, B) = 0 and U_cut(j_⊥, j_∥, B) = 0 hold simultaneously. This occurs at isolated points in the j_⊥–j_∥ plane since the barriers U and U_cut characterize different physical processes and are essentially different functions of the current components. These points correspond to the double critical states when j_⊥ = j_c⊥ and j_∥ = j_c∥. In Fig. 1a the top/bottom and right/left sections of the curves between the four points describe j_c⊥(j_∥) in the general T-critical state and j_c∥(j_⊥) in the C-critical state.
The dependence j_c⊥(j_∥) leads to a replacement of j_c⊥(H) by j_c⊥(H, j_∥) in formula (10). It also leads to a modification of Eq. (12): to the right-hand side of this equation the term −µ0(∂j_c⊥/∂j_∥)(∂j_∥/∂t) should be added, evaluated with E from Eq. (11). Note that the first term in this expression has no singularity at H → 0 since the combination n_⊥ · rot(e n_⊥) can also be rewritten as e n_⊥ · rot n_⊥, and e = |[B × v]| ∝ H. In Refs. 52,53 a phenomenological model of the general critical state was considered that described sufficiently well a number of experimental data on the magnetization of a slab and of a disk in magnetic fields parallel to their planes. In fact, in this model a certain type of dependence of j_c⊥ on j_∥ (and of j_c∥ on j_⊥) was introduced. Even though the directions of the electric field in this model do not satisfy the physical requirement (11), the sufficiently good description of the data seems to indicate the importance of this dependence in real situations.
Anisotropy of flux-line pinning
In deriving Eq. (11) we have assumed that when H_a changes, vortices shift in the direction of the local Lorentz force [j × B]. However, in the case of anisotropic pinning this assumption may fail. Nevertheless, even in this case the direction of the shift can be expressed via the directions of j and ν ≡ H/H, see Appendix A in Ref. 39. The unit vector u along the electric field, E = ue, now differs from n_⊥ by an angle δ that describes the change of the direction of the electric field due to anisotropic pinning. If, in the plane perpendicular to the local H, the critical current density j_c⊥ depends on its direction n_⊥, the angle δ is found from the relation of Ref. 39, in which φ is the angle defining the direction of n_⊥ = (cos φ, sin φ) in the plane perpendicular to H. When j_c⊥ is isotropic in this plane, we obtain δ = 0, and thus u coincides with n_⊥. Equations (20) and (21) give the relation between n_⊥ and u. When δ ≠ 0, i.e., when the vector u differs from n_⊥, the only change in the critical state equations is that e n_⊥ in Eq. (11) is replaced by eu, and j_c⊥(H) in Eq. (10) becomes j_c⊥(H, n_⊥).
C-critical states
As was mentioned in the Introduction, in the case of an infinite slab the critical states with flux-line cutting but without flux-line transport were considered in Refs. 25,27. For samples of arbitrary shape such C-critical states can be described by Eqs. (1), (2), and (6), but now the electric field is along the local H, i.e., E = νe. This condition replaces Eq. (7) [or (11)]. The absolute value e of the electric field is now determined by the condition j_∥ = j_c∥, which is equivalent to the current-voltage dependence (22) and leads to formula (23). Equations (22) and (23) replace Eqs. (9) and (10), respectively.
C. Variational principle
Recently, 44,45,46 a variational principle was put forward to describe the critical states in superconductors. In deriving this principle Badía and López used Eqs. (1), (6) and the current-voltage law with |E| = 0 when j is inside some region ∆ of the j-space and |E| → ∞ when j lies outside this region. In other words, the critical states correspond to the boundary Γ of the region ∆, see Fig. 1b,c. However, the physical idea of the direction of the electric field, Eq. (7), was not incorporated in their principle. Instead of this they find the direction of the electric field from some condition of maximality of their Hamiltonian. This leads them to the conclusion that the electric fields in the critical states are directed along the normals to the boundary Γ at the appropriate points, Fig. 1b,c.
Within our approach their boundary Γ corresponds to the contour composed of the dependences j_c⊥(j_∥) and j_c∥(j_⊥), see Fig. 1a and Sec. II B1. But in our general T-critical states with j_∥ < j_c∥ the electric field is always perpendicular to the local H, and in the C-critical states with flux-line cutting but without flux-line transport the electric field is along the local H. It is clear that only in the case when ∆ is a rectangle does the approach of Badía and López lead to the correct results for the electric field, Fig. 1. 54 However, in general their approach leads to contradictions with existing physical concepts. 2,23,24 In particular, in the so-called isotropic model, when ∆ is a circle, Fig. 1b, the electric field E is parallel to j, and hence a nonzero E along H appears even for an infinitesimally small longitudinal component of j, i.e., flux-line cutting in that model occurs without any threshold j_c∥. 55
III. EXAMPLES
We first consider two examples of the general critical state in an infinite slab of thickness d. Let this slab fill the space |x|, |y| < ∞, |z| ≤ d/2, and be in a constant and uniform external magnetic field H_az directed along the z axis, i.e., perpendicular to the slab plane. The critical current density j_c⊥ is assumed to be constant in this slab. In the first example a constant field H_ax (H_ax ≥ J_c/2 = dj_c⊥/2) is applied along the x axis, and after that the magnetic field H_ay is switched on in the y direction. This example was considered in our paper, 37 but there H_ax, H_ay, and J_c were assumed to be small as compared with H_az, i.e., the tilt angle θ of the magnetic field to the z axis was always small. Now we do not impose this restriction, and the angle θ may be sufficiently large. But we still assume that flux-line cutting does not occur (see below). This example may be considered as a modification of the experimental conditions of Refs. 34, where the suppression of the magnetic moment of the slab was investigated at H_az = 0. In the second example the critical current along the y axis is applied to the slab, and after that the magnetic field H_ay is switched on in the same direction.
The critical state equations are the same for these two examples; the difference is only in the boundary conditions. Let us write these equations. The condition div j = ∂j_z/∂z = 0 together with j_z|_{z=±d/2} = 0 yields j_z = 0, i.e., the currents flow in the x-y planes. 56 Then, to describe the critical state, we may use an angular parameterization in which ẑ is the unit vector along the z axis; j_c(ϕ, θ, ψ) is the magnitude of the critical current density when a flux-line element is specified by the angles ψ and θ, tan ψ = h_y/h_x, tan θ = (h_x² + h_y²)^{1/2}/H_az, while the current flows in the direction defined by the angle ϕ; all these angles generally depend on z. A dependence of j_c on the orientation of the local H appears even at a constant j_c⊥ if j_c is not perpendicular to this H, and this dependence is described by formula (10), where D in terms of the angles is D = [1 − cos²(ϕ−ψ) sin²θ]^{1/2}. With this parameterization, the equation div H = 0 is satisfied identically, while the Maxwell equation rot H = j gives Eqs. (26) and (27) for the profiles h_x(z) and h_y(z). These equations differ from the appropriate equations of Ref. 37 by the factor 1/D, which is not unity now.
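The angular factor D can be sketched numerically. The following is our own minimal illustration (the function names are ours); it uses the explicit form D² = 1 − cos²(ϕ−ψ) sin²θ quoted in Sec. III B:

```python
import math

def D_factor(phi, psi, theta):
    # D^2 = 1 - cos^2(phi - psi) * sin^2(theta); D = 1 when the current
    # is perpendicular to H (phi - psi = pi/2) or when theta = 0
    return math.sqrt(1.0 - math.cos(phi - psi) ** 2 * math.sin(theta) ** 2)

def jc_magnitude(jc_perp, phi, psi, theta):
    # |j_c| = j_cperp / D: the critical current density exceeds j_cperp
    # whenever the current does not flow at a right angle to the flux lines
    return jc_perp / D_factor(phi, psi, theta)
```

For ϕ − ψ = π/2 one recovers |j_c| = j_c⊥ at any tilt θ, while for a current with a component along H (ϕ = ψ) the magnitude grows as 1/cos θ, consistent with the excess of j over j_c⊥ discussed for the magnetic moments below.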
In the case under study one has n = (cos ϕ, sin ϕ, 0), ν = (sin θ cos ψ, sin θ sin ψ, cos θ). A direct calculation then gives explicit expressions for the vector n_⊥ defining the direction of the current component perpendicular to H, and equation (12) for the electric field e takes the form

n_⊥x (e n_⊥x)″ + n_⊥y (e n_⊥y)″ − ψ′ sin²θ (n′_⊥x n_⊥y − n_⊥x n′_⊥y) e = 0,  (29)

where the prime means ∂/∂z. For the angle ϕ we obtain Eq. (30) from Eq. (14). At small θ, when D ≈ 1 and n_⊥ ≈ n, equations (29) and (30) reduce to the form that was used in Ref. 37. In the case of the slab, condition (15) of the absence of flux-line cutting leads to a restriction on the angles θ, ψ, and ϕ. This condition is fulfilled at any ϕ and ψ, i.e., at any direction of j and h, if the z component of the magnetic field, H_az, is not too small, Eq. (34). We imply this condition to be fulfilled below.
A. First example: H_ax and H_ay
In the first example, a constant field H_ax (H_ax ≥ J_c/2 = dj_c⊥/2) is applied along the x axis, and after that the magnetic field H_ay is switched on in the y direction. The boundary conditions to Eqs. (26)-(30), which follow from formula (13), can then be rewritten in the form of conditions (36). Taking into account the symmetry of the problem, we consider the critical states only in the interval 0 ≤ z ≤ d/2. Since after switching on H_ax the critical currents flow in the y direction, we have the initial condition ϕ(z, t = 0) = π/2 for Eq. (30), where the moment t = 0 corresponds to the beginning of switching on H_ay. As to the initial magnetic-field profiles, equations (26) and (27) give h_y(z, t = 0) = H_ay = 0 and h_x(z, t = 0) = H_ax − 0.5J_c + j_c⊥z. In the limiting case H_az ≫ H_ax, H_ay, J_c, the solution of equations (26)-(30) with conditions (35)-(39) was investigated in Ref. 37. Since in this case n_⊥ ≈ n, one finds that the electric field e n_⊥ is along the current density jn, and in fact we arrive at a situation which can be formally described by the so-called isotropic model of Badía and López. 45 As was explained in Sec. II C, this model in general does not lead to the correct direction of the electric field; in particular, it fails in the situation discussed by Badía and López. The sequence of the critical states developed in the process of increasing H_ay is presented in Fig. 2, where we show ϕ(z), ψ(z), h_x(z), h_y(z). We do not show the electric field e, which is proportional to the sweep rate Ḣ_ay and plays an auxiliary role. As was noticed previously, 37 in stark contrast to the Bean critical states, in which any change of the current direction occurs inside a narrow front, in the general T-critical state the change of the angle ϕ(z) with increasing H_ay has a diffusive character. But there is a difference between the data of Fig. 2 and the results 37 obtained in the case H_az ≫ H_ax, H_ay, J_c, when the currents in the critical state are almost perpendicular to the local magnetic fields.
[Fig. 2 caption: The magnetic field components h_x(z) (dashed lines) and h_y(z) (solid lines) in the same critical states. We start from the diamagnetic initial critical state with h_x(z) = H_ax − 1 + z, h_y(z) = H_ay = 0, and ϕ(z) = π/2. Here z is in units of d/2, and the magnetic fields are in units of j_c⊥d/2 = J_c/2.]

In the latter case at H_ay > J_c the field profile h_x(z) becomes practically constant and coincides with H_ax, while the angle ϕ tends to π. On the other hand, we see from Fig. 2 that for H_az ∼ J_c the angle ϕ lies in the interval π < ϕ < 3π/2 at H_ay > J_c. In other words, the y component of j(z) has the opposite direction as compared with the initial state. This leads to the fact that at H_ay > J_c the field h_x increases towards the central plane of the slab, z = 0 (but h(z) = (h_x² + h_y²)^{1/2} still decreases towards this plane), and the initial diamagnetic state with the magnetic moment M_x = −j_c⊥d²/4 (per unit area) turns into a paramagnetic state with positive M_x.
In Fig. 3 we show the same sequence of the critical states but for the paramagnetic initial state. This initial state is obtained if one first increases the field H_ax essentially above the field of full flux penetration and then decreases it to a prescribed value. The initial magnetic fields at t = 0 are now given by h_y(z, t = 0) = H_ay = 0, h_x(z, t = 0) = H_ax + 0.5J_c − j_c⊥z. It is seen from Fig. 3 that although at H_ay < H_az a decay of the initial paramagnetic profile h_x(z) occurs, with a further increase of H_ay new paramagnetic states develop that are close to the appropriate states of Fig. 2.
In Fig. 4 we compare the H_ay dependences of the magnetic moment (M_x, M_y) per unit area of the slab, 58 obtained using the two sequences of the profiles j(z, H_ay) developed from the diamagnetic and paramagnetic initial states with the same H_ax. It is seen that in both cases |M_x| and |M_y|, and even more so M = (M_x² + M_y²)^{1/2}, can exceed the "saturation value" j_c⊥d²/4 used as the unit in Fig. 4. This is possible since the current density j exceeds j_c⊥ when it does not flow at a right angle to the vortices. This excess of j leads to the result that M_x(H_ay) does not saturate at large H_ay but continues to increase nearly linearly, with slightly negative curvature. The other component, M_y(H_ay), at large H_ay practically saturates to a value slightly lower than −j_c⊥d²/4. Of course, one should keep in mind that in reality the region of large H_ay where these results for M are applicable is limited by condition (34). Note also that, in agreement with Figs. 2 and 3, the magnetic moment M_x(H_ay) is always positive at sufficiently large H_ay, and the diamagnetic and paramagnetic initial states lead to practically the same M_x at such H_ay.
As is known, field-cooled type-II superconducting samples frequently exhibit a positive magnetic moment; see, e.g., paper 59 and references therein. Different explanations of this paramagnetic effect have been put forward. In particular, the effect may be associated with the compression of trapped magnetic flux in the sample. 60 The data of Figs. 2-4 show that, in principle, the paramagnetic effect may also be due to the field-cooling-induced generation of critical states in which the circulating currents are not perpendicular to the local magnetic fields.
The general T-critical states considered here can be realized in experiments similar to those of Refs. 34, except that now the field H_az perpendicular to the plane of the sample is not equal to zero. Such investigations would enable one to compare the theoretical results for the general T-critical states with the appropriate experimental data, avoiding complications due to flux-line cutting. To prepare the initial state described above in, e.g., a superconducting strip of length 2L and width 2w considerably exceeding its thickness d, 2L > 2w ≫ d, one may first apply the field H_az perpendicular to the plane of the strip, and then an oscillating in-plane magnetic field H_ax across the width of the strip. This "shaking" leads to a homogeneous distribution of the perpendicular field H_az over the sample. 62 After this shaking process one keeps H_ax = constant and applies the field H_ay along the axis of the strip.
B. Second example: J_y and H_ay
We now consider the second example of the general T-critical state in the slab. It is assumed that the slab is in a uniform magnetic field H_az along the z axis, the current J (per unit length along x) flows in the y direction, and at t = 0 the field H_ay is switched on. In this case the boundary conditions at z = d/2 are given by Eqs. (43). The symmetry of the problem is now described by the relationships e(−z) = e(z), ϕ(−z) = π − ϕ(z), h_x(−z) = −h_x(z), h_y(−z) = h_y(z), and at z = 0 the direction of the critical currents changes continuously; thus, instead of condition (38), a different condition holds at z = 0. As in the first example, we shall consider the critical states only in the interval 0 ≤ z ≤ d/2. If J is less than J_c = j_c⊥d, in the initial state the current flows only at 1 − (J/J_c) ≤ 2z/d ≤ 1. After switching on H_ay the current distribution spreads over the whole thickness d when H_ay reaches a penetration field H⁰_ay < J_c/2, and we shall analyze the critical states only after this penetration of the current has occurred, i.e., at H_ay ≥ H⁰_ay. Below we consider only the case H_az ≫ J_c. In this case, in the leading order in the small parameter J_c/H_az, we find the analytic solution (45) of the critical state equations.
Here D² = 1 − cos²(ϕ−ψ) sin²θ ≈ 1 − sin²ϕ sin²θ (since either ψ ≈ π/2 or sin²θ ≪ 1), cos²θ ≈ H_az²/(H_az² + H_ay²), and the length a is determined by the sheet current J and cos θ through Eq. (46). We shall denote the solution of Eq. (46) as 2a/d = g(J cos θ/J_c). The function g(u), defined by u = g arcsinh(1/g), increases monotonically with its argument u, Fig. 6. Hence, with increasing H_ay, i.e., with decreasing cos θ, the length a decreases. When J is close to J_c and cos θ ≈ 1, the length a tends to ∞, while for J ≪ J_c one has 2a/d ∼ J cos θ/J_c ≪ 1. A very good approximation for g valid at all u is given in Ref. 41. The field of full flux penetration can be estimated from h_y(0) = 0, where a is determined by Eq. (46). When H_ay ≪ H_az, one has cos θ ≈ 1, and the length a is almost independent of H_ay. Thus, at such H_ay the profiles ϕ(z), h_x(z), and h_y(z) − H_ay practically do not change with increasing H_ay. Fig. 5 shows that this property of the profiles in fact holds in the region H_ay < H_az/2 when J/J_c = 0.5. However, if J is close to J_c, the length a depends sharply on cos θ, Fig. 6, and the width of this region shrinks. Solution (45) can be obtained as follows. We put (e n_⊥x)″ = 0, (e n_⊥y)″ = 0, since it may be verified that the term proportional to ψ′ sin²θ in Eq. (29) and the left-hand side of Eq. (30) are small in the parameter J_c/H_az and hence may be omitted in the first approximation. Equations (49) mean that e n_⊥x and e n_⊥y are linear functions of z, and thus each of them generally depends on two constants. However, taking into account boundary conditions (43) and the symmetry of the problem, one finds that the functions e n_⊥x and e n_⊥y are expressed via only one constant, which coincides with e n_⊥y. If we denote this constant as a cos θ and use Eqs. (50), we arrive at formulas (45). The profiles h_x(z) and h_y(z) follow from Eqs. (26) and (27), and the constant a can be found from a condition which is just Eq. (46).
It is also instructive to write the electric-field components E_x ≡ e n_⊥x and E_y ≡ e n_⊥y explicitly. Using Eqs. (45) and (50), we find

E_y = µ0 Ḣ_ay a cos θ = µ0 Ḣ_ay (d/2) cos θ g(J cos θ/J_c).  (53)

The field E_x results from the tilt of a vortex line along the y direction when H_ay is applied to the slab. Note that ∫_{−d/2}^{d/2} E_x(z) dz = 0, since the upper (z > 0) and lower (z < 0) parts of the vortex move in opposite directions when the tilt occurs. On the other hand, E_x is independent of z. This component of the electric field is due to a drift of the vortex as a whole in the x direction when H_ay is applied to the sample. 41 The above formulas for the slab with a current enable one to reproduce a number of results for the vortex-shaking effect that were derived from geometrical considerations. 40,41 In particular, the expression for ϕ(z) in Eqs. (45), formula (46), and Eq. (53) in fact coincide with Eqs. (4), (6), and (28) of Ref. 41, in which the so-called longitudinal vortex-shaking effect in a thin strip was considered. To obtain the formulas for the vortex-shaking effect in a rectangular platelet, 40 one should consider the slab with H_az ≫ H_ay, J_c and with the total current J flowing at an arbitrary angle to the y axis, i.e., when J = (J_x, J_y). The appropriate solution of the critical state equations is still obtained from Eqs. (49), but now there is no symmetry restriction on the z dependences of all the functions, and e n_⊥x and e n_⊥y are expressed via two constants. Similarly to Eq. (51), these constants can be expressed via J_x and J_y, and the solution thus obtained reproduces the appropriate results of Ref. 40.
C. Third example: strip
We now consider the third example of the general T-critical state. Let a thin strip fill the space |x| ≤ w, |y| < ∞, |z| ≤ d/2 (d ≪ w) and be in a constant and uniform external magnetic field H_az directed along the z axis, i.e., perpendicular to the strip plane. The critical current density j_c⊥ is still assumed to be constant, and H_az considerably exceeds J_c = j_c⊥d, so that at the initial moment of time, t = 0, the strip is in the fully penetrated Bean critical state. In other words, the magnetic-field profile H_z(x) in the strip is described by the well-known function, 5,6,7 and one has J_y(x) = J_c for −w ≤ x < 0 and J_y(x) = −J_c for w ≥ x > 0, where the sheet current J_y is the current density integrated over the thickness d. At t > 0 the magnetic field H_ay is switched on in the y direction, and hence the applied field is tilted towards the axis of the strip. Note that the critical states in isotropic and anisotropic strips placed in inclined magnetic fields were studied in Refs. 12,13,14,15,16. However, in all those papers the external magnetic field was tilted perpendicularly to the axis of the strip, the currents in the critical states were always perpendicular to the local magnetic fields, and thus the usual Bean critical states occurred in the strips. In the case considered here the general T-critical states develop in the strip, and these states differ from the states of the second example in that the magnetic field H_z and the currents J are no longer uniform in the x direction.
Strictly speaking, the description of the magnetic-field tilt towards the axis of the strip reduces to solving a two-dimensional general T-critical state problem. However, the smallness of the parameter d/w enables us to simplify this problem by application of the approach of Ref. 39. Within this approach we split the problem into two simpler ones: a one-dimensional problem across the thickness of the sample, and a problem for the infinitely thin strip. Namely, we first interpret a small section of the strip around an arbitrary point x (see Fig. 7) as an "infinite" slab of thickness d placed in a perpendicular dc magnetic field H_z(x) and in a parallel field H_ay and carrying a sheet current J_y(x). This is just the problem that has been solved in Sec. III B. We then use the resulting electric field E_y obtained for the slab, Eq. (53), as the local electric field E_y(x) for an infinitely thin strip, to calculate the temporal evolution of the sheet current J_y(x) and of the magnetic field H_z(x) in this strip by the method of Refs. 63,64. The resulting equation (54) for J_y(x, t), 62 in which E_y(J_y) is given by Eq. (53), determines J_y(x, t); the magnetic-field profiles are then found from the Biot-Savart law. Since E_y(J_y) ∝ Ḣ_ay, we see again that the temporal dependence of the current and magnetic-field profiles is only a parameterization of their dependence on H_ay, Sec. II A. It also follows from Eqs. (53) and (54) that these profiles depend on the parameters H_ay, H_az, d, w via the following combinations: J_y = J_y(x/w, H_ay/H_az, P), H_z = H_z(x/w, H_ay/H_az, P), where we have introduced the notation P ≡ (d/2w)H_az/J_c. Note that the considered critical state problem is similar to the problem of the longitudinal vortex-shaking effect in a thin strip. 41 The difference between the problems is that the magnetic field H_ay now increases monotonically rather than oscillating about H_ay = 0, and here we present results up to values of H_ay that are large even as compared with H_az.
In Figs. 8 and 9 we show the profiles J(x, H_ay) ≡ |J_y(x, H_ay)| and H_z(x, H_ay) that develop in the strip during the increase of the longitudinal field component H_ay, i.e., when the applied field is tilted away from the z axis towards the strip axis y. The profiles J(x, H_ay) take a shape similar to that of the profiles in the longitudinal vortex-shaking effect, 41 and their magnitude decreases with increasing H_ay. However, in contrast to the shaking effect, this magnitude does not decrease down to zero but tends to a finite limit that depends on the single parameter P = (d/2w)(H_az/J_c). Thus, at H_ay ≫ H_az the current profiles J(x, H_ay) and the magnetic-field profiles H_z(x, H_ay) reach nonzero limiting distributions. The existence of such limiting J(x) and H_z(x) can be understood from the following considerations: at small cos θ, if one neglects logarithmic corrections, the electric field E_y, Eq. (53), is proportional to Ḣ_ay(d/2w)(J/J_c) cos²θ, and equation (54) then shows that J decays with H_ay by a factor F(H_ay) which does not tend to zero at H_ay → ∞. In other words, with increasing H_ay the decay rate of J decreases so quickly that J does not reach zero even in the limit H_ay → ∞.
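This saturation can be made plausible by a rough estimate (our own back-of-the-envelope sketch, not a formula from the text: it drops the logarithmic factor in g, the x dependence of J, and all numerical prefactors):

```latex
% With E_y \propto \dot H_{ay}\,(d/2w)\,(J/J_c)\cos^2\theta and
% \cos^2\theta \approx H_{az}^2/(H_{az}^2+H_{ay}^2), the decay of J obeys roughly
\frac{1}{J}\frac{dJ}{dH_{ay}} \sim -\,\frac{d}{2w}\,\frac{1}{J_c}\,
   \frac{H_{az}^2}{H_{az}^2+H_{ay}^2},
\qquad
\int_0^\infty \frac{H_{az}^2\, dH_{ay}}{H_{az}^2+H_{ay}^2}=\frac{\pi}{2}\,H_{az},
```

so the total decay factor stays finite, J(∞) ∼ J(0) exp[−(π/2)P] with P = (d/2w)H_az/J_c. This crude estimate is qualitatively consistent with the role of P: the limiting profiles remain noticeable at moderate P but become exponentially small at large P.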
In Fig. 10 we show an analytic approximation for these limiting distributions. When the magnetic field H_ay is switched on, not only does the z component M_z of the magnetic moment change, but there also appears a magnetic moment M_y along the axis of the strip. This moment (per unit length of the strip) is defined by expression (59), where the x component of the current density, j_x = j_c⊥ cos ϕ/D, can be found using solution (45). With Eq. (27), formula (59) can be rewritten in a form showing that M_y is the "expelled" flux in the y direction. Inserting Eqs. (45) into this formula, we obtain a result in which M⁰_y = −j_c⊥d²w/2 is the magnetic moment in the fully penetrated Bean critical state which occurs if the field H_ay alone is applied to the strip, u = J cos θ/J_c, and J = J(x, H_ay) is the current profile obtained from Eq. (54), see Fig. 8. Figure 11 (top) shows the normalized magnetic moment M_y(H_ay)/M_y(∞) plotted versus H_ay/H_az for the same values of the parameter P as in Fig. 10. The saturation value M_y(∞) always coincides with M⁰_y = −j_c⊥d²w/2. Besides this, we find numerically the following interesting result: if P is not too small, P ≥ 0.5, the normalized magnetic moment plotted versus P·H_ay/H_az = (d/2w)H_ay/J_c is well described by a unique curve, Fig. 11 (bottom). This relaxation mainly finishes at some H_ay proportional to min(H_az, 2wJ_c/d) [note that H_az ≫ J_c = dj_c⊥ for Eq. (53) to be valid and for the full flux penetration to occur in the initial state]. All these results can be verified in experiments similar, e.g., to those of Refs. 66,67. However, we emphasize that in contrast to Refs. 66,67, the magnetic-field component H_az has to be switched on before H_ay. This guarantees the absence of flux-line cutting for not too large H_ay, see Eq. (34). If, similarly to experiments 66,67, the in-plane magnetic field is switched on before H_az, completely different critical states will develop.
IV. CONCLUSIONS
In this paper we have shown how to calculate the general T-critical (cutting-free) states in an arbitrarily shaped type-II superconductor when the applied magnetic field H_a slowly changes in its magnitude and direction. In accordance with the definition of the general T-critical state, it is assumed here that the external magnetic field changes in such a manner that flux cutting does not occur in the sample. Our approach enables one to take into account the anisotropy of flux-line pinning and the dependence of the critical current density perpendicular to the local magnetic field, j_c⊥, on the longitudinal component of the current density, j_∥. We also show that the recently proposed variational principle 44,45,46 cannot give a correct description of the general T-critical states in many situations.
We analyze three examples of the general T-critical states, at least two of which may be investigated experimentally. In particular, we study a seemingly simple problem that had not been solved before: the critical states in a slab placed in a uniform perpendicular magnetic field H_az, to which two components of the in-plane magnetic field, H_ax and H_ay, are applied successively, Sec. III A. We find that one of the in-plane components of the magnetic moment, M_x, becomes positive with increasing H_ay for any sign of M_x in the initial state (i.e., at H_ay = 0). This paramagnetic effect is due to the fact that the currents in the critical states are not perpendicular to the local magnetic fields. The effect is especially evident when H_az is of the order of the self-fields of the slab.
In the other example, we analyze the general T-critical states in a long thin strip placed in a perpendicular magnetic field H_az which is then tilted towards the axis of the strip y, Sec. III C. When H_ay, the axial component of the applied magnetic field, increases, the magnetic-field and current profiles across the width of the strip tend to limiting profiles, and the components of the magnetic moment, M_z and M_y, reach saturation values. The limiting profiles and the saturation value M_z(∞) for M_z(H_ay) are determined by the single parameter P = (d/2w)H_az/J_c, where d and 2w are the thickness and the width of the strip, respectively, and J_c = dj_c⊥. If P is not too large, P < 5, the limiting profiles and M_z(∞) differ noticeably from zero, while at P ≥ 5 they become very small and practically vanish. The saturation value for M_y is always equal to M⁰_y = −j_c⊥d²w/2.
Behavioral and subcortical signatures of musical expertise in Mandarin Chinese speakers
Both musical training and native language have been shown to have experience-based plastic effects on auditory processing. However, the combined effects within individuals are unclear. Recent research suggests that musical training and tone language speaking are not clearly additive in their effects on the processing of auditory features, and that there may be a disconnect between perceptual and neural signatures of auditory feature processing. The literature has only recently begun to investigate the effects of musical expertise on basic auditory processing for different linguistic groups. This work provides a profile of primary auditory feature discrimination for Mandarin-speaking musicians and nonmusicians. The musicians showed enhanced perceptual discrimination for both frequency and duration, as well as enhanced duration discrimination in a multifeature discrimination task, compared to nonmusicians. However, there were no differences between the groups in duration processing of nonspeech sounds at a subcortical level or in subcortical frequency representation of a nonnative tone contour, for f0 or for the first or second formant region. The results indicate that musical expertise provides a cognitive, but not subcortical, advantage in a population of Mandarin speakers.
Introduction
The plastic effects of musical training on the brain have gained great interest in the research community [1]. Musical training has been shown to be associated with perceptual benefits such as lower frequency discrimination thresholds for pure tones [2,3] and faster and more accurate detection of small pitch changes [4], not only in nonspeech sounds but also in a foreign language [5], compared to nonmusicians. Musicians have shown enhanced mismatch negativity (MMN) to slightly detuned chords, indicating more precise detection of frequency deviations [6]. On a subcortical level, musicians show enhanced phase locking and pitch representation in the frequency following response (FFR) to both musical and speech sounds [7,8], enhanced representation of spectral content which carries vocal emotion [9,10], and enhanced differentiation of speech sounds by encoding of the second formant [11]. (PLOS ONE | https://doi.org/10.1371/journal.pone.0190793, January 4, 2018.) The magnitude of brainstem responses to tuned and detuned chords was also related to perceptual differences in pitch discrimination between musicians and nonmusicians, indicating a link between behavioral performance and subcortical plasticity [12]. Auditory plasticity has been shown over short periods of time in schoolchildren participating in both formal and informal musical activities, indicating that experience-based effects of music are not limited to adult professional musicians, that musical experience promotes maturation of the auditory system [13,14,15], and that auditory plasticity over the lifespan is sensitive to behavioral needs. The current view is that musical training promotes efficiency through corticofugal tuning which emphasizes features that are trained and/or useful for the current task demands [16].
Native speakers of tone languages, which encode lexical pitch contrasts, show perceptual benefits for frequency and interval change detection, as well as discrimination of nonnative linguistic tone contrasts, compared to English speakers, even after training [17,18]. Mandarin speakers have shown stronger pitch representation and smoother pitch tracking of Mandarin tones as well as stronger representation of the second harmonic [19]. Pitch tracking of linguistic tone contours by Mandarin Chinese and Thai speakers was more accurate than that of English speakers, indicating a transfer effect between tone languages [20]. Moreover, pitch representation is enhanced for musical and nonmusical sounds, speech stimuli, and iterated ripple noise, which suggests an effect that is not specific to the speech context [21,22,23].
Effects of musical training and tone language are very similar, and many studies have conflated them. However, recent attempts to disentangle the effects have revealed a much more complex picture. Cooper and Wang [24] separated tone and non-tone speakers and musicians and nonmusicians in both linguistic groups (English and Thai) and taught them a new tone language (Cantonese). They found no clear advantage for tone language learning in the native tone language speaking musicians; rather, English-speaking musicians had the greatest advantage in learning Cantonese. The Thai speakers experienced tone confusion which impeded their learning of the new Cantonese tone contours, while the musicians in both linguistic groups performed better than the nonmusicians.
Language effects have been shown not only with tone languages but also with quantity languages, like Finnish, which encode lexical duration. Previous studies have shown that native speakers exhibit enhanced duration processing at the perceptual, cortical, and subcortical levels [25,26,27,28]. The interaction of these effects with musical training, however, is more complex, and the effects of musical expertise within linguistic groups are unclear. Enhanced MMNs and perceptual detection for duration deviants were found for Finnish speaking nonmusicians and French speaking musicians, but enhanced MMNs were found for frequency deviants only in French speaking musicians [29]. Likewise, Finnish speakers with greater musical sophistication have shown enhanced perceptual frequency discrimination, but not duration discrimination, and no enhanced subcortical duration discrimination, compared to those with less musical sophistication [30]. These studies indicate a specific effect of native language phonological patterns on the effects of musical expertise within the linguistic group.
Other research has shown an interesting disconnect between perceptual and neural effects when music and language are investigated in combination. Bidelman, Gandour, and Krishnan [31] found enhanced subcortical representation of pitch sequences in both musicians and Chinese speakers but only corresponding perceptual pitch discrimination advantages for the musicians, indicating that cognitive benefits of auditory training may arise only for behaviorally relevant tasks.
On the other hand, Hutka et al. [32] found enhanced perceptual pitch discrimination for both musicians and Cantonese speakers, compared to nonmusicians, but only enhanced MMNs for pitch and timbre deviants in musicians. The authors interpret this as musical training having broader benefits to auditory processing than language, which is more specific. The divergence of results between several studies suggest that music and language may have different mechanisms or effects on plasticity; i.e. they do not appear to be clearly additive.
Moreover, there is a lack of linguistic group control in the language and music literature and little knowledge about the effects of musical expertise within different linguistic groups, particularly tone language speakers. If musical training and native language possibly have different mechanisms or interacting effects, then they must be adequately controlled in future research. This study attempts to contribute to the illumination of the separate effects of musical expertise and native language by investigating the effects of musical expertise on native speakers of a tone language (Mandarin Chinese). It uses both perceptual auditory feature discrimination tasks and brainstem recording designed to spotlight onset and sustained responses for subcortical duration and frequency signatures in order to form a thorough profile of the effects of musical expertise in Mandarin speakers.
Methods
Participants

57 native Mandarin Chinese speaking adults aged 18-35 participated in behavioral data collection (21 males, 28 nonmusicians, 29 musicians; Table 1). 55 of them also participated in the auditory brainstem response (ABR) data collection (20 males, 26 nonmusicians, 29 musicians). No participants had any experience with Finnish and all spoke primarily Mandarin Chinese at home for the first 15 years of life. Some studies have shown connections between auditory discrimination and intelligence [33,34,35,36], but for practical reasons, it was not possible to conduct large-scale intelligence testing.
Musicians were defined as having more than 6 years of formal musical training and weekly musical practice, and nonmusicians were defined as having fewer than 2 years of musical training and no regular musical hobbies.
Participants were recruited by student telephone and email lists within Beijing Normal University and were compensated for their time. They gave written consent according to
Procedure
The full experiment took 2 hours and all participants completed the brainstem recording first. The recording consisted of two blocks of a passive listening task, counterbalanced between participants in order to avoid any attentional issues that may affect data quality (boredom, movements, etc.). The first block contained two synthesized short sounds (see section: Stimuli) presented at 55 dB sound pressure level (SPL). The second block contained one natural consonant-vowel (CV) speech contour, /puu/, extracted from a longer Finnish word /puuro/ which means "porridge," presented at 65 dB SPL. There were a total of 6000 sweeps for each short stimulus (3000 per polarity) and 4000 sweeps for the speech stimulus. For brainstem recording, a one-channel setup was used with one active channel at Cz online referenced to linked mastoids with a forehead ground at the hairline and four vertical and horizontal electrooculography (EOG) electrodes. A ±30 μV thresholding process was applied for artifact rejection. Data was collected in a shielded room using a Neuroscan SynAmps 2 Scan 4.5 system with a sampling rate of 20 kHz in AC mode/Gain 2010 and online open filter 10-3000 Hz with 6 dB roll-off. Sound stimuli were presented binaurally with shielded circumaural Sennheiser HD 419 headphones.
The behavioral experiment consisted of four listening tests modified from Kaernbach [37]. Participants listened to sounds with headphones presented from a laptop with sound calibrated to 65dB SPL. There were three adaptive single-feature tasks in which one sound feature was adjusted at a time (intensity, frequency, or duration) in order to find the 75% accuracy threshold for each feature. During each trial, two sounds were played in sequence and the participant was asked to press a key on the laptop to choose which sound was louder, higher, or longer (Intensity Test, Frequency Test, Duration Test, respectively). Correct answers increased the task difficulty by one step and incorrect answers reduced task difficulty by three steps (one-up three-down procedure), to find an accuracy rate of 75%. These tasks took about 10 minutes each. Then, a multifeature task asked again which sound was longer (duration), but all three features were varied randomly. This task took 20 minutes and terminated after 300 trials.
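The adaptive rule can be sketched as a weighted up-down staircase in the style of Kaernbach [37] (a minimal simulation; the step size, trial count, starting level, and the idealized listener below are illustrative assumptions, not parameters from the study):

```python
import random

def weighted_staircase(true_threshold=10.0, start=30.0, step=1.0,
                       n_trials=400, seed=1):
    """Weighted up-down track: a correct answer makes the task harder by
    one step, an incorrect answer makes it easier by three steps, so the
    level converges where p(correct) = 3 / (3 + 1) = 75%."""
    rng = random.Random(seed)
    level, track = start, []
    for _ in range(n_trials):
        # Idealized 2AFC listener: always right above threshold, guesses below.
        correct = level >= true_threshold or rng.random() < 0.5
        level = max(0.0, level - step if correct else level + 3 * step)
        track.append(level)
    tail = track[n_trials // 2:]      # discard the initial approach to threshold
    return sum(tail) / len(tail)      # estimate of the 75% threshold
```

With this idealized listener the track settles around the true threshold; the real tasks terminated on a reversal count (51 reversals) rather than a fixed trial number.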
Stimuli
The first block of the ABR section consisted of two synthesized narrowband gamma-filtered stimuli, one at 162 Hz and one at 216 Hz, both presented at 55 dB SPL. A sawtooth wave of each pitch was narrowband filtered using a fourth-order polynomial gammatone filter with centre frequency 3141.56 Hz; then, average intensities were normalized. Each stimulus is about 25 ms in length with a 25 ms silent buffer before and after the sound, for an interstimulus interval (ISI) of about 50 ms (the lengths are not exact since the duration of the stimuli depends somewhat on the periodicity of the frequencies). The short stimuli were presented in alternating polarities and randomized.
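This synthesis can be sketched as follows (the fundamental frequencies, the 3141.56 Hz centre frequency, and the fourth filter order come from the text; the gammatone bandwidth `bw`, sampling rate, and exact duration handling are assumptions, and the original stimuli were generated in Matlab):

```python
import numpy as np

def gamma_sawtooth(f0, fc=3141.56, fs=44100, dur=0.025, order=4, bw=400.0):
    """Sawtooth at fundamental f0, passed through a 4th-order gammatone
    filter centred at fc, then RMS-normalized so average intensities match."""
    t = np.arange(int(dur * fs)) / fs
    saw = 2.0 * ((t * f0) % 1.0) - 1.0                      # sawtooth in [-1, 1]
    # Gammatone impulse response: t^(n-1) * exp(-2*pi*bw*t) * cos(2*pi*fc*t)
    g = t ** (order - 1) * np.exp(-2 * np.pi * bw * t) * np.cos(2 * np.pi * fc * t)
    y = np.convolve(saw, g)[: len(t)]
    return y / np.sqrt(np.mean(y ** 2))                     # normalize intensity

low, high = gamma_sawtooth(162.0), gamma_sawtooth(216.0)
```

The narrowband filtering leaves only the harmonics of each sawtooth near 3141.56 Hz, so the two stimuli differ in periodicity (pitch) while occupying the same spectral region.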
The second block of the ABR section consisted of one CV syllable, /puu/, which means "tree," recorded from an adult female native Finnish speaker and cut from the longer word /puuro/, which means "porridge." The tone contour ranges in fundamental frequency (f0) from 169 to 233 Hz and lasts 340 ms, with 20 ms of silence before and 30 ms after, presented in a single polarity at 65 dB with a total of 4000 sweeps. Finnish does not have a system of lexical tones as Mandarin does, but instead a lexical duration contrast, in which vowels and consonants have a long and a short version, e.g. tuli, "fire," tuuli, "wind," and tulli, "customs." The long vowels are co-signaled by a tone contour with a slight initial rise followed by a long fall, which aids in recognition of duration contrasts. Mandarin has four lexical tones: high level, high rising, low falling-rising, and high falling. Thus, the tone contour used here came from a natural spoken language but represented a totally unfamiliar contour to Mandarin speakers.
The behavioral stimuli were synthesized in the same way as the short nonspeech sounds used for brainstem recording but were longer since they were used for perceptual judgments. The standard sounds were 150ms long, 65 dB, and 162 Hz. The behavioral tasks were created within custom Matlab functions to be within the range of human speech syllables in intensity, frequency, and duration. The three features were either held constant or varied adaptively or randomly, depending on the task. The adaptive tasks automatically terminated after 51 reversals; the multifeature task had 300 trials.
Psychoacoustic tasks
The behavioral analysis used estimates from a logistic regression model that were fitted to the binary response data to calculate Weber fractions that represent discrimination thresholds for each auditory feature, using the equation ln(3)/k where k is the GLM estimate. For the duration modulation test, generalized Weber fractions use the same calculation and represent the extent to which duration is judged longer, given an increase in each specific feature (intensity, frequency, or duration). Additional effects were calculated: the intensity ratio, which is the (absolute value of the) ratio of generalized Weber fractions for the intensity dimension over the duration dimension and represents the extent to which participants were influenced by variation in intensity when making the duration judgment (a larger ratio corresponds to more influence). The frequency ratio is the same calculation for the influence of frequency on duration judgment, and the duration ratio is the ratio of Weber fractions of duration discrimination from the simple task to the complex task, which represents the difference in performance between the simple and complex tasks (a smaller ratio corresponds to decrement in performance from simple to complex task). It is expected that all participants decrease in performance between the simple and complex task since ignoring distracting features is a more difficult task.
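The ln(3)/k conversion follows from the logistic model: with p(response) = 1/(1 + e^(-kx)), the 75% point satisfies x = ln(3)/k, since logit(0.75) = ln 3. A minimal sketch with a simulated observer (the assumed slope of 8 and the fixed zero intercept are illustrative; the study fit a GLM to real responses):

```python
import math, random

def weber_fraction(x, y, iters=25):
    """Fit the slope k of p = 1/(1+exp(-k*x)) by Newton's method
    (intercept fixed at 0 for simplicity) and return ln(3)/k,
    the relative stimulus difference giving 75% correct."""
    k = 1.0
    for _ in range(iters):
        p = [1.0 / (1.0 + math.exp(-k * xi)) for xi in x]
        grad = sum((yi - pi) * xi for xi, yi, pi in zip(x, y, p))
        hess = sum(pi * (1.0 - pi) * xi * xi for xi, pi in zip(x, p))
        k += grad / hess
    return math.log(3) / k

# Simulated observer with true slope 8 -> Weber fraction ln(3)/8, about 0.137
rng = random.Random(0)
xs = [rng.uniform(-0.5, 0.5) for _ in range(4000)]
ys = [1 if rng.random() < 1.0 / (1.0 + math.exp(-8 * xi)) else 0 for xi in xs]
```

The generalized Weber fractions and the intensity, frequency, and duration ratios described above are the same calculation applied to each varied dimension of the multifeature task.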
Subcortical responses
For analysis of the short ABR stimuli, data was preprocessed with band-pass filters at 80Hz and 4000Hz and an artifact rejection threshold of 30 μV and epochs of 15ms prestimulus and 30ms poststimulus. Due to a technical error, it was not possible to separate responses to the two different stimuli; therefore, the results show group grand averages. Wave V peak amplitudes and latencies were extracted with a custom Matlab thresholding algorithm designed to detect peaks within a designated time window as a percentage of total peak size, which is a conservative measure to take higher-amplitude noise into consideration. Wave V is thought to be generated by the inferior colliculus, which is a waystation for corticofugal connections and is an important integration point for incoming afferent and efferent information. The amplitude of wave V indicates precision in the temporal tuning of a population of neurons responding to sound [38]. It has been shown to reflect subcortical experience-based plasticity from auditory training and is affected by learning and language disorders [39,40,41]. It has previously been shown that wave V amplitude reflects enhanced duration processing at a subcortical level associated with quantity language experience [28], so the current study was interested in possible duration processing enhancement at the subcortical level due to musical expertise.
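A toy version of such a thresholding peak picker might look as follows (the search-window bounds and the 50% fraction are illustrative assumptions; the study's exact criteria and its Matlab implementation are not given):

```python
import numpy as np

def pick_wave_v(resp, fs=20000, window=(0.005, 0.010), frac=0.5):
    """The largest sample inside the search window counts as wave V only
    if it reaches a set percentage of the response's overall maximum,
    which guards against accepting higher-amplitude noise as a peak."""
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    peak = i0 + int(np.argmax(resp[i0:i1]))
    if resp[peak] < frac * np.max(np.abs(resp)):
        return None                          # no credible peak in the window
    return peak / fs, float(resp[peak])      # latency (s), amplitude
```

A response whose dominant deflection lies outside the designated window is rejected rather than mis-scored.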
Responses to the speech stimulus were bandpass filtered from 80-1000 Hz. The analysis was mainly concerned with the sustained portion of the response (post-20ms). Waveforms for each subject were averaged before further analysis. FFR analysis was conducted by means of a sliding window short-term autocorrelation function which allocated 40 ms time bins shifted by 1ms, creating 283 overlapping bins. For the pitch tracking analysis, each bin was autocorrelated (cross-correlated with itself) and the peak autocorrelation value (expressed as a number between 0 and 1, excluding the first lag which is 1) was identified for each bin, representing the periodicity strength of each time bin. Then, these peak values were averaged for each participant to determine the participant's pitch strength over the entire course of the response.
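The sliding autocorrelation can be sketched as below (using a 500 Hz lag floor to stand in for "excluding the first lag" is an assumption, as is the sampling rate; the original analysis was in Matlab):

```python
import numpy as np

def pitch_strength(x, fs=20000, win=0.040, hop=0.001, f_max=500.0):
    """Per 40 ms bin (1 ms hop), take the peak of the lag-0-normalized
    autocorrelation at lags longer than 1/f_max, then average the peaks
    over bins as the response's overall pitch strength (0..1)."""
    n, h = int(win * fs), int(hop * fs)
    peaks = []
    for start in range(0, len(x) - n + 1, h):
        seg = x[start:start + n] - x[start:start + n].mean()
        ac = np.correlate(seg, seg, mode="full")[n - 1:]   # lags 0 .. n-1
        ac /= ac[0]                                        # lag 0 == 1
        peaks.append(ac[int(fs / f_max):].max())
    return float(np.mean(peaks))
```

A strongly periodic response scores near 1, while an aperiodic response scores near 0, which is why the averaged peak serves as a periodicity-strength measure.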
A short-term spectral analysis was also conducted using the same sliding window function. A Fast Fourier Transform (FFT) was applied to the windowed bins (Hanning window, bins zero-padded to 1 second to increase spectral resolution). From this, it was possible to extract the f o contour from the spectrogram by identifying the frequency which shows the peak magnitude for each time bin. Thus, this is the measure of pitch tracking in terms of frequency. These peak magnitude frequencies per subject were then cross correlated with the stimulus itself (which has undergone the same short-term FFT process) to obtain the FFT pitch tracking measure (expressed as a cross correlation coefficient between 0 and 1) per participant.
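The spectral version can be sketched in the same way (Hann window and zero-padding to 1 second give 1 Hz bin spacing at the assumed 20 kHz sampling rate; correlating the extracted contour with the stimulus contour yields the tracking score):

```python
import numpy as np

def f0_track(x, fs=20000, win=0.040, hop=0.001, nfft=20000):
    """Peak-magnitude frequency per Hann-windowed, zero-padded 40 ms bin
    (1 ms hop); with nfft = fs the spectral resolution is 1 Hz."""
    n, h = int(win * fs), int(hop * fs)
    taper = np.hanning(n)
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    return np.array([freqs[np.argmax(np.abs(np.fft.rfft(x[s:s + n] * taper, nfft)))]
                     for s in range(0, len(x) - n + 1, h)])

def tracking_score(track, reference):
    """Zero-lag correlation between response and stimulus pitch tracks."""
    return float(np.corrcoef(track, reference)[0, 1])
```

Applied to the response and to the stimulus itself, the two tracks' correlation is the FFT pitch tracking measure; octave jumps in a noisy response show up as outlier bins in the track and depress the score.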
Musical expertise
For measures of musical expertise used in correlations, the current study uses the generalized score of the self-report questionnaire from the Goldsmiths Musical Sophistication Index (Gold-MSI) [42]. As a full evaluation it consists of the self-report questionnaire and a battery of listening tests including melodic memory, beat perception, and sound similarity. The self-report questionnaire alone has been validated using objective listening tests and is an effective measure of musical ability [43]. The self-report inventory scores participants along five factors of musical engagement: active engagement, perceptual abilities, musical training, singing abilities, and emotional engagement. These factors are weighted together to create the generalized musical sophistication score. The Gold-MSI is equally useful for evaluating the musical sophistication of people who are highly formally trained, untrained, or have casual musical experience.
Statistical analysis
Since the distributions were not normal, nonparametric methods were used. A series of Mann-Whitney-Wilcoxon tests were run to compare the results of each test between music groups and Bonferroni corrected for multiple comparisons within effect type.
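A minimal version of this testing scheme (normal approximation, no tie correction — a sketch, not the implementation used in the study):

```python
import math

def mann_whitney_p(a, b):
    """Two-sided Mann-Whitney-Wilcoxon p-value via the normal
    approximation; assumes no tied values (midranks are not handled)."""
    na, nb = len(a), len(b)
    rank = {v: r for r, v in enumerate(sorted(list(a) + list(b)), start=1)}
    u = sum(rank[v] for v in a) - na * (na + 1) / 2
    mu = na * nb / 2
    sigma = math.sqrt(na * nb * (na + nb + 1) / 12)
    return math.erfc(abs(u - mu) / sigma / math.sqrt(2))

def bonferroni(pvals, alpha=0.05):
    """Flag each p-value against the corrected criterion alpha / m."""
    return [p < alpha / len(pvals) for p in pvals]
```

The 0.007 criterion noted with the results tables corresponds to seven comparisons per effect family (0.05 / 7 ≈ 0.007).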
An additional comparison was done of pitch tracking in responses to the speech stimulus with a restricted frequency window of 100 Hz around the first and second formants. A further analysis correlated Gold-MSI general sophistication scores with all of the previous effects: behavioral single-feature frequency, intensity, and duration discrimination, multifeature duration discrimination, frequency ratio, duration ratio, intensity ratio, wave V amplitude and latency, autocorrelation pitch tracking, and FFT pitch tracking for f0, F1, and F2.
Perceptual effects
Musicians showed enhanced single-feature discrimination for both frequency and duration and for duration in the complex task compared to the non-musicians (for descriptives, see Table 2). They also showed a trending difference in single-feature intensity discrimination and frequency ratio, which did not reach significance at the corrected level (Table 3).
Subcortical effects
There were no differences between musicians and nonmusicians for either peak amplitude or latency of wave V in onset responses to the short nonspeech stimuli (Fig 1), nor for either autocorrelation pitch tracking or FFT pitch tracking of fundamental frequency in responses to the speech stimulus (Fig 2, Table 4).
Further analysis
Formant pitch tracking. The FFT pitch tracking sliding window algorithm was run again on 100 Hz windows around the average first (397-497 Hz) and second (700-800 Hz) formant frequencies as identified by Praat. The sliding window was also run on the original speech stimulus with the same restrictions and the results were cross correlated. Pitch tracking of the formants was also not significantly different between musicians and nonmusicians (Table 5).
Correlations with musical sophistication. A further analysis determined whether the results would be different with another measure of musical expertise, namely, the Gold-MSI general musical sophistication score. This score takes into consideration formal musical training but also factors which are unrelated to training and which may be due to aptitude or social or environmental conditions. All previously used perceptual and neural measures were correlated with the general musical sophistication index score, and the results mirror those of the music groups, although some of the trends do not reach the corrected significance level (Tables 6 and 7). Moreover, the group music score means were significantly different (W = 812, p = 9.52 x 10^-11), indicating that the participants were accurately assigned to musician and nonmusician groups. The difference is likely due to the fact that the two measures of musical expertise emphasize slightly different factors.
Discussion
This work investigated the basic perceptual and subcortical auditory profiles of Mandarin speaking musicians and nonmusicians. Mandarin speaking musicians showed more accurate single-feature discrimination for both frequency and duration and a stronger influence of frequency on duration discrimination in a complex auditory environment. No subcortical effects were found.
Perceptual effects
There was no effect of duration ratio, which means that there was no group-based difference in the relationship between the single-feature duration task and the multifeature duration task. In general, participants decline in accuracy between the simple and complex tasks due to the increase in processing load from the addition of distracting features. It might be expected that musicians would perform better in the complex task (showing less decrement in performance) than nonmusicians due to their superior processing skills. However, it may also be argued that enhancement in processing of low level single features could lead to an overall increase in system efficiency, which promotes integration of low level features. This appears to be what happened in a population of musically diverse Finnish speakers, whose linguistically driven enhancement for duration processing was more degraded in the complex task for those with higher levels of musical sophistication [30]. Here, there was no difference in degradation of duration discrimination for Mandarin speakers with the addition of distracting features between musicians and nonmusicians. In fact, the musicians showed significantly more accurate duration discrimination in the complex task compared to the nonmusicians. In other words, both the Mandarin speaking musicians and nonmusicians showed a similar extent of degradation between the simple and complex task, but the musicians had an overall more accurate duration discrimination within both tasks. There was a nonsignificant trend (at corrected level) of frequency ratio. Previous studies have found that Mandarin speakers are less affected by frequency when making duration judgments than quantity language speakers (Finnish and Estonian), with both the most accurate duration discrimination and the most influence of frequency on duration discrimination occurring for Finns [44]. 
The positive correlation indicates that the more musically sophisticated participants were more affected by frequency in their duration judgments than the less musically sophisticated. Although counterintuitive, this indicates an enhanced efficiency in the auditory system since psychoacoustically, frequency contributes to perceived duration [45,46]. By integrating features which are perceptually bound, musicians process sound more efficiently in real-world acoustic environments like music performance.
Subcortical effects
Both groups showed high variability in amplitude of the onset response. Both groups accurately followed the speech stimulus tone contour; however, FFT pitch tracking for both musicians and nonmusicians, while giving generally high cross correlation values, was similarly highly variable and contained octave jumps. It is likely that since the participants were all healthy adult native Mandarin speakers, there was a ceiling effect for subcortical frequency processing due to linguistic expertise. The speech stimulus was chosen to represent a nonnative tone contour from a natural language. It is possible that the Mandarin speakers did not process the stimulus as linguistic, and/or that the musicians processed it as musical, which would activate perceptual benefits from cognitively identifying the task demands in a musical context. Previous research has shown top-down effects of language or music on categorization (and further pitch processing) of sounds which are similar to natural language tone contours or musical notes [47,48]. It may be necessary to direct participants' "listening mode" with stimuli that could be ambiguously interpreted to be linguistic or musical. Additionally, further investigations could use a wider range of similar natural speech, musical, and speech-like stimuli, such as instrumental and vocal contours, synthesized contours without phonemes, and iterated ripple noise in order to determine the effect of top-down organization of auditory domains.
Musical expertise
The correlational analysis with Gold-MSI scores showed the same pattern of effects as the cross-sectional analysis, which was expected since the group means were significantly different. However, the distribution of scores was not bimodal, as would be expected from groups which did not overlap in level of musical training (fewer than 2 years/6 or more years). The Gold-MSI is likely capturing additional features that are not directly associated with formal musical training and which may have a weak effect on the results. Participants indicated their main instrument on the Gold-MSI (Table 1). Of the 28 musicians, 21 indicated Western instruments, 5 indicated traditional Chinese instruments, and 2 indicated voice. The traditional instruments included guzheng (Chinese zither), koto (Japanese instrument similar to the guzheng), yangqin (a hammered dulcimer), erhu (a two-stringed fiddle), and bamboo flute. Previous research has shown that there are differences in auditory feature processing between different kinds of instrumentalists and musical styles [49,50,51]. Here, it is possible that different styles or cultures of music training could emphasize different auditory features enough to influence the results. Unfortunately, the Western and traditional groups here were too different in number to compare in a statistically meaningful way. However, musical culture remains an interesting question for the future and could be investigated by focusing on style of musical expertise as a design factor.
Limitations
As mentioned above, it was not possible to statistically compare musicians trained in traditional or Western musical styles. It would be of particular interest to compare musicians trained in different tonal systems or on fixed-and movable pitch instruments or vocalists since regular practice of a tonal system with smaller or larger frequency differences between notes may influence discrimination patterns.
One of the main limitations of this work is the lack of a multifeature frequency discrimination task. In the future, some of these questions could be addressed by a more complete set of perceptual tasks, especially since the Mandarin speakers show music-based effects for both frequency and duration.
Some recent research has indicated genetic factors in auditory feature processing and musical aptitude heritability [52,53,54,55,56]. Future studies should consider the impact of genetic differences across major linguistic groups and the effect that difference may have in comparing auditory processing between the groups.
Conclusions
Knowledge about early auditory processing plasticity is becoming more granular, and effects specific to certain sound environments are becoming clearer. Future investigations must take into consideration the differences between language environments and musical environments in their effects in tuning the auditory system. Additionally, in order to gain a more complete picture of the plasticity of the auditory system, musicality evaluations should be carefully considered, as well as other factors like genetics/aptitude, socio-cultural differences in music attitudes, and behavioral task demands. Musical expertise appears to confer mainly perceptual advantages within linguistic groups. The transfer between language and music effects happens at an early level of processing, but responses are still modulated by behavioral goals, which drive efferent connections, as well as a holistic pressure toward efficiency in the full system.
Return to work following disabling occupational injury – facilitators of employment continuation
Young AE. Return to work following disabling occupational injury – facilitators of employment continuation. Scand J Health. 2010;36(6):473–483. Objective Return to work following occupational injury is an important rehabilitation milestone; however, it does not mark the end of the return-to-work process. Following a return to the workplace, workers can experience difficulties that compromise their rehabilitation gains. Although there has been investigation of factors related to a return to the workplace, little attention has been paid to understanding what facilitates continued return-to-work success, as this paper aims to do. Methods This study used data gathered during one-on-one telephone interviews with 146 people who experienced a work-related injury that resulted in their being unable to return to their pre-injury job, but who returned to work following an extended period of absence and the receipt of vocational services. Results Numerous return-to-work facilitators were reported, including features of the workers' environmental and personal contexts, as well as body function, activities, and participation. Influences that stood out included a perception that the work was appropriate, supportive workplace relationships, and a sense of satisfaction/achievement associated with being at work. Conclusions The findings support the contention that initiatives aimed at improving return-to-work outcomes can go beyond the removal of barriers to include interventions to circumvent difficulties before they are encountered. Together with providing ideas for interventions, the study's findings offer an insight into research and theoretical development that might be undertaken to further the understanding of the return-to-work process and the factors that impact upon it.
With the realization that return-to-work outcomes are only loosely associated with the resolution of symptoms has come research aiming to discover what other factors play a key role in determining outcome. While a number of factors have been found to relate to return to work and biopsychosocial models have been developed, many of the factors found to relate to inferior outcomes are not modifiable, leaving us with the ability to identify a person at risk, but often with little understanding of how to intervene so that the individual can achieve a better-than-predicted outcome.
Perhaps in response to this, a relatively recent development within the field of return to work has been research that has sought to identify return-to-work barriers, with the rationale being that if we know what problems people are likely to encounter ahead of time, we might be more able to help them avoid them (1).
Along similar lines, it can also be argued that learning from the positive experiences of others will likely help to develop plans that will hold up to predictable stresses and strains.
Thus far, research in this vein has tended to focus on barriers to (2)(3)(4)(5)(6) and/or facilitators of (7-10) a return to the workplace. Although this is an important milestone in the return-to-work process, it is only a step toward the achievement of a safe and sustainable employment outcome. Conceptualized as a developmental process, return to work has four phases: (i) off work, (ii) work re-entry, (iii) maintenance, and (iv) advancement (11). While a return to the workplace denotes the end of the off-work phase, the opportunity for the return to work to be compromised has the potential to continue well beyond that event. Although there are numerous supports and services aimed at helping injured workers to go back to work, once back at the workplace, this support often wanes and the worker can struggle. If the return to the workplace cannot be maintained, the affected individual's return to work often becomes increasingly complicated and has the potential to be terminally compromised. As such, efforts to improve our understanding of factors related to the later phases of the return-to-work process (ie, re-entry, maintenance, and advancement) are justified.
To date there has been little investigation of what influences work continuation after a return to the workplace has occurred. One example is a study of people with spinal cord injury (N=20) that looked at factors respondents felt facilitated their post-injury employment participation (12). Identified facilitators related to employment maintenance included: flexibility of hours and duties, employer attitude and active facilitation of successful re-employment, bonds between the worker and coworkers, and support from others while regaining functional abilities. Other data providing insight comes from a 10-year follow up of a supported employment program. People who participated in the program reported facilitators including: being able to work reduced hours, on-the-job training, support at the workplace, "having someone help you get along better with the people at work," having someone to talk to about work stress, transport assistance, and "having a trial period to see if you can cope with the work" (13). From the broader return-to-work literature, research suggests that transportation/mobility independence (14)(15)(16), good social support for return to work (17), as well as workplace factors -including good supervisor interactions (18) and flexibility of working conditions (19,20) -also have the potential to facilitate the maintenance of employment gains.
This manuscript reports on results from a larger study investigating return-to-work experiences following the receipt of vocational services. Earlier reports have described participants' prevocational rehabilitation return-to-work experiences (21). This report focuses on the part of the project that aimed to develop further an understanding of what facilitates continued return-to-work success following the receipt of vocational services and explore the idea that different influences are important during the various phases of the return-to-work process. With this information, we will be better placed to make recommendations regarding initiatives that will efficiently and effectively facilitate the achievement of return-to-work goals.
Design and procedure
While a qualitative approach was considered appropriate for the exploration of participants' return-to-work experiences, addressing the second part of the aim (ie, an exploration of the idea that different influences are important during different phases of the return-to-work process) required testing for between-group differences and answering questions relating to experience commonality. Given that this could not be achieved using a purely qualitative approach, a design was chosen that blended qualitative and quantitative data collection and analytical techniques and, consistent with mixed-method procedures (22), a sample was drawn that was larger than typically used in purely qualitative investigations. Although there was a desire to determine if certain influences were important during different phases of the return-to-work process, no attempt was made to draw a minimum sample from any of the subgroups (ie, off work, re-entry, maintenance, advancement). Rather, a consecutive sampling methodology was employed whereby an attempt was made to recruit all people with state-approved return-to-work plans having a return-to-work date on or after the sampling commencement date of 1 January 2003. The intention was to keep sampling until the desired study size (N=150) was obtained; this occurred in September 2006. While there was no attempt to obtain a minimum sample for any of the subgroups, the decision to continue sampling until 150 people had been recruited was made after a sense of the proportion of people maintaining and not maintaining employment had been gained (ie, when sampling was complete there was the understanding that there should have been around 20 people in the off-work group).
To obtain the desired sample size, 549 invitations to participate in the research were sent out via written invitation, with enclosed and pre-paid response forms. If the invitee did not respond to the invitation within two weeks, attempts were made to reach the individual by phone. In 129 cases (24%) the provided contact information (mailing address, phone number, or a combination of both) was no longer valid. In an additional two cases, the invitee was reached, but they were unable to be interviewed due to language barriers. An additional five people were not interviewed because they had not returned to work. Of those thought to be eligible and with valid contact information (N=413), 127 (30%) could not be reached despite numerous attempts (minimum of 10 follow-up calls) and 140 (34%) declined to take part in the study. The remaining 146 people were interviewed. Application of the American Association for Public Opinion Research (AAPOR) standard definitions, allowing for ineligible non-contacts (23), resulted in a calculated response rate (RR3) of 40%. Using the number of eligible respondents successfully contacted as the denominator, the cooperation rate (23) was calculated to be 51%.
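The reported rates can be reproduced in outline from the counts above. The following sketch (in Python, which plays no part in the original study) computes the cooperation rate directly from the reported figures and gives a simplified form of the AAPOR RR3 formula; because the text does not state the eligibility estimate `e` used for RR3, only the cooperation rate is verified here.

```python
# Counts reported in the text
interviews = 146   # completed interviews
refusals = 140     # eligible people who declined
no_contact = 127   # valid contact details but never reached
unknown = 129      # invalid contact information (eligibility unknown)

# Cooperation rate: completes over all eligible people actually contacted
cooperation_rate = interviews / (interviews + refusals)

def aapor_rr3(i, r, nc, u, e):
    """Simplified AAPOR Response Rate 3: completes over estimated eligible
    cases, where e is the assumed eligibility rate among cases of
    unknown eligibility. The study's value of e is not reported."""
    return i / (i + r + nc + e * u)

print(round(cooperation_rate, 2))  # 0.51, matching the reported 51%
```

With these counts the RR3 value depends on the chosen eligibility estimate; the paper reports an RR3 of 40%.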
Young
Data were gathered via one-on-one telephone interviews. A concurrent-nested strategy (24) was adopted such that responses to closed-ended questioning led to further open-ended questioning. As such, open-ended questions were interspersed throughout the questionnaire. When collecting the data, the interviewer used a computer-assisted semi-structured proforma. This was pilot tested with experts in the subject area and revised as required. The interview began with demographic and injury questions. After these, questioning turned to postinjury return-to-work experiences and an exploration of what participants thought had helped them continue working. Depending on whether the participant was employed at the time of interview, the opening question specific to facilitation was asked in one of two ways. For those not employed, the question was: "Was there anything that you found helped you stay at work even though it was difficult?" For those employed, the question was: "Is there anything that you have found that is particularly helping you stay at work?" Interviewers were instructed to follow up participants' comments in a conversational manner (eg, "Can you tell me a bit more about that?") and summarize responses as a way of checking that they had understood and had accurately recorded the participant's thoughts (eg, "So, if I have this right, you would say that the support of your co-workers helped you to stay at work even though it was difficult. Is that right?"). The summary was entered into the computer and an audio recording was made of the entire interview. In addition, interviewers took handwritten field notes to supplement the summarized material and prompt them to follow up on pertinent material. Each interview took about an hour to complete.
Four interviewers were involved in the data collection process. All were female, university-educated and had received in-house training in the conduct of semi-structured interviews from the lead researcher, who had formal training on the topic. Interviewers were unknown to the participants until the time they were approached to schedule an interview. The aims of the project were detailed in the letter of invitation and informed consent was obtained prior to the commencement of interviewing. Participants were also informed that the project was a joint initiative conducted by an area university and an industry-funded research institute. This information, together with a statement indicating that participation would in no way influence their workers' compensation claim, was provided to participants at the time of recruitment. While data regarding why people chose not to participate in the research were not systematically collected, there was no reported instance of refusal because an insurance company was involved. At no time were participants asked to identify their insurer. Based on the insurer's market share within the study catchment area, it is anticipated that only a relatively small proportion (less than 10%) of the people invited to participate in the research would have received their compensation benefits from the insurer funding the research.
Participants
All participants were unable to return to their pre-injury job and had accepted workers' compensation claims, state-approved return-to-work plans, received vocational services within the Massachusetts workers' compensation system, and subsequently found a position of employment. Such individuals represent approximately 30% of those determined by the Office of Education and Vocational Rehabilitation to be suitable for services and referred to insurers for vocational rehabilitation and about 50% of all those with approved return-to-work plans (25). The vocational services provided to the study population include assistance to develop a plan for employment that includes a review of goals, career interests, work history, educational background, labor market research and trial-work experiences. In addition, assistance is provided with the preparation of resumes/curricula vitae, job-seeking skills training, job hunting, and interview skills training. Post-placement assistance is also provided. Services are provided under a hierarchy such that if the worker cannot return to the same job, an effort is made to modify the job so that they can return to the same employer. If that is unsuccessful, an effort is made to find a different job with the same employer. If that is not possible, an effort is made to find a different job with a different employer. Finally, if that does not eventuate, retraining may be undertaken.
[Note: More detail regarding vocational rehabilitation service provision can be found at the official website of the Commonwealth of Massachusetts: www.mass.gov.] Of the study participants, 38% were female. The average age at injury was 39.7 years [standard deviation (SD) 8.3] and 43.9 years (SD 8.3) at interview. In 78% of cases the respondent's medical condition was the result of a trauma. At the time of injury, 5% had not completed high school, 43% were high school graduates and 51% had undertaken some type of further education. Three-quarters of participants had undertaken some post-injury training. Prior to their injury, the majority of participants had worked in the service industry (34%). Construction, transportation and utilities, and retail trade workers were also well represented (17%, 15% and 13%, respectively). Participants' pre-injury occupations covered the full range, but most could be categorized as being in associate professional and technical (23%) and elementary occupations (23%). The average time between injury and the commencement of the participant's post-vocational services return to work was 37.63 months (SD 17.72). The average time between the commencement of the return to work and interview was 1.05 years (SD 0.84).
Data interpretation and presentation
Once collection was complete, data were exported from the computer-assisted interviewing software to various data management programs. Pre-coded data were sent directly to SPSS version 15 (SPSS Inc, Chicago, IL, USA).
Responses to the open-ended questions were exported to Microsoft Word and then later Excel (Microsoft Corp, Redmond WA, USA), and subject to a directed content analysis (26). Assessment of a participant's placement in the return-to-work process was made by the lead researcher. This was rated based on the participant's employment status, time in their current position, their self-assessed performance, and their plans to pursue other work. Those not at work were classified as being off work. Those who had been in their current position for a relatively short period and gave an indication that they were not yet performing to expectation were categorized as being in the re-entry phase. Those who had been in their job for longer periods and reported performing well were categorized as being in the maintenance phase. Those pursuing alternative work were categorized as being in the advancement phase.
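The rating rules described above can be sketched as a simple decision procedure. The function below is an illustration only: the tenure cutoff and the exact ordering of the checks are assumptions, since the text does not quantify "a relatively short period" or specify how conflicting criteria were resolved.

```python
def classify_phase(employed, pursuing_other_work,
                   performing_to_expectation, months_in_position,
                   tenure_cutoff=6):
    """Sketch of the lead researcher's phase-rating rules.
    The 6-month tenure cutoff is an assumption, not stated in the text."""
    if not employed:
        return "off work"
    if pursuing_other_work:
        return "advancement"
    # Relatively short tenure, or not yet performing to expectation
    if months_in_position < tenure_cutoff or not performing_to_expectation:
        return "re-entry"
    # In the job for a longer period and reporting performing well
    return "maintenance"
```

Applied to a participant record, the function yields one of the four phase labels used throughout the analysis (off work, re-entry, maintenance, advancement).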
Two researchers who had been involved in the data collection process performed the analysis of responses regarding facilitatory influences. The analytical process involved breaking down the participant's responses to meaningful elements (ie, succinct words or phrases that captured an essential meaning). These were then represented on flash/index cards. If an individual gave a response involving more than one element, multiple cards were made up. For example, if a participant mentioned the support of their healthcare provider and the work being appropriate, this was reflected on two separate cards.
Cards were then sorted and coded using the conceptual framework afforded by the World Health Organization's International Classification of Functioning, Disability, and Health (ICF) (27). The ICF describes functioning as the interplay of body functions, body structures, activities and participation, and environmental and personal factors and provides a taxonomy that includes component and second-order codes (28,29). Previous researchers conducting qualitative investigations of patients' functional problems have used the ICF, and it has been found to accommodate most patient articulations (30)(31)(32). Although the ICF has proven useful, it has been found that some meaning is not captured through application of the coding structure (30). Therefore, while data are presented within the overarching framework and interpreted with reference to existing codes, if the application of an existing code resulted in a loss of meaning, a code was not applied. Instead, a descriptive emergent code was chosen and applied. This approach is consistent with ICF-supporting material, which states that the ICF should be considered as a building block and applied according to the needs of the user depending on their creativity and scientific orientation (27).
The coding of data pertaining to facilitatory influences was iterative. Duplicate cards were made where it was felt that more than one element was being mentioned; cards were removed if it was deemed that within-participant duplication had occurred. Categorizations were refined throughout the analytical process. Where there was a discrepancy between the coders, this was discussed and resolved through reference to the ICF-supporting materials and, if needed, review of the audio recordings. Following the initial round of coding, the original transcripts were again consulted to ensure that all relevant information had been included.
Once reduced to categorizations, the qualitative data were mixed with the quantitative data and subjected to simple uni- and bivariate analyses. Due to the data's lack of parametric properties, chi-square (χ²) analyses were chosen to test the significance of between-group differences. If the smallest expected cell frequency was <5, Fisher's exact test (FET) was applied. Coded responses regarding facilitatory influences were cross-tabulated by the participant's return-to-work phase (ie, off work, work re-entry, maintenance, or advancement). When individuals reported facilitators of the same type (eg, orthotics, back brace, and heating pads), these were grouped together (in this case as "products") and recorded as one unit (as opposed to three). As such, each unit (as reported in table 1) represents a person that reported a facilitatory influence of that type, regardless of whether or not he or she mentioned one or more influences of that type. Component level totals (ie, environment, personal, body function, and activities/participation) detail the number of people reporting an influence that could be categorized as being of that type. The same is true for the sub-component categories "working conditions" and "coping style." While summaries are useful for revealing group trends, they do not convey a sense of the individual. To provide some context, illustrative examples have been included in the text. Where participant identification numbers have been provided, these include a notation of that individual's phase in the return-to-work process, such that off work=0, re-entry=1, maintenance=2 and advancement=3. As an example "ID31-2" denotes the 31st person invited to participate in the research who was in the maintenance phase at the time of interview.
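The decision rule between the χ² test and FET can be illustrated with a hypothetical 2×2 cross-tabulation. The cell counts below are invented for illustration (only the group sizes echo the study's approximate off-work/employed split); the Fisher p-value is computed from the hypergeometric distribution, as in the standard two-sided test.

```python
from math import comb

def min_expected_count(a, b, c, d):
    """Smallest expected cell frequency of a 2x2 table under independence.
    If it falls below 5, the chi-square approximation is unreliable and
    Fisher's exact test is used instead."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    return min(r * k / n for r in rows for k in cols)

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the table [[a, b], [c, d]]:
    the sum of probabilities of all tables with the same margins that are
    no more likely than the observed one."""
    n, r1, c1 = a + b + c + d, a + b, a + c
    def p_table(x):
        # Hypergeometric probability of x in the top-left cell
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)
    p_obs = p_table(a)
    lo, hi = max(0, r1 + c1 - n), min(r1, c1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Hypothetical counts: rows = off-work vs employed,
# columns = facilitator reported (yes / no)
a, b = 12, 8     # off-work: 12 reported a facilitator, 8 did not
c, d = 109, 17   # employed: 109 reported a facilitator, 17 did not

print(min_expected_count(a, b, c, d))  # below 5, so FET is appropriate
print(fisher_exact_2x2(a, b, c, d))    # two-sided p-value
```

With these illustrative counts the smallest expected frequency is about 3.4, triggering the switch to FET described above.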
Results
Overall, 83% of participants reported some influence that helped them to stay at work (ie, a facilitator). Comparisons between those off work and those in the other phases indicated that there were significantly fewer people in the off-work phase that reported facilitatory influences in comparison with those employed (FET P<0.05). More detailed analysis indicated a significant difference between those off work and those in the re-entry and advancement phases (FET P<0.05). Of those reporting being assisted by something (N=121), 52% mentioned only one influence (mean=1.3); the remaining reported between two and four facilitators (2=36%, 3=11%, 4=1%). The mean number of facilitators was not found to relate to return-to-work phase (off-work mean=1.1, re-entry mean=1.5, maintenance mean=1.3, advancement mean=1.4), nor whether or not they were employed at the time of interview (working mean=1.4, not working mean=1.1). Summarized responses cross-tabulated by the participant's return-to-work phase are detailed in table 1. The following is a description of what participants reported, organized within the conceptual framework afforded by the ICF.
Environmental influences
[Table 1, which cross-tabulates the reported facilitatory influences by return-to-work phase, appeared here. Cell entries are the number of people mentioning each influence; ICF classification codes are given in square brackets (no code is listed where none applied), with component codes in parentheses. One table footnote is worth preserving: the "economic self-sufficiency (d870)" category overlaps with the paycheck/money/benefits category; it is listed separately because of the stated importance of being able to provide for oneself and/or one's family.]

Of the study participants, 63% reported that some feature of their physical, social, or attitudinal context had helped them overcome the difficulties they were experiencing. The most commonly reported influences were features of the individual's working conditions, but they also included more general environmental influences,
including medications (most commonly reported by those in the off-work phase), products (more commonly reported by those working at the time of the interview), services, supportive relationships, and the economy. In all cases, the medications that people mentioned were for pain management. The products that people reported included heat and ice (ID80-1, ID175-1, and ID481-3), transcutaneous electrical nerve stimulation (TENS) units (ID80-1 and ID481-3), orthotics (ID71-2 and ID423-2), and back brace and kneepads (ID277-3). Five individuals cited the support of others. In some cases, the support was said to have come from family and friends (ID75-1, ID87-1, and ID497-3); for others, it was the support of people assisting the worker in their return to work. Two people, both in the maintenance phase, talked about how their providers had been helpful in that they had encouraged them to take their time and find a job that suited their skills and interests (ID41-2 and ID132-2). Only one person said that healthcare services per se were something that had helped them to stay at work. This person referenced physical therapy and acupuncture (ID275-0). One person indicated that something that he found helpful was "not having to deal with doctors who think you are faking it" (ID334-2). Two individuals indicated that what they hoped would be an improved demand for their services was something that helped them keep going (ID426-2 and ID434-2).
Slightly more than half of the participants (51%) spoke about some feature of the individual's employment conditions. Most frequently, this involved money and was true for approximately 20% of most of the four groups, with a slight deviation for those in the maintenance phase, 16% of whom said it was important. A factor that was of importance for all groups was that the duties were appropriate. Across all groups, people spoke of the advantage of having duties they knew were within their physical capacity. Embedded within such comments were references to safety and the avoidance of further injury and symptom exacerbation, with examples including: "got more into the programming aspect to eliminate the lifting and more physical aspects" (ID220-0), "won't have to strain myself physically" (ID364-1) "have no lifting requirements and that was a major concern" (ID523-2), "not feeling pain at this job … not sure could return to [previous job] … worried about the pain returning if had to lift again" (ID338-3). A trend was noted for those in the later phases, where work that suited the participant's interests and skills was said to be of importance (ID13-2, ID132-2, ID442-2, and ID392-3). Such comments did not come from people in the off-work and re-entry phases.
The influence of supportive workplace relationships was mentioned by participants in all phases of the return-to-work process. While both employers/ supervisors and coworkers were spoken about, coworker support was mentioned more frequently. People in all phases mentioned the facilitatory impact of supportive coworkers; however, there appeared to be a developmental quality to the way coworkers were referenced. Take for example the case of an individual who was not working at the time of the interview: "The people that I worked with would help with the lifting. They would come over on their own to help a lot, but they would get reprimanded" (ID81-0). Contrast this with a participant, in the re-entry phase, who said she benefited from asking for help from her colleagues when she needed it. "I've got better at asking for help ... hard to do, but it has gotten easier." (ID446-1). Also, this from participants in the maintenance and advancement phases: "The use of the young kids for lifting ... really good team of people ... everybody helps everybody" (ID237-2); "Co-workers help with heavy things ... employer negotiates the work as they go" (ID431-3). As was alluded to above, support from coworkers and supervisors was commonly coupled. This is further supported by the fact that half of the people talking about the influence of the support of their employer/supervisor also referenced their coworkers.
Thirteen participants mentioned the benefits of flexible working conditions. Statements coded into this category included those relating to flexibility of: hours (eg, ID118-1, ID393-3, ID428-2, and ID537-3), duties (ID59-2 and ID375-3), breaks (ID314-0, ID375-3, ID416-2, and ID532-2), pace (ID75-0, ID105-2, and ID509-2), and physical working conditions (ID467-3). Among those who spoke about flexibility, control of their working conditions and ability to respond to their symptomatology appeared to be key to their success: "figuring out what works best … knowing what will push me over the limit … breaking up the day into manageable tasks that will not stress my wrists … limiting number of hours of repetitive movement" (ID59-2), "being able to get up and do something else if I am experiencing symptoms" (ID375-3), "flexibility to get up and move around and have breaks when I need to" (ID532-3), "being able to put my leg up on the cabinet under the register ... being able to sit on a stool when I need to" (ID467-3).
Seven of the participants spoke of the benefits of workplace equipment. In most cases, the equipment was not specifically designed for a person with activity limitations and included: items with push-button technology [eg, hydraulic lift (ID277-3) and an electric hospital bed (ID435-3)], ergonomic workstation (ID515-3) and seating (ID481-3 and ID515-3). All those who spoke of the influence of equipment were in the maintenance and advancement phases. Convenience - as in a convenient commute (ID125-3 and ID286-3) and fitting in with the family schedule (ID537-3) - was only mentioned by those in the advancement phase. The benefit of working reduced hours was spoken about by only two individuals: one in the re-entry phase who was building her capacity (ID74-1) and one in the maintenance phase who was working part-time and who said she would leave if her symptoms worsened and she was in pain all the time (ID86-2).
Personal factors
Although the ICF currently classifies only environmental influences as having the potential to exert facilitatory effects, not all participant responses could be classed as environmental in nature. Facilitators that could be classified (according to the ICF guidelines) as being part of the participant's personal context were mentioned by 35% of the sample. The most commonly mentioned factor was job satisfaction, which was most frequently reported by those in the maintenance phase. Interestingly, an appreciation of being at work (as opposed to off work) was also mentioned with relative frequency, but most commonly done so by those not working at the time of interview: "wanted to keep busy" (ID20-0), "not one to just sit there … love to work" (ID305-0), "the job was good for me, I got to do a lot of different things" (ID439-0). Comments made by those in the maintenance and advancement phases included: "I love people, have to be with people … love working … did not like the isolation when out of work" (ID371-2), "work is like physical therapy … helped me to get fingers moving" (ID126-2), "feeling that you are doing something again" (ID126-3), and "glad to be back at work" (ID184-3).
Another group of facilitators, which can be classed as relating to the individual's coping style, also played a role. Inspection of the comments made by those in the later phases indicates a difference in the techniques used. Those in the re-entry phase tended to talk more about attitude and determination (ID50-1, ID213-1, and ID391-1), whereas those in the maintenance phase talked more about knowing their limits and working accordingly, again suggesting a developmental nature to work re-entry and what is needed at different times throughout the return-to-work process. Another personal characteristic was asking for help when it was needed.
Although this was spoken about by just two individuals, it is worthy of note given the stated importance of coworkers. Interestingly, even though these people indicated that they benefited from the assistance of others, asking for it was not always easy, as was the case for ID446 who was in the re-entry phase. A sense of confidence can be noted in responses stating that their own personal skill set was something that kept them on the job (ID332-2 and ID393-3).
Physical body function, actions and participation
While only two people (ID338-3 and ID512-2) indicated that their physical/pain-free status was something that helped them to stay at work, a number of people indicated that what they did to maintain their body function was important. This was true for people in the first three return-to-work phases, but most commonly among those in the maintenance phase. Again, there appeared to be a developmental nature with regard to exercise and its benefits. For illustration, take these examples from individuals in the first three return-to-work phases: "getting up quite frequently and taking a walk" (ID314-0), "stretching everyday" (ID175-1), and "exercise has been very important in motivating me to work hard and be a better employee … more alert … became 'semi-retired' not working" (ID483-2). Others within the maintenance phase talked about using their lunch break to take walks (ID228-2), the importance of their regular exercise regime (ID423-2 and ID456-2) and the benefit of "keeping active, keeping going" (ID525-2).
Although captured to some extent as the facilitatory impact of earning money and having healthcare benefits, a number of individuals went further in their explanation. In eight cases, people indicated that beyond receiving a paycheck, what they were able to do with it (in terms of being able to pay their bills and provide for their family) was something that helped them to stay at work. The four people who talked about the facilitatory influence of learning new skills were all in the first three phases of the return-to-work process. In three of these cases, people had moved into jobs that required further training: "... learning something I like. [Apprenticeship type] was not my first choice, but I am getting to like it more as I get to know more about it" (ID50-1), "lectures are great … continually improving knowledge" (ID293-2), "continuing education is a true blessing" (ID47-2).
Discussion
The fact that 83% of the sample reported that some influence had helped them to stay at work indicates that the vast majority of people who return to work following disabling injury are accessing resources that facilitate the maintenance of their rehabilitation gains. Consistent with the ICF conceptualization (27), environmental contexts were reported to exert a positive influence on the individual's performance. The most commonly reported environmental facilitator was the resource rewards associated with working. Although a recent review of cohort studies investigating prognostic factors has indicated no evidence for the role of income in disability duration and return to work (33), as is likely the case for many of us, respondents indicated that their paycheck and insurance benefits helped them to stay at work. However, while this may well be the case, the extent to which such rewards can be manipulated to improve return-to-work outcomes is largely dependent on economic conditions and, as such, these factors are unlikely to prove a useful focus for intervention.
While a number of the factors that people talked about are also those that would likely help the typical individual stay at work (eg, paycheck, work fitting in with other commitments, and job satisfaction), some influences appeared more important than might normally be the case. One such feature was an emphasis on the work being appropriate, with safety being an important index thereof. It would appear that identification of appropriate work is important not only for timely return to work (34), but also for work maintenance. Related to work appropriateness, and consistent with earlier studies (8,19,20,35), were the reported benefits of a workplace/supervisor that allowed flexibility for task variation, breaks, and autonomous decision-making. These findings support the suggestion that a working relationship that allows "decision latitude", that is an employee's ability to make decisions related to the way he/she works, can help employees devise coping strategies that can mitigate the effects of workplace stressors (36) and support return to work following long-term sickness absence (37). Similarly, findings are in line with the idea that a freedom to develop different ways of working in order to meet production targets (referred to as "margin of maneuver") can help rehabilitating workers get the job done (38). As such, the findings support the suggestion that application of the margin of maneuver concept to work rehabilitation can help those involved in return-to-work planning to evaluate more systematically indicators pertinent to work performance and help integrate a person-environment model that facilitates the meeting of production targets without compromising worker health (38).
The results of this study are consistent with earlier work that has demonstrated that supervisors can have an impact on return to work (18,39), but also highlight the important role of coworkers. Although coworkers have been previously identified as providing assistance that helped in the transition to employment, this was considered secondary to the support of family, friends and pre-injury employers (12). In this study, coworkers were more consistently identified than any other group. The assistance provided was both practical and psychosocial in nature, suggesting that workplace relationships and practical support contributed to helping the worker stay at work. In the context of earlier research indicating that a lack of collegial support was a barrier to work re-entry following a stroke (8) and a cohort study that has demonstrated that coworker support was related to productivity for those with inflammatory joint conditions (35), workplace relationships appear to be an important feature of the environmental landscape and one that could benefit from input regarding how to best demonstrate and provide coworker support.
Consistent with previous research, other people, including healthcare providers, family, and lawyers were also reported to play an influential role. These people, collectively referred to as "return-to-work stakeholders", have been argued to have the potential for differing return-to-work-related interests and motivations (40). As such, it is understandable that there were instances where the assistance given was not enthusiastically endorsed and conflict and ill feelings resulted. The current findings support the contention that when planning work re-entry, it is important that stakeholders are engaged (39), make an effort to ensure that communication is clear and explicit (41), and are coordinated in their efforts (10,42). Further, these findings suggest that there is a need for a shared and ongoing commitment to the return-to-work goal and related plan. With such, those involved will be better placed to provide the returning worker with the needed assistance and support.
The workplace equipment that people referenced tended to be relatively simple and likely to be found in most modern workplaces. This finding may prove reassuring to return-to-work stakeholders in that it demonstrates that modifications to accommodate the limitations of an injured worker do not need to be extensive or expensive. This is also true for the treatment products people spoke about using. Of some concern was that people who indicated that medication had been instrumental were more frequently not working at the time of interview than those who spoke about other facilitatory influences. This finding is in line with earlier research that indicates that passive forms of coping tend to be associated with inferior return-to-work outcomes (43) and suggests that a reliance on pain medication, as opposed to taking a more active coping approach, is not something that is associated with long-term success.
While the ICF conceptualized only environmental factors as facilitators (28), participants' responses indicated that there were influences beyond their environment that helped them to manage and overcome the difficulties they experienced. These included personal characteristics and a variety of other experiential influences associated with undertaking tasks and actions. The findings also point to physiological body function as a facilitator of the maintenance of rehabilitation gains and suggest an opportunity for proactive intervention. Together with a desire to be working/productive, work enjoyment and/or personal satisfaction was an important influence for people in all return-to-work phases, but particularly so for those in the later stages. This finding stresses the importance of finding work that not only is within the individual's physical abilities, but also fulfills the worker's emotional and intellectual needs. It is consistent with past investigations of durable employment following disabling injury (44) and supports the position of Holland, who emphasizes the need to match the physical demands of the job with interests and personality type (45).
The influences people spoke about varied greatly; however, this should not be interpreted to suggest that participants were not facilitated by similar influences, as the way the interview question was phrased placed emphasis on what the participant thought had particularly helped them stay at work. As such, study findings reflect the broad scope of factors that can play a role in facilitating employment maintenance, but do not portray what happens at an individual level. Further, and preferably prospective, research would be needed to achieve this level of understanding.
While transportation has previously been identified as an important facilitator of return to work (14-16), it was not a standout feature for those taking part in this research. An explanation for this might be that, once appropriate transportation had been established and a return to the workplace had been achieved, transportation is no longer an important facilitatory force. This is not to say that a disruption to established transportation patterns might not constitute a barrier to employment continuation; rather, that once the issue of transportation has been successfully addressed, it does not appear to require further development. As such, transportation may be perceived as a facilitator of the transition from the off-work to the re-entry phase, but not necessarily facilitatory of ongoing success.
Methodological considerations concerning loss to recall have been discussed earlier (21). Specific to the data presented in this manuscript, it is likely that when thinking about what helped them stay working, participants referenced their most recent experience; as such, facilitatory influences important during earlier phases of the individual's return to work are likely underreported. Data presented in this paper were gathered with the use of a semi-structured interview, a format that is valuable for exploring experiences and perceptions and developing rapport, but also one where the interviewer is involved in the production of data. While interviewers were instructed to avoid dispensing advice, expressing judgment, or providing reinforcement, the quality of the personal interaction likely influenced the participants' responses. Given the exploratory nature of this research, a level of interviewer involvement was deemed acceptable; however, future researchers aiming to test specific hypotheses and draw generalizable conclusions would be advised to employ a more structured approach. The fact that the research was funded by an insurance provider was not an interview topic, but based on the perceptions of the research team, members of which had also conducted interviews while working for academic institutions and government departments, it did not appear to be a factor of importance to the interviewees. The extent to which the results generalize to those who do not go on to receive vocational services is unclear. However, given that research has found that more than half of people with compensated occupational injuries have ongoing limitations (46), it is likely that the problems encountered are not unique to the study population.
While it is likely that others encounter similar problems, the current sample is of particular interest because they successfully negotiated the complicated return-to-work process, managing to find employment and, for the most part, stay employed. This highlights the potential for differences between the experiences of the study participants and other injured workers; however, as exemplars, the current sample has the potential to teach us much about what facilitates success.
Although study findings provide insight, the ability to draw firm conclusions about the causal nature of the observed relationships is limited due to the study's cross-sectional design. For example, while a proactive style was associated with the advancement phase, this may be a function of the experience of achievement "teaching" the worker to be more proactive. This has implications for those developing interventions based on study findings in that, if identified facilitators are not causally related, interventions aimed at improving access to facilitatory influences will not produce the desired effect. Further research employing a longitudinal design is needed to confirm the importance of the facilitators identified. While the sample was relatively large, the small numbers in each of the return-to-work phase groups, and the diversity in the facilitators reported, meant that the capacity for testing phase-specific facilitators was limited. However, trends do suggest that influences can be important at different times and that there is a developmental nature to the facilitatory influences such that they are more highly evolved at later phases of the return-to-work process. This is consistent with earlier research suggesting that there are phase-specific predictors of work disability (47) and implies that additional investigations along these lines would provide more specific information regarding what would be beneficial to workers in the various phases of the return-to-work process. Future work aiming to test for between-group differences might benefit from a staged recruitment procedure; when the desired sample for the first of the groups (eg, maintenance) has been achieved, stage two begins and only those in the underrepresented groups (eg, off work, re-entry, and advancement) would be interviewed. This process of targeted recruitment would continue until the desired sample size for the final group has been reached.
Along similar lines, research investigating facilitators of other phases of the return-to-work process (eg, from maintenance to advancement) has the potential to improve the long-term outcomes of injured workers.
Concluding remarks
The reported facilitatory influences were numerous and varied. Although many were contextual, they also included influences that may not have previously been thought of as facilitators. These findings support the suggestion that facilitatory factors are more than just the opposites of barriers and can include actions to be taken to overcome difficulties (48). As such, the current results add support to the argument that initiatives aimed at improving return-to-work outcomes can go beyond the removal of obstacles to include interventions to circumvent difficulties before they are encountered. The findings suggest that those involved in the return-to-work process have the potential to intervene within contextual environments to improve work-disability outcomes. At this stage, it would appear that ongoing good health and workplace support are the two factors that have the greatest potential to facilitate long-term return-to-work success; however, further longitudinal research is required to confirm and determine the extent of this potential.
Effect of a Single Multi-Vitamin and Mineral Supplement on Nutritional Intake in Korean Elderly: Korean National Health and Nutrition Examination Survey 2018–2020
Inadequate nutritional intake is common, especially among elderly individuals. Although micronutrient intake may help fill nutritional gaps, the effects of multi-vitamin and mineral supplements (MVMS) among the Korean elderly are not well known. Therefore, we investigated the nutrition-improving effects of a single MVMS. A total of 2478 people aged ≥65 years who participated in the Korea National Health and Nutrition Examination Survey 2018–2020 were analyzed. Nutrient intake from food and supplements was measured using the 24 h recall method. We compared the nutritional intake and insufficiency between the food-only group (n = 2170) and the food and MVMS group (n = 308). We also evaluated the differences in inadequate nutritional intake after taking MVMS with food. The analysis included vitamins A and C, thiamine, riboflavin, niacin, calcium, iron, and phosphorus. The proportion of insufficient intake ranged from 6.2% to 80.5% for men and from 21.2% to 82.4% for women, depending on the nutrient. Intake of MVMS with food was associated with lower rates of inadequacy (3.8–68.5% for men and 3.3–75.5% for women) compared to the food-only group. The results suggest that micronutrient deficiency frequently occurs in the Korean elderly population and can be improved by MVMS intake.
Introduction
For many older adults, food remains the sole source of nutrients. As a result, older adults in communities and hospitals have been shown to be at a high risk of malnutrition in assessments using the Mini Nutritional Assessment (MNA), a widespread nutritional screening and assessment tool. The prevalence of malnutrition risk in this population is relatively high, ranging from 19% to 27% [1,2]. The Korean Frailty and Aging Cohort Study (KFACS) showed that 14.3% of community-dwelling older adults were malnourished [3]. The increase in the aging population suggests that the number of malnourished older people will also increase, causing severe social and health problems.
Older people experience nutritional deficiencies for various reasons, including anorexia, oral health problems, biological changes in the digestive system, chronic diseases, multiple drug use, and social factors [4][5][6]. Deficiencies in food intake can also lead to macronutrient and micronutrient deficiencies [4]. In particular, deficiencies of micronutrients such as iron and vitamins A, C, D, and E are more common in older adults than in younger adults [7]. Micronutrient deficiencies can adversely affect various aspects of the health of elderly individuals, including immune function [8,9], frailty [10], osteoporosis [11], and longevity [12], through multiple pathways related to cell differentiation, oxidative stress, muscle and bone metabolism, inflammation, and decreased immunity [10,13,14]. Recent studies have also suggested that deficiencies of micronutrients such as vitamins can cause abnormal brain function, including oxidative stress, mitochondrial dysfunction, and neurodegeneration, leading to various neurological disorders such as Alzheimer's disease, Parkinson's disease, and depression [14]. An increase in dementia or other neurological diseases can be another burden in an aging population.
Previous studies [7,[15][16][17][18][19][20][21][22] have shown that taking vitamins and minerals through dietary supplements (DS) can help improve micronutrient deficiencies and achieve the recommended levels of these micronutrients. In Korea, some studies have shown that the supplementation of vitamins and minerals through DS in addition to food can help reach the recommended daily nutritional intake for the general population [17,18] and adolescents [22]. Some studies conducted in Korea have described DS usage patterns and micronutrient deficiencies in elderly Korean individuals [23], and some have suggested an improvement effect of taking multi-vitamin and mineral supplements (MVMS) in older people. However, the extent of the improvement in older Korean individuals taking MVMS with food remains to be clarified.
We hypothesized that the intake of micronutrients in older Koreans needs to be increased to meet the recommended nutritional intake and that MVMS may help improve nutritional status. In order to exclude overlapping effects from combinations with other DS and to determine the effect of a single MVMS alone, this study included only subjects who took an MVMS and no other supplement, excluding all multi-supplement users. To address these hypotheses, based on data from the large, nationally representative Korea National Health and Nutrition Examination Survey (KNHANES) 2018-2020, we investigated the micronutrient intake status of older Korean people. We also evaluated the MVMS-related reduction in the proportion of insufficient nutritional intake below the estimated average requirement (EAR) by comparing the findings for the group consuming food and a single MVMS with those for the group consuming food alone.
Study Population
This cross-sectional study used data from the KNHANES 2018-2020. The KNHANES provides nationally representative data on Koreans' health status, health awareness and behavior, and food and nutrition status. The three components (Health Interview, Health Screening, and Nutrition Survey) were conducted through face-to-face interviews and self-administered questionnaires. The nutrition survey evaluated dietary habits, including DS use, food security, and food and nutrient intake during the previous day, which were obtained by a well-trained nutritionist using a 24 h recall method. Of the 23,471 people who participated in the 2018-2020 survey, 5110 were over 65 years old. Of these 5110 older adults, 643 participants were excluded from the study because they did not complete the nutrition survey. Among the remaining 4467 participants, 243 who did not answer the DS-related questionnaire were excluded, and 115 participants with missing weight values were also excluded. The 2170 people who had not taken a DS during the previous 24 h were assigned to the food-only group (n = 2170). Of the 1939 individuals who took a DS, 768 responded that they were taking MVMS. After excluding 460 individuals who took an additional DS along with MVMS, 308 individuals were finally assigned to the "food + MVMS" group (n = 308).
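As a quick illustration, the exclusion cascade described above can be checked arithmetically; this is a minimal sketch, with all numbers taken from the text:

```python
# Participant flow in KNHANES 2018-2020, as described in the text.
aged_65_plus = 5110
completed_nutrition_survey = aged_65_plus - 643            # 643 lacked the nutrition survey
analytic_sample = completed_nutrition_survey - 243 - 115   # missing DS answers / weights
assert analytic_sample == 4109

food_only = 2170                                           # no DS in the previous 24 h
ds_users = analytic_sample - food_only
assert ds_users == 1939                                    # "the 1939 individuals who took a DS"

mvms_users = 768
food_plus_mvms = mvms_users - 460                          # exclude those taking additional DS
assert food_plus_mvms == 308                               # final "food + MVMS" group
```

The counts are internally consistent: the analytic sample (4109) splits exactly into the food-only group (2170) and DS users (1939).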
Baseline Characteristics
Sociodemographic factors, including age, sex, education level, household income level, smoking status, frequency of alcohol consumption, physical activity level, body mass index (BMI), and history of disease or cancer diagnosis, were surveyed using a self-administered questionnaire. BMI was calculated from the measured height and weight. Individuals were divided into two age groups: 65-74 years and ≥75 years. Education levels were divided into elementary school or lower, middle school, high school, and university or higher. Household income was divided into high, middle-high, middle-low, and low based on household income quartiles. Smoking status was categorized as non-smoker, ex-smoker, and current smoker. The frequency of alcohol consumption was divided into none, less than once a week, two to three times a week, and more than four times a week. Physical activity level was divided into the following groups: moderate-intensity physical activity for more than 2 h 30 min each week, high-intensity physical activity for more than 1 h 15 min each week, combined moderate- and high-intensity physical activity, and no moderate- or high-intensity physical activity. BMI was classified as underweight (<18.5 kg/m²), normal (18.5 kg/m² ≤ BMI < 23 kg/m²), overweight (23 kg/m² ≤ BMI < 25 kg/m²), and obese (≥25 kg/m²). If the participants answered that they had been diagnosed with a disease or cancer by a doctor, they were recorded as having a disease or cancer. Diseases included hypertension, hyperlipidemia, cardiovascular disease, stroke, and diabetes mellitus. Cancers included stomach, colon, liver, breast, cervical, lung, and other cancers.
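The BMI cutoffs above can be expressed as a small helper; this is an illustrative sketch rather than code from the study (the function name is ours):

```python
def bmi_category(bmi: float) -> str:
    """Classify BMI (kg/m^2) using the cutoffs stated in the text."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 23.0:
        return "normal"       # 18.5 <= BMI < 23
    if bmi < 25.0:
        return "overweight"   # 23 <= BMI < 25
    return "obese"            # BMI >= 25

print(bmi_category(22.4))  # -> normal
```

Note that these are the Asian-Pacific cutoffs used in Korean surveys, which are lower than the WHO cutoffs of 25 and 30 kg/m² for overweight and obesity.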
Definition of MVMS Users
DS included MVMS and vitamin C, omega-3 fatty acids, probiotics, red ginseng, calcium, vitamin A or lutein, propolis, vitamin D, iron, and other vitamin and mineral supplements. We defined the MVMS user group as those who only took MVMS among DS. The participants who answered "yes" to the question "Did you take a dietary supplement the day before the survey?" were defined as DS users. The brand name of the DS and the intake amount were confirmed by a nutritionist.
Vitamin and Mineral Intake from Foods and Supplements
A well-educated nutritionist visited the participant's home and obtained information on dietary habits and food frequency through a 24 h recall method and a semi-quantitative food frequency questionnaire. Using the DS survey, we determined product names and ingredient amounts, intake duration, dose, frequency, and whether the DS was taken one day before the survey. We calculated the daily intake of total energy, carbohydrates, protein, fat, fiber, vitamin A, thiamine, riboflavin, niacin, vitamin C, calcium, iron, and phosphorus from food and the MVMS. Nutrient intake through DS evaluated in this study included calcium, phosphorus, iron, vitamin A, vitamin B1, vitamin B2, niacin, and vitamin C, since the KNHANES presented only the intake of these micronutrients among the components included in MVMS. The percentages of participants with intakes below the EAR and above the tolerable upper intake level (UL) were calculated relative to the dietary reference intakes [24]. The EAR and UL standard values for each nutrient were applied according to the sex and age of the participant. The EAR was evaluated for all eight nutrients, and the UL was analyzed only for nutrients with established UL values: vitamin A, vitamin C, calcium, iron, and phosphorus. In the Dietary Reference Intakes for Koreans 2020 [24], the UL of niacin is given separately for nicotinic acid and nicotinamide; thus, niacin was excluded from the UL analysis because the survey value did not reflect total niacin levels.
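The insufficiency calculation (share of participants whose intake falls below their sex-specific EAR) can be sketched as follows. The EAR values shown are placeholders for illustration only, not the 2020 KDRI values, and the names are ours:

```python
# Placeholder EARs keyed by (nutrient, sex); a real analysis would use the
# sex- and age-specific values from the Dietary Reference Intakes for Koreans 2020.
EAR = {("vitamin_c", "male"): 75.0, ("vitamin_c", "female"): 70.0}

def pct_below_ear(records, nutrient):
    """Percentage of participants whose daily intake of `nutrient` is below their EAR.

    `records` is a list of dicts with a "sex" key and one key per nutrient
    holding the daily intake (e.g., in mg).
    """
    below = sum(1 for r in records if r[nutrient] < EAR[(nutrient, r["sex"])])
    return 100.0 * below / len(records)
```

Applied once to food-only intakes and once to food + MVMS intakes, the difference between the two percentages gives the kind of improvement reported in Table S1.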
Statistical Analysis
Statistical analysis was conducted using STATA 15.0 SE (Stata Corp., College Station, TX, USA), with a significance level of p < 0.05. The 2018-2020 KNHANES used a multistage stratified cluster sampling design with combined data; therefore, we applied the weights presented for each survey year. Categorical variables were expressed as percentages and standard errors (SEs), and χ² analysis was used to compare the two groups. Nutrient intake was compared between the "food-only" group and the "food + MVMS" group using multivariate regression analysis after adjusting for age, sex, education level, household income, smoking status, frequency of drinking, physical activity, BMI, and presence of disease and cancer. The proportion of intake below the EAR and the change after adding MVMS were expressed as percentages with SEs.
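Because the KNHANES is a complex survey, all percentages are computed with the supplied sampling weights. A simplified weighted-percentage sketch is shown below; full design-based variance estimation, as provided by STATA's svy commands, additionally requires the strata and cluster identifiers, which this sketch omits:

```python
def weighted_pct(flags, weights):
    """Survey-weighted percentage of a binary indicator (e.g., intake below the EAR).

    `flags` are 0/1 indicators per participant; `weights` are the
    participant-level sampling weights (here, the KNHANES survey-year weights).
    """
    total = sum(weights)
    hit = sum(w for f, w in zip(flags, weights) if f)
    return 100.0 * hit / total
```

For example, with indicators [1, 0, 1] and weights [2.0, 1.0, 1.0], the weighted percentage is 75.0, versus an unweighted 66.7%.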
Results
In this study, the intake rate of DS among elderly individuals aged 65 years or older who participated in the survey on DS was 49.9% (males, 45.3%; females, 53.4%). Among participants taking DS, 39.5% (males, 40.5%; females, 38.9%) took MVMS, and 16.7% (males, 19.2%; females, 15.2%) took MVMS alone. Table 1 shows the basic characteristics of the two groups. In assessments of sociodemographic variables, the "food + MVMS group" included a high proportion of individuals aged 65 to 74 years and a low proportion of those with low household income. The two groups showed no differences in sex or education level. In assessments of health-related variables, the food + MVMS group included a lower proportion of current smokers and a higher proportion of individuals performing moderate-to-high-intensity exercise. The two groups showed no difference in alcohol consumption frequency and no difference in disease-related variables such as BMI, chronic disease diagnosis, and cancer diagnosis. In a comparison of the daily intake of nutrients through food alone in the two groups, males showed a higher intake of total calories, fat, riboflavin, and calcium in the food + MVMS group than in the food-only group, while females showed higher fat intake in the food + MVMS group than in the food-only group. The food + MVMS group showed a significantly higher intake of all eight nutrients in males and females than the food-only group, except for phosphorus in females (Tables 2 and 3). Figure 2 shows the proportion of people who consumed less than the EAR for each nutrient in the MVMS user group. We also evaluated the percentage differences in nutritional intake status after taking MVMS (Table S1).
When participants consumed food only, more than 50% of male participants consumed less than the EAR of vitamin A (80.3%), vitamin C (80.5%), and calcium (67.3%), and more than 50% of female participants consumed less than the EAR of vitamin A (78.1%), niacin (69.4%), vitamin C (82.4%), and calcium (80.5%). In contrast, the rates of inadequate intake of thiamine (21.0%), phosphorus (6.2%), and iron (18.2%) in males and iron (21.2%) in females were relatively low, at approximately 20% or less. After MVMS intake (in the food + MVMS group), micronutrient intake in older adults increased for almost all nutrients. The most significant improvement was observed for vitamin C in both sexes (68.5% in males and 75.5% in females). Additionally, niacin, riboflavin, thiamine, and vitamin A showed improvements in the rates of intake below the EAR of between 21.7% and 46.1% in both males and females (Table S1). However, even after taking MVMS with food, more than 50% of the male and female participants did not reach the EAR for vitamin A and calcium intake.

Table 4 shows the percentage of people who exceeded the ULs for vitamins A and C, calcium, phosphorus, and iron. Some male participants exceeded the UL for vitamin A and iron with food intake alone. However, these cases were relatively few, and the female participants exceeded the UL for none of the nutrients by food intake alone. The nutrients consumed above the UL in the food + MVMS group were vitamin A and iron in both men and women; the corresponding proportions were 1.5% for vitamin A and 2.1% for iron in men, and 1.4% for vitamin A and 1.2% for iron in women. Table 4. Changed percentage of micronutrient intake exceeding the upper limit after taking a single MVMS.
Discussion
This study confirmed the effects of supplementing insufficient micronutrient intake with a single MVMS. The proportion of participants who did not achieve adequate micronutrient intake through their diet was greater than 50% for vitamin A, vitamin C, and calcium in men and for vitamin A, niacin, vitamin C, and calcium in women. MVMS intake helped reduce the proportion of undernourished men and women. However, although MVMS intake improved nutrient levels, more than 50% of the participants still did not meet the EAR for vitamin A and calcium. Nevertheless, excessive intake was uncommon and did not raise serious concerns.
We found that the "food + MVMS group" participants were relatively younger, suggesting that even among elderly individuals, older age is associated with a more vulnerable nutritional status. Other studies have also suggested that elderly individuals aged 70-75 years or older show lower intakes of most nutrients than those aged 65 years and older [7,25]. As the population ages, individuals become more vulnerable to malnutrition, requiring more attention. We also observed that MVMS users had a healthier lifestyle. Previous studies have shown that supplement users are more likely to be female, educated, non-smokers, physically active, and knowledgeable about nutrition and health [17,23]. When we compared the nutrient intake obtained from food between the two groups of participants in this study, the intake of calories, fat, riboflavin, and calcium was higher in male MVMS users than in non-users. In contrast, among women, there was no significant difference in nutrient intake from food, except for fat intake. Consumption of DS implies a greater interest in health, and these individuals are believed to seek a higher intake of vitamins and minerals from high-quality meals. These differences were more pronounced with increased MVMS intake.
Inadequate vitamin and mineral intake in the elderly has already been described in previous studies [4,7,13,15,16,23,25,26], and the findings of this study were consistent with the previously reported data. A meta-analysis using various nutritional evaluations of the elderly in 37 Western countries showed that more than 50% of older adults did not receive sufficient amounts of thiamine, riboflavin, vitamin D, calcium, magnesium, and selenium from their diet [4].
Among the nutrients, vitamin A, vitamin C, and calcium showed high rates of inadequate intake in our study. For vitamin A, approximately 80% of older Korean adults consumed less than the EAR, and even after taking MVMS, more than 50% still consumed insufficient amounts of vitamin A. Vitamin A is associated with visual function, immune function related to host susceptibility to some viral infections [9], cell growth and development, and bone health [27,28]. For vitamin A, the unit of calculation was changed from retinol equivalents (RE) to retinol activity equivalents (RAE) in KNHANES VII (2016) [29]. Because of this unit change, the calculated carotenoid activity was lower, so reported vitamin A intake decreased even when the same amount was consumed, which exaggerated the apparent deficiency. In the traditional rice-based Korean diet, which is the primary diet of the elderly, calorie intake was low, as were the intakes of vitamins A and C, riboflavin, calcium, and iron [30]. Therefore, Koreans, who obtain vitamin A mainly from plant foods as carotenoids, appear to have lower intakes.
Overall, MVMS supplementation improved micronutrient intake in older adults, mainly for vitamins. Although vitamin C showed one of the highest rates of insufficiency among participants in the food-only group, it also showed the greatest improvement after taking MVMS. The proportion of men and women who consumed vitamin C below the EAR was about 80% in the food-only group, indicating a very vulnerable intake level. The intake of fruits and vegetables, which can be considered representative vitamin C sources for the elderly, has tended to increase gradually compared with the past; however, further increases toward the recommended amount are still needed [31]. Vitamin C is a representative antioxidant that plays a role in enhancing immunity [8]. The intake of MVMS improved the insufficiency rate by nearly 70%. As a water-soluble vitamin, vitamin C has fewer known toxicities than fat-soluble vitamins and minerals; therefore, supplemental intake contributes to reducing inadequate intake in the elderly by providing these nutrients in a safer form.
Proper calcium intake reduces the risk of osteoporosis and fractures in the elderly [29]. As the fracture rate increases with age, osteoporosis is recognized as a significant disease for both women and men. The findings indicated a severe calcium intake problem, with a marked shortfall both in individuals who consumed food alone and in those who also consumed an MVMS. Our results indicate the importance of a high-quality diet including calcium sources such as vegetables, milk, and fish for the elderly, and of additional single-nutrient calcium supplementation if necessary.
In this study, the proportion of participants who exceeded the intake limit after taking MVMS was approximately 2% for vitamin A and iron, which was lower than that in previous studies [15,16]. Excessive intake of vitamin A can lead to hip fractures, and excessive iron intake in older adults increases the risk of coronary heart disease [15,16]. Weeden et al. showed that more than 10% of older adults exceeded the UL intake for vitamin A, niacin, folic acid, and magnesium [16], and Sebastian et al. found that more than 10% of older men exceeded the UL for iron and zinc [15]. Our study showed fewer UL exceedances than other studies, probably because it evaluated people who used only one MVMS as a nutritional supplement. Thus, exceeding the ULs may be a concern when consuming multiple nutritional supplements. Nevertheless, the benefits of MVMS intake are clear, and the risk of side effects is not high, with some intakes remaining below the recommended amount despite supplementation. These results highlight the importance of educating elderly individuals to use supplements without exceeding the UL for vitamin or mineral intake from food and supplements combined.
This study had some limitations. First, we used a 24 h recall method to evaluate nutrient intake, so recall bias due to memory errors was possible, and a single 24 h recall may be less accurate than repeated recalls. However, since this study evaluated nutrients in a population rather than in individuals, this difference was likely to be insignificant. Second, because only the limited nutrition data included in the survey were used, further analysis of vitamins B6 and E, folic acid, magnesium, and zinc, which could not be analyzed in this study, is needed. Third, this study was cross-sectional, so we could only suggest associations rather than causal relationships. The strengths of our study are that we used nationally representative KNHANES data from Korea, which provided helpful information regarding the vulnerable group and identified target groups for intervention. Although we did not conduct a direct multicountry comparison, the results of previous studies conducted with nationally representative data in various countries, together with our results, confirm that older people are vulnerable to inadequate micronutrient intake [7,13,15,25,26]. This study can be considered a cornerstone for a future pilot study that actively addresses inadequate nutritional intake in vulnerable older people. Among those taking dietary supplements, only those who took a single MVMS were included, to evaluate more accurately the effect of MVMS consumption on nutritional intake in older Korean adults. In addition, more accurate information was obtained by matching the day of the 24 h recall with a day of MVMS intake. This is the first study to evaluate the micronutrient status of elderly Koreans at the national level by estimating micronutrient deficiency status and the improvement achieved with single MVMS intake.
Conclusions
This study provided a cross-sectional picture of insufficient and excessive nutrient intake from food and MVMS in older Korean adults. When Korean older adults obtained micronutrients from food alone, the intake of most nutrients was below the recommended levels. The intake of a single MVMS helped improve micronutrient intake, but the proportion of inadequate intake of some nutrients remained high. Although excessive intake was not widespread, the risk of excessive intake cannot be ruled out. Therefore, elderly Koreans should receive education on obtaining proper vitamin and mineral intake through an adequate diet and, if necessary, an MVMS.
Supplementary Materials: The following supporting information can be downloaded at: https:// www.mdpi.com/article/10.3390/nu15071561/s1, Table S1: Comparison of the percentage of intake inadequacy based on EAR between two groups.
Author Contributions: Conceptualization, S.G.P. and H.K.; methodology, S.G.P. and H.K.; software, S.G.P.; validation, S.G.P. and H.K.; formal analysis, S.G.P.; investigation, S.G.P.; resources, S.G.P.; data curation, S.G.P.; writing-original draft preparation, S.G.P. and H.K.; writing-review and editing, S.G.P. and H.K.; supervision, S.G.P. and H.K.; project administration, S.G.P. and H.K. All authors have read and agreed to the published version of the manuscript.

Informed Consent Statement: Informed consent to participate in the study was obtained before participating in the KNHANES.
Data Availability Statement: The data presented in this study are available on reasonable request from the corresponding author.
Some Questions of Linguocultural Specificity Communication at the English Humour Translation
This problem is relevant today because it is necessary to study the correlation of language, culture and translation, since translation is a link between speakers of different linguistic cultures. The evident lack of research on linguocultural specificity affects the quality of translation and the adequacy with which the ethno-linguistic worldview is reflected in the minds of speakers of other languages. The objective of the article is to identify the degree of interaction of language and culture in the translation process so as to provide deep penetration into the nationally associated meanings of original literary works. The leading approach to the study of this problem is an analysis carried out on the basis of descriptive and comparative methods and the method of literary text description, involving elements of linguistic and cultural analysis. The methodological basis comprises research in linguistics, intercultural communication and translation studies. The paper reveals that the linguocultural study of humour requires priority coverage of the values that are relevant for the compared cultures. These values may receive different expression in humorous texts. English humour includes the relevant characteristics of universal humour and of the humour of those social groups that make up the English nation. The article materials may be useful for further research in this area and for the development of effective translation techniques.
Background
Comprehensive research interest in the "dialogue of cultures" in modern translation studies has become more important than ever in recent years owing to globalization and the integration of world cultures, when the exchange of cultural values becomes an integral part of the socio-cultural situation. Every national culture is strongly influenced by a variety of borrowings, the "voices" of other cultures that enter it, including through translation. A fundamental change in views on the formation of cultures gave grounds to say that such a dialogue rests on translation, and that the intercultural exchange of texts becomes one of the factors in the formation, development, modern functioning and interaction of cultures. Close to this approach is the view of H.-G. Gadamer, according to which translation is treated as a dialogue, and "the translator should transfer the meaning which is to be understood into the context in which the participant of the conversation lives" (Gadamer, 1988). The dialogical character of translation is also stressed by Yu. M. Lotman in his article "Semiotika kultury" (Semiotics of Culture): "dialogue is the basis of all meaning-generating processes" (Lotman, 1992).
Status of a Problem
Translation has recently come to be regarded as a complex and multi-aspect phenomenon. It is no longer confined to linguistics, within which translation studies mainly developed. Currently, the need to reconsider the role of translation in the epoch of global integration is felt more acutely. The understanding of translation as a phenomenon of "cultural transfer" is inherent in H. Vermeer's views, according to which the most important aspect of understanding translation lies in its functioning in a new cultural environment (Vermeer, 1986). A view of translation as "an eternally relevant culturological category" (Lyusyy, 2003) is reflected in the linguocultural theory of translation, in which studying the mechanisms of intercultural interaction through the exchange of texts is the most rational approach, because it can contribute to the expansion of the spiritual spaces of receiving cultures and to their self-identification within the world spiritual space, not only by receiving categorized concepts, meanings and ideas, but also through the expansion of the space and means of understanding (Galeeva, 2003).
National humour, usually delimited by the territorial borders of countries and regions and by national identity, is of particular importance. A person masters national humour by absorbing it from the culture of the country in which he lives, while texts translated from other cultures call for the transfer not only of the form but also of the content, with the diversity of meanings contained therein, always provided these meanings are preserved. On this point L. L. Nelyubin notes that, since translation is a "transformation of the original text while retaining the meaning", the translator must try to find "equivalent forms of expression of a certain meaning" in the other language (Nelyubin, 2003).
Thematic Justification
This problem is important today because it is necessary to study the correlation of language, culture and translation, since translation is the link between speakers of different linguistic cultures. The evident lack of research on linguocultural specificity affects the quality of translation and the adequacy with which the ethno-linguistic worldview is reflected in the minds of speakers of other languages.
Objectives of the Research
The objective of the article is to identify the degree of interaction of language and culture in the translation process so as to provide deep penetration into the nationally associated meanings of original literary works.
Methods of the Research
The following methods are used in interrelation and interdependence:

- the descriptive method, which includes observation and classification of the investigated material;
- the comparative method, aimed at identifying general and specific features of the compared languages at all levels of the text;
- contextual analysis, aimed in this case at the study of micro- and macro-context, which makes it possible to determine the conditions under which the meanings of the studied unit are realized, its additional associations and connotations, and to establish the function of the unit in the text as an integral system.
Factual Material of the Research
The humorous stories of Sir Pelham Grenville Wodehouse (Wodehouse, 1923) from the Jeeves and Wooster cycle served as the factual material. The choice of these texts as research material is explained by the linguocultural specificity of English humour present in them.
Basis of the Research
The analysis of the literature on the research subject showed that one of the main tasks of cultural linguistics is the study and description of the interaction of language and culture. In cultural linguistics, language is not only, and not so much, a tool for understanding culture; it is an integral part of culture, one of its images. Just as the culture of every nation contains both universal and ethno-national elements, every language reflects both the general, universal components of culture and the identity of the culture of a particular nation.
According to N. V. Timko (Timko, 2011), culture is not simply a set of norms, behaviours and values that exist among the speakers of the target language. Culture, among other things, is also an indispensable condition for the existence of language, the context in which the language functions and reveals itself. Language is inseparably connected with culture, with the reality in which the people live and the activities they perform; in other words, language is an important culture-forming element. By the factor "culture" in translation we understand the totality of everything material and spiritual created by a nation and opposed to "primordial" nature; the totality of everything nationally specific that distinguishes one linguocultural community from another: the specificity of thinking and perception of the world, beliefs, traditions, value orientations, communicative strategies, and the cognitive environment that determines the basis of behaviour shared by all members of a particular linguocultural community.
On the purely practical level, the factor "culture" appears before the translator as a list of specific features of the culture of the source-language speakers which either cannot be reproduced in translation, or whose direct (unadapted) projection onto the culture of the target-language speakers can cause an inadequate communicative effect: misunderstanding, miscommunication, wrong interpretation, unequal emotions, and so on. This list can also include complicated perception of the translated text and the loss of emotional and aesthetic impact (Timko, 2011).
When performing a translation, the translator chooses the method of translation already while reading the text. "For the translation of the majority of texts, at the stage of preparation for translating the translator needs a certain amount of background information, and to stock the fund of background information, for example in medicine, or on any literary school, the translator can use reference sources. The translator must have an active command of the linguo-ethnic specificity of the text, as it is often not given in the text in a concentrated form but dispersed or encrypted therein, and his task is to recognize this specificity on the basis of his total stock of knowledge" (Alekseeva, 2004). Translation means transferring a literary work not only from one language system but also from one mental sphere into another, where all relations and connections, all poetic origins, are unlike the first. To translate means to create a work anew, in a different language. The act of translation is a creative act, although a secondary, subordinate one; as a result, a new product appears.
Humour is one of the most important components of culture. In research, the concept "humour" has two meanings. In the narrow sense, humour denotes one type of the comic, usually characterized by a sympathetic attitude to the object of ridicule. In the broad sense, humour is the ability of a person or social group to perceive the comic in all its diversity. Humour implies that behind the ridiculous, behind the shortcomings that cause laughter, something positive and attractive is felt. In humour, laughter is combined with sympathy for that at which it is directed. A sense of humour presupposes the presence in one phenomenon or person of both negative and positive aspects. Pure humour is a realistic "acceptance of the world" with all its weaknesses and shortcomings, of which reality is not devoid even at its best, but also with all the valuable things these shortcomings and weaknesses conceal. Irony splits the unity from which humour emanates. It contrasts the positive with the negative, the ideal with reality, the exalted with the ridiculous, the infinite with the finite. Irony strikes at the imperfection of the world from the perspective of an ideal rising above it. Irony is impossible without a sense of the exalted. Pure irony presupposes that a person feels his superiority over the object that evokes his ironic attitude.
National humour is delimited by the territorial borders of countries and regions and by national identity. The national element in humour and wit plays an important role, because here the connection and conditionality of comic perception with national mental temperament and national cultural traditions are expressed, as well as the particular way in which the understanding of the comic is conditioned by an aesthetic ideal that always bears the stamp of the national peculiarities of the people. In addition, we should consider the rich possibilities for realizing the comic inherent in the national language, which can act as a special and independent artistic means of the comic processing of life material. In puns and wordplay, the national characteristics of humour, through their national linguistic form, appear before us with particular force and retain a special national charm and colouring that is almost untransferable by means of another language.
Study of the English Humour
Let us proceed to the consideration of English humour, which has always fascinated, and still fascinates, many people with its refinement and paradoxicality. However, owing to the emphatic conservatism of the British character, the analysis of this national humour becomes a sufficiently complex and contradictory task: the traditional mask of restraint hides the real attitude to phenomena. Reality manifests itself on the verge of a half-joking hint that, like the Cheshire Cat of Lewis Carroll, suddenly disappears, leaving only the glimpse of a smile. Indeed, the symbolism of British humour is concentrated more in a smile than in the sound of loud laughter. "Loud laughter cannot be combined with les bienseances, because it only shows the noisy and wild fun of the crowd, ready to laugh at some stupidity. As for the real gentleman, his laughter can often be seen, but very seldom heard," wrote T. Chesterfield, a model gentleman. In the English cultural and philosophical tradition, open laughter is, as a rule, presented as ethically defective; Hobbes's superiority theory is classical evidence of this (Dmitriev & Sychev, 2005). The journalist V. V. Ovchinnikov, who has years of experience of communication with Englishmen, writes: "The English love a good joke, and the top class of English humour is considered the ability to make fun of something untouchable while avoiding blasphemy. It is usually said that no one can laugh at themselves like the English. In an English house, introducing guests to each other at a party, the hosts usually give only names, and if they add some characteristic, it is usually of a humorous kind: 'That's our neighbour John, a principled opponent of washing the car,' or 'Let me introduce you to Sir Charles, who lives in London, as his Irish Terrier prefers fresh air'" (Baryshnikov, 2013).
The traditional behaviour code prescribes that Britons be calm, polite and pointedly courteous, that is, that they try to remain serious in all situations. A Briton must hide a smile at the sight of any unintentional absurdity and politely keep silent on seeing comic awkwardness or a foreigner's mistake in language or behaviour. This classic British seriousness gets along well with the famous British sense of humour. As the English weather alternates sun and fog, the English ethnos combines practicality with Norman courtesy, and the English character combines an optimistic smile with gloomy spleen. There is, however, no paradox here: humour becomes humour only against a background of seriousness, while the serious, in turn, seems more significant against a background of entertainment. Jean Paul, in his "Preparatory School of Aesthetics", touches in passing on the theme of national humour: "The serious nations," he writes, referring primarily to the English, "have a higher and more pathetic sense of the comic" (Dmitriev & Sychev, 2005).
"I hadn't seen Aunt Agatha since that little affair of the pearls; and, while I didn't anticipate any great pleasure from gnawing a bone in her society, I must say that there was one topic of conversation I felt pretty confident she wouldn't touch on, and that was the subject of my matrimonial future. I mean, when a woman's made a bloomer like the one Aunt Agatha made at Roville, you'd naturally think that a decent shame would keep her off it for, at any rate, a month or two.
But women beat me. I mean to say, as regards nerve. You'll hardly credit it, but she actually started in on me with the fish. Absolutely with the fish, I give you my solemn word. We'd hardly exchanged a word about the weather, when she let me have it without a blush.
'Bertie,' she said, 'I've been thinking again about you and how necessary it is that you should get married. I quite admit that I was dreadfully mistaken in my opinion of that terrible, hypocritical girl at Roville, but this time there is no danger of an error. By great good luck I have found the very wife for you, a girl whom I have only recently met, but whose family is above suspicion. She has plenty of money, too, though that does not matter in your case. The great point is that she is strong, self-reliant and sensible, and will counterbalance the deficiencies and weaknesses of your character. She has met you; and, while there is naturally much in you of which she disapproves, she does not dislike you. I know this, for I have sounded her -guardedly, of course -and I am sure you have only to make the first advance -' 'Who is it?' I would have said it long before, but the shock had made me swallow a bit of roll the wrong way, and I had only just finished turning purple and trying to get a bit of air back into the old windpipe. 'Who is it?'" (The Pride of the Woosters is Wounded)

One of the characteristic features of British humour is a very clearly felt sympathy for the object of ridicule. The philanthropy of English humour rests in many respects on a clearly expressed foundation of tactfulness. A sense of delicacy is one of the most important virtues of the Briton, implied a priori.
English politeness involves the absence of direct attacks, preferring gentle hints and reticence to direct insults to the object of laughter. In situations where it is necessary to give a negative assessment or express dissatisfaction, harsh words are veiled by descriptions and comparisons, which creates an abundance of euphemisms.
"'All over except the hand-clasping,' I replied, slapping the old crumpet on the back. 'Charge up and get matey. Toodle-oo, old things. You know where to find me, if wanted. A thousand congratulations, and all that sort of rot.'" (All's Well)

Britain is an island, and the British, separated from the continent, try to live according to their own rules, keeping their traditions. The innuendo and secrecy of English humour, based on hints often understood only in private, rest to some extent on a certain ethnocentrism of the Britons. For a person who does not know English culture, it is difficult to catch many of its subtleties. Knowledge of the language does not imply an understanding of the jokes; this requires something more: knowledge of traditions, of specific group and professional values, of unobtrusive social and cultural relationships, of street jargon or of boarding schools. People of another culture can only come closer to understanding these subtleties, and only in very rare cases understand them. The English adjectives "foreign" and "alien" denote, in everyday speech, something "remote from the British standard", "uncivilized". The Briton's isolationism is supported by a commitment to the traditions carefully preserved in almost all spheres of life in Great Britain; suffice it to recall the political structure of the country, case law, and the consumer habits of the English. This traditionality frequently serves as the subject of jokes.
Classic English restraint, reserve, law-abidingness and traditionality have their reverse side. In the structure of the British national character one can find compensatory elements (almost in the psychoanalytic sense). Examples include an interest, unhealthy from the point of view of other cultures, in corporal punishment, or the popularity of crime novels (A. Conan Doyle, A. Christie, G. K. Chesterton, etc.), paradoxically combined with an almost pathological respect for the law.
English eccentricity to some extent compensates for English restraint. Unified education and immutable values force individuality to seek an outlet in the cultivation of a variety of "strangenesses" and unusual hobbies. As a rule, these oddities are harmless, do not go beyond all bounds, and colour the image of the Briton with light, good-natured humour. 'I hope you won't think I'm butting in, don't you know,' I said, 'but -er -well, how about it?' 'I fear I do not quite follow you.' 'Well, I mean to say, his allowance and all that. The money you're good enough to give him. He was rather hoping that you might see your way to jerking up the total a bit.' Old Little shook his head regretfully.
'I fear that can hardly be managed. You see, a man in my position is compelled to save every penny. I will gladly continue my nephew's existing allowance, but beyond that I cannot go. It would not be fair to my wife.' 'What! But you're not married?' 'Not yet. But I propose to enter upon that holy state almost immediately. The lady who for years has cooked so well for me honoured me by accepting my hand this very morning.' A cold gleam of triumph came into his eye. 'Now let 'em try to get her away from me!' he muttered defiantly" (No Wedding Bells for Bingo)
Study of the Scottish and Irish Humour
The United Kingdom is not a mono-ethnic country but a prominent zone of ethnic contacts. Both the humorous attitude of the English to the Scots, the Irish and the other nations that make up the population of the country, and the characteristics of those nations' own ethnic humour, have specific features. The Scots, for example, are a target for ridicule about their trading inclinations and extraordinary stinginess.
The Scots invented copper wire, when two of them were unable to share five pence coin.
Such caricatural features of the Scot as stinginess and a complete lack of scruples are certainly exaggerated, but are based on real premises: Scotland is the industrial part of the country, where bourgeois enterprise has always been prized and displayed more vividly than, say, in English aristocratic houses.
The image of the Irishman in English jokes is primarily that of a man opposed to all laws and regulations, in contrast to the law-abiding Englishman.
Translation Studies Discussions
Translation is not a simple modification of some language structures into others, but a complex process of conveying meaning, defined as the result of the interaction of linguistic meanings and the cognitive additions that match the utterance. Cognitive additions are part of the translator's cognitive knowledge, that is, the totality of his encyclopaedic (linguistic and extra-linguistic) knowledge stored in his long-term memory. They are also part of the so-called cognitive context, that is, the knowledge learned by the translator from the previous parts of the text and used in transferring the meaning of its subsequent parts. R. K. Minyar-Beloruchev says that "the object of the science of translation is not just communication using two languages, but communication using two languages that includes the correlated activity of source, translator and recipient. The central element of this communication is the translator's activity, or translation proper, which is one of the most difficult types of speech activity" (Minyar-Beloruchev, 1996).
Translation is a very important tool of cross-cultural communication, because it serves as an intermediary, a link, helping speakers of one language culture to become acquainted with the facts of another. The role of language as a means of conveying a worldview to the representatives of other cultures is very important. This vision of the world is, in a culturological sense, unique, and its transfer by means of a foreign language is often a difficult task.
For centuries, the practice of translation, trying to cope as well as possible with the tasks assigned to it, has followed the linguistic and cultural norms that existed in society, although these varied depending on time and place. According to A. D. Schweitzer, the most important element of this culture are the socio-cultural norms of translation, "representing a collection of the most general rules that determine the choice of translation strategy and reflecting the demands that society at a particular stage of its development makes on translation" (Schweitzer, 1999). Referring to Jiri Levy, he gives examples of the national variation of linguistic norms: thus, French practice allowed translating poems into prose, whereas in Czech, Slovak and Hungarian literature this is considered a violation of the norm, as is the rendering of Alexandrine verse in blank verse or the omission of puns or historical allusions. W. von Humboldt considered that in the process of translation the translator tries to resolve an "impossible task", trying to combine the accurate transfer of the original with due regard for the taste and originality of the language of the people into whose language he translates the work (Humboldt, 1984).
Humboldt's ideas were taken up by his contemporaries and later developed by his followers, not only in Europe but also on other continents. However, they had opponents who held a convincing argument against the theory of untranslatability; the most powerful and persuasive argument was the practice of translation itself. Centuries of practical activity could not but lead to the distinguishing of individual methods of translation and to certain generalizations, rules, prohibitions and recommendations, but it also gave rise to more questions, disputes and uncertainties. All these ambiguities were to be clarified by the general theory of translation, which took shape and gained the status of an independent science in the twentieth century, providing one more argument in favour of recognizing the twentieth century as "the age of translation".
Humour Problems Discussions
The category of the comic has been studied by many researchers (Dzemidok, 1974; Rumina, 1998; Propp, 1999; Borev, 1970; Sychev, 2003); questions of the sociology of humour are treated in the works of A. V. Dmitriev, H. Bergson and K. Powell, and a special place among social studies of the comic is occupied by the study of ethnic humour (Davis, 1990). The literature emphasizes that the ways and means of creating the comic are always associated with the peculiarities of national life and national traditions, with the specificity of the national culture, and that national humour plays an important role in the formation of the national character of heroes and of cultural traditions.
The well-known sociologist and folklorist Christy Davis studies humour (jokes, anecdotes, etc.) as an integral part of the modern spiritual culture of many peoples of the world (Davis, 1990). The author's attention is directed to so-called "ethnic jokes". He convincingly demonstrates that changes in ethnic jokes in different societies are connected with their history, development and the dynamics of social change. The study presents a comparative analysis of the socio-economic, political, psychological and other factors that determine the position of the subject of a joke and of its narrator. Thus, the author makes a fundamental distinction between the humour of immigrants or an ethnic minority and the national humour of the majority. Despite the obvious differences in the plots, themes and characters of the humour of different nations, the author finds similarities common to ethnic humour around the world. These similarities consist in the presence of two characteristics of the ridiculed characters that are found everywhere and always in ethnic humour: jokes about fools and, in contrast, jokes about cunning, clever and calculating misers. Using an enormous amount of material, Davis demonstrates that ethnic jokes can exist only where there is a complete paradigm: cunning people - normal people (the narrators of the anecdotes) - fools.
Conclusion
All human communication occurs in the context of culture, which is manifested both in language and in the text. The modern national-cultural orientation of translation studies research dictates the need for translation theory to move from the statistical paradigm to the paradigm of the text, within which separate translation problems caused by culturological specificity (mostly lexical) become a system in a single textual aspect. The success of translation as linguocultural transfer depends on the understanding of meanings expressed implicitly in the text, shared by all members of the linguocultural community and based on cultural values, and on the ability to choose the right linguistic means to convey the message so as to achieve an impact of the translation equivalent to the impact of the original.
"'You think it's all right for a chappie in what you might call a certain social position to marry a girl of what you might describe as the lower classes?' 'Most assuredly I do, Mr Wooster.' I took a deep breath, and slipped him the good news. 'Young Bingo -your nephew, you know -wants to marry a waitress,' I said. 'I honour him for it,' said old Little. 'You don't object?' 'On the contrary.' I took another deep breath and shifted to the sordid side of the business.
|
2018-12-05T06:33:40.453Z
|
2015-04-27T00:00:00.000
|
{
"year": 2015,
"sha1": "47cb77b943699b8f9060743d368188a24adc7fa0",
"oa_license": "CCBY",
"oa_url": "https://ccsenet.org/journal/index.php/jsd/article/download/48139/25897",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "47cb77b943699b8f9060743d368188a24adc7fa0",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Sociology"
]
}
|
234265069
|
pes2o/s2orc
|
v3-fos-license
|
Determination of calculating stresses on the depth of loess grounds of hydraulic structures
The processes of deformation and moistening of subsiding soils are closely related to each other. On the one hand, the deformations of subsiding soil depend on the degree of its moisture content; on the other hand, they seriously affect the regularities of the moistening of the massif. In this connection, improving the methods for calculating the deformations of loess bases of hydraulic structures requires a thorough study of the moistening of the massif and of the influence on this process of the specific action of irrigation structures on the soil. The article presents the results of studies to determine the stresses along the depth of the loess bases of hydraulic structures in the Karshi steppe. It was established that the lateral pressure coefficient reaches its maximum during the period of the most intense manifestation of subsidence in the layer under consideration.
Introduction
The nature of the moistening of the loess foundations of hydraulic structures depends both on the soil conditions of the site and on the type of structure, the pressure transmitted by the structure to the ground, the width of the water area and its head, etc. Two types of structures can be distinguished by the nature of the moistening of their bases. Type I structures are those whose foundations are constantly moistened over a long period of operation; these include drops, chutes and other structures on canals, as well as the canals themselves. During the operation of such structures, a significant amount of moisture enters their foundations. Type II structures are those from which water enters the ground only accidentally, for a short time, as a result of damage to the structure. These are pipes, trays, channels in impervious lining, and other water conduits with a very small water area, etc.
In the foundations of type I structures, the subsiding soil is wetted intensively and completely. In the case of accidents at type II structures, the soil massif is moistened short of full water saturation, and usually a suspended moistening contour is formed.
Within the humidification contour, the soil moisture content varies from natural (at the border of the wetted zone) to close to full water saturation near the water source.
The process of filtration moistening of soils, including subsiding soils, has been studied by many authors. For sources of moisture of any shape in plan having approximately the same depth of filling, the rate of soil wetting is proportional to their transverse dimensions. At the same time, when the soil is soaked from sources of moisture that have approximately the same width and depth of filling but a different shape in plan (in one case compact, imitating construction pits, and in the other elongated, representing sections of channels), the intensity of soil moistening differs: in the second case it is slightly higher.
As shown earlier in the analysis of the works of several scientists (A. Dzhumanazarova; R. Xujakulov and M. Zaripov; R. Xujakulov and E. Nabiev [24][25][26][27][28]), failure to take into account the anisotropy of the properties of wetted collapsible soils, as well as the formation in them of two layers with different physical and mechanical properties, leads to a discrepancy between the calculated and actual values of the subsidence of loess soil.
The values of vertical stresses along the depth of the bases under the centres of the dies undergo significant changes in the process of moistening. While the values measured at natural soil moisture ω = (8-10)% are much less than those calculated following the instructions of KMK 2.02.02-98, after moistening of the base at ω = (25-30)% they exceed them. This cannot but affect the accuracy of calculations of settlement and subsidence, as well as of the strength and stability of the foundations. Proceeding from this, the authors studied the massifs "Samarkand", "Turkmenistan" and "Surkhan" of the Karshi steppe (R. Frier [19]; [35][36][37][38][39][40]).
Methods
In the course of soaking the loess bases of the stamps, the transformation of the lateral pressure in the soil mass was studied. It was found that the lateral pressure in collapsing soils reaches its highest value at the moment when the soil at the studied horizon is moistened and the bonds between its particles are destroyed.
At the moment of deformation of the soil, many rigid bonds within it are broken and, before new bonds form, the soil particles have an increased mobility, which somewhat brings the soil's properties closer to those of a liquid. Besides, the pore volume of the soil decreases rapidly during deformation at an almost constant gravimetric moisture content, which can lead to an increase in the degree of water saturation and some increase in pore pressure.
After attenuation of the deformation process in the investigated soil layer, the value of the lateral pressure decreases. The decrease coincides in time with the fall of both the vertical and horizontal components of the stress tensor. After the stabilization of stresses in the soil mass at the bases of the dies, the lateral pressure remained practically constant at constant moisture content. Figure 1 shows the isobars of the lateral pressure coefficient at the base of a round stamp with an area of 1 m², transmitting a pressure of 0.1 MPa to the ground, after stabilization of deformations in the soil mass. As can be seen from the figure, the coefficient has its greatest value in the upper soil layer under the stamp, and its value decreases with depth. The maximum value of the coefficient under the stamp is only slightly higher than 0.5.
Results and Discussion
The lateral pressure coefficient obtained from the experimental data is greater under the edges of the stamp than under its central part, which corresponds to theoretical assumptions. Curve 2 is plotted taking into account the stress concentration due to the anisotropy of the loess soil at a moisture content ω = (26-29)%. To plot curve 2, the stresses were calculated using the formula:

σ′_z = σ_z · K_к (1)

where σ_z is the stress determined according to КМК 2.02.02-98 and K_к is the concentration factor calculated using the formula recommended by N.A. Tsytovich [13].
Here E_y and E_z are the moduli of soil deformation under load applied in the horizontal and vertical directions, respectively. They were determined from the results of compression tests of samples taken from the studied soil horizons at the bases of the stamps.
When determining the stress concentration factor from the results of compression tests of loess subsiding soils in the southeastern part of the Karshi steppe, under loads of up to 0.2 MPa, we obtained average values of K_к = 1.10-1.66 (depending on the load and soil deformability).
The K_к values as a function of the depth of the considered soil horizon, according to our experiments, are shown in Fig. 3. Curve 3 (Fig. 3) was constructed from experimental data for the case of stabilization of stresses in the soil mass after its moistening. To exclude the influence on σ_z of unevenness in the transfer of the stamp pressure to the soil, the averaged values of vertical stresses at the considered horizons in the bearing column of soil are given. As can be seen from the figure, curves 2 and 3 have very similar outlines. Calculating and measuring stresses in the soil layer H < 0.5P is problematic. This is due to the uneven actual pressure distribution over the contact between the die and the base; moreover, in the process of wetting the base of the stamp, the nature of the interaction between the stamp and the soil changes continuously. Fig. 4 shows diagrams of the maximum values of vertical stresses arising in the soil mass. The concentration of stresses takes place in each soil horizon at the moment moisture passes through it (at the contact with a rigid, underlying unmoistened layer). Curve 2 (Fig. 4) characterizes the maximum stresses σ′′_z at the horizons recorded by the instruments. The σ′′_z values can also be expressed by the formula:

σ′′_z = σ′_z · K′_к = σ_z · K_к · K′_к (3)

where K′_к is the stress concentration factor at the border of the wetted zone.
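As a numerical illustration of Eqs. (1) and (3), the sketch below applies the two concentration factors to a baseline stress. K_к = 1.4 is within the reported 1.10-1.66 range; the value K′_к = 1.2 is a purely hypothetical assumption, since the text gives no numbers for it.

```python
def corrected_stress(sigma_z, k_k, k_k_prime=1.0):
    """Vertical stress corrected for concentration effects.

    sigma_z   -- stress from the standard KMK 2.02.02-98 calculation, MPa
    k_k       -- concentration factor K_k for soil anisotropy (Eq. 1)
    k_k_prime -- factor K'_k at the wetted-zone border (Eq. 3);
                 1.0 when only anisotropy is considered
    """
    return sigma_z * k_k * k_k_prime

# Illustrative values: sigma_z = 0.10 MPa; K_k = 1.4 (within the reported
# 1.10-1.66 range); K'_k = 1.2 is an assumed value, not from the paper.
sigma_prime = corrected_stress(0.10, 1.4)              # Eq. (1)
sigma_double_prime = corrected_stress(0.10, 1.4, 1.2)  # Eq. (3)
```

Because K_к and K′_к are both greater than unity after moistening, the corrected stresses exceed the baseline σ_z, consistent with the measurements described above.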
Most simply and conveniently, σ′′_z can be determined by formula (4), using the coefficient α. The table under consideration shows the experimental values of the coefficient α for a low-moisture loess base; for soaked soil at the end of the stress-stabilization process; and for the process of water infiltration into the ground at the border of the wetted zone, where the maximum stresses arise. Experiments have shown that the difference in the values of α within the accepted range of loads, for a given stamp configuration, is insignificant. Their tabulated values were determined as the arithmetic means of the values obtained in experiments with different loads on the punch but with a constant punch shape.
In cases where H > 3.5R (H > 1.75V), the stress in the soil from the action of the additional load becomes rather small. When calculating soil deformations, taking this into account allows, with a degree of accuracy sufficient for practical purposes:

σ′′_z = σ′_z = σ_z (5)
Conclusion
The foregoing allows us to draw the following conclusions: the depth of the compacted core in the foundations of structures erected on loess soils of natural moisture is much less than that calculated following KMK 2.02.02-98; the stressed state of the soil massif at the base of the structure model is transformed as the moisture front advances into the depth of the massif, with a concentration of stresses in the soil layer at the border between the wetted and non-wetted zones, whose moisture content corresponds to the initial subsidence moisture; after stabilization of stresses at the base of the structure, the values of the lateral pressure coefficient, even at a high degree of soil moisture, are significantly less than unity; the lateral pressure coefficient reaches its maximum during the period of the most intense manifestation of subsidence in the layer under consideration.
The stress state of the loess foundations of hydraulic structures depends on several factors, which include the nature of massif moistening, anisotropy, and other physical and mechanical properties and features of loess soils.
To determine the stresses in the foundations of structures, a special table can be used that takes into account the above factors and is compiled on the basis of experimental data.
|
2021-05-11T00:03:53.427Z
|
2021-01-15T00:00:00.000
|
{
"year": 2021,
"sha1": "8106ad3c37282353bec1dc5ae750f283a0b98077",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1757-899X/1030/1/012133/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "9c051ad59c4f0e9bf78bfbf9c59de871e6218b05",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Geology"
]
}
|
93138684
|
pes2o/s2orc
|
v3-fos-license
|
Isolation and Characterization of The Functional Properties of The Major Protein Fraction from Nyamplung ( Calophyllum inophyllum )
Defatted nyamplung (Calophyllum inophyllum) seed cake, a by-product of oil extraction, is a rich source of protein. In order to evaluate its potential for adding value to nyamplung seeds, nyamplung proteins were isolated by a solubilization-precipitation method at pH 3 and pH 5. The resulting protein isolates were characterized with respect to their functional properties, including water binding capacity, oil binding capacity, foaming capacity, foaming stability, emulsifying activity, emulsifying stability, gelation capacity, and amino acid composition. The results show that nyamplung protein can be considered of high quality, because the essential amino acids leucine (4.39%), proline (4.22%), valine (3.34%), aspartic acid (3.23%) and lysine (3.34%) were found to be the major amino acids. Polar amino acids were 1.7 times more abundant than non-polar amino acids, with the consequence of a higher ratio of water binding capacity to oil binding capacity (2.7 times) and a high hydrophile-lipophile balance. In general, the protein isolated by precipitation at pH 3 (IP3) was found to have better functional properties than that precipitated at pH 5 (IP5), and showed excellent water binding, emulsifying, gelation and foaming properties. In conclusion, IP3 can be used as a high-quality protein and as an emulsifier in oil-in-water emulsion systems.
The cake from oil extraction, as a by-product, still has a high concentration of protein (30%, unpublished data). It is a very promising source of protein isolate, animal feed, fertilizer and chemical-based materials. Many studies on the characteristics and functional properties of proteins from various oilseeds have been reported, such as crambe seed (Massoura et al., 1998), soybean and lupin seed (Rodriguez-Ambris et al., 2005), sesame seed (Gandhi and Srivastava, 2007), rapeseed (Yoshie-Stark et al., 2008), bayberry kernel (Cheng et al., 2009), sunflower seed (Pickardt et al., 2009), and sweet lupin seed (Jayasena et al., 2010). However, the characteristics and functional properties of nyamplung protein isolate have not yet been explored. The process of isolation and fractionation might affect the functional properties of the proteins. In order to explore their potential as protein resources for industrial applications, a study was carried out to isolate the protein by solubilization at pH 10 and further precipitation at pH 3 (IP3) or pH 5 (IP5). The protein isolates were evaluated with respect to chemical composition and functional properties, including water- and oil-binding capacity, foaming capacity, foaming stability, emulsifying capacity, emulsifying stability and gelation capacity.
Preparation of Sample
The seeds were shelled and the kernels were separated and crushed. The oil was extracted by hydraulic press, followed by solvent extraction using hexane. The defatted material was air-dried at ambient temperature, crushed again, and sieved (80 mesh). The defatted flour was stored in a refrigerator at 7 °C prior to analysis.
Isolation of Protein
Defatted nyamplung flour was dispersed in distilled water (1:20, w/v). The pH was adjusted to 11.0 using 1 N NaOH at 30 °C. After 2 h, the dispersion was centrifuged at 4000 g for 30 min and the supernatant was decanted. The residue was extracted again to obtain a high yield of protein. The supernatants were combined and separated into two parts. The proteins were precipitated by adjusting the pH to 3.0 (IP3) and 5.0 (IP5), respectively. The precipitated proteins were recovered by centrifugation at 4000 g for 30 min. The protein curd was washed twice with distilled water and freeze-dried.
Composition Analysis
Concentrations of water, ash, fat and crude protein were determined according to the standard methods of the Association of Official Analytical Chemists (AOAC, 1990).
Water and Oil Binding Capacity
Water and oil binding capacity were determined as described by Manak et al. (1980). One g of protein isolate was added to 10 mL of distilled water or palm oil in a pre-weighed centrifuge tube for determination of the binding capacity of water or oil, respectively. The mixture was homogenized for 30 s every 5 min using a vortex stirrer. After 30 min, the tubes were centrifuged at 4000 rpm for 20 min and the free water or palm oil was decanted. The amount of bound water or oil was measured by weighing. The binding capacity of water or oil was expressed as the amount of water or oil retained per 100 g of protein.
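The weighing step above reduces to simple arithmetic: the retained liquid is the weight gain of the tube after decanting, normalized to 100 g of protein. A minimal sketch, with the tube-weight bookkeeping and all numbers assumed for illustration (not taken from the paper):

```python
def binding_capacity(tube_plus_sample_g, tube_plus_bound_g, protein_g):
    """Water- or oil-binding capacity from the weighing step.

    tube_plus_sample_g -- tube + protein isolate before adding liquid, g
    tube_plus_bound_g  -- tube + protein + retained liquid after
                          centrifugation and decanting the free liquid, g
    protein_g          -- mass of protein isolate used, g
    Returns g of liquid retained per 100 g of protein.
    """
    bound_g = tube_plus_bound_g - tube_plus_sample_g
    return bound_g / protein_g * 100.0

# Illustrative numbers: 1 g of isolate retaining 4.3 g of water gives a
# water binding capacity of 430 %.
wbc = binding_capacity(15.0, 19.3, 1.0)
```

The same function serves for oil binding capacity; only the liquid added to the tube changes.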
Foaming Capacity and Foam Stability
The foaming capacity (FC) and foam stability (FS) of the protein isolates were determined as described by Sathe et al. (1982). For FC determination, a protein solution was prepared by adding protein isolate to distilled water (1% w/v). The pH was adjusted to 7 with 1 N NaOH. The solution was then homogenized using a Nissei AM 10 homogenizer at 10,000 rpm for 5 min and the foam volume was determined. FC values were calculated using Eq. (1). FS was evaluated over a period of 2 h and determined from the foam volume remaining at 15, 30, 60, 90 and 120 min, calculated using Eq. (2).
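Equations (1) and (2) are not reproduced in the text. A commonly used form of these definitions, assumed here rather than taken from the paper, expresses FC as the foam volume relative to the initial liquid volume and FS as the fraction of the initial foam remaining at a given time:

```python
def foaming_capacity(initial_volume_ml, foam_volume_ml):
    """Assumed form of Eq. (1): foam volume as a percentage of the
    initial liquid volume (the paper's exact expression is not shown)."""
    return foam_volume_ml / initial_volume_ml * 100.0

def foam_stability(initial_foam_ml, remaining_foam_ml):
    """Assumed form of Eq. (2): percentage of the initial foam that
    remains at a given time point."""
    return remaining_foam_ml / initial_foam_ml * 100.0

# Illustrative volumes only (not measured values from the paper):
fc = foaming_capacity(50.0, 64.0)    # foam is 128 % of the initial volume
fs_120 = foam_stability(64.0, 20.6)  # roughly a third of the foam remains
```

Under these assumed definitions, the reported 32.23% FS of IP3 after 120 min would correspond to about a third of the initial foam surviving the 2 h observation.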
Emulsifying Activity and Emulsion Stability
Emulsifying activity (EA) and emulsion stability (ES) were determined as described by Naczk et al. (1985) with slight modification. Protein isolate was added to distilled water (1% w/v) and the pH was adjusted to 7 with 1 N NaOH. The solution was homogenized at 10,000 rpm using a Nissei AM 10 homogenizer. Five mL of palm oil was added gradually to the solution with continuous stirring, followed by another 5 mL of oil. The volume of the emulsion layer was determined, and EA was calculated using Eq. (3). ES was determined by heating the emulsion, prepared as above, for 15 min at 85 °C, followed by cooling and centrifugation at 3,000 g for 5 min. Emulsion stability was expressed as the percentage of emulsifying activity remaining after heating at pH 7.
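Equation (3) is likewise not reproduced. A common form of the definition, assumed here rather than quoted from the paper, expresses EA as the emulsion layer relative to the total volume, and ES as the activity remaining after the heat treatment:

```python
def emulsifying_activity(total_volume_ml, emulsion_layer_ml):
    """Assumed form of Eq. (3): emulsion layer as a percentage of the
    total volume (the paper's exact expression is not shown)."""
    return emulsion_layer_ml / total_volume_ml * 100.0

def emulsion_stability(ea_before_heating, ea_after_heating):
    """ES as stated in the text: emulsifying activity remaining after
    the 85 degC heat treatment, as a percentage of the initial activity."""
    return ea_after_heating / ea_before_heating * 100.0

# Illustrative volumes only (not measured values from the paper):
ea = emulsifying_activity(20.0, 11.0)  # 11 mL emulsion layer of 20 mL total
es = emulsion_stability(ea, 44.0)      # 44 % EA survives heating
```

The ES definition follows directly from the last sentence of the method; only the EA formula itself is an assumption.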
Gelation Capacity
The temperature was maintained at 4 °C for 2 h. The least gelation concentration (LGC) was defined as the minimum protein concentration at which the formed gel did not flow when the test tube was inverted.
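The LGC criterion just described amounts to picking the lowest concentration in a screening series whose tube passes the inversion test. A minimal sketch, with a hypothetical screen chosen to be consistent with the 8% LGC reported in the Results:

```python
def least_gelation_concentration(screen):
    """Least gelation concentration (LGC) from a concentration screen.

    screen -- iterable of (protein_concentration_percent, gelled) pairs,
              where gelled is True when the gel did not flow on
              inverting the test tube.
    Returns the lowest gelling concentration, or None if nothing gelled.
    """
    gelled = [conc for conc, ok in screen if ok]
    return min(gelled) if gelled else None

# Hypothetical screen (concentrations and outcomes are assumptions):
screen = [(2, False), (4, False), (6, False), (8, True), (10, True)]
lgc = least_gelation_concentration(screen)  # -> 8
```

A lower LGC indicates a better gelling ability, which is why it serves as the index of gelation capacity in the Results.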
Amino Acid Analyses
Protein isolate (250 mg) was hydrolyzed with 5 mL of 6 N HCl at 110 °C for 24 h and then derivatized with a solution containing methanol, Na-acetate and triethylamine for 20 min at 25 °C. The hydrolyzed protein was analyzed by HPLC at ambient temperature on a PICO TAG 3.9 × 150 mm column using a gradient system with 1 M sodium acetate (pH 6.0) and 60% acetonitrile. The detector was set at 254 nm. Chart speed and run time were 2 cm/min and 32 min, respectively. Amino acid composition was expressed as g of amino acid per 100 g of protein.
Colour Evaluation
The colour of the protein isolates was determined using a Minolta Chroma Meter CR-300 (Minolta Camera Co., Osaka, Japan). Measured values were expressed as L, a, b colour units, where L = lightness, +a = redness, -a = greenness, +b = yellowness and -b = blueness.
Isolation of Protein
The compositions of nyamplung seed and defatted flour are shown in Table 1. The de-oiling process increased the concentrations of all flour components owing to the decrease in oil concentration; protein and carbohydrate concentrations increased 3.9 and 2.8 times, respectively. The high protein concentration of the flour (30.4%) suggests that the defatted flour is a very promising protein resource for food and non-food applications. It was comparable with defatted Lesquerella fendleri flour (31.8%) (Hojilla-Evangelista and Evangelista, 2009), but slightly lower than defatted jatropha, bayberry, rapeseed, Lupinus campestris and soybean flours, whose values were 56.4%, 60.5%, 48.2%, 55.3% and 52.4%, respectively (Rodriguez-Ambris et al., 2005; Makkar et al., 1997; Cheng et al., 2009; Yoshie-Stark et al., 2008).
Isolation of protein by the solubilization-precipitation technique resulted in protein recoveries of 54.88 ± 7.37% and 44.27 ± 5.27% for IP3 and IP5, respectively. The protein concentrations of the isolates were 91.25 ± 0.04% and 87.42 ± 1.15% for IP3 and IP5, respectively. The low recovery of protein might be due to retention of protein in the residue and the formation of protein complexes with other molecules such as lipids and carbohydrates. The protein yield was comparable with that of beach pea protein isolate (59.4-67%) (Chavan et al., 2001) and about twice that of cottonseed protein isolate (Tsaliki et al., 2003).
Amino Acid Composition
Amino acid composition is one of the factors that affect the functional properties of a protein. The results demonstrated that the protein isolates from nyamplung were rich in lysine, leucine, proline, aspartic acid and glutamic acid but limited in tryptophan, methionine and cysteine (Table 2). Since IP3 and IP5 contained most of the essential amino acids, they can be considered high-quality proteins.
Polar amino acids are the primary sites of protein-water interaction, while non-polar amino acids affect protein-lipid interaction through hydrophobic interactions. IP3 contained the polar amino acids glycine, proline, tyrosine, threonine, lysine, arginine, histidine, aspartic acid and glutamic acid. There was no significant difference between the total polar amino acids in IP3 (23.2%) and IP5 (22.7%), nor between the total non-polar amino acids in IP3 (13.28%) and IP5 (13.42%). Nevertheless, the amino acid composition is suggested to affect the capacities of water binding, oil binding, gelation, emulsification and foaming through molecular interactions and conformation, as shown in Table 3.
Functional Properties
The functional properties of the nyamplung protein isolates are shown in Table 3. IP3 had higher water binding and foaming capacities than IP5, but did not differ significantly in oil binding, gelation or emulsifying capacity.
Water Binding Capacity
The degree of water-protein interaction determines the water binding capacity of protein.
The results showed that the water binding capacity of IP3 was 1.7 times higher than that of IP5 (Table 3). This may be due to the difference in the average protein charge at pH 7: IP3 had a lower average pI (3) than IP5 (5), resulting in a greater net negative charge at pH 7. Water-protein interaction was therefore more effective in IP3.
Water binding capacity also depends on the availability of polar amino acids at the primary sites of protein-water interaction (Zayas, 1997). However, the calculated polar amino acid side chains of IP3 (23.2%) and IP5 (22.7%) were not significantly different. This suggests that protein conformation and the presence of protein complexes with other molecules, such as carbohydrates, tannins and lipids, may also play an important role in water binding capacity. The water binding capacity of IP3 was comparable with those of Lupinus angustifolius seed protein isolate, bayberry protein isolate and cottonseed protein isolate, whose values were 446.7%, 300% and 470%, respectively (Lqari et al., 2002; Cheng et al., 2009; Tsaliki et al., 2003).
Oil Binding Capacity
Oil binding capacity is an important functional property in food products, playing a role in mouthfeel and flavour retention. The oil binding capacities of IP3 and IP5 were not significantly different (Table 3). This result is consistent with their calculated non-polar amino acid contents, which were also not significantly different, indicating that oil binding capacity correlates with the lipophilic amino acid content (Zayas, 1997).
The oil binding capacity of the isolates was lower than their water binding capacity, consistent with the high ratio of polar to non-polar amino acids in the proteins (1.7 times). The oil binding capacities were comparable with those of the protein isolates of bayberry, Lupinus campestris, soybean and Lupinus angustifolius, whose values were 180%, 170%, 150% and 195%, respectively (Cheng et al., 2009; Rodriguez-Ambriz et al., 2005; Lqari et al., 2002).
Gelation Capacity
Heating a protein solution at a sufficient concentration induces gel formation. The LGC, defined as the lowest protein concentration at which the gel remains in the inverted tube, is used as an index of gelation capacity: the lower the LGC, the better the gelling ability of the protein ingredient. The results showed that a protein concentration of 8% was required to form a gel at pH 7.0, with no significant difference between IP3 and IP5 (Table 3). The LGCs of the nyamplung protein isolates were lower than those of Lupinus angustifolius protein isolate, chickpea protein isolate, Indian chickpea and mucuna bean protein concentrates, whose values were 10%, 14%, 14% and 12%, respectively (Lqari et al., 2002; Zhang et al., 2007; Kaur and Singh, 2007; Adebowalea and Lawal, 2003). These results indicate that the nyamplung protein isolates have a better gelling capacity than these other proteins and may be used as gelling agents.
Emulsifying Properties
The nyamplung protein isolates contained both hydrophilic and hydrophobic amino acid fractions. The interaction of protein with both water and oil in a water-oil system plays an important role in stabilizing emulsions. Table 3 shows that the EA of IP3 was not significantly different from that of IP5. Both protein isolates had a high HLB, indicating that they can stabilize oil-in-water emulsion systems. This finding is consistent with the high ratios of water to oil binding capacity of IP3 and IP5, namely 2.7 and 1.5, respectively. The EA was slightly higher than those of Lesquerella fendleri seed (32.3%) (Hojilla-Evangelista and Evangelista, 2009) and bayberry protein isolate (48.7%) (Cheng et al., 2009).
ES values are affected by various factors, including pH, droplet size, net charge, interfacial tension, viscosity and protein conformation (Hung and Zayas, 1991). The results showed that the ES of IP3 was 5.5 times higher than that of IP5, consistent with the higher ratio of water to oil binding capacity of IP3 (2.7) versus IP5 (1.5). Since IP5 has a lower net charge owing to its higher pI, the low ES of IP5 may be attributed to its low net charge. Conversely, high ES may be attributed to the dissociation of some proteins, with the resulting subunits exposing more hydrophobic groups that interact more strongly with the lipid phase (Mahajan and Dua, 1995).
Foaming Capacity and Stability
The results show that the FC of IP3 was 2.9 times higher than that of IP5 (Table 3). This may be due to the higher net charge of IP3 at pH 7, because carboxyl groups of proteins are deprotonated at pH values above the pI. A high net charge weakens hydrophobic interactions and increases the flexibility of the protein, allowing it to diffuse more rapidly to the air-water interface, encapsulate air particles, and so enhance foam formation (Aluko and Yada, 1995). The finding is consistent with the high ratio of water to oil binding capacity of IP3.
Compared with other reported data, the FC of IP3 was comparable with those of the protein isolates of beach pea and Lupinus angustifolius, whose FC values were 128-143% and 116-119%, respectively (Chavan et al., 2001; Lqari et al., 2002). As shown in Fig. 1, the FS of IP3 was higher than that of IP5: 32.23% of the foam remained after 120 min, whereas the foam of IP5 was completely lost after 60 min. The better FS of IP3 may be due to differences in electrostatic repulsion, which increases with pH as carboxyl groups become deprotonated; as a result, the ability of the protein to interact with water and encapsulate air particles improves. The FS of the nyamplung protein isolates was lower than that of the protein isolates of beach pea and Lupinus angustifolius, for which 90.1% of the foam remained after 60 min and 94.8% after 120 min, respectively (Chavan et al., 2001; Lqari et al., 2002).
Colour of Protein Isolates
The colour of a protein isolate may limit its use in foods. Colour determination showed that the nyamplung proteins were dark brown, with the L, a and b values shown in Table 4. The results indicate that covalent binding between phenolic compounds and reactive groups of the proteins, such as those of cysteine and lysine, occurred during the alkaline isolation process (Sosulski, 1979; Sahidi and Naczk, 2004). The results were similar to those of Adebowale et al. (2007) for mucuna bean protein isolate.
Conclusion
The nyamplung protein isolates obtained from defatted nyamplung seed cake by the solubilization-precipitation method at pH 3 (IP3) and pH 5 (IP5) can be considered high-quality proteins, since they contain most of the essential amino acids. IP3 differed significantly from IP5 in both water binding and foaming capacity, while both isolates had excellent emulsifying capacity. Since the total polar amino acids were about 1.7 times more abundant than the non-polar ones, the isolates show a high ratio of water to oil binding capacity and a high HLB. The nyamplung protein isolates also had a good gelation capacity. In conclusion, nyamplung protein isolates are a good source of protein and may stabilize oil-in-water emulsion systems.
Fig. 1
Fig. 1 Foaming stability of IP3 () and IP5 () dispersed in distilled water as a function of time at pH 7
Table 1 .
Composition of nyamplung seed and defatted flour. *Moisture content includes solvent.
Table 2 .
Amino acid composition of nyamplung protein isolates
Table 3 .
Functional Properties of Nyamplung
Table 4 .
Colour evaluation of protein isolate using a
|
2019-04-04T13:05:03.613Z
|
2014-09-10T00:00:00.000
|
{
"year": 2014,
"sha1": "5ba00590c6e26f91e134af6d56769d5f1c9ec509",
"oa_license": "CCBYSA",
"oa_url": "https://journal.ugm.ac.id/ifnp/article/download/15235/10241",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "74967be210536e8c7e43ab17a5eae6a3ca0dc721",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
256230018
|
pes2o/s2orc
|
v3-fos-license
|
Comparison of phenomics and cfDNA in a large breast screening population: the Breast Screening and Monitoring Study (BSMS)
To assess their roles in breast cancer diagnostics, we aimed to compare plasma cell-free DNA (cfDNA) levels with the circulating metabolome in a large breast screening cohort of women recalled for mammography, including healthy women and women with mammographically detected breast disease, ductal carcinoma in situ and invasive breast cancer: the Breast Screening and Monitoring Study (BSMS). In 999 women, plasma was analyzed by nuclear magnetic resonance (NMR) and ultra-performance liquid chromatography-mass spectrometry (UPLC-MS) and then processed to isolate and quantify total cfDNA. NMR and UPLC-MS results were compared with data for 186 healthy women derived from the AIRWAVE cohort. The results showed no significant differences between groups for any metabolite, whereas invasive cancers had significantly higher plasma cfDNA levels than all other groups. When the cohort was stratified, the supervised OPLS-DA analysis and the total cfDNA concentration both showed high accuracy in discriminating invasive cancers from disease- and medication-free subjects. Furthermore, comparison of the OPLS-DA data for invasive breast cancers with the AIRWAVE cohort showed similar discrimination between breast cancers and healthy controls. This is the first report of agreement between metabolomics and plasma cfDNA levels for discriminating breast cancer from healthy subjects in a true screening population. It also emphasizes the importance of sample standardization. Follow-on studies will involve analysis of candidate features in a larger validation series, as well as comparison of the results with serial plasma samples taken at the next routine screening mammography appointment. The findings here help establish the role of plasma analysis in the diagnosis of breast cancer in a large real-world cohort.
INTRODUCTION
Breast cancer (BC) is, after lung cancer, the most frequent cause of death among women worldwide [1]. Current diagnosis is largely based on physical examination, mammography and other imaging, and histopathological assessment of tissue biopsy, complemented by blood tests for the detection of specific antigens and/or proteins [2,3]. Early diagnosis significantly increases long-term survival rates [4]. However, more sensitive and breast cancer-specific biomarkers are required for the early detection of aggressive disease.
Use of cfDNA was first described over 60 years ago [5]. Elevated levels are seen in cancer, in part due to reduced DNase activity [6][7][8]. Elevated plasma cfDNA levels have been suggested for the diagnosis of breast cancer, and qualitative tests have demonstrated increased cfDNA integrity/size [9][10][11]. However, elevated cfDNA levels are also sometimes observed in benign breast disease [12], reducing specificity for cancer. Certain patterns in cfDNA (e.g. mutations, loss of heterozygosity (LOH), hypermethylation) have the potential to provide specific markers and have also been investigated [13][14][15]. We have previously described that patient-specific circulating tumor DNA (ctDNA) analysis can detect early evidence of progression up to 2 years ahead of imaging [16].
Altered metabolism is one of the key hallmarks of cancer. The development of sensitive, reproducible and robust bioanalytical tools such as NMR and mass spectrometry (MS) has allowed its role to be explored [17,18] in conjunction with other new methods. We have previously shown that metabonomics identifies excess energy-expenditure pathways perturbed during chemotherapy for breast cancer [19] and have suggested new therapeutic approaches that focus on metabolism [20]. Either individually or grouped as a metabolomic profile, metabolites can be detected in the same plasma samples used for cfDNA analysis. We have thus explored the potential of using cfDNA and the metabolome together, in a large cohort of women recalled for mammography at Imperial College Healthcare NHS Trust, including healthy women and women with early, mammographically detected breast cancer. We also compared results to a second independent series of healthy controls from the AIRWAVE study. Used together as a translational research tool, cfDNA and metabolomics can provide a link between the laboratory and the clinic.
RESULTS
The demographics and clinical metadata of the 1185 individuals analyzed in this study are reported in Supplementary Table 1, comprising 999 women from the BSMS study and 186 female individuals recruited from AIRWAVE (AW II).
NMR spectroscopy
In the BSMS cohort, OPLS-DA of plasma 1H-NMR global profiling data (1D-NOESY and CPMG) between patients diagnosed with invasive breast cancer and cancer-free subjects did not show significant discrimination (Table 1, Fig. 1a, b). Similarly nonsignificant discrimination was found for the comparisons of benign vs. in situ, invasive cancer vs. benign, invasive cancer vs. in situ, and cancer-free vs. all breast cancer groups. Similar results, with poor discrimination accuracy (<60%, Table 1) between all studied groups (Supplementary Fig. 2), were obtained for OPLS-DA modeling of the plasma NMR targeted data (19 metabolites and 112 lipoproteins).
Taking advantage of NMR data reproducibility between spectrometers and spectra collection centers [21], we also compared invasive cancer patients with data generated as part of the AIRWAVE study, comprising an independent cohort of healthy female individuals (n = 186). In particular, the targeted datasets from both studies (i.e. the absolute concentration values of 19 metabolites and 112 plasma lipoproteins) were used to build the corresponding MVA models. Initially, unsupervised principal component analysis (PCA) was performed on the disease-free BSMS and healthy AIRWAVE individuals' datasets to test the feasibility of coupling the two independent datasets. The PCA score plot (Supplementary Fig. 3a) from the 19 metabolite concentrations showed complete separation of healthy AIRWAVE from disease-free BSMS individuals. Further examination of the loadings plots (Supplementary Fig. 3b) revealed that glucose and lactic acid concentrations differed significantly between the two cohorts, with glucose higher and lactic acid lower in disease-free BSMS individuals (Supplementary Fig. 3c, d). This could be attributed to differences in sample collection time points, nutritional habits and/or physical exercise between individuals from each cohort, among other possible factors. Glucose and lactic acid were therefore removed from both datasets, and the new PCA results showed an overlap without any significant classification trends between BSMS and AIRWAVE samples, allowing us to employ them for further supervised MVA analyses. It should be noted that the lipoprotein datasets overlapped substantially between the two studies (Supplementary Fig. 3e) and were used for further analyses as such.
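The cohort-merging check described above — PCA on the pooled targeted data, inspection of the loadings for systematically shifted variables, removal of those variables, and a repeat PCA — can be sketched as follows. This is an illustrative simulation, not the study's code: the cohort sizes, the size of the "glucose"/"lactic acid" shift, and the column layout are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated 19-metabolite panels for two cohorts; only the first two columns
# (standing in for glucose and lactic acid) differ systematically between
# cohorts, mimicking a pre-analytical difference rather than biology.
n, p = 200, 19
cohort_a = rng.normal(0.0, 1.0, (n, p))
cohort_b = rng.normal(0.0, 1.0, (n, p))
cohort_b[:, 0] += 3.0   # higher "glucose" in cohort B
cohort_b[:, 1] -= 3.0   # lower "lactic acid" in cohort B

def cohort_gap(x_a, x_b):
    """Largest between-cohort mean separation along the first two PCs
    of the pooled, column-standardized data."""
    x = StandardScaler().fit_transform(np.vstack([x_a, x_b]))
    scores = PCA(n_components=2).fit_transform(x)
    diffs = scores[:len(x_a)].mean(axis=0) - scores[len(x_a):].mean(axis=0)
    return np.abs(diffs).max()

gap_full = cohort_gap(cohort_a, cohort_b)                    # cohorts separate
gap_pruned = cohort_gap(cohort_a[:, 2:], cohort_b[:, 2:])    # shifted columns removed

print(f"gap with confounded columns: {gap_full:.2f}")
print(f"gap after removing them:     {gap_pruned:.2f}")
```

Only after the pruned PCA shows no residual cohort trend is it reasonable to pool the two datasets for supervised modeling, as was done here before the OPLS-DA comparisons.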
Model groupings comprise: Invasive BC: subjects diagnosed with invasive BC (n = 105); Cancer-free: subjects without invasive BC, in situ or benign disease (n = 614); In situ: subjects with in situ cancer (n = 40); Benign: subjects with benign breast disease (n = 214); Disease/medication-free: subjects without BC or any other disease and on no medication (n = 288); Disease/medication-free (subgroup 1): subset of the disease/medication-free group, discriminated from invasive BC with high accuracy by the MS-assay models (n = 237); Disease/medication-free (subgroup 2): subset of the disease/medication-free group, but predicted as invasive BC with high accuracy by the MS-assay models (n = 51); Healthy (AW): healthy female subjects from an independent cohort, the AIRWAVE study (n = 186). (a) HILIC+ results of the fitted models are after the removal of lidocaine features.

The supervised OPLS-DA analysis of the 17-metabolite dataset (excluding glucose and lactic acid) for BSMS patients with invasive breast cancer versus the AIRWAVE healthy subjects showed high
classification accuracy (Table 1) for the two groups (Supplementary Fig. 4a, b), and one-way ANOVA p-values after Benjamini-Hochberg correction [22] indicated citric acid, acetic acid, leucine, histidine, glycine, glutamine, pyruvic acid and creatinine as discriminative biomarkers (Supplementary Fig. 4c). The same analysis for the 112 plasma lipoproteins provided a good classification of invasive cancer patients versus healthy AW subjects (Table 1, Supplementary Fig. 5), and 17 lipoprotein classes changed significantly (p < 0.05) between the two classes (Supplementary Table 2). Following the same strategy, OPLS-DA models were constructed for the comparisons of benign vs. healthy (AW) (Supplementary Fig. 6a) and in situ vs. healthy (AW) (Supplementary Fig. 6b), and their performance is summarized in Table 1. Results again indicated high classification accuracies for the benign vs. healthy (AW) and in situ vs. healthy (AW) models based on the 17-metabolite concentration datasets. The loadings produced by the models suggested several metabolites as potential biomarkers, such as pyruvic acid, citric acid, leucine, histidine, glycine, glutamine and creatinine (Supplementary Figs. 6c and 7) in the present datasets.
UPLC-MS
Similarly, OPLS-DA showed no significant discrimination between any sample class pairing for any of the LC-MS assays. In particular, the statistical models based on the plasma lipidomic profile, for both positive and negative ionization modes, exhibited similar discrimination accuracy between invasive cancer and cancer-free subjects (accuracy = 64%), whereas the models for benign vs. in situ, invasive cancer vs. benign, invasive cancer vs. in situ and cancer-free vs. all breast cancer groups showed lower discrimination accuracy values (i.e. <60%) (Table 1, Supplementary Fig. 8). However, a moderate discrimination accuracy (AUC = 0.65 and accuracy = 76.5%) was observed between the invasive cancer and the cancer-free control group in the HILIC+ dataset. Examination of the loadings extracted from the supervised OPLS-DA analysis showed that the most heavily weighted HILIC+ features driving the observed discrimination corresponded to lidocaine, most likely explained by contamination of several plasma samples by local anesthetic during the blood sampling procedure. When we removed the HILIC+ lidocaine features and repeated the MVA analysis, the model showed less accuracy in discriminating the two groups (AUC = 0.62 and accuracy = 67.0%), in agreement with the lipidomic profile (Table 1 and Fig. 2a).
Having considered lidocaine contamination of the samples, we further stratified the 614 cancer-free controls, comparing 288 reporting no drug intake and no other disease with the remaining 326 subjects. We then isolated this disease/medication-free group and re-evaluated all MVA analyses for both UPLC-MS and NMR data. This was undertaken to avoid any confounding in the data from features corresponding to drug-related compounds, or from metabolites relating to other diseases that cancer-free subjects were experiencing during the blood sampling period. The OPLS-DA model for invasive cancer vs. disease/medication-free subjects indicated a slightly higher discrimination accuracy (+3%) for all UPLC-MS assays (Table 1 and Fig. 2b). When exploring the predictive ability of our models, 51 of the 288 plasma samples from the disease/medication-free healthy controls were predicted as invasive cancer with accuracy >85% based on their metabolic data (Table 1 and Fig. 3a).
However, the supervised OPLS-DA analysis of the disease/medication-free subjects vs. the disease/medication-free subjects predicted as invasive cancer showed high discrimination accuracy: 86%, 76% and 71% for the HILIC+, Lipid RPC+ and Lipid RPC− MS assays, respectively (Table 1). When this group of 51 control subjects was excluded, highly predictive models were produced from the disease/medication-free (without those predicted as invasive cancer) vs. invasive cancer plasma samples, with accuracy values of 76%, 70% and 73% for the HILIC+, Lipid RPC+ and Lipid RPC− MS assays, respectively.
Plasma cfDNA analysis
Initially, total cfDNA levels in all blood samples from BSMS were subjected to multiple univariate ANOVA analyses, comparing the total cfDNA concentration between each group of subjects as for the metabolomics data (Fig. 3b). All univariate analyses of cfDNA concentration corroborated the results of the MS-based MVA models. The total cfDNA concentration was significantly higher in invasive breast cancer vs. the disease-free subjects, whereas the comparisons of cancer-free and benign tumor samples vs. invasive cancer showed no significant differences (Fig. 3b). In addition, there was no significant difference in concentration between patients with invasive and in situ cancer. Of note, the 51 disease/medication-free subjects (subgroup 2) that were classified as "cancer-like" by the HILIC+, Lipid RPC+ and Lipid RPC− LC-MS assays also had a significantly higher cfDNA concentration (p = 0.002) than the rest of the healthy controls (n = 237), whereas no significant differences were observed vs. the invasive cancer samples. In addition, the subgroup of 237 disease-free subjects (subgroup 1) had a significantly lower cfDNA concentration than the invasive cancer group (Fig. 3b). Consequently, the cfDNA results were in full agreement with the LC-MS metabolomics data. It should be noted that Pearson correlation analysis (r = 0.068) of measured plasma cfDNA values with subjects' age indicated a negligible contribution of age to the cfDNA differences between the studied groups.
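The shape of these univariate checks — a one-way ANOVA on cfDNA concentration between two groups, plus a Pearson correlation of cfDNA with age — can be reproduced on simulated data. This is illustrative Python only: the concentrations, age range and effect size are invented for the sketch, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated total cfDNA concentrations (arbitrary units): the invasive group
# is drawn with a higher median; the group sizes echo the paper's n values.
cfdna_invasive = rng.lognormal(mean=2.2, sigma=0.4, size=105)
cfdna_free = rng.lognormal(mean=1.8, sigma=0.4, size=288)
age_free = rng.uniform(47, 73, size=288)  # assumed screening age range

# Group comparison: one-way ANOVA (equivalent to a t-test for two groups)
f_stat, p_group = stats.f_oneway(cfdna_invasive, cfdna_free)

# Age contribution: Pearson correlation of cfDNA with age within one group
r_age, p_age = stats.pearsonr(age_free, cfdna_free)

print(f"group difference: F = {f_stat:.1f}, p = {p_group:.2e}")
print(f"age correlation:  r = {r_age:.3f}")
```

A near-zero r here, as in the study's r = 0.068, indicates that age does not account for the between-group cfDNA differences.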
As expected, given their agreement, MVA analysis of the combined cfDNA and LC-MS datasets produced superior OPLS-DA models, i.e. with higher discrimination accuracy (see MVA results for the combined HILIC+ and cfDNA datasets in Supplementary Fig. 9).
DISCUSSION
We report the metabolomic and cfDNA analysis of a large cohort of sequential plasma samples from 999 women attending routine breast screening, and validation with an independent cohort of 186 healthy women from the AIRWAVE study. Our main findings demonstrate the utility of cfDNA quantification here. This represents a real-world cohort, and the results of this comprehensive work exemplify the challenges of establishing such a complex composite biomarker panel, since the resulting accuracy of the signature derived from the UPLC-MS analysis was only moderate (AUCs between 0.62 and 0.76).
Several metabolomics studies have attempted to detect the breast cancer fingerprint in serum and plasma [1,23,24], showing high accuracy in models (AUC > 0.9) that discriminate breast cancer from healthy subjects. The majority of the models described in these studies were derived from MS analyses of plasma or tissue with at most 100 advanced breast cancer cases and 100 controls, although another NMR-based metabolomic study employing a large serum/plasma cohort succeeded in monitoring and predicting BC relapse (accuracy = 71%) and discriminating early from metastatic BC patients (accuracy = 85%) [25]. Here, our large cohort analysis represents a much earlier cancer stage, with greater power based on the larger sample size (999 women).

(Figure note: one-way ANOVA coupled with t-tests was used to determine statistically significant (p < 0.05) differences in cfDNA concentration for each case; for each comparison, cfDNA concentration is higher in the underlined group.)

NMR untargeted metabolomics data were incapable of discriminating/fingerprinting any of the patient groups (Fig. 1)
in this screening population. Moreover, using a targeted approach, the concentrations of nineteen metabolites and 112 lipoproteins extracted from the NMR data were also statistically indistinguishable among the studied groups (Supplementary Fig. 1). It is noteworthy that many of the plasma metabolites quantified herein are reported to change in invasive BC (e.g. L-glutamine, L-valine, creatine) [1,23,24]. However, in this large cohort of early screen-detected breast cancers, none of these metabolites exhibited statistically significant variation in concentration (Supplementary Fig. 1). Such 'negative data' serve to reinforce the importance of performing screening studies in larger cohorts. Strikingly, our results agree with a very recent study showing that NMR metabolomic data were informative for patient risk stratification across multiple diseases, with breast cancer the exception [26]. Nevertheless, it is notable that the measured concentrations of several plasma metabolites (i.e. creatine, histidine, valine, alanine and tyrosine) were slightly (but not significantly) elevated in the plasma samples of women with invasive BC (Supplementary Fig. 1), in accordance with the published literature [23,27]. An advantage of NMR spectroscopy is its high reproducibility (provided that sample collection, preparation and spectra acquisition parameters are the same for all cases) [21], which allows meaningful comparisons between datasets acquired from different cohorts. With this in mind, we constructed MVA models that discriminated invasive cancer, in situ and benign samples from an independent cohort of healthy women with high accuracy, based on the calculated absolute concentrations of 17 plasma metabolites as well as of 112 lipoproteins. Loadings of the models with high classification accuracy provided several potential biomarkers, many of which were in line with the aforementioned literature.
Namely, results showed an alteration of amino acid and TCA cycle metabolism in the invasive and in situ cancer subjects, since in both cases there was a decrease in citric acid and an increase in histidine and glutamine [23,27].
In addition, the increase in pyruvic acid in the plasma of cancer patients reflects altered glucose metabolism due to the presence of cancer cells (the Warburg effect) [28,29]. Several plasma lipoproteins were also observed to change significantly (Supplementary Table 2), consistent with evidence that breast cancer is influenced by environmental factors, while lipoprotein levels in turn have a strong relationship with diet [30]. Data on the specific lipoproteins we have identified are lacking, and this merits further investigation.
The employment of UPLC-MS lipidomic and small-molecule metabolite profiling provided improved discrimination accuracy between invasive BC and healthy controls (UPLC-MS assays mean accuracy = 65.4%; NMR assays mean accuracy = 51%). Attempting to reduce the MS data "noise" caused by medication or other diseases among the healthy controls, we focussed on analysis of the 288 disease- and medication-free subjects, which provided an improved, though still not high, classification accuracy versus invasive BC. However, MS data from the 288 disease/medication-free subjects identified a subgroup of 51 that were consistently predicted as invasive BC patients by all MS assays, compared to the rest (n = 237).
Of note, these data are in agreement with data for plasma cfDNA concentration, which we have shown previously to be associated with progression-free survival, response rate and overall survival in patients with metastatic breast cancer [3,31]. Clinical follow-up revealed no unusual features for this group of 51 healthy subjects, and all were confirmed as disease-free at a census date of November 2019, suggesting that these features do not necessarily characterize a circulating cancer phenotype. Additionally, it has been shown recently that cfDNA is a significant biomarker of aging [32]; however, in our study age made no significant contribution to the cfDNA differences between the studied groups.
Importantly, our screening study was carried out at a single site working to Good Clinical Practice as quality assurance and following a validated standard operating procedure for plasma sample collection and processing, helping to minimize variation due to preanalytical processing. This is both an advantage and a limitation. Future studies will require standardization between institutions: for example, lidocaine used as a local anesthetic appeared as a feature in our analyses, and other external factors (e.g. diet) are well known to influence metabolomic findings. We also measured cfDNA rather than circulating tumor DNA (ctDNA), owing to cost in a large diagnostic cohort such as this. Further studies may wish to include ctDNA after cfDNA analysis.
In aggregate, we describe here a comparative analysis of plasma cfDNA with the metabolome in a large cohort of women recalled for mammography at Imperial College Healthcare NHS Trust, including healthy women and women with early-detected breast cancer, ductal carcinoma in situ and invasive breast cancer. We did not find significant differences between groups for any metabolite, but found higher plasma cfDNA levels in invasive cancers than in all other groups. After stratification, the supervised OPLS-DA analysis and total cfDNA concentration showed high discrimination accuracy between invasive cancers and healthy controls. We also compared OPLS-DA data for invasive breast cancers with a second independent control group of healthy individuals from the AIRWAVE study and found similar discrimination between breast cancers and healthy controls.
Our results not only confirm that standardization of collection and processing of biospecimens is central to reliable metabolomics studies, but also highlight the importance of control-group selection criteria for comparative -omics studies. It is noteworthy that all univariate analyses of cfDNA concentration corroborated results from the MS-based MVA models. To our knowledge, this is the first report of agreement between molecular phenomics (i.e. metabolomics) and plasma cfDNA levels for discriminating breast cancer from healthy subjects in a screening population. Follow-on studies will involve analysis of candidate features in a larger validation series, as well as comparison with serial plasma samples taken at the next routine screening mammography appointment, but we provide foundations for its role in the diagnostic pathway for breast cancer.
MATERIALS AND METHODS
Patients and samples
We recruited individuals from the Breast Screening and Monitoring Study (BSMS) who were recalled for mammography. The study protocol was approved by the Riverside Research Ethics Committee (Imperial College Healthcare NHS Trust; Tissue Bank Ethics/REC reference numbers: 12/LO/2019; 13/LO/1152; R10015-16A; 07/Q0401/20) and conducted in accordance with Good Clinical Practice guidelines and the Declaration of Helsinki. All patients gave written informed consent prior to participation and were over 18 years of age. A 20 ml blood sample was taken into K2 EDTA tubes (BD Biosciences), processed to recover plasma and buffy coat within 2 h of collection, and stored at −80°C for subsequent extraction of cfDNA and germline DNA as described previously [10]. The cohort included individuals with no breast disease, and women with biopsy-confirmed benign breast disease, carcinoma in situ, or invasive breast cancer. Driven by the LC-MS multivariate analyses (see Statistical analyses below) as well as clinical metadata (Supplementary Table 1), we formed several subgroups of samples because of the presence of features from medication (e.g. lidocaine). A further subgroup comprised the cancer/medication-free samples that were statistically classified as invasive breast cancer with high accuracy; this was also driven by the cfDNA assay results.
A second independent control group of healthy individuals was also analyzed, comprising women recruited from the AIRWAVE study (MREC/13/NW/0588). The AIRWAVE Health Monitoring Study was established to evaluate possible health risks associated with the use of TETRA, a digital communication system used by police forces and other emergency services. This is an ongoing long-term observational study following the health of police officers and staff across the United Kingdom, with the ability to monitor both cancer and non-cancer health outcomes through data linkage. In total, 53,280 participants were recruited between June 2004 and March 2015, with a response rate averaging 50% of employees in participating forces. At baseline, participants completed an enrollment questionnaire (sent via routine administration or the occupational health service), a comprehensive health screening performed locally, or both. Screened participants have now been followed up for 7.5 years on average.
Each recruited individual provided a single 7 mL EDTA blood sample for subsequent plasma isolation and storage at −80°C. This cohort was used for validation of the cancer/medication-free group, to test the robustness and predictive accuracy of the NMR-based model, and as an external (independent) cancer/medication-free cohort versus invasive cancer samples for the detection of any biomarkers.
Ultra-performance liquid chromatography-mass spectrometry (UPLC-MS) and 1H nuclear magnetic resonance (NMR) spectroscopy
Plasma samples for UPLC-MS and NMR analyses were prepared, and data acquired, as published previously [33][34][35]. For UPLC-MS, lipophilic analytes were separated by reversed-phase chromatography (lipid RPC) and hydrophilic analytes (e.g. polar and charged metabolites) by hydrophilic interaction liquid chromatography (HILIC). Positive and negative electrospray ionization modes produced lipid positive and negative (lipid RPC+ and lipid RPC−, respectively) and HILIC positive (HILIC+) datasets. Solution 1H-NMR spectra of all samples were acquired using a Bruker IVDr 600 MHz spectrometer (Bruker BioSpin) operating at 14.1 T. Further details about the quality control of both the UPLC-MS and NMR data, metabolite quantification, and experimental procedures can be found in the supplementary materials.
Extraction and quantitation of plasma cfDNA
Cell-free DNA was isolated from 4 ml of blood plasma with the MagMAX Cell-free DNA Isolation Kit (Thermo Fisher Scientific) on the Kingfisher Flex instrument (Thermo Fisher Scientific) using the MagMAX cfDNA-4mL-Flex.bdz protocol and processed according to the manufacturer's instructions.
Statistical analyses: multivariate/univariate statistics
Multivariate statistical (MVA) models, specifically orthogonal partial least squares-discriminant analysis (OPLS-DA), of the NMR and UPLC-MS metabolomics data and clinical metadata were generated between study participants with invasive cancer (n = 105), in situ cancer (n = 40) and benign breast disease (n = 214), and imaging- or biopsy-confirmed cancer-free controls (n = 614). Modeling was performed in MATLAB (MathWorks, version R2019b), using PLS_Toolbox version 8.7.1 (2019) (Eigenvector Research, Inc., Manson, WA, USA 98831; software available at http://www.eigenvector.com). All multivariate statistical models and their metrics were produced after cross-validation. Any correlation of metabolomics/cfDNA data with subjects' age/height/weight (see Supplementary Table 1) was assessed by refitting each multivariate model after adding the variable into the model and calculating its accuracy.
For all studied groups, age/height/weight did not appear as statistically significant variables. Variable loadings (i.e. metabolite LC-MS/NMR features) and variable importance in projection (VIP) scores from each multivariate OPLS-DA model were used to initially evaluate any significant feature (i.e. any metabolite that could drive the classification between studied groups). VIP scores estimate the importance of each variable in the projection used in a PLS model and are often used for variable selection. A variable with a VIP score close to or greater than 1 can be considered important in a given model, whereas variables with VIP scores well below 1 are less important and may be good candidates for exclusion from the model [36]. Nevertheless, the statistical significance of each variable (i.e. metabolite and lipoprotein concentrations) was further tested by univariate (ANOVA) analyses via built-in MATLAB functions (https://uk.mathworks.com/help/stats/one-way-anova.html). All reported p-values were corrected for false discovery rate (FDR) by applying the Benjamini-Hochberg correction [22] using the "fdr_bh" function (https://www.mathworks.com/matlabcentral/fileexchange/27418-fdr_bh).
DATA AVAILABILITY
The datasets generated and/or analyzed during the current study are not publicly available for reasons of individual privacy, but are available from PGT and JAS on reasonable request and formal legal agreement.
Conquering the Challenges of Sitting Vigil
Changes in healthcare delivery have resulted in a shift from inpatient to outpatient medical care, including palliative and end-of-life care (Hampton & Newcomb [1]). This leaves many family members as caregivers during the end of life. Joan Halifax [2] states that being with dying often means bearing witness to and accepting the unbearable and unacceptable reality of death. Sitting vigil with a loved one can be rather challenging. In fact, the phrase "sitting vigil" may be too polite for the experience of awaiting the transition from life to death. It is a time filled with waiting for a long-overdue death, sleeplessness, exhaustion, frazzled nerves, and the constant postponing of life's responsibilities and obligations as the transition from life to death becomes a part of reality. Your loved one has not eaten or drunk anything and has stopped passing urine. There is no response to your touch or your words. Death is coming; you just do not know when.
Introduction
and delegate tasks will ease the task of caring for a loved one. In addition, knowing the loved one's wishes, including legal documentation such as advance directives and living wills, decreases the stress of decision making for the caregiver. Many of these documents can be obtained from your healthcare provider, funeral homes, law offices, and life insurance companies.
There is a need for public education and engagement about end-of-life care issues. Efforts are needed to normalize conversations about death and dying (Institute of Medicine [5]). I am thankful that my mother had legally made me her power of attorney, that she had a living will, and that she shared her wishes with me on several occasions.
In addition, the utilization of a hospice team provided much-needed resources and eliminated financial strain. Knowing my mother's wishes and having access to the additional hospice support made going through the process uneventful and allowed for a peaceful vigil. Have the crucial conversations with loved ones and plan accordingly.
Death can linger
Death usually comes in one of two ways: suddenly or lingering. If you are sitting vigil, you are experiencing lingering death. Death can linger like a bad cold for hours, days, weeks or even months: nagging, gnawing, lasting, reluctant to leave, and staying in its place longer than expected. Doctors, nurses, and clergy attempt to provide logic as you work your way through this difficult time. Responses include, "we can't tell you the day or time", "the body can be strong", "you have to give them permission to go" or "God isn't ready".
These are all the things you don't want to hear as you move from wanting every effort made to cure your loved one to surrendering to the fact that you must submit and make whatever you can of these circumstances. The lingering of death is an expected part of sitting vigil, and managing it requires resilience and overcoming fear and guilt. When engaged in sitting vigil, there must be an acceptance of the fact that you have entered the process of active dying, or what Hui et al. [6] describe as the hours or days preceding imminent death, during which time the patient's physiological functions wane. Watching these physiological changes occur can make you confuse your desire for the lingering to end with a desire for your loved one to die. This internal conflict can be conquered by reframing thoughts, positive reflection, and connecting with the value of your presence at this time.
Simplify daily life
During this time of emotional strain, it is very important to keep things as simple as possible. Be realistic regarding activities of daily living; it is apparent that there will be changes to your normal routine. Start with wearing comfortable clothes. Do not fret over mealtimes or household chores, as these are not a priority at this time. This is the time to utilize resources. Having finger foods and light snacks readily available eliminates the need to cook large meals. When meals are desired, ask friends to provide one-pot meals, such as casseroles, or items that can be easily stored and prepared. Delegate light household cleaning to family members and friends. This will be a rather challenging time for you and your family; therefore, simplifying daily activities and asking for assistance will prove beneficial. In setting the atmosphere, remember that the time of transitioning is not for you; it is a special time for your loved one, and the atmosphere should be set with their wishes in mind. For example, my mother loved bacon, so we fried bacon every morning to ensure that the aroma of bacon filled the house. In addition, she loved being around family, so the hospice nurse encouraged us to have the normal sounds of family around her, such as the kids playing.
Touch
Oftentimes while sitting vigil, visiting family and friends enter the room of your loved one and wrap themselves around you with a warm embrace. It is soothing and comforting during this time.
Touch creates physical connection. Touch your loved one while sitting vigil. Kiss their cheek, rub their hands, touch their head, touch their hair, rub their arms. It lets them feel that you are present. Most importantly, physical touch serves as a reminder that your loved one is still with you. Use touch to create memories of connecting and being present while sitting vigil. While engaging in touch, be mindful that your loved one may or may not respond. I am reminded of my mother's responses: there were times during her transition when she would hold on to hands, and there were times when she would pull away. Feeling a bit perplexed, I spoke to the hospice nurse, and she informed me that there will be moments when the person is trying to move from one state to the next and may be trying to let go; if they pull away, allow them to do so. I would also watch for changes in my mother's facial expressions when touching became overwhelming; a change in expression or a look of irritation may be a sign that they need a break from touching. Although I am a nurse, having a team to rely on allowed me the opportunity to be the daughter and not the healthcare provider. A 24-hour hotline was provided, which was very beneficial. Having someone available at all times eased much of my anxiety, which precipitated peaceable moments with my mother.
Take a break
It will also be necessary to provide space and time for your loved one to be alone. This is not an easy task, but they need personal time while going through the transition. As a nurse, I have witnessed dying clients who would transition as soon as family left, almost as if they wanted to be alone. Not everyone will desire to be alone, but allowing a window of alone time can be therapeutic for both you as the caregiver and your loved one. It provides quiet personal time for them and presents an opportunity for caregivers and family members to take a much-needed break. For me, the break would sometimes include a shower and change of clothing, a moment of prayer, or a quick breath of fresh air. The breaks will help sustain you through this challenging time.
Conclusion
Sitting vigil over a loved one can be a physical and emotional strain, but it can also be a precious time that one will cherish forever. The dying process is very personal and unique to each individual and their family members. However, having key strategies in place can make sitting vigil a rewarding experience for all who welcome the process. It was an honor to have the opportunity to sit vigil over my mother. It is an experience that I will hold close to my heart and share with others as a form of encouragement. Through this experience I have gained a deeper understanding of and respect for life.
Reducing edge loading and alignment outliers with image-free robotic-assisted unicompartmental knee arthroplasty: a case controlled study
Background Survivorship of medial unicompartmental knee arthroplasty (UKA) is technique-dependent. Correct femoral-tibial component positioning is associated with improved survivorship. Image-free robotic-assisted unicompartmental knee arthroplasty enables preoperative and intraoperative planning of alignment and assessment of positioning prior to execution. This study aimed to compare the radiological outcomes between robotic-assisted UKA (R-UKA) and conventional UKA (C-UKA). Methods This retrospective case control study involved 140 UKAs (82 C-UKA and 58 R-UKA) performed at an academic institution between March 2016 and November 2020, with a mean follow-up of 3 years. Postoperative radiographs were evaluated for mechanical axis and femoral-tibial component position. Component position was measured by two methods: (1) femoral-tibial component contact point with reference to four medial-to-lateral quadrants of the tibial tray and (2) femoral-tibial component contact point deviation from the center of the tibial tray as a percentage of the tibial tray width. Baseline demographics and complications were recorded. Results There was a higher mean component deviation in C-UKA compared with R-UKA using method 2 (17.2% vs. 12.8%; P = 0.007), but no difference in the proportion of zonal outliers using method 1 (4 outliers in C-UKA, 5.1% vs. 1 outlier in R-UKA, 1.8%; P = 0.403). R-UKA showed no difference in mean mechanical alignment (C-UKA 5° vs. R-UKA 5°; P = 0.250). Two-year survivorship was 99% for C-UKA and 97% for R-UKA. Mean operative time was 18 min longer for R-UKA (P < 0.001). Conclusion Image-free robotic-assisted UKA had improved component medio-lateral alignment compared with the conventional technique. Supplementary Information The online version contains supplementary material available at 10.1186/s42836-024-00259-x.
Introduction
Unicompartmental knee arthroplasty (UKA) is a commonly performed procedure for patients with isolated medial compartment knee osteoarthritis, with a > 90% patient satisfaction rate [1,2]. Some reports from high-volume centers have demonstrated that survival rates were more than 90% at 20 years [3][4][5]. However, the procedure itself is technically demanding, with a higher risk of component malposition compared to total knee arthroplasty (TKA). This may then lead to edge loading, accelerated wear and early loosening.
The advent of robotic-assisted surgery has been shown to reduce surgical error. This is achieved through image-based (preoperative computed tomography) or image-free planning prior to bone cuts. Accurate representation of component position and limb alignment during planning, as well as real-time tracking and feedback during bone cuts, are proposed to minimize surgeon error. However, the precise degree of improvement brought about by this technology has not been well quantified in prior studies.
The purpose of this study was to determine whether robotic surgery provides a quantifiable improvement in medial-lateral component alignment when compared with conventional techniques.
Materials and methods
This was a retrospective cohort study of 140 patients who underwent medial unicompartmental knee arthroplasty at an academic institution between March 2016 and November 2020, with a mean follow-up period of 3 years and a minimum of 8 months. Fifty-eight patients underwent robotic-assisted medial UKA (R-UKA), while 82 received conventional surgery (C-UKA). The allocation of patients to each intervention group was determined by the availability of the robotic system at the time of surgery. Patients were included if they had isolated medial compartment osteoarthritis or osteonecrosis of the medial femoral condyle meeting the indications proposed by Kozinn and Scott. Those with varus deformity of up to 15° were included. Exclusion criteria were lateral unicompartmental replacement, TKA, inflammatory arthritis, or suboptimal X-rays.
The surgeries were performed by one of four experienced surgeons at a tertiary referral centre, each with a minimum of 5 years of joint replacement experience and a minimum of 30 UKA procedures per year. All components used in the surgeries were cemented, fixed-bearing, metal-backed on-lay designs. The Journey UNI Unicompartmental Knee System (Smith & Nephew, Memphis, TN, USA) was utilized for the robotic group, whereas both the Journey UNI knee system and the Zimmer ZUK system (Zimmer Biomet, Warsaw, IN, USA) were utilized for the conventional group (Fig. 1a-c).
Surgical technique
The surgical target of both C-UKA and R-UKA was to make the tibial and femoral cuts perpendicular to the mechanical axis and produce an under-corrected varus alignment, typically between 3° and 5°. The exact limb alignment was individualized based on the preoperative alignment. Soft tissue releases were minimized, with a target laxity of 1-2 mm at final implantation.
In the C-UKA group, all surgeries were performed using a minimally invasive medial parapatellar approach. The surgical steps adhered to the conventional technique and utilized standard instruments as described in the manufacturer's manual. The procedure involved the removal of medial osteophytes, followed by correct coronal soft tissue balancing of the knee from full extension to deep flexion. The femoral component was positioned according to patient-specific anatomy, and the tibial component was aligned perpendicular to the tibial mechanical axis.
For the R-UKA group, the Navio image-free robotic system (NAVIO: Journey UNI Unicompartmental Knee System; Smith & Nephew, Memphis, TN, USA) (Fig. 1d) was used. Partially threaded pins were inserted into the proximal tibia and distal femur for the attachment of optical tracking arrays. Osteophytes and loose bodies were first removed. Registration via mapping of the remaining cartilage and bony anatomy was completed in sequence. The optimal tibial slope was determined individually by referencing the lateral intact cartilage, as anteromedial cartilage loss is expected in medial OA. Similarly, femoral flexion was matched with the patient's native anatomy. A soft tissue balancing algorithm was then initiated by applying valgus stress, aiming at under-correction of the mechanical axis. Real-time data showing medial laxity were obtained throughout the range of motion, and the individual components were adjusted intraoperatively (allowing up to 3° varus of the tibial component) to produce a medial laxity of 1-2 mm throughout the range of motion. Femoral and tibial component tracking and the presence of edge loading were assessed, and component positions were fine-tuned prior to bone removal. A hand-held robotic burr was used to prepare the bone on the condylar surfaces, dynamically modulated by the speed and exposure of the motorized burr tip. After bone preparation, the surfaces were assessed, and trial components were inserted with alignment and soft tissue tension re-assessed. Once the knee was considered properly aligned and balanced, the final components were cemented into place (Fig. 2).
Outcome measures
Weight-bearing anteroposterior long-leg radiographs of the lower limbs were taken pre- and postoperatively. All radiographs were taken with the knee fully extended and the knee and foot directed anteriorly. The films that most closely matched an ideal AP knee X-ray, as determined by a proximal tibia-fibular overlap of one-third the width of the fibular head, were selected. Lateral X-rays were not analyzed, as the primary focus was on coronal component alignment.
For the primary outcome measures, two orthopaedic residents measured the medial-lateral prosthesis positioning using two methods: 1) Quadrant method (Fig. 3): the position of the femoral component midpoint with reference to four equally spaced quadrants of the tibial tray. Knees with the femoral midpoint lying in tibial tray zone 1 or 4 were considered component position outliers, and zones 2 and 3 were deemed acceptable. 2) Percentage deviation method (Fig. 4): the deviation between the components was measured as the distance between the midlines of the femoral and tibial components (A), divided by the tibial tray width (B), and expressed as a percentage. This was done to account for variance in X-ray magnification.
The measurements were repeated by both residents for the evaluation of intra- and inter-observer errors. Pre- and postoperative limb alignments (hip-knee-ankle angle) were documented. Secondary outcome measures, including postoperative limb alignment, aseptic loosening and duration of operation, were documented.
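Both positioning measurements reduce to simple arithmetic on the radiograph. The sketch below is illustrative only (the function names and example distances are hypothetical, not study data), assuming positions are measured in millimetres from the medial edge of the tibial tray:

```python
def percentage_deviation(midline_offset_mm: float, tray_width_mm: float) -> float:
    """Method 2: femoral-tibial midline distance (A) as a percentage of the
    tibial tray width (B), which normalizes away X-ray magnification."""
    return abs(midline_offset_mm) / tray_width_mm * 100.0

def quadrant_zone(femoral_midpoint_mm: float, tray_width_mm: float) -> int:
    """Method 1: which of four equal medial-to-lateral quadrants of the tibial
    tray the femoral component midpoint falls in (zones 1-4).
    Zones 1 and 4 are treated as positional outliers."""
    frac = femoral_midpoint_mm / tray_width_mm  # 0 = medial edge, 1 = lateral edge
    zone = int(frac * 4) + 1
    return min(max(zone, 1), 4)  # clamp midpoints lying exactly on an edge

# Example: a 5 mm midline offset on a 40 mm tray is a 12.5% deviation,
# and a midpoint 22 mm from the medial edge falls in zone 3 (acceptable).
print(percentage_deviation(5, 40))  # 12.5
print(quadrant_zone(22, 40))        # 3
```

Dividing by the tray width, as in method 2, makes the two residents' measurements comparable even when films were taken at slightly different magnifications.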
Statistical analysis
SPSS statistics software (IBM, Armonk, NY, USA) was used for the statistical analysis. The Student's t-test was employed to compare normally distributed continuous variables, with a significance level of P < 0.05 and a 95% confidence interval. The Mann-Whitney U test was used for continuous variables without a normal distribution (equal variance not assumed). The Chi-square test and Fisher's exact test were utilized for the comparison of categorical variables. The inter- and intra-observer variability in measurements on X-rays was determined by the intraclass correlation coefficient (ICC), with a range of 0.75 to 1.00 taken to indicate excellent reliability. Institutional Review Board (IRB) approval was waived due to the retrospective nature of this study.
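For readers unfamiliar with the agreement statistic, a two-way random-effects, single-measure ICC (the Shrout-Fleiss ICC(2,1) form often used for two raters measuring the same films; whether SPSS was configured exactly this way is an assumption) can be sketched with the standard library alone. The ratings below are invented for demonstration:

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is a list of rows, one per subject, one column per rater."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # between subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)  # between raters
    sse = sum((ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                               # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters in perfect agreement give an ICC of exactly 1.0
print(icc_2_1([[10, 10], [20, 20], [30, 30], [40, 40]]))  # 1.0
```

A systematic bias between the two raters (e.g. one rater consistently reading 2 mm higher) pulls the coefficient below 1, which is why this absolute-agreement form is a stricter check than a simple correlation.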
Results
There were 53 females and 29 males, with an average age of 71 years (range, 50-89 years) and an average body mass index (BMI) of 26.4 ± 3.7 kg/m², in the C-UKA group. In the R-UKA group, there were 46 females and 12 males, with an average age of 70 years (range, 51-81 years) and an average BMI of 25.9 ± 3.4 kg/m². The preoperative mechanical alignment of the operated knees was, on average, 8° ± 5° and 8° ± 4° varus in the C-UKA and R-UKA groups, respectively. The baseline demographics and preoperative mechanical alignment of the two groups were not statistically different (P > 0.05). The difference in preoperative Knee Society Knee Score (KSKS) between the two groups was statistically significant, though likely not clinically significant (55 vs. 50). The Knee Society Functional Assessment (KSFA) scores were comparable. Details are outlined in Table 1.
Robotic assistance significantly reduced the mean degree of component medial-lateral mismatch, in terms of the deviation of the femoral component midpoint from the midpoint of the tibial component, as measured by method 2. There was a mean improvement of 4.4% with the use of robotic assistance (17.2% vs. 12.8%, P = 0.007). Details of the results are presented in Table 2. With robotic assistance, there was a tendency towards fewer UKAs with component midpoint deviation of more than 20% from the midline, shown in Supplementary Graph S1 as a side-by-side comparison bar chart. There was also a tighter interquartile range (6.8%-18% vs. 8.8%-24%) of component midpoint deviation in R-UKA compared with C-UKA, shown in Supplementary Graph S2 as a simple box plot.
The intraclass correlation coefficients (ICC) of the measurements for the percentage deviation of the midpoints of the femoral/tibial components were checked for intra-observer and inter-observer variability. The intra-observer ICC was 0.957-0.96, and the inter-observer ICC was 0.974-0.99. This indicated that the degree of intra- and inter-observer error with this measurement method was negligible, and that it was a reproducible method of measuring component medial-lateral deviation.
With the quadrant method (method 1), position outliers were defined as those with the femoral component midpoint in the extreme zones of the tibial tray. There was no significant difference between the two groups in the number of zonal outliers (1:57 vs. 4:78; P = 0.403). The Cohen's kappa coefficient for intra-observer variability was 0.931-1, and that for inter-observer variability was 0.238-0.249. This indicated that although intra-observer variability was small, there was marked disagreement between observers using this method.
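The outlier comparison above is a Fisher's exact test on a 2×2 table of outliers versus non-outliers per group. A stdlib-only sketch (summing the hypergeometric probabilities of all tables no more likely than the observed one, the usual two-sided definition) reproduces the reported P = 0.403:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].
    Sums probabilities of all tables with the same margins whose
    probability does not exceed that of the observed table."""
    n, m, k = a + b + c + d, a + b, a + c
    def p(x):  # hypergeometric probability of x in the top-left cell
        return comb(m, x) * comb(n - m, k - x) / comb(n, k)
    p_obs = p(a)
    return sum(p(x) for x in range(max(0, k - (n - m)), min(k, m) + 1)
               if p(x) <= p_obs * (1 + 1e-9))

# Outliers : non-outliers were 4:78 (C-UKA) vs. 1:57 (R-UKA)
print(round(fisher_exact_two_sided(4, 78, 1, 57), 3))  # 0.403
```

With only 5 outliers among 140 knees, the exact test is the appropriate choice here; a Chi-square approximation would be unreliable at these cell counts.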
There was no difference between the two groups in postoperative limb alignment (5.4° vs. 4.7°, P = 0.250). There was a tendency toward a higher proportion of patients with ideal correction (1°-3° varus) in the robotic group (conventional 18:40 vs. robotic 14:68), but this did not reach statistical significance (P = 0.121). For complications, there was one case of unexplained pain that ultimately required a late revision to TKA in the R-UKA group.
Two-year survivorship was comparable between the two groups (99% vs. 97%), with one case of aseptic loosening in each group. Both cases were revised to TKA. Operative duration was significantly longer with robotic assistance (101 vs. 119 min, P < 0.001). The postoperative 1-year KSKS and KSFA scores were comparable between the two groups. Secondary outcomes are summarized in Table 2.
Discussion
Up to 96% of patients who undergo UKA have a probability of returning to their preoperative activity levels [6][7][8][9]. However, long-term survival remains a significant concern for conventional UKA, despite its good functional outcomes. The revision rates for UKA were around 4.5% at 2 years in the Australian and Swedish registries, with loosening being the primary cause of revision in patients under 65. At 10 years, survivorship drops to 73%-87%, against 93.3% for TKA [10].
Research indicates that mal-alignment in UKA can impact survivorship [3][4][5]. Deviation from a safe range of component alignment can increase the risk of aseptic loosening. Specifically, tibial component coronal mal-alignment beyond 3°, posterior slope exceeding 7° [11], and mechanical limb alignment greater than 5° varus [12][13][14] have been linked to failure. Diezi et al. highlighted the problem of relative femoral-tibial component mismatch [15]. They found that altering the coronal femorotibial contact angle could quadruple local PE liner stress, leading to accelerated wear and failure. Medial-lateral mismatch may cause lateral tibial subluxation on the femur, potentially leading to loading of the medial edge of the tibial component or impingement of the lateral femoral condyle on the lateral intercondylar tibial spine [16,17]. Up to 35% of UKAs have significant medial-lateral mismatch [18], which predisposes to edge loading and catastrophic failure. Despite mobile-bearing UKA's round-on-round bearing geometry (compared to the round-on-flat designs of fixed-bearing UKA), which protects against edge loading and allows a higher degree of component tilting, accurate positioning is still crucial to preventing bearing dislocation due to medial-lateral mismatch [19,20]. These findings emphasize the importance of accurate component medio-lateral alignment to minimize edge loading and optimize implant survival.
The influence of surgical experience and the learning curve on component mal-alignment in UKA is noteworthy. Data suggest that surgeons performing a minimal volume of 1 to 2 UKA surgeries per annum can have a failure rate as high as 4%. However, an inverse correlation is observed between the surgeon's experience and the revision rate. Specifically, surgeons performing over 10 UKA surgeries annually demonstrate a revision rate of 2%, which further diminishes to 1% for those performing more than 30 UKA surgeries per year [15,21,22].
Despite the proficiency gained with experience, conventional methods still present challenges, with component deviations from the preoperative plan observed in 40%-60% of the components implanted by even the most experienced surgeons [23,24]. The complexity is amplified when minimally invasive surgical techniques are employed, with studies indicating a broad spectrum of tibial component alignment, ranging from 18° varus to 6° valgus [13,25]. This highlights the potential advantages of robotic technology in addressing variables such as surgical technique and surgeon experience. Nevertheless, there is a need for more studies that quantify the improvements in component alignment achieved with robotic-assisted UKA [26]. Conversely, some studies have reported no improvement in component alignment achieved with robotic surgery [27], although each had its own limitations in study design. Notably, much of the existing research has primarily concentrated on improvement of component varus-valgus alignment and posterior slope with R-UKA. Our study sought to address this gap in the literature by focusing on component alignment in the medio-lateral plane, a critical factor in edge loading.
The current study hypothesized that, compared to conventional manual instrumentation, there would be less medio-lateral mismatch in component alignment in UKA performed with robotic arm assistance. Variability in component medio-lateral mismatch was reduced by 4.4% (17.2% vs. 12.8%, P = 0.007) in this study, in line with previous studies suggesting that robotic assistance improves component alignment. The difference in outliers detected by the quadrant method was not significant between the two groups (1:57 vs. 4:78; P = 0.403). However, the low inter-observer coefficient of 0.238-0.249 indicated a discrepancy in zonal categorization among observers. It was hypothesized that this variation could be due to the proximity of some component midpoints to the intersection point between two zones; it is therefore likely an inaccurate method of identifying outliers. Regarding postoperative limb alignment, the R-UKA group showed a trend toward fewer outliers, although the difference was not statistically significant. Limb alignment was individualized based on the preoperative deformity, which contributed to the heterogeneity of the results. The prosthesis designs used in the study were the Zimmer ZUK and the Smith & Nephew Journey UNI, with the ZUK showing survivorship of up to 90% at 14 years and 98% at 6 years, comparable to our series [28]. While there may be differences in the direction of the peg holes and keel design between the two implants, the radius of curvature of the femoral component and tibial insert was similar, so the effect on radiographic outcomes was insignificant. In this series, all surgeries were performed by surgeons with reasonable UKA volume, minimizing technique as a variable in the outcome for the C-UKA group. While there may be a learning curve for R-UKA, the likelihood of gross component mal-alignment due to inexperience is low, given the image-guided nature of robotic surgery and the surgeons' familiarity with conventional UKA.
Although measurements of the tibial/femoral contact point assumed comparable X-ray quality among patients, minute differences in the X-ray beam may generate X-rays with variable degrees of rotation in real life, despite best efforts. Tibio-fibular overlap may not be the ideal calibration for standardization, owing to differences in patient morphology. This may represent a weakness of the study design. Though computed tomography would be the most accurate modality for assessing component alignment, the high cost and unjustified radiation exposure to patients make it less practical for a large sample size. For identifying outliers that could be at risk of edge loading, however, X-ray measurements were deemed adequate, as such outliers often deviated significantly from the mean. Reproducibility of the percentage deviation method was also excellent, as demonstrated by a high ICC of > 0.9.
While this study, like others, demonstrated a reduction in error and variance of component alignment with robotic assistance, the difference in survivorship between the two groups was not statistically significant. The influence of alignment on function and survivorship post-UKA remains an area of uncertainty. Moreover, the alignment of components in other planes could also significantly contribute to component longevity. Chatellard et al. identified several component mal-alignments that significantly impacted prosthesis survival, including tibial component obliquity exceeding 3°, slope value over 5°, slope change over 2°, and divergence over 6° between tibial and femoral components [21]. Hernigou et al. also discerned an elevated incidence of aseptic loosening associated with a posterior slope exceeding 7°, which was particularly pronounced in cases where the anterior cruciate ligament was absent [11]. Barbadoro et al. [29] discovered that varus angulation greater than 5° in the tibial component led to an increase in implant micromotion, which could potentially result in loosening. The current study did not consider additional coronal and sagittal alignment profiles due to the limitations of the study design. An optimal study design would incorporate both sagittal and coronal alignment to ascertain the most acceptable criterion for component alignment that minimizes loosening.
While R-UKA is a relatively recent technology, its short- to medium-term survivorship has shown encouraging results. A prospective multicenter study examined the 2-year outcomes of 1007 consecutive patients who underwent R-UKA and reported a worst-case survival rate of 96.0% at an average follow-up of 2.5 years [30]. In a separate retrospective study, a cohort of 128 patients from five institutions was followed for an average of 2.3 years; the study revealed a survivorship rate of 99.2% for the Navio R-UKA [31]. Furthermore, Kleeblad et al. reported a survivorship rate of 97% after following 432 R-UKAs from four institutions over an average of 5.7 years [32]. A recent systematic review involving 38 studies demonstrated a survivorship rate of 96% at 6-year follow-up [33]. These short-term survivorship rates align with those reported in the present cohort. However, it is important to clarify that this study focused on retrospective evaluation of the radiographic results. It did not attempt to correlate these results with survivorship, and, therefore, a detailed survivorship analysis was beyond the scope of this study.
The question of whether an image-based or image-free system is superior remains unanswered due to the scarcity of comparative studies. A recent study conducted by DKH Yee et al. in 2023, which included 166 knees, was one of the few that compared the radiological outcomes of image-based and image-free robotic systems for TKA [34]. The study found a slightly higher deviation from the pre-planned posterior slope in the image-based robotic system, and the two systems had differing, but clinically insignificant, component varus/valgus alignment. Moreover, it remains unclear whether the results from robotic TKA can be extrapolated to UKA. Further research is needed to clarify this point.
Cost and increased operative time were additional concerns for R-UKA. Similar studies have also shown increased surgical time of up to 30 min [27]. A cost-benefit analysis was not performed in this study, as a larger sample size and a longer-term follow-up period are required. Further follow-up studies are needed to translate the significance of component alignment into survivorship and justify the cost associated with routine use of robotic technology. Although patients were matched for baseline demographics, a randomized controlled study would be the most accurate way to determine whether robotic assistance enhances the accuracy of performing UKA.
Conclusion
Robotic-assisted techniques offer potential advantages in improving the medio-lateral component alignment of unicompartmental knee arthroplasty. The precise preoperative planning, real-time assessment of ligament balancing and accurate bone preparation provided by robotic systems may help to reduce mal-positioning and edge loading. The current literature supports the use of robotic assistance in UKA to improve prosthesis alignment, but further research, including long-term studies on survivorship, is needed to establish its role in routine clinical use.
Fig. 1 a Zimmer ZUK; b Journey UNI knee; c Conventional UKA instrument; d Navio image-free robotic system and hand piece
Fig. 2 a Preoperative X-ray; b postoperative X-ray of R-UKA; c Navio image-free robotic system intraoperative planning
Fig. 3 Fig. 4
Fig. 3 Quadrant method to determine component alignment, recorded as the tibial tray quadrant intersected by the midline of the femoral component
Table 1
Patient demographic characteristics
Table 2
Secondary outcome measures
EGCG, a major green tea catechin suppresses breast tumor angiogenesis and growth via inhibiting the activation of HIF-1α and NFκB, and VEGF expression
The role of EGCG, a major green tea catechin, in breast cancer therapy is poorly understood. The present study tests the hypothesis that EGCG can inhibit the activation of HIF-1α and NFκB, and VEGF expression, thereby suppressing tumor angiogenesis and breast cancer progression. Sixteen eight-week-old female mice (C57BL/6J) were inoculated with 10^6 E0771 (mouse breast cancer) cells in the left fourth mammary gland fat pad. Eight mice received EGCG at 50-100 mg/kg/d in drinking water for 4 weeks; eight control mice received drinking water only. Tumor size was monitored using dial calipers. At the end of the experiment, blood samples, tumors, heart and limb muscles were collected for measuring VEGF expression using ELISA and capillary density (CD) using CD31 immunohistochemistry. EGCG treatment significantly reduced tumor weight (0.37 ± 0.15 vs. 1.16 ± 0.30 g; P < 0.01), tumor CD (109 ± 20 vs. 156 ± 12 capillaries/mm²; P < 0.01) and tumor VEGF expression (45.72 ± 1.4 vs. 59.03 ± 3.8 pg/mg; P < 0.01) compared with control. However, it had no effect on the body weight, heart weight, angiogenesis or VEGF expression in the heart and skeletal muscle of the mice. EGCG at 50 μg/ml significantly inhibited the activation of HIF-1α and NFκB as well as VEGF expression in cultured E0771 cells, compared with control. These findings support the hypothesis that EGCG, a major green tea catechin, directly targets both tumor cells and the tumor vasculature, thereby inhibiting the growth, proliferation, migration and angiogenesis of breast cancer, mediated by the inhibition of HIF-1α and NFκB activation as well as VEGF expression.
Introduction
The term 'green tea' refers to the product manufactured from fresh tea leaves by steaming or drying at elevated temperatures, with precautions taken to avoid oxidation of the polyphenolic components known as catechins [1]. The natural product (−)-epigallocatechin-3-gallate (EGCG) accounts for 50-80% of the catechins in green tea, representing 200-300 mg in a brewed cup of green tea [2]. Several other catechins, such as (−)-epicatechin-3-gallate (ECG), (−)-epigallocatechin (EGC) and (−)-epicatechin (EC), are found in lower abundance in green tea [3]. EGCG is the major green tea catechin contributing to its beneficial therapeutic effects, including anti-oxidant, anti-inflammatory, anti-cancer and immunomodulatory effects [4][5][6]. Studies conducted in cell-culture systems and animal models, as well as human epidemiological studies, show that the EGCG in green tea could afford protection against a variety of cancer types [7]. Many studies have shown that EGCG produces its anti-cancer effect by modulating the activity of mitogen-activated protein kinases (MAPKs), the IGF/IGF-1 receptor, Akt, NFκB and HIF-1α [8][9][10][11][12]. A case-control study including 501 breast cancer cases and 594 controls showed that green tea consumption had a significant trend of decreasing risk in a dose-dependent manner, after adjusting for potential confounding factors [13]. However, investigations of green tea or EGCG in breast cancer using animal models are very limited, and the role of EGCG in breast cancer therapy is poorly understood.
The growth and expansion of a tumor are mainly dependent on angiogenesis, the formation of new capillaries from pre-existing blood vessels. Avascular tumors do not grow beyond a maximum size of 1 to 2 mm³ in the absence of neovascularization and may be eliminated by a normal immune system [14]. Angiogenesis requires stimulation of vascular endothelial cells through the release of angiogenic factors. Of these, vascular endothelial growth factor (VEGF) is the most critical regulator in the development of the vascular system and is commonly overexpressed in a variety of human solid tumors, including breast cancer [15]. Cancer cells are under greater hypoxia and oxidative stress than normal cells, and oxygen radicals and hypoxia cooperatively promote tumor angiogenesis [16]. Hypoxia causes the activation of HIF-1, which stimulates VEGF expression. HIF-1 levels are also increased by oxygen radicals. In addition, oxygen radicals activate NFκB, which also increases VEGF expression. VEGF is a key angiogenic factor that stimulates the growth of tumors, including breast cancer, in which VEGF exerts paracrine (especially angiogenic) and autocrine (proliferative and migratory) effects to promote progression [17]. As mentioned above, we believe that EGCG can block the highly activated NFκB and HIF-1α pathways in breast tumors. Therefore, we hypothesize that EGCG directly targets both tumor cells and the tumor vasculature, thereby inhibiting the growth, proliferation, migration and angiogenesis of breast cancer, mediated by the inhibition of HIF-1α and NFκB activation as well as VEGF expression. We also hypothesize that EGCG treatment has no significant effects on body weight, heart weight, angiogenesis or VEGF expression in normal tissues such as the heart and skeletal muscle.
To test this hypothesis, the present study aimed to determine the following: (a) whether a relatively high oral dose of EGCG inhibits tumor growth, tumor angiogenesis, and VEGF expression in an immunocompetent mouse model (C57BL/6) of breast cancer; (b) whether oral EGCG treatment affects angiogenesis and VEGF expression in normal tissues such as the heart and skeletal muscle in the same mice; and (c) whether EGCG inhibits proliferation, migration, VEGF expression, and the activation of HIF-1α and NFκB in cultured mouse and human breast cancer cells (E0771, MCF-7 and MDA-MB-231).
Materials and methods
Chemicals and cell lines

EGCG was purchased from Sigma Chemical Co. (St. Louis, MO). The mouse breast cancer cells (E0771), which were originally isolated from an immunocompetent C57BL/6 mouse, were provided by Dr. Sirotnak FM at Memorial Sloan Kettering Cancer Center, New York, NY [18]. Human estrogen-receptor-positive breast cancer (MCF-7) cells and human triple-negative breast cancer (MDA-MB-231) cells were purchased from the American Type Culture Collection (Rockville, MD). All breast cancer cells were maintained as monolayer cultures in RPMI Medium 1640 (GIBCO) supplemented with 10% FBS (HyClone), 100 U/ml penicillin, 100 μg/ml streptomycin, and 0.25 μg/ml amphotericin B, and incubated at 37°C in a humidified 5% CO₂/air atmosphere.
Animal protocols
The protocols were carried out according to the guidelines for the care and use of laboratory animals implemented by the National Institutes of Health and the Guidelines of the Animal Welfare Act, and were approved by the University of Mississippi Medical Center's Institutional Animal Care and Use Committee. Sixteen female C57BL/6 mice at 7 weeks of age were purchased from Jackson Laboratory (Bar Harbor, Maine). The mice were allowed to acclimate for 1 week with a standard chow diet (Teklad, Harlan Sprague Dawley; Indianapolis, IN) and tap water before beginning the experiments. The eight-week-old female mice (n = 16) were inoculated with 1 × 10⁶ E0771 cells suspended in 100 μl of phosphate-buffered saline into the left fourth mammary gland fat pad. Then, 8 mice received EGCG (25 mg/50 ml) in drinking water for 4 weeks and 8 control mice received drinking water only. Each mouse (about 20 g) usually drank 2 to 4 ml of water per day; EGCG was therefore given at around 50 to 100 mg/kg/day. The body weight of the mice was monitored weekly. Tumor size was monitored every other day in two perpendicular dimensions parallel with the surface of the mice using dial calipers. At the end of the experiment, blood samples, tumors, hearts and limb muscles were collected for measuring VEGF expression by ELISA and average microvascular density (AMVD) or capillary density (CD) by CD31 immunohistochemistry.
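The dosing arithmetic above can be sanity-checked from the stated numbers (25 mg EGCG per 50 ml drinking water, 2 to 4 ml intake per day, a 20 g mouse). A minimal sketch; the function name is illustrative, not from the paper:

```python
# Back-of-the-envelope check of the oral EGCG dose described in the protocol.
# Values from the text: 25 mg EGCG in 50 ml water, 2-4 ml drunk per day, 20 g mouse.

def dose_mg_per_kg_per_day(conc_mg_per_ml: float,
                           intake_ml_per_day: float,
                           body_weight_kg: float) -> float:
    """Daily dose normalized to body weight (mg/kg/day)."""
    return conc_mg_per_ml * intake_ml_per_day / body_weight_kg

conc = 25 / 50          # 0.5 mg/ml EGCG in the drinking water
weight_kg = 20 / 1000   # a 20 g mouse

low = dose_mg_per_kg_per_day(conc, 2.0, weight_kg)   # 2 ml/day intake
high = dose_mg_per_kg_per_day(conc, 4.0, weight_kg)  # 4 ml/day intake
print(low, high)  # 50.0 100.0 -> matches the stated 50-100 mg/kg/day
```

This reproduces the 50-100 mg/kg/day range given in the protocol.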
Morphometric analysis of angiogenesis in tumor, the heart and limb muscles

The quantification of blood vessels in mouse breast tumor, the heart and limb muscle was performed with a modification of a previously reported method [17,19]. Briefly, the tissues were fixed in 4% neutrally buffered paraformaldehyde. For the heart left ventricular and limb muscle samples, consecutive thin transverse cryosections (5 μm) were cut along the base-apex axis. Consecutive thin cryosections (5 μm) of OCT compound (Sakura Finetek, Torrance, CA) embedded tissue samples were fixed in acetone at 4°C for 10 min. After washing in phosphate-buffered saline (PBS), the sections were treated with 3% H₂O₂ for 10 minutes to block endogenous peroxidase activity and were blocked with normal rabbit serum. The sections were then washed in PBS and incubated with rat anti-mouse CD31 (PECAM-1) monoclonal antibody (BD Pharmingen, San Diego, CA) at a 1:200 dilution overnight at 4°C. Negative controls were incubated with rat serum IgG at the same dilution. All sections were washed in PBS containing 0.05% Tween-20 and were then incubated with a secondary antibody, mouse anti-rat IgG (Vector Laboratories, Burlingame, CA), at a 1:200 dilution for 1 hour at room temperature, again followed by washing with PBS containing 0.05% Tween-20. The sections were incubated in a 1:400 dilution of ExtrAvidin Peroxidase (Sigma, St. Louis, MO) for 30 min. After washing in PBS containing 0.05% Tween-20, the sections were incubated in peroxidase substrate (Vector Laboratories, Burlingame, CA) for 5 min. The sections were washed in PBS containing 0.05% Tween-20 and were counterstained with hematoxylin. A positive reaction was indicated by brown staining. The microvessels were quantified by manual counting under light microscopy. A microscopic field (0.7884 mm²) was defined by a grid placed in the eyepiece. At least 20 microscopic fields were randomly acquired from each tumor for analysis.
Any endothelial cell or cell cluster showing antibody staining and clearly separated from an adjacent cluster was considered to be a single, countable microvessel. The average microvascular density (AMVD) or capillary density (CD) was determined by calculating the mean of the vascular counts per mm² obtained in the microscopic fields for each tissue sample.
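The AMVD described above reduces to the mean per-field microvessel count divided by the field area. A short sketch with hypothetical counts; only the 0.7884 mm² field area is taken from the methods:

```python
# AMVD = mean microvessel count per microscopic field / field area (mm^2).
# The field area (0.7884 mm^2) is from the methods; the counts are made-up examples.
FIELD_AREA_MM2 = 0.7884

def amvd(counts_per_field):
    """Average microvascular density in microvessels per mm^2."""
    mean_count = sum(counts_per_field) / len(counts_per_field)
    return mean_count / FIELD_AREA_MM2

fields = [120, 130, 125]       # microvessels counted in three example fields
print(round(amvd(fields), 1))  # ~158.5 microvessels per mm^2
```

In the actual study at least 20 fields per tumor were averaged, but the computation is the same.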
Measurements of protein levels of VEGF by ELISA
Protein levels of VEGF in plasma, breast tumor, the heart, the limb muscle, and the medium from cultured E0771 cells were determined using mouse VEGF ELISA kits (R&D Systems, Minneapolis, MN), according to the manufacturer's instructions. The total proteins of breast tumor, the heart, the limb muscle, and cultured E0771 cells were extracted using NE-PER Cytoplasmic Extraction Reagents (Pierce, Rockford, IL), according to the manufacturer's protocol. The total protein concentration of these tissue extracts was determined using a Bio-Rad Protein Assay (Bio-Rad Laboratories, Hercules, CA). The protein concentrations of VEGF were normalized and expressed as picograms per milligram of total tissue or cell extract protein.
Proliferation assay of cultured breast cancer cells
The E0771, MCF-7, and MDA-MB-231 cells were seeded into 6-well tissue culture plates using RPMI Medium 1640 (GIBCO) supplemented with 10% FBS (HyClone), 100 U/ml penicillin, 100 μg/ml streptomycin, and 0.25 μg/ml amphotericin B, and incubated at 37°C in a humidified 5% CO₂/air atmosphere. When the monolayer reached about 80% confluence, the cells were washed with PBS and incubated with fresh RPMI Medium 1640 with 10% FBS in the absence and presence of EGCG (0, 10, 50 μg/ml) for 18 hours. A ³H-thymidine incorporation assay was used to determine cell proliferation during the last 6 hours of incubation, as previously described [20].
Migration assay
Migration was determined using the BD BioCoat Matrigel Invasion Chamber (BD Bioscience Discovery Labware, Bedford, MA) according to a previous study, in which only invasive cells digest the matrix and move through the insert membrane [21]. 1 × 10⁵ E0771 cells per well in 0.5 ml medium (RPMI Medium 1640) were seeded in the matrigel-coated upper compartment (insert) of a Transwell (24-well format, 8-μm pore) in the absence and presence of EGCG (0, 10, 20, 50 μg/ml), and medium with 10% FBS was added to the lower part of the well. After overnight incubation at 37°C and 5% CO₂, cells on the upper surface of the insert were removed using a cotton wool swab. Migrated cells on the lower surface of the insert were stained using Diff-Quik (Dade Behring, Düdingen, Switzerland). Images of the migrated cells were taken and the number of migrated cells was counted using a microscope (Leica, Germany) with a 20× objective.
HIF-1α and NFκB activation (motif binding) assays
We determined HIF-1α and NFκB activation in cultured E0771 cells in the absence and presence of EGCG (0 and 50 μg/ml) to investigate whether the down-regulation of VEGF by EGCG is associated with the inhibition of HIF-1α and NFκB activation (n = 6). The nuclear proteins were extracted using an Active Motif (Carlsbad, CA) nuclear extract kit. 20 μg of nuclear protein from each sample was used in the TransAM HIF-1α or NFκB p65 kit (Active Motif), which measures the binding of activated HIF-1α or NFκB to its consensus sequence attached to a microwell plate, according to the manufacturer's instructions.
Statistical analysis
All determinations were performed in duplicate. Where indicated, data are presented as mean ± SE. Statistically significant differences in mean values between two groups were tested by an unpaired Student's t-test. Correlation between two continuous variables was assessed by linear regression analysis. A value of P < 0.05 was considered statistically significant. All statistical calculations were performed using SPSS software (SPSS Inc., Chicago, IL).
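For an unpaired t-test on summary data with equal group sizes, the t statistic is simply the mean difference over the root sum of squared standard errors. A pure-Python sketch, illustrated with the tumor-weight values reported later in the Results (0.37 ± 0.15 vs. 1.16 ± 0.30 g, n = 8 per group); the exact P < 0.01 reported in the paper presumably comes from the raw data:

```python
import math

def t_from_summary(m1: float, se1: float, m2: float, se2: float) -> float:
    """Unpaired Student's t statistic from group means and standard errors.

    With equal group sizes this equals the pooled-variance t statistic:
    t = (m1 - m2) / sqrt(SE1^2 + SE2^2).
    """
    return (m1 - m2) / math.sqrt(se1 ** 2 + se2 ** 2)

# Control vs. EGCG-treated tumor weight (g), mean +/- SE, n = 8 per group
t = t_from_summary(1.16, 0.30, 0.37, 0.15)
print(round(t, 2))  # ~2.36, above the two-tailed 5% critical value t(14) = 2.145
```

From the published summary values alone the statistic clears the 5% critical value for 14 degrees of freedom.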
Results

A relatively high oral dose of EGCG significantly inhibits the progression of breast cancer growth
We used a mouse breast cancer model that mimics the human disease, in which mouse breast adenocarcinoma (E0771) cells were injected into the fourth mammary gland fat pad of female immunocompetent mice (C57BL/6). Immediately after the inoculation of E0771 cells, the eight-week-old female mice (n = 8) were given EGCG at 50 to 100 mg/kg/day in drinking water for four weeks, and the control group (n = 8) was given regular drinking water only.
Tumor size was then monitored every other day in two perpendicular dimensions parallel with the surface of the mice using dial calipers. As indicated in Figure 1A, the tumor cross-section area was significantly reduced in the EGCG-treated group compared to the control group two weeks after the breast cancer inoculation. At the end of the experiment, the tumor cross-section area was reduced by 65% (P < 0.01) in the EGCG-treated group compared to the control group (Figure 1A), which was consistent with the reduction in tumor weight (Figure 1B) in the EGCG-treated group compared to the control group (0.37 ± 0.15 vs. 1.16 ± 0.30 g; P < 0.01). Clearly, EGCG treatment at 50 to 100 mg/kg/day in drinking water significantly inhibited the progression of breast cancer growth in the female mice by decreasing the tumor size and slowing the growth curve of breast cancer. However, there was no significant difference in body weight, heart weight, kidney weight, or urinary protein between the EGCG-treated mice and the control mice.
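The percent reductions quoted here (65% for cross-section area) and in the Discussion (68% for tumor weight) follow directly from the group means; as a quick check on the tumor-weight figure:

```python
# Percent reduction of mean tumor weight, EGCG-treated vs. control (values from the text).
control_g, egcg_g = 1.16, 0.37
reduction_pct = (control_g - egcg_g) / control_g * 100
print(round(reduction_pct))  # 68 -> consistent with the ~68% cited in the Discussion
```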
EGCG suppresses breast tumor angiogenesis and VEGF expression in mice
Growth and expansion of a tumor mass are strictly dependent on angiogenesis because neovascularization permits rapid tumor growth by providing an exchange of nutrients, oxygen, and paracrine stimuli to the tumor [22]. Therefore, in this study, we used a morphometric analysis of immunohistochemical staining for CD31 to determine the effect of EGCG on breast tumor angiogenesis in mice. Representative images of CD31 staining of the breast cancer tumors showed that the EGCG-treated tumors had fewer microvessels than the control tumors (Figure 2A). Morphometric analysis (Figure 2A) indicated that EGCG treatment caused a significant decrease in average microvascular density (AMVD, the number of microvessels per mm² area) of breast tumors compared to the control breast tumors (109 ± 20 vs. 156 ± 12 microvessels per mm²; n = 8; P < 0.01). These results also suggest that the pronounced decrease in tumor angiogenesis is associated with the decrease in tumor size in the EGCG-treated female mice compared to the control mice. Figure 2B also demonstrates that EGCG treatment reduced plasma VEGF levels compared to the control mice (26.48 ± 3.76 vs. 40.79 ± 3.5 pg/ml; n = 8; P < 0.01) and tumor VEGF expression compared to the control mice (45.72 ± 1.4 vs. 59.03 ± 3.8 pg/mg; n = 8; P < 0.01). These findings suggest that the inhibition of tumor angiogenesis in mice by EGCG is due to the down-regulation of VEGF, because VEGF is a key angiogenic factor.
EGCG directly inhibits proliferation and migration of breast cancer cells
We used a ³H-thymidine incorporation assay to determine the effects of EGCG on the proliferation of cultured mouse breast cancer cells (E0771), human estrogen-receptor-positive breast cancer cells (MCF-7), and triple-negative breast cancer cells (MDA-MB-231). Figure 3A showed that EGCG caused a dose-related decrease in ³H-thymidine incorporation in E0771 cells, decreasing by 22% at 10 μg/ml and by 77% at 50 μg/ml, compared to the control group (n = 6; P < 0.01). We examined the inhibitory effect of EGCG on E0771 cell migration using the BD BioCoat Matrigel Invasion Chamber. Figure 3B demonstrates that EGCG at 10, 20, and 50 μg/ml caused a dose-dependent reduction in migrated breast cancer (E0771) cells, decreasing by 25%, 48%, and 71%, respectively, compared to the control group (n = 6; P < 0.01). In another experiment, as shown in Figure 3C, we demonstrated that EGCG at 50 μg/ml, but not at 10 μg/ml, significantly inhibited the proliferation of human estrogen-receptor-positive breast cancer cells (MCF-7) and triple-negative breast cancer cells (MDA-MB-231) by 91% and 52%, respectively, compared to the control group (n = 6; P < 0.01). These in vitro findings illustrate that EGCG can directly target breast cancer cells by inhibiting their proliferation and migration.

Figure 1: The inhibition of the progression of breast cancer growth by oral EGCG in immunocompetent female mice (C57BL/6) allografted with mouse breast cancer (E0771) cells. EGCG at 50 to 100 mg/kg/day in drinking water for four weeks significantly reduced the growth curve of breast cancer, monitored by the tumor cross-section area, by 65% (Figure 1A, P < 0.01; n = 8) and reduced tumor weight (Figure 1B).

The down-regulation of VEGF expression by EGCG is associated with the inhibition of HIF-1α and NFκB activation

HIF-1 and NFκB pathways are highly activated in breast tumors, in which they can co-operatively promote tumor angiogenesis by increasing VEGF expression [16].
We used a VEGF ELISA kit and HIF-1α and NFκB activation (motif binding) assays to determine whether EGCG could suppress HIF-1α and NFκB activation and VEGF expression in cultured mouse breast cancer (E0771) cells. Figure 4A showed that EGCG at 50 μg/ml significantly inhibited VEGF expression in cultured E0771 cells compared to the control (1752 ± 49 vs. 2254 ± 91 pg/mg; n = 6; P < 0.01). In the same experiment, EGCG at 50 μg/ml also significantly suppressed the activation of HIF-1α (0.11 ± 0.02 vs. 0.24 ± 0.02; P < 0.01; Figure 4B) and NFκB (1.15 ± 0.21 vs. 1.61 ± 0.32; n = 6; P < 0.01; Figure 4C), compared to the control. These results suggest that the inhibition of HIF-1α and NFκB activation contributes to the down-regulation of VEGF expression.

Figure 3: EGCG caused a dose-related inhibition of ³H-thymidine incorporation, decreasing by 22% at 10 μg/ml and by 77% at 50 μg/ml (Panel A, n = 6, P < 0.01), and of migration (Panel B, n = 6, P < 0.01) in cultured E0771 cells, compared to the control group. In Panel C, EGCG at 50 μg/ml significantly inhibited proliferation in cultured MCF-7 and MDA-MB-231 cells, compared to the control group (n = 6; P < 0.01).
Oral EGCG treatment has no effects on angiogenesis and VEGF expression in normal tissues such as the heart and skeletal muscle in mice
The data showed that there was no significant difference in body weight (22.38 ± 0.51 vs. 22.94 ± 0.57 g; n = 8; P = 0.9437), heart weight (84.7 ± 11.2 vs. 85.1 ± 10.6 mg; n = 8; P = 0.3546), or kidney weight (237.5 ± 9.2 vs. 240.1 ± 8.9 mg; n = 8; P = 0.3735) between the EGCG-treated mice and the control mice. Figure 5A showed that EGCG treatment did not affect the capillary density (number of capillaries per mm²) (3270 ± 162 vs. 3103 ± 226 per mm²; n = 8; P = 0.5215), analyzed by CD31 immunohistochemistry and morphometric analysis, or VEGF expression (261 ± 22 vs. 245 ± 19 pg/mg; n = 8; P = 0.4517), determined by ELISA, in the mouse heart, compared to the control group. Figure 5B showed that there was no significant difference in capillary density (370 ± 55 vs. 381 ± 44 per mm²; n = 8; P = 0.5401) or VEGF expression (225 ± 16 vs. 214 ± 20 pg/mg; n = 8; P = 0.7825) in the limb skeletal muscles between the EGCG-treated mice and the control mice. These findings illustrate that EGCG does not significantly affect angiogenesis and VEGF expression in normal tissues such as the heart and skeletal muscles.
Discussion
The major new findings from this study include: 1) a relatively high oral dose of EGCG significantly inhibits the progression of mouse breast cancer growth in female immunocompetent mice; 2) EGCG significantly suppresses breast tumor angiogenesis and VEGF expression in these mice; 3) EGCG treatment does not significantly affect angiogenesis and VEGF expression in normal tissues such as the heart and skeletal muscles in the same experiment; 4) EGCG directly inhibits proliferation and migration of cultured mouse and human breast cancer cells; and 5) the down-regulation of VEGF expression by EGCG is associated with the inhibition of HIF-1α and NFκB activation. These findings support the hypothesis that EGCG, a major green tea catechin, directly targets both tumor cells and the tumor vasculature, thereby inhibiting tumor growth, proliferation, migration, and angiogenesis of breast cancer, mediated by the inhibition of HIF-1α and NFκB activation as well as VEGF expression. Also, EGCG treatment has no significant effects on angiogenesis and VEGF expression in normal tissues such as the heart and skeletal muscle.
An important finding of this study is that a relatively high oral dose of EGCG at 50 to 100 mg/kg/day in drinking water significantly slows the growth curve of breast cancer in C57BL/6 female mice compared to the control group, characterized by 65% and 68% reductions in tumor cross-section area and tumor weight, respectively. Clearly, oral EGCG treatment is very effective in suppressing the progression of breast cancer in a wild-type immunocompetent mouse model. Ullmann et al. reported that peak plasma concentrations were greater than 3 μg/ml after an oral dose of 1600 mg in healthy human subjects [23]. We believe that an oral dose of 50 to 100 mg/kg/day in humans can reach effective plasma concentrations of EGCG against breast cancer. Recent methods developed for the stereoselective total synthesis of EGCG, and structurally related catechins, could provide new sources of these compounds for biomedical use [24]. Our next step is a clinical trial of EGCG in breast cancer therapy.
Cancer cells are under greater hypoxia and oxidative stress than normal cells. 8-Hydroxy-2'-deoxyguanosine, a major marker of constitutive oxidative stress, is almost 10 times more prevalent in invasive ductal breast carcinoma cells than in normal control samples from the same patient [25]. Tumor cells overproduce reactive oxygen species (ROS) through alterations to their metabolic pathways [26], an inadequate tumor vascular network [16], and macrophage infiltration of the tumor [27]. Breast carcinomas support their growth by stimulating angiogenesis. Blood flow within these new vessels is often chaotic, causing periods of hypoxia followed by reperfusion. The generation of ROS by reperfusion further causes oxidative stress within breast carcinomas. Also, a breast carcinoma rapidly outgrows its blood supply, leading to glucose deprivation and hypoxia. Glucose deprivation rapidly induces oxidative stress within breast carcinoma cells [28]. Clearly, hypoxia and oxidative stress are found together within the breast carcinoma, in which VEGF production can be augmented by synergy between oxygen radicals and tumor hypoxia. Oxygen radicals and hypoxia co-operatively promote tumor angiogenesis [16]. Hypoxia causes the activation of HIF-1, which stimulates VEGF expression. HIF-1 levels are also increased by oxygen radicals. In addition, oxygen radicals activate NFκB, which also increases VEGF expression. Thus, a compound blocking the HIF-1 and NFκB pathways can significantly inhibit VEGF expression and angiogenesis in carcinomas, including breast carcinomas.
In this study, we found that the significant inhibition of tumor growth and tumor angiogenesis of breast cancer in female mice by EGCG was associated with suppression of HIF-1α and NFκB activation and decreased VEGF expression in breast carcinoma cells. VEGF is a key angiogenic factor that stimulates the growth of tumors including breast cancer, in which VEGF exerts paracrine (especially angiogenesis) and autocrine (proliferation and migration) effects to promote progression of breast cancer [17]. VEGF overexpression and the activation of the HIF-1α and NFκB pathways in breast cancer are strongly linked to rapid tumor growth and worse prognosis [16,29,30]. Oxygen radicals and hypoxia co-operatively promote tumor angiogenesis, in which VEGF overexpression is stimulated by the activation of the HIF-1α and NFκB pathways in breast cancer [16]. The present findings indicate that EGCG significantly inhibits VEGF expression by suppressing the activation of the HIF-1α and NFκB pathways, thereby inhibiting tumor growth, proliferation, migration, and angiogenesis of breast cancer. Our results are supported by the following previous findings: 1) EGCG suppressed tumor growth by blocking the induction of VEGF in human colon carcinoma cells [31]; 2) EGCG inhibited the VEGF/VEGFR axis by suppressing the expression of HIF-1α in human colorectal cancer cells [32]; and 3) EGCG inhibited cancer progression by decreasing NFκB activation [33]. Progression is the final phase of cancer development, in which uncontrolled growth of cancer cells occurs. In this stage, cancer cells are under greater hypoxia and oxidative stress, and many transcription factors, such as HIF-1α and NFκB, are activated, transmitting aberrant signals that result in abnormal functions such as tumor angiogenesis, cancer invasiveness and metastasis. The present findings illustrate that EGCG can inhibit multiple key cellular signals, thereby inhibiting tumor angiogenesis and breast cancer progression.
Also, accumulating evidence shows that EGCG can target all stages of cancer development by blocking multiple cellular proteins involved in diverse signal transduction pathways: proliferation, differentiation, apoptosis, angiogenesis or metastasis [34]. In future studies, we will investigate the therapeutic potential of EGCG combined with a VEGF receptor inhibitor, Notch inhibitor, HIF-1 inhibitor, or NFκB blocker in breast cancer therapy.
In the present study, we demonstrated that EGCG treatment reduced plasma VEGF levels by 35% compared to the control mice, which was associated with a more than 65% reduction of tumor weight in EGCG-treated breast cancer mice compared to untreated breast cancer mice. These findings are consistent with observations in breast cancer patients, in whom EGCG treatment reduced serum levels of VEGF [35]. A study of 200 women showed that serum VEGF levels were significantly higher in breast cancer patients compared to controls [36]. Systemic VEGF levels were reduced significantly in the breast cancer patients following tumor excision [36]. We believe that oral EGCG treatment could reduce tumor-related blood VEGF levels.
Interestingly, the present study shows for the first time that oral EGCG treatment significantly inhibits angiogenesis, VEGF expression, and growth in breast tumors, but has no such effects on normal tissues such as the heart and limb muscles in the same mice. The different effects of EGCG in tumor and normal tissues can be explained by the fact that cancer cells are under greater hypoxia and oxidative stress than normal cells. VEGF expression and angiogenesis are very stable in normal mature tissues, in which they are regulated by the metabolic balance within the tissue. In cancer, however, angiogenesis is stimulated by significantly increased VEGF levels and activated HIF-1α and NFκB pathways. We also found that there was no significant difference in body weight, heart weight, or kidney weight between EGCG-treated mice and the control mice. This is an exciting possibility, because EGCG is a drug of low toxicity.
Antiangiogenic therapy is an attractive approach for cancer treatment, including breast cancer; these agents include monoclonal antibodies (mAbs) and tyrosine kinase inhibitors (TKIs) of the VEGF pathway. Because the VEGF pathway is implicated in many physiological processes, its inhibition can lead to on-target side effects, such as hypertension, proteinuria, thromboembolic events, or congestive heart failure [37][38][39]. The incidence of hypertension was up to 35% with bevacizumab, a monoclonal antibody against VEGF-A [40,41]. Ultimately, considering the modest clinical benefit on the one hand, and the increase in toxicity on the other, the US Food and Drug Administration withdrew its approval of bevacizumab for breast cancer treatment [42]. As mentioned above, EGCG is a drug of low toxicity and significantly inhibits angiogenesis in breast tumors (under greater oxidative stress) but not in normal tissues (without such oxidative stress) such as the heart and limb muscles in the same mice. Thus, EGCG may overcome an existing barrier: the on-target side effects induced by the mAbs and TKIs of the VEGF pathway. However, further studies are needed.
In conclusion, our results indicate that oral administration of EGCG, a major green tea catechin, significantly inhibits tumor growth and tumor angiogenesis of breast cancer, but has no effect on angiogenesis in the heart and limb muscles, in an immunocompetent mouse model using mouse breast cancer (E0771) cells. EGCG directly suppresses the proliferation and migration of cultured mouse breast cancer cells as well as the proliferation of human breast cancer cells (MCF-7 and MDA-MB-231). These anti-cancer effects of EGCG appear to be mediated by blocking multiple intracellular signaling cascades such as the HIF-1α and NFκB pathways. The mechanistic advantage of EGCG in inhibiting tumor angiogenesis is unique, in that EGCG does not target angiogenesis in normal tissue. Accumulating evidence indicates that EGCG displays a vast array of cellular effects involved in all stages of cancer development. The multiple targets on cancer and fewer side effects of EGCG could lead to a successful targeted therapy for cancers including breast cancer. The potential therapeutic targets of EGCG in cancer therapy need to be further explored. Our next step is a clinical trial of EGCG in breast cancer therapy. The combination of EGCG with other targeted compounds such as a VEGF receptor inhibitor, Notch inhibitor or HIF-1 inhibitor could lead to a very effective targeted breast cancer therapy.
A Review of Parliament-Foreign Policy Nexus in South Africa and Namibia
After a review of selected literature on foreign policy and the parliament-foreign policy nexus in South Africa, this article examines the nature of 'parliamentary diplomacy', with special focus on the Parliamentary Committees on Foreign Affairs [PCFA] in South Africa and Namibia since 2000. By means of a descriptive approach and content analysis of documentary sources and conversational interviews, it further explores the extent of executive-legislative frictions over foreign affairs in both countries and the raison d'être for parliamentary interest in foreign affairs, which is located within the orbit of the national interest. It argues that the executive-legislature friction over foreign policy may not be resolved soon, especially as there are other actors seeking to influence the direction of foreign policy in both countries.
difficulty SA faces in trying to promote human rights and democracy, while simultaneously trying to build bilateral and multilateral diplomatic relations with countries known for human rights violations. The study suggests that SA may have the capacity to act as a type of role model, but that it needs to weigh decisions on a case-by-case basis. Raymond Suttner's [1996] work, Foreign policy of the new South Africa: a brief review, comments on the 'near-existence' of foreign policy in the new South Africa, and describes who the main actors in foreign policy are, how they are coordinated, the status of multilateral relations, civil society and its organs, the problems with human rights, the question of identity, and the commitment of foreign policy to democracy in the new dispensation.
Still in the 1990s, Greg Mills [1997] warned about the variety of roles that SA perceives itself in and the limitations the country has to contend with in its position between the West and Africa, having the option to play an expanded role in Africa and the risk that regional involvement will detract from domestic imperatives. The study addresses the main objective of SA's foreign policy, the strategies to adopt to achieve it, and the attributes needed to ensure that correct policies are adopted and followed. Again, Greg Mills [1996], in his South African foreign policy: the year (1996) in review, states that, since April 1994, South African foreign policy has attempted to steer a neutral path, and has concentrated on 'universality', expanding SA representation abroad, and increasing diplomatic presence in South Africa. Greg Mills contends that the 'New South' concept may revitalize a foreign policy which lacks overall direction, is over-focused (and under-spent) on Africa, and has organizational difficulties causing it to have a bad name. The 1996 review further advises that foreign policy should rest in the hands of elected officials and policy professionals rather than in the hands of the President's advisors. A study by Roland Henwood [1997] divides the development of SA's foreign policy into two phases: National Party rule and the ANC-led government (1994-), including the transition period of 1990 to 1994, which formed the foundation of post-1994 foreign policy. The study details aspects of both periods, and continues with a brief look at foreign policy formulation and implementation, as well as SA's relations with 'problematic' states such as Cuba, Libya, Iran, Syria and the People's Republic of China. Greg Mills [1997] briefly reviews the successes SA achieved in the field of foreign policy, looks at problems encountered, and examines the restructuring of the Department of Foreign Affairs (DFA). The review gives attention to the interpretation of foreign policy, identification of
priorities, budget, the importance of foreign economic relations, the regional dimension, and SA in sub-Saharan Africa. Anthoni van Nieuwkerk's [1998] study on South Africa's emerging Africa policy examines the emerging Africa posture of the post-1994 South African government, focusing the discussion of foreign policy in the new South Africa around four themes: the views of foreign policy makers in Pretoria, the views of analysts and critical scholars, the fact of African reality, and foreign policy insights gained from the discussion. Denis Venter's [1998] South African foreign policy in the African context points out that South Africa's foreign policy has gone (and is still going) through a process of profound change, and that the dimensions of its relationship with Africa are likely to focus on issues such as socio-economic development, trade, technical aid, migration, resource management and ecological concerns rather than on narrow military security issues.
Greg Mills [1998] updates an earlier and abridged version of this chapter published in 1997, to identify the main foreign policy tracks followed in SA since 1994. Mills poses the following questions: what is the main overriding objective of the policy; what strategies should be followed to achieve it; what tactics should be followed to steer it towards ensuring that correct strategies are adopted and followed; and what other factors will shape the pursuit of this policy. In Greg Mills's [1999] South African foreign policy in review, the Asian currency crises, the 'millennium bug', the proliferation of weapons, and continuing instability in parts of southern Africa were identified as having dampened the high hopes of a true African renaissance in post-Cold War, post-apartheid Africa. Mills situates SA's foreign policy in its international and regional context, and suggests that rationalization and cost cutting will be necessary, especially in view of global events.
Zondi Masiza [1999] also argues that the political parties that contested the June 1999 elections in South Africa hardly raised foreign policy as an issue. He tries to explain the silence on foreign policy issues during the elections and asks whether South African public opinion on foreign policy is strong enough to influence its direction at all. Jakkie Cilliers [1999], in turn, provides a broad framework for reviewing South Africa's emerging foreign policy identity on the eve of the second elections in June 1999 and the turmoil that has come to characterize much of the African continent in recent years. Cilliers points out that without stability there will only be war, poverty and the continued marginalization of Africa, and no chance for economic development and growth. Philip Nel [1999] conducts two separate surveys on the foreign policy beliefs of South Africans, based on the same questions, for 'mass opinion' on the one hand and 'elite opinion' on the other. The study shows that South Africans are much more concerned about domestic problems than they are about foreign policy issues. The study briefly discusses the decision by the South African government to establish full diplomatic relations with the People's Republic of China (PRC) and to break ties with Taiwan (ROC).
John Seiler [2000], in his Towards fresh perspectives in South Africa's foreign policy analysis, critically assesses Francis Kornegay and Chris Landsberg's [2000] claims that South Africa's foreign policy is dominated, to its detriment, by the old guard, and suggests a different set of assumptions to support public analysis of the formulation and conduct of South Africa's foreign policy. Bronwen Manby [2000] sets out the inconsistencies between theory and practice in South Africa's foreign policy in relation to issues of human rights. The study further outlines the seven principles of South Africa's foreign policy and focuses on South Africa's foreign policy in practice (the human rights regime, peacekeeping, bilateral relations with East Asia, Nigeria and Lesotho, and arms sales). The study concludes that while South Africa's theoretical commitment to human rights has been fully articulated, it is not the light that guides the Department of Foreign Affairs (DFA).
Audie Klotz [2000], in Migration after apartheid: deracialising South African foreign policy, argues that the stasis in South African immigration policy derives from identity politics: rather than the country embracing the outside world through deracialisation, xenophobia now prevails. The study concludes that the rising tide of xenophobia against the influx of fellow Africans creates a potent barrier to reforms in immigration policy. Vincent Williams [2001] also points out that bilateral, multilateral and/or regional agreements between countries in the Southern African region tend to focus on cooperation in the economic and security spheres. The labour environment, however, highlights the marked discord between South Africa's pronounced foreign policy objectives and its domestic migration policy and legislation. The study discusses the efforts to draft a regional migration protocol, the suspension of the SADC Draft Protocol on Free Movement in 1999, and the White Paper on International Migration, asking what the link is between foreign policy and migration policy.
Maxi Schoeman [2002] contends that it is South Africa's (and Africa's) position in the global political economy that presently occupies the minds of its foreign policy makers. The study briefly looks at South Africa's foreign policy objectives, structures and strategies, and the prerequisites needed to enhance its international status, and then touches upon initiatives such as the renaissance idea and the NAI or MAP initiative, as well as the drawbacks experienced.
Maxi Schoeman and Chris Alden [2003] provide an overview of South Africa's quiet diplomacy towards Zimbabwe. In order to understand the constraints placed on South Africa's policy actions, the study explores the role and actions of the international, mainly Western, community and the foreign policy behavior of African countries. The study also analyses the constraints on South Africa's policy making. Another example of a seminal work on post-1994 SA foreign policy is Chris Alden and Garth le Pere's [2003] Adelphi Paper, which provides a succinct analysis and assessment of South African foreign policy from the onset of the democratic transition to 2003, focusing on the question of South African leadership in the context of this transition.
Chris Landsberg and David Monyae [2006] also review how South Africa's principal foreign policy actors define the country's international role conceptions, and discuss the country's view of its global role. The study considers seven South Africa-specific international roles, namely: voice, example setter, mediator-integrator and regional sub-system collaborator, diplomat, bridge builder, activist multilateralist, and faithful ally.
Studies on Parliament-Foreign Policy Nexus:
With reference to parliament's role in foreign policy making in SA, there are very few studies in this area. Some of the available sources include Parliament and foreign policy by Raymond Suttner [1996], which debates the question of whether parliament should be concerned with the formulation of foreign policy or not, and discusses the situation in South Africa after April 1994, where as yet no institutionalized mechanism exists whereby a creative relationship can be formed between the department and the portfolio committee concerned with foreign policy. In her work, Jo-Ansie van Wyk [1997] deals with the broader context of SA's external relations in 1996. The work is organized around the activities of parliamentary bodies and instruments concerned with foreign policy issues, and the influence, if any, of these institutions on foreign policy decision making. Tim Hughes [2001] outlines the role played by the parliamentary portfolio committee on foreign affairs (PCFA), summarizing its activities for the year 2000-2001, and provides a brief evaluation of the performance of the committee. The author asks whether the committee is doing enough and concludes with recommendations for its functions in the future. Jo-Ansie van Wyk [2001] provides a frank view of the structures and procedures involved in foreign policy making in South Africa, especially from 1999 to the 2 June 2000 elections. Again, Tim Hughes [2005] examines the process and exercise of democracy in all the parliaments of the region. Hughes tries to contribute to strengthening parliamentary democracy throughout Southern Africa and makes recommendations on how its application and implementation in each country can be improved, strengthened and sustained. Philip Nel and Jo-Ansie van Wyk [2003] examine some of the ostensible public participation deficiencies encountered in foreign policy making in South Africa. The authors argue that the citizenry of South Africa is largely excluded from decision making on public policy issues
beyond the borders of their state. This contributes to their disempowerment in the face of seemingly inevitable and anonymous forces of globalization, and adds to their alienation from and apathy towards foreign policy.
In summary, there is no doubt that the selected studies under review deal extensively with the core of South Africa's foreign policy in terms of options and actions and, to a certain extent, with the role of the parliament. However, there has been no significant comparative study of at least two similar countries in Southern Africa to show whether the experiences of developing or new democracies differ or are similar, to what extent they do so within this context, and in what way[s] these common experiences can be managed in the interest of deepening democratic practices and processes.
Motivation & Methodology
As noted earlier, the most important observation or gap is that quite a number of studies on South and Southern Africa's parliaments have not compared the experiences of similar countries in SADC with specific reference to the parliamentary role in foreign affairs, either as a critical part of, or as consistent opposition to, executively determined policies. Thus a comparative study of two similar post-1990 prime democracies in Southern Africa, intended to show the quality of executive-legislature cordiality [or otherwise] over foreign affairs, cannot but be regarded as a very significant enterprise. Therefore the overriding motivation for this study is broadly to add to the growing literature on parliamentary activism in Southern Africa. Specifically, the study holds the promise of graphically describing the role and relevance of South Africa's and Namibia's third-wave parliaments in foreign affairs vis-à-vis their relationship with the executive, and, by extension, of utilising the window of opportunity this provides to measure the texture and state of health of democratic institutions in Southern Africa. It is against this background that this study is designed to answer the following questions. First, what is the role of the PCFAs in the legislatures of South Africa and Namibia, and what are the similarities as well as differences in the attitudes and practices of both PCFAs towards foreign policy issues? Second, what are the challenges that have faced, and are still facing, either or both parliaments with reference to executive-legislative relations over foreign affairs? Finally, in addition to the constitutional mandate, what other reason[s] are there for parliamentary interest in foreign affairs?
With regard to sources of data and methodology, data were predominantly sourced from primary sources such as media reports, libraries and personal interviews with the actors/MPs/members of the Foreign Affairs Committees in both parliaments. This approach helps to take advantage of current history and the political process in the ways they have unfolded and interacted. It has also helped to show clearly why and how parliamentarians in Namibia and South Africa have engaged, or have not been involved [as they would have wanted to be], in diplomatic matters and foreign affairs over time.
Parliament and Foreign Policy in South Africa and Namibia
Given the history of the long struggle for liberation in both South Africa and Namibia, it is not surprising to discover that both countries' foreign policy objectives reflect the desire to advance the cause of peace and freedom in Africa and, by extension, the international community. Article 96 of the Namibian Constitution [1998] highlights the country's foreign policy objectives as follows: 'The State shall endeavour to ensure that in its international relations it: [i] adopts and maintains a policy of non-alignment; [ii] promotes international peace and security; [iii] creates and maintains just and mutually beneficial relations among nations; [iv] fosters respect for international law and treaty obligations; [v] encourages the settlement of international disputes by peaceful means'. In the case of South Africa, the Foreign Policy Discussion Paper [1996], which is retrievable from the government website, outlines principles which serve as guidelines in the conduct of South Africa's foreign relations. These include: [i] a commitment to the promotion of human rights; [ii] a commitment to the promotion of democracy; [iii] a commitment to justice and international law in the conduct of relations between nations; [iv] a commitment to international peace and to internationally agreed-upon mechanisms for the resolution of conflicts; [v] a commitment to the interests of Africa in world affairs; and [vi] a commitment to economic development through regional and international cooperation in an interdependent world.
Hence a critical concern such as the state's external relations is of paramount interest to all parliaments. Even then, this engagement of the legislature with foreign affairs, and the whole range of legislative activities that criss-cross the terrain of external relations and engagement with the diplomatic community, aptly described here as 'parliamentary diplomacy', is not the job of the whole house in any representative democracy. Representation is one of the hallmarks of modern democracy, and the arm/organ of the state that best illustrates this assertion is the parliament/legislature. South Africa and Namibia, both of which underwent democratic transitions in the 1990s, have adopted bicameral legislatures. The Namibian National Council, consisting of 26 members, is the upper chamber, while the National Assembly, the lower house, has 78 members. South Africa has a total of 490 parliamentarians, with the National Assembly consisting of 400 members, while the National Council of Provinces [NCOP] has 90 members.
A division of responsibilities and competencies, with checks and balances built into the political system to prevent the abuse of executive powers, is a feature of all liberal democracies, whether parliamentary, presidential or some combination of the two. Thus one key role of the legislature is to check, challenge, monitor and legitimize policies undertaken in the name of the state by the executive branch of government. Indeed, it could be argued that, if there is no tension between a parliament and the executive, the former is not performing its proper role. Specifically, Parliamentary Committees on Foreign Affairs [PCFAs] are often created to deal with issues of foreign relations and international/diplomatic affairs.
In South Africa and Namibia there are Parliamentary Committees on Foreign Affairs [PCFAs] which, though they vary in numerical strength and in the issues and concerns covered, both take cognisance of the multi-party nature of the two countries. The PCFAs in both countries reflect the political parties in parliament in proportion to their percentage in the whole house. This practice also applies within the context of gender mainstreaming: in South Africa, thirty-three percent [33%] of the Joint PCFA are women, while in Namibia twenty-seven percent [27%] are women.
The Role of PCFAs in South Africa and Namibia: Globally, parliaments had been widely expected to decline in significance in the latter part of the twentieth century, but instead they have developed new and vital political roles and have innovated in their institutional structures, most recently in newly organised or invigorated parliamentary committees, not only in a few parliaments but across most political cultures and systems. Even as newly democratic parliaments throughout Africa experiment with elaborate committee structures, those with older, highly developed committee systems are reaching for more varied and flexible alternatives. In short, parliamentary committees have emerged as vibrant and central institutions of the democratic parliaments of today's Africa. Further, in most parliaments the PCFA is one of the fundamental portfolio committees, and we can hardly find any country's parliament in Africa without a PCFA. This is predicated on the nexus between the national interest and foreign affairs as a major platform to advance the same. Thus in Namibia, the duty of the parliamentary standing committee on foreign affairs, defence and security is to: [i] consider any matter it deems relevant to defence, home affairs, foreign affairs, the Namibia Central Intelligence Service (NCIS), and prisons and correctional services; [ii] consult and liaise with such offices, ministries and agencies as necessary; [iii] exercise an oversight function with regard to Namibia's foreign policy and its relations with other states on matters of defence and security; [iv] investigate issues relating to the policies, standards and procedures followed by the Namibia Central Intelligence Service; and [v] probe issues relating to human rights violations; obtain information from government or other sources regarding any real or perceived threat to the security of the Republic of Namibia; enquire into and monitor international protocols, conventions and agreements that may affect Namibia's foreign policy, defence and
security; and, where necessary, make recommendations to the National Assembly.
In South Africa, Tim Hughes [2002] has argued that the PCFA is fundamentally created and tasked with maintaining oversight of: the exercise of national executive authority within the sphere of foreign affairs; the implementation of legislation pertaining to the sphere of foreign affairs; any executive organ of the state within the sphere of foreign affairs; and any other body or institution in respect of which oversight was assigned to it.
The PCFA also enjoys considerable specific powers. It may monitor, investigate and make any recommendations concerning any constitutional organ of state within its purview. The committee is granted such powers with regard to the legislative programme, the budget, rationalisation, restructuring, functioning, structure or staff, and policies of any organ of state or institution. Furthermore, the committee is to consider all bills and amendments to bills referred to it. A further role unique to the PCFA is the consideration and approval of all international conventions and treaties prior to their ratification by parliament. In the new millennium, the means of engagement and involvement remain largely the same as before, and these include the following. Briefing and question time: briefing is an age-old mechanism for parliamentary involvement in foreign affairs. It also includes parliamentary sessions for debates, briefings, question time and press releases. In South Africa the first PCFA briefing session in the new millennium was held on 3 February 2000; indeed the first four PCFA sessions were primarily for briefing. Between 3 February 2000 and 1 June 2008, there are about one hundred and ninety [190] entries with reference to documented activities or meetings of the Joint Parliamentary Committee on Foreign Affairs [PCFA]. However, almost one hundred and fifteen [115], or about 57 percent, of these were briefings and reporting/question time sessions. In terms of regularity, the Parliamentary Committee on Foreign Affairs in South Africa is relatively active, with about four meetings/press releases per month. The subject matter also seems to weigh heavily in favour of African issues, which claim almost forty percent [40%]. This is not unconnected with South Africa's new role in Africa as a political gladiator as well as a major player in Africa's new and emerging market for foreign direct investment, the more so as the executive arm of government seems
to be leading the African renaissance project in the wake of the transformation of the OAU into the AU in Durban in 2002.
But in Namibia the entries from 2004 to 2008 contain only one item that deals directly with the issue of foreign affairs. There is hardly better evidence that parliamentary diplomacy in Namibia operates at a low level: almost all external and foreign matters were dealt with exclusively by the executive. With specific reference to the activities of the Parliamentary Committee responsible for Foreign Affairs, Security and Defence, the committee only reported on its visit to the hardship missions in March 2005. The three other reports between 2004 and 2006 were all about the defence component of the committee's responsibilities. These reports include the Report on the visit to military installations [2005], the Report on the motion on the increase in criminal activities and violence against innocent and vulnerable Namibians [November 2006], and the Report of the committee on visits to police stations/cells, prisons, border posts and military installations in the north-east and southern regions in 2006. Visitation/representation/fact-finding missions: this is another means by which parliaments, particularly the PCFAs, seek to play an active role in the monitoring of national interests that border on foreign affairs. In Namibia, the National Assembly's standing committee on foreign affairs, defence and security visited some Namibian diplomatic missions classified as 'hardship missions' in July 2007. It was reported in the Parliament Journal [2007] that the committee members discussed with the heads of missions and their staff (both Namibian and local) the difficulties that they experience in fulfilling their duties. The committee looked at the following issues that affect Namibian hardship missions: economic situations; security situations; effectiveness of communication with the host country; water and electricity supply; education; health; and accommodation.
In South Africa the PCFA is regarded as a 'mobile committee', because its members travel often on missions, given the nature of the committee's mandate. As a result there are numerous reports of visits to other countries and fact-finding missions. However, to what extent these reports feed into executive decisions on those issues is still very hard to ascertain. Thus, in the following section, the article explores the nature of executive-legislative friction over foreign affairs in South Africa and Namibia since the beginning of the new millennium.
Evidence of Executive-Legislative Friction over Foreign Affairs
After a series of conversational interviews with a number of parliamentary actors and observers in Namibia and South Africa, it became very obvious that there is a kind of low-level legislative-executive friction over the conduct of foreign affairs in both countries. There are a number of indicators to illustrate this point, and these include the following. Complaints over the budget and the foreign policy process: first, the budgets of the Ministry of Foreign Affairs [MFA] in Namibia and the Department of Foreign Affairs [DFA] in South Africa in practice do not get scrutinised by the PCFAs, and the committees are seldom consulted on the issue. This often draws the flak of the PCFAs in both countries. The crux of the friction is that the PCFA's budget remains the prerogative of the executive in both countries. With specific reference to Namibia, the PCFA does not provide an opinion, and the estimates it submits often get jettisoned or are not considered most of the time. According to the PCFA Clerk interviewed in Windhoek [who had worked at the National Parliament for twelve years], 'most of the time, to the surprise of the PCFA, the actual allocation only gets known on the floor of the whole house or, worse still, on the pages of newspapers' [Personal Interview, November 2007]. Secondly, anecdotal evidence suggests that the PCFA does not get to provide input on foreign policy at the design stage and is often not involved at the implementation level. The Clerk interviewed provided further evidence to corroborate this point: 'I don't remember when the PCFA was last invited by the Ministry of Foreign Affairs [MFA] for discussion on any foreign policy issue… no control at all, and yet only the PCFA is mandated to do this…
' [Personal Interview, November 2007]. The Clerk of the PCFA further revealed that 'it was not until 2006, when there was a major uproar about a military-related incident in the northern part of Namibia, that the PCFA & Defence began to directly make recommendations' [Personal Interview, November 2007]. On the part of the executive, the PCFA's visits to Namibian embassies [hardship missions] abroad, though they relate to the oversight functions of the PCFA, seem to have attracted serious reservations and complaints from the executive arm of government. The PCFA was perceived as interfering in the business of administering the country's foreign affairs.
Limited Role in Envoy Nomination & Deployment:
With reference to ambassadorial selection and posting, the PCFAs in both countries feel sidelined. The PCFAs often complain that they have a very limited role, if any, in the selection and deployment of ambassadors, high commissioners and consuls-general. Evidence gathered also suggests that the PCFAs do not often have the opportunity to scrutinise the credentials and integrity of would-be envoys before posting. This is a profound concern, yet the PCFAs cannot do much within this context, because this practice and responsibility is not regarded as a constitutional mandate. Again, apart from seeking involvement in the selection, scrutiny and ratification of South African and Namibian envoys going abroad, the parliament should also be a major venue for courtesy calls by incoming foreign diplomats soon after the presentation of their letters of credence. It is further argued that, because of the significant status the legislature enjoys in any democratic governance, departing envoys should recognise this role and appreciate the need to include the parliament in their farewell tours. This seems not to be a common practice as yet in Namibia and South Africa, though there are exceptions. For example, the Parliament Journal [2007] recorded the farewell visit of the U.S. Ambassador to Namibia, Ambassador [Mrs] Joyce Barr, who paid a courtesy visit to the Speaker of the National Assembly, Hon. Dr. Theo-Ben Gurirab, on 17 July 2007 to bid farewell to the Parliament.
Limited role in the ratification of treaties, conventions and protocols: after a rigorous content analysis of the various annual reports released by Namibia's Ministry of Foreign Affairs [MFA] and South Africa's DFA, it became obvious that it is only in theory that both parliaments advise government on the ratification of international treaties and conventions. With specific reference to Namibia, most of the treaties signed between 2005 and 2006, from the Stockholm Convention on Persistent Organic Pollutants to the Agreement Concerning the Treatment of War Graves of Members of the Armed Forces of the Commonwealth in the Territory of the Republic of Namibia, did not enjoy any significant verification or contribution from the parliament. Rather, the Attorney-General of the Republic of Namibia seems to have been the preferred partner on these international matters. In fact, the MFA Annual Report [2006] noted that 'The Attorney-General, being the principal legal adviser to government, approves all the principal bilateral and multilateral agreements before they are entered into'. Decisions on strategic matters without recourse to the PCFA: in the late 1990s, troops or a peace-keeping force was sent from Namibia to intervene in the DRC conflict, and specifically to support the Kinshasa regime led by Laurent Kabila until he was assassinated by Congolese armed rebels. This action led to mild friction between the executive and the legislature, based on the argument that the executive cannot send the military out on such sensitive and high-risk missions without the consent of the citizenry through the parliament. Another example of this limitation in the new millennium relates to the inability of the parliament to engage in a debate that would lead to a government position on UN reform: while the debate on UN reform lasted, with reference to African representation and position on the issue, the Namibian legislature did not officially discuss the issue nor were the
parliamentarians able to offer any advice with reference to an official government position. However, unlike the self-imposed or executively orchestrated limitation experienced by the parliament, the executive arm of the government issued a statement on the subject matter in September 2005. Isaak Hamata [2006] reported how President Hifikepunye Pohamba, in his address to the 60th General Assembly of Heads of State and Government, pledged Namibia's willingness to assist in the democratisation of the world body. The President further stated: 'if the UN is to continue serving the interests of the world, it needed to be reformed… we must be guided by the very principles of democracy, equity, justice and fairness for all. At the centre of this overdue exercise must be the compelling need to better serve all peoples, regardless of their race, religion or status of development'. Documentary sources and personal interviews have established that South Africa under Thabo Mbeki sought to promote economic justice and redesign fairer global North-South relations. In the new millennium, presidential diplomacy was at work when, at various international forums, the former president of South Africa harped on the importance of debt relief and the elimination of poverty in Africa. In June 2002 it was the president who presented NEPAD to the G8 in Kananaskis, Canada, on behalf of Africa, and in August/September South Africa hosted the UN World Summit on Sustainable Development. By June 2003 President Mbeki had handed over the chairmanship of both the Non-Aligned Movement [NAM] and the African Union, but his government continued to devote much attention to resolving the conflict in the Great Lakes region. Again, since South Africa took up a non-permanent seat on the UN Security Council, the country has tried to present and pursue the African agenda, though it has also come under intense international criticism for voting not to take a stand against human rights abuses in Myanmar and
opting for a quiet diplomacy approach towards Zimbabwe. The central argument here is that most of the initiatives above were presidential in nature, from conceptualisation to implementation. In other words, from the NEPAD idea to how South Africa voted on the Security Council, the executive/presidency was completely in charge.
From the foregoing, as much as the parliaments desire to be involved in foreign affairs, the South African and Namibian experiences have shown that 'presidential diplomacy' often supersedes 'parliamentary diplomacy'; that is, the executive arm of government has pre-eminently been in charge of the business of managing the foreign and diplomatic relations of both countries. But what informs parliamentary interest in foreign affairs? This question is addressed in the next segment of this article.
Explaining Executive-Legislative Friction over Foreign Affairs
In addition to the constitutional requirement, why do parliamentarians seem to be interested in foreign affairs and in how diplomatic business is conducted in any democracy? By means of conversational interviews in both countries, it emerged that the idea of the national interest is central to the raison d'être of parliament-executive friction over foreign affairs. Although the term national interest is somewhat ambiguous, one can agree with Peter Shearman [1997], who usefully defines it in terms of the common good of a society within the bounds of a nation-state. That is to say, although there are conflicting interests between groups in domestic society, there exist general and common benefits to society that all members share, irrespective of individual or group preferences on other issues. The basic common interests of any state are survival for itself and its population, maintaining the territorial integrity of the state, and enhancing its status and position in relation to other states. Conceptions of the national interest provide a powerful dynamic for mobilizing domestic society around specific political programmes and issues. A constant feature of domestic politics in all types of pluralist political systems is competition between political groups to be seen as the one group that offers the best safeguard for maintaining national interests.
National interests are linked to perceptions of identity. Images of a nation and its place in the world can be drawn upon to mobilise what William Bloom [1990] refers to as a 'national identity dynamic', with government and opposition groups drawing upon, creating, and manipulating these images for their own ends in a struggle for political power. The assumption here is that political elites manipulate a social-psychological dynamic relating to a conception of national identity which is itself determined by the external environment. In other words, conceptions of the national self are linked to perceptions of the external other. Without taking this socio-psychological argument too far, these ideas of national identity linked to national security and perceptions of the international environment are useful for understanding the executive-legislative friction, though at a low level, over foreign affairs in South Africa and Namibia in the new millennium.
Foreign policy and diplomacy can be viewed as the means to ensure the objective of defending the national interest and, hence, simultaneously strengthening national identity. Foreign policy also provides, as Philip Cerny [1979] has put it, 'the specific instrument par excellence at the disposal of elites hoping to mobilise the population of a legally-recognised nation state towards legitimation and political integration'. There are four important reasons why foreign policy and competing conceptions of national interests should be so powerful in the mobilisation of domestic society.
First, national interests are universal interests shared by all members of the society, transcending other cleavages based upon ethnicity, religion, culture, or class. Hence political groups are provided with the most potent force for mobilising the widest possible sections of the society.
Second, foreign policy provides a perfect discourse of politics that allows for escape from objective verification. Unlike specific economic or social policies, the features of foreign policy, designed to defend the national interest, are removed from the standards of immediate or short-term tests that can easily reveal failure.
Third, foreign policy is often more emotional as an issue affecting society, yet far more remote in terms of its impact on the individual. As an emotive issue, the mass national public will always react favourably to policies which seem to enhance the national interest, and negatively to policies which are seen as undermining it.
Fourth, foreign policy facilitates, much more readily than domestic policies, opportunities for the emergence of strong and charismatic leaders, who, wrapping themselves in the national flag and the rhetoric of national identity, portray themselves as the only effective defenders of the national idea.
Conclusion and Summary
In conclusion, the inability of parliaments to influence the executive on strategic diplomatic matters, and their seeming second-fiddle role in foreign affairs, may of course be due to one other reason. In addition to constitutional limitations, another reality is the multiplicity of actors and forces exerting influence on the executive in this age of globalisation. The role of other actors, such as foreign powers, opposition political parties, the civil society/third sector, and the media, is as crucial as that of national parliaments, if not more so. These other actors influence state behaviour more often than imagined and can be extremely strong in pushing agendas through the executive arm of government, sometimes by literally arm-twisting the executive in technical negotiations.
In summary, the study has established that the post-1990 broad-based and all-inclusive democratic governance in Africa, with specific reference to Namibia since 1990 and South Africa since 1994, also incorporates a great deal of parliamentary activism. We have established that foreign policy is always contested ground between executive and legislature, with the latter (even in developed democracies) always coming through as playing second fiddle in foreign affairs. The study located the attractiveness of foreign affairs to both parliament and executive within the orbit of the national interest, as defined by the policy elite. The article further described the parliamentary approach to participating in foreign affairs through the PCFAs. It argued that though the PCFAs are tasked with specific oversight responsibilities relating to foreign affairs, they nevertheless often find it difficult to do enough or to do more. The study used several indicators to illustrate the PCFA/Parliament's frustration with the executive over the administration of foreign affairs. Finally, the article noted the multiplicity of actors as one reason the parliament-executive friction over foreign affairs may not be resolved soon.
[i] Briefing by P. Hain, the British Minister of State for Foreign & Commonwealth Affairs, on 3 February 2000; [ii] briefing by the Minister on South African activities in Africa on 15 February 2000; [iii] budget briefing on 1 March 2000; and [iv] briefing on the Indian Ocean Rim on 8 March 2000.
Reduced expression of brain cannabinoid receptor 1 (Cnr1) is coupled with an increased complementary micro-RNA (miR-26b) in a mouse model of fetal alcohol spectrum disorders
Background Prenatal alcohol exposure is known to result in fetal alcohol spectrum disorders, a continuum of physiological, behavioural, and cognitive phenotypes that include increased risk for anxiety and learning-associated disorders. Prenatal alcohol exposure results in life-long disorders that may manifest in part through the induction of long-term gene expression changes, potentially maintained through epigenetic mechanisms. Findings Here we report a decrease in the expression of Cannabinoid receptor 1 (Cnr1) and an increase in the expression of the regulatory microRNA miR-26b in the brains of adult mice exposed to ethanol during neurodevelopment. Furthermore, we show that miR-26b has significant complementarity to the 3'-UTR of the Cnr1 transcript, giving it the potential to bind and reduce the level of Cnr1 expression. Conclusions These findings elucidate a mechanism through which some genes show long-term altered expression following prenatal alcohol exposure, leading to persistent alterations to cognitive function and behavioural phenotypes observed in fetal alcohol spectrum disorders.
Fetal alcohol spectrum disorders (FASD) describe the continuum of phenotypic effects that may result from prenatal alcohol exposure (PAE). PAE is the most common cause of preventable neurodevelopmental disorders in North America [1,2] and is associated with attention deficit, impaired learning and memory, and hyperactivity [3], as well as an increased risk for anxiety and mood disorders [4]. These cognitive and behavioural changes persist throughout the life of an individual following PAE, though the mechanisms involved in maintaining these life-long changes are not well understood. However, it has been suggested that the effects of PAE may involve long-term changes in gene expression [5] that may be maintained through alcohol-induced epigenetic changes. In particular, we have previously reported that the expression of microRNAs (miRNAs) may be globally altered in the adult mouse brain following PAE [6], which supports recent data by other groups [7,8]. More specifically, these changes in miRNA expression may subsequently alter the expression of target genes, with one miRNA having the potential to regulate many different genes [9]. One such gene may be cannabinoid receptor 1 (Cnr1).
We have previously shown that early neonatal ethanol exposure in mice results in reduced Cnr1 gene expression in the adult brain [5]. Cnr1 acts within the endocannabinoid (eCB) system, involved in modulating neurophysiological processes controlling mood, memory, pain sensation, and appetite [10]. Cnr1 is also thought to be involved in the neuropharmacological effects of alcohol [11] through inhibition of glutaminergic and GABAergic interneurons [12]. Variations in this gene or alterations in its expression are also associated with mood disorders, particularly fear and anxiety phenotypes [13].
Here, we use a C57BL/6J mouse model of binge-like exposure during the period of synaptogenesis [5] to assess a potential relationship between Cnr1 and its putative regulatory miRNA, miR-26b. We evaluated the inverse expression patterns of these two transcripts, hypothesizing that the up-regulation of the miRNA following PAE may in part be responsible for the observed reduction in transcript of a target gene in the adult brain. In these experiments, mice were exposed to two acute doses of alcohol (5 g/kg) at neurodevelopmental times representing the human third trimester equivalent. This method has been previously reported and induces a peak blood alcohol level of over 0.3 g/dL for 4 to 5 hours following injection, and is sufficient to induce neuronal apoptosis and result in FASD-related behaviour [5,14,15]. Our results suggest that ethanol exposure during neurodevelopment may exert its long-term effects by altering the expression of regulatory miRNAs, which may then reduce the expression of a number of target genes that may contribute to the spectrum of phenotypes observed in FASD.
Gene expression data were previously generated through microarray analysis (GEO # GSE34539) of RNA isolated from whole brain tissue of 60-day-old male mice exposed to binge-like levels of alcohol during the third trimester equivalent, on postnatal days 4 and 7 (see [5] for methods). miRNA expression array data (GEO # GSE34413) were also generated from the same samples (see [6] for methods).
Analysis of these data shows a reduction of Cnr1 (fold change = −1.33, P = 6.07 x 10^-5) in ethanol-treated brains compared to saline controls. Similarly, the miRNA miR-26b was increased in ethanol-treated mice (fold change = 1.284, P = 0.0364) compared to controls.
The potential interaction of the genes and miRNAs identified as differentially expressed by the array studies were analysed using Ingenuity's® Micro-RNA Target Filter. This analysis identified miR-26b as a high-confidence predicted regulator of Cnr1 expression.
The reduction of Cnr1 transcript was confirmed by real-time RT-PCR [5], showing a 1.14-fold decrease in expression in ethanol-treated male brains as compared to matched controls (P = 0.004; Figure 1A). Further, we demonstrated a significant increase in the level of miR-26b miRNA in ethanol-treated samples (fold change = 3.71, P = 0.012) compared to matched controls (see [6] for methods) (Figure 1B). This inverse relationship within the same sample set suggests that the two observations may be biologically related. This potential interaction was further analysed using the TargetScan® Human 6.2 predictor for miRNA targets [16], which shows that the seed region of miR-26b possesses complementarity to the 3'-UTR of the Cnr1 transcript and has significant potential to bind this region (Figure 2). The probability of conserved targeting (PCT) analyses the preferential conservation of binding sites [16]. It has the advantage of identifying targeting interactions that are not only more likely to be effective but also more likely to be consequential for the animal, given the evolutionary conservation. The analysis calculated a PCT score of 0.84, which indicates a significant degree of confidence in the predicted interaction. Next, we evaluated expression of Cnr1 and miR-26b to confirm their relative expression levels.

Figure 1. Analysis of gene and miRNA expression via qPCR. (A) Change in Cnr1 mRNA levels in male control and alcohol-treated whole brain samples, normalized to control. This figure was reproduced with permission from the authors [5]. (B) Change in miR-26b levels in male control and alcohol-treated whole brain samples, normalized to control. Data are fold change ± SEM. Control n = 5, alcohol n = 5. *P < 0.01, **P < 0.05.
miR-26b is encoded from an intron of small C-terminal domain phosphatase [17]. Interestingly, it is involved in neuronal differentiation, as its transcription results in a negative feedback loop that is absent in neural stem cells [18]. miR-26b has also been shown to regulate the expression of brain-derived neurotrophic factor (BDNF), a gene strongly implicated in neurodevelopment and related disorders (e.g., schizophrenia) [19], including the effects of PAE [5].
This altered expression of miR-26b may have the ability to affect downstream gene expression by binding to the mRNA transcripts of its target genes. We have demonstrated that miR-26b shows complementarity to a region of the 3'-UTR of the Cnr1 transcript (Figure 2), which gives it the potential to regulate the expression of Cnr1. This regulation by miRNAs generally occurs through blocking of translation and/or promoting degradation of the target transcript [9]. The up-regulation of miR-26b correlates with the reduced Cnr1 transcript observed in the adult brain of mice neurodevelopmentally exposed to alcohol [7]. Our results suggest that this regulatory mechanism also occurs in vivo, and that the stable alteration of a miRNA as a result of neurodevelopmental teratogenesis may affect long-term gene expression of its target transcript(s) long after exposure.
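Target prediction of the kind described above rests on seed complementarity: the miRNA's nucleotides 2-8 must pair (Watson-Crick) with a site in the target 3'-UTR. A minimal sketch of that matching step is below; the sequences are illustrative placeholders, not the actual miR-26b or Cnr1 sequences, and real predictors such as TargetScan add site-type, context, and conservation scoring on top of this.

```python
def revcomp_rna(seq):
    """Reverse complement of an RNA sequence."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def seed_sites(mirna, utr):
    """Return 0-based start positions of perfect 7mer seed matches:
    UTR stretches complementary to miRNA nucleotides 2-8."""
    site = revcomp_rna(mirna[1:8])  # seed = positions 2-8 (0-based 1..7)
    return [i for i in range(len(utr) - 6) if utr[i:i + 7] == site]

# Placeholder 21-mer miRNA and a toy UTR built to contain one seed match
mirna = "UUCAAGUAAUUCAGGAUAGGU"
utr = "GGGUACUUGACCC"  # holds the reverse complement of "UCAAGUA" at index 3
print(seed_sites(mirna, utr))  # -> [3]
```

One hit in the UTR is enough to flag a candidate interaction; whether it is functional is then judged by conservation metrics such as the PCT score cited in the text.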
It is possible that relationships such as these may have the ability to influence the aberrant behavioural phenotypes seen in FASD. The eCB system, for instance, plays a strong role in anxiety-related behaviour [20], which has been shown to increase in adult mice following PAE [21]. Previous studies evaluating Cnr1 knockout mice have demonstrated increased anxiety-like phenotypes [13]. This suggests that the observed reduction in Cnr1 expression demonstrated here may contribute to our observation of anxiety-like behaviour following PAE.
Ultimately, these findings provide a mechanism by which the long-term change in Cnr1 expression is maintained following PAE. They also suggest that the alteration of neurodevelopmentally-important miRNAs can influence the long-term function of biological pathways that influence cognition and behaviour. Epigenetic regulators of gene expression may thus be affected by PAE, subsequently exerting pleiotropic effects on numerous gene targets that then contribute to the long-term and variable neurobehavioural effects associated with FASD.
Competing interests
The authors declare no competing financial interests.
Pou4f1 and Pou4f2 Are Dispensable for the Long-Term Survival of Adult Retinal Ganglion Cells in Mice
Purpose To investigate the role of Pou4f1 and Pou4f2 in the survival of adult retinal ganglion cells (RGCs). Methods Conditional alleles of Pou4f1 and Pou4f2 (Pou4f1loxP and Pou4f2loxP, respectively) were generated for the removal of Pou4f1 and Pou4f2 in adult retinas. A tamoxifen-inducible Cre was used to delete Pou4f1 and Pou4f2 in adult mice, and retinal sections and flat mounts were subjected to immunohistochemistry to confirm the deletion of both alleles and to quantify changes in the number of RGCs and other retinal neurons. To determine the effect of loss of Pou4f1 and Pou4f2 on RGC survival after axonal injury, controlled optic nerve crush (CONC) was performed and RGC death was assessed. Results Pou4f1 and Pou4f2 were ablated two weeks after tamoxifen treatment. Retinal interneurons and Müller glial cells were not affected by the ablation of Pou4f1 or Pou4f2 or both. Although deletion of both Pou4f1 and Pou4f2 slightly delays the death of RGCs at 3 days post-CONC in adult mice, it does not affect the progress of cell death afterwards. Moreover, deletion of Pou4f1 or Pou4f2 or both has no impact on the long-term viability of RGCs up to 6 months post-tamoxifen treatment. Conclusion Pou4f1 and Pou4f2 are involved in the acute response to damage to RGCs but are dispensable for the long-term survival of adult RGCs in mice.
Introduction
Glaucoma, a retinal degeneration disease characterized by progressive loss of retinal ganglion cells (RGCs), affected over 60 million people worldwide in 2010, and the number will increase to about 80 million by 2020 [1]. As the second leading cause of blindness, glaucoma is responsible for millions of cases of blindness worldwide, and most clinical cases present in advanced condition owing to inconspicuous early symptoms and the lack of effective early diagnosis. Understanding the molecular mechanisms underlying glaucomatous optic neuropathy is crucial for the diagnosis and treatment of glaucoma. The three closely related Class IV POU-homeodomain (POU4F) transcription factors, POU4F1, POU4F2 and POU4F3, are expressed in developing and adult RGCs, and are key components of a regulatory cascade of RGC development and survival [2,3]. During retinal development, Pou4f2 expression starts in more than 80% of RGC precursors at embryonic day 11.5 (E11.5), the time when RGCs are first generated [2,4]. Afterwards, Pou4f1 and Pou4f3 are expressed in 80% and 20% of developing RGCs, respectively [2,5,6].
Targeted deletion of Pou4f2 leads to a loss of about 80% of RGCs, accompanied by severe axonal defects and abnormal visually driven behavior [4,7-12]. Pou4f1 has been shown to control the dendritic stratification pattern of selective RGCs; loss of Pou4f1 alters dendritic stratification and the ratio of monostratified to bistratified RGCs [12]. Although deletion of Pou4f3 alone does not affect the generation and survival of RGCs, Pou4f2/Pou4f3 compound mutants exhibit more severe RGC loss than Pou4f2 mutants, suggesting a redundant role of Pou4f3 in regulating the survival of RGCs [13]. Thus each POU4F gene plays a distinctive role in RGC development and survival, but whether POU4F factors are required for the survival of adult RGCs remains unknown.
Previous studies have revealed that, similar to other neuronal degeneration diseases, the progressive death of RGCs in glaucoma proceeds through the apoptosis pathway [14-19] mediated by the BCL2 family proteins [20]. BAX, the pro-apoptotic BCL2 member required for the normal death of RGCs during development [21,22], has been identified as a major mediator of RGC death in glaucoma [14,23,24]. Deficiency of BAX gives long-term protection of RGC somata and slows axonal loss in glaucoma mouse models [14,23]. On the other hand, the pro-survival factors of the BCL2 family, such as BCL2 and BCL-X, promote cell survival by preventing the activation of their pro-apoptotic relatives [25-29]. Previous evidence has shown that POU4F1 promotes the expression of the pro-survival gene BCL2 and suppresses BAX activation to protect neurons from programmed cell death [30-32]. Meanwhile, overexpression of the pro-survival factors BCL-X and BCL2 protects RGCs from death during development and after axonal injury in the adult [18,33-35]. Interestingly, optic nerve crush leads to a rapid decrease in the expression of POU4F proteins in rat RGCs [36]. Therefore, it is conceivable that loss of POU4F factors could result in progressive degeneration of adult RGCs, render RGCs more sensitive to optic nerve injury, and accelerate the apoptosis of RGCs.
In order to investigate the role of POU4F factors in adult RGCs, we focused on POU4F1 and POU4F2, whose combined expression covers almost all RGCs in the adult retina. We generated Pou4f1 and Pou4f2 conditional null alleles (Pou4f1loxP/loxP and Pou4f2loxP/loxP) and used tamoxifen-inducible CreER to inactivate Pou4f1 or Pou4f2 or both in adult mice. We showed that Pou4f1 and Pou4f2 were effectively deleted two weeks after tamoxifen treatment. Further analysis of RGCs in retinal sections and flat-mount samples surprisingly revealed that deletion of Pou4f1 or Pou4f2 or both had no effect on the total number of RGCs at any timepoint tested, from two weeks to six months after tamoxifen treatment. Furthermore, examination of RGCs after controlled optic nerve crush in Pou4f1/Pou4f2 compound null mice revealed that deletion of Pou4f1 and Pou4f2 did not accelerate the apoptosis of RGCs. Therefore, our results strongly argue that Pou4f1 and Pou4f2 are dispensable for regulating the survival of RGCs in adult mice.
In order to investigate the role of Pou4f1 and Pou4f2 in adult RGCs, we deleted Pou4f1 and Pou4f2 by intraperitoneal injection of tamoxifen at P30 at a dosage of 5 mg/40 g body weight for five consecutive days. Control Pou4f1loxP/loxP and Pou4f2loxP/loxP littermates were given oil vehicle only. We collected retinas from Pou4f1CKO, Pou4f2CKO, and control mice at one week and two weeks after injection and performed retinal whole-mount immunolabeling to evaluate the deletion efficiency (Fig. 2). Immunostaining with anti-POU4F1 and anti-POU4F2 in Pou4f1CKO mice revealed that one week after injection about 20% of POU4F1+ RGCs remained (Fig. 2B and M) compared to the control (Fig. 2A). Two weeks after injection, very few POU4F1+ RGCs remained, indicating a nearly complete deletion of Pou4f1 in the retina (Fig. 2E and M). Similarly, in Pou4f2CKO mice, POU4F2 was expressed in about 28% of RGCs one week after tamoxifen treatment (Fig. 2I and M) and in very few RGCs two weeks after treatment (Fig. 2L and M). Interestingly, deletion of Pou4f1 in adult RGCs did not affect the expression of Pou4f2, and vice versa (Fig. 2).
Deletion of Pou4f1 or Pou4f2 or Both does not Affect the Survival of Adult RGCs Under Normal Conditions
To investigate the role of Pou4f1 in adult RGCs, we collected retinas from Pou4f1CKO and control mice at two weeks, four weeks, three months and six months after tamoxifen treatment. The RGC markers anti-TUJ1 and anti-ISL1 were used to label RGCs in flat-mounted retinas, and DAPI was used to label all cells in the GCL (Fig. 4). After quantification of each cell marker, we found no significant change in the number of RGCs labeled for TUJ1. Previous studies have shown that POU4F transcription factors are redundantly required for the differentiation and survival of RGCs during development [5]. Therefore, we sought to test whether the absence of RGC death in Pou4f1CKO or Pou4f2CKO mice was due to the overlapping expression of Pou4f1 and Pou4f2 in a majority of RGCs. We generated Pou4f1/Pou4f2 DoubleCKO mice and used the same strategy to label RGCs in the GCL (Fig. 6). After quantification, we found that deletion of Pou4f1 and Pou4f2 did not impact the number of TUJ1+ RGCs. To analyze the effect of Pou4f1 and Pou4f2 deletion on RGC axonal elongation, we dissected optic nerves from control and DoubleCKO mice six months after tamoxifen treatment and observed that the optic nerves in the control and DoubleCKO mice appeared similar in size (Fig. 7A, D). Furthermore, SMI32 immunostaining of the retinal wholemounts revealed similar RGC axon bundles in the nerve fiber layer in the control and DoubleCKO retinas (Fig. 7B, E). GFAP immunolabeling (Fig. 7C, F) also revealed no difference in glial activation between control and DoubleCKO mice. Thus, our results suggest that Pou4f1 and Pou4f2 are dispensable for the maintenance of RGC axons. To test the response to axonal injury, we performed controlled optic nerve crush (CONC) in the DoubleCKO and control mice four weeks after tamoxifen treatment and analyzed the number of apoptotic cells labeled by anti-activated caspase 3 in the GCL of flat-mount retinas (Fig. 8). At 3 days after CONC, there was a significant difference between control and DoubleCKO mice (Fig. 8B, E).
However, contrary to an acceleration of cell death, no significant difference was observed between control and DoubleCKO mice at 5 days after CONC (Table 1).
Discussion
POU4F family members are crucial factors controlling the development and survival of a variety of neurons in both the central and peripheral nervous systems. Deletion of each Pou4f gene results in neuronal death phenotypes during the development of different organs [4,9,37-40]. The POU4F family has also been associated with human degenerative disease, such as the progressive hearing loss caused by an 8-base-pair deletion in human POU4F3 that results in a truncated protein [37]. Mutant POU4F3 loses most of its transcriptional activity and its ability to bind DNA, in a non-dominant-negative manner [41]. In the retina, all three POU4F members are expressed only in RGCs. Based on their continuous expression in adult RGCs and the similarity in structure and function of all POU4F members, we hypothesized that, just as mutation of POU4F3 results in hearing loss [37], loss of POU4F function might affect the survival of adult RGCs. Since POU4F proteins promote neuronal survival by activating pro-survival genes and inhibiting pro-death genes [31,32,42-44], POU4F proteins are candidates for rescuing RGCs from death in glaucoma.
In our study, we investigated the role of Pou4f1 and Pou4f2 in adult RGCs by generating the novel Pou4f1CKO, Pou4f2CKO and DoubleCKO mouse models, in which the expression of Pou4f1, Pou4f2, or both can be deleted in the adult by tamoxifen-inducible Cre recombinase. Expression of Pou4f1 and Pou4f2 is significantly reduced one week after tamoxifen treatment and ablated two weeks after treatment (Fig. 2). Strikingly, deletion of Pou4f1 or Pou4f2 or both does not affect the number of RGCs in the retina from two weeks to six months after tamoxifen treatment (Figs. 4-6). In addition, although the apoptosis of RGCs appears delayed in DoubleCKO mice at three days after CONC, there is no significant difference at five days after CONC (Table 1). All these results suggest that, unlike developing RGCs during embryogenesis, the survival of adult RGCs in normal and CONC retinas might not require Pou4f1 and Pou4f2, or at least not Pou4f1 and Pou4f2 alone. During RGC development, Pou4f2 is expressed in ganglion cell precursors and is required for the normal differentiation and axon pathfinding of RGCs and for the expression of Pou4f1. Targeted mutation of Pou4f2 enhances apoptosis of RGCs and results in a loss of 70% of RGCs. However, in our experiments, deletion of Pou4f2 does not affect the survival of RGCs, suggesting that the role of Pou4f2 in RGC survival might be restricted to developing RGCs rather than mature RGCs. Among other possible factors that could compensate for the loss of POU4F1 and POU4F2 function, POU4F3 plays an essential role in the survival of RGCs during development [13]. In adult mice, POU4F3 expression remains in RGCs and could play a role in the survival of adult RGCs. However, unlike POU4F1 and POU4F2, which are expressed in most RGCs, POU4F3 is expressed in far fewer RGCs [2]. Thus, it is unlikely that POU4F3 could substitute for POU4F1 and POU4F2 in Pou4f1CKO, Pou4f2CKO and DoubleCKO mice.
The LIM-homeodomain transcription factor ISL1 is expressed in developing RGCs and functions synergistically with POU4F factors to regulate the development and survival of RGCs during embryogenesis [38]. In adults, ISL1 expression persists in RGCs, starburst amacrine cells, and ON-bipolar cells. It would be interesting to test whether ISL1 could compensate for the loss of POU4F1 and POU4F2 in adult RGCs. Overall, our results imply that the survival mechanism of RGCs differs in adults from that at developmental stages. During neurodevelopment, the three members of the POU4F subfamily of transcription factors are broadly expressed, either overlappingly or singularly, in a variety of nervous system structures and are essential for the development and survival of neurons. Pou4f1 knockout mice die at birth due to severe defects in the dorsal root ganglion (DRG), trigeminal ganglion (TG) and selective nuclei in the brain [39,40]. Thus, though Pou4f1 has a crucial role in neuron survival, axonal projection and subtype specification during development of both the central and peripheral nervous systems [40,45-51], its function in adult neurons remains unknown. Our Pou4f1CKO mice provide a new platform to study the role of Pou4f1 after birth, and both Pou4f1CKO and Pou4f2CKO mice could be powerful tools for investigating the function of Pou4f1 and Pou4f2 at both developmental and adult stages.
Animals
To generate the Pou4f1 conditional knockout (Pou4f1cko) mice (Fig. 1A), a 7.7 kb NheI-XbaI fragment containing the complete coding sequences was used as the 5′ homologous arm and was subcloned at the XbaI site of the cloning vector. A loxP site was inserted 5′ of the first exon, and a Frt-loxP-flanked Neomycin (Neo) cassette was inserted downstream of the coding sequence. A diphtheria toxin A (DTA) cassette was inserted upstream of the 5′ homologous arm, and a 5 kb XbaI-SacII fragment was used as the 3′ arm (Fig. 1C). Targeted ES cells were then injected into mouse blastocysts to obtain chimeras. Chimeras were bred with wild-type C57BL/6J mice to generate Pou4f1cko mice. Pou4f1cko mice were then crossed with Flippase mice (The Jackson Laboratory, Stock# 009086) to remove the Neo cassette and generate Pou4f1loxP mice.
To generate the Pou4f2 conditional knockout (Pou4f2cko) mice (Fig. 1B), we used a 6.5 kb EcoRI-HindIII fragment containing the entire coding sequences as the 5′ homologous arm and subcloned it into the NotI and SalI sites of the cloning vector. A loxP site was inserted 5′ of the first exon. A Frt-loxP-flanked Neo cassette and a DTA cassette were inserted in the vector for positive and negative selection. A 4.3 kb HindIII fragment serving as the 3′ homologous arm was placed at the HindIII site of the vector. Similar to the generation of Pou4f1cko mice, targeted Pou4f2cko ES cell clones were obtained and confirmed by Southern blot (Fig. 1D), and Pou4f2cko mice were generated. Pou4f2loxP mice were obtained by breeding Pou4f2cko mice with mice expressing Flippase.
This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The animal protocol was approved by the University Committee of Animal Resources (UCAR Protocol No. 101414) at the University of Rochester. All surgery was performed under sodium pentobarbital anesthesia, and all efforts were made to minimize suffering. Embryos were designated as E0.5 at noon on the day at which vaginal plugs were observed. The day of birth was considered as P0.
Histochemistry and Immunohistochemistry
Staged mouse embryos were dissected and immediately fixed in 4% paraformaldehyde (PFA) in PBS at 4°C for 2-3 hours.
Samples were embedded and frozen in OCT medium (Tissue-Tek) after dehydration in graded sucrose, and sectioned at 14 μm thickness. Before adult retina samples were harvested, vascular perfusion was performed to eliminate blood remaining in the retinal vessels; retinas were then dissected and fixed in 4% PFA. Retinal flat-mount immunostaining was performed as previously described [52]. Dilutions and sources of antibodies used in this study were: mouse anti-POU4F1 (
Statistical Analysis
Cell numbers for different retinal cell markers were quantified using retinal sections and flat mounts from at least three age-matched animals for each cell type. Data are represented as mean ± SEM. Statistical analysis was performed using a paired two-sample Student's t-test. A value of P < 0.05 was considered statistically significant.
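As a minimal sketch of the test named above, the paired t statistic can be computed directly from the per-pair differences; the counts below are hypothetical illustration data, and the two-sided P value would then be read from the t distribution with n-1 degrees of freedom (e.g., via a statistics library).

```python
import math

def paired_t_statistic(a, b):
    """t statistic and degrees of freedom for a paired two-sample t-test.

    a, b: matched measurements, e.g. cell counts from age-matched
    control vs. CKO animals (hypothetical pairing for illustration).
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var_d / n)  # standard error of the mean difference
    return mean_d / se, n - 1

t, df = paired_t_statistic([52, 48, 55], [51, 46, 52])
# Compare t against the t distribution with df degrees of freedom
# to obtain the two-sided P value and test it against 0.05.
```

With only three pairs, as here, the critical t value is large, which is why studies of this kind require consistent differences across animals to reach P < 0.05.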
|
2016-05-04T20:20:58.661Z
|
2014-04-15T00:00:00.000
|
{
"year": 2014,
"sha1": "8c022cf5120dc471a3ae82e2d08b48203489822e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0094173&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8c022cf5120dc471a3ae82e2d08b48203489822e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
11117914
|
pes2o/s2orc
|
v3-fos-license
|
Comparison in bone turnover markers during early healing of femoral neck fracture and trochanteric fracture in elderly patients
Healing of fractures is different for each bone and bone turnover markers may reflect the fracture healing process. The purpose of this study was to determine the characteristic changes in bone turnover markers during the fracture healing process. The subjects were consecutive patients with femoral neck or trochanteric fracture who underwent surgery and achieved bone union. There were a total of 39 patients, including 33 women and 6 men. There were 18 patients (16 women and 2 men) with femoral neck fracture and 21 patients (17 women and 4 men) with trochanteric fracture. Serum bone-specific alkaline phosphatase (BAP) was measured as a bone formation marker. Urine and serum levels of N-terminal telopeptide of type I collagen (NTX), as well as urine levels of C-terminal telopeptide of type I collagen (CTX) and deoxypyridinoline (DPD), were measured as markers of bone resorption. All bone turnover markers showed similar changes in patients with either type of fracture, but significantly higher levels of both bone formation and resorption markers were observed in trochanteric fracture patients than in neck fracture patients. BAP showed similar levels at one week after surgery and then increased. Bone resorption markers were increased after surgery in patients with either fracture. The markers reached their peak values at three weeks (BAP and urinary NTX), five weeks (serum NTX and DPD), and 2–3 weeks (CTX) after surgery. The increase in bone turnover markers after hip fracture surgery and the subsequent decrease may reflect increased bone formation and remodeling during the healing process. Both fractures had a similar bone turnover marker profile, but the extent of the changes differed between femoral neck and trochanteric fractures.
Introduction
There has been considerable interest in the assessment of bone turnover using biochemical markers, and measurement of various bone turnover markers has recently become easier for clinical use. Serum bone-specific alkaline phosphatase (BAP) has been used for the evaluation of bone formation, while the breakdown products of type I collagen have been reported to be specific and sensitive bone resorption markers. 1 It is now possible to use these markers to evaluate bone turnover in patients with osteoporosis and other bone diseases. 2 Recently, we reported that bone turnover markers were significantly increased in elderly women with back pain. 3 Vogt et al. reported that only one-third of patients with vertebral fractures knew of their existence. 4 In patients with osteoporosis, vertebral fractures frequently cause back pain. 5-7 Therefore, evaluation of bone turnover using biochemical markers in elderly women may allow us to detect the influence of unrecognized fractures. 3 Although longitudinal changes in bone turnover markers after fracture have been reported and it has been pointed out that these markers increase during fracture healing, [8][9][10][11][12][13][14][15][16][17] there have been few investigations into the changes in bone turnover markers in patients with fragility fractures. Generally, it is considered that the process of fracture healing (amount of callus formed, time until bone union, etc.) is different for each fracture site. However, only a few studies have compared the changes in bone turnover markers between different fragility fractures. Also, although the bone resorption markers available for clinical testing have increased, there have been no reports on the responses of various bone resorption markers during the healing of fragility fractures. Hip fractures are the most common type of fragility fracture and can cause many serious or even fatal complications in elderly patients. 18,19
Hip fractures can be divided into trochanteric and femoral neck fractures. These two types of fracture occur in adjacent parts of the proximal femur, but have quite different clinical features. It is well known that nonunion is common in patients with femoral neck fractures, while it is rare in those with trochanteric fractures. We considered that these two types of hip fracture might be a useful model for assessing differences in the responses of bone turnover markers after fragility fracture. Recently, some authors have reported on the changes in bone turnover markers after femoral neck and trochanteric fractures. 9,12,13 However, there have been no reports comparing changes in bone-specific turnover markers during the healing of femoral neck and trochanteric fractures.
At Suwa Red Cross Hospital, multiple pins are used to stabilize femoral neck fractures in most patients, even when the fracture is displaced. In the present prospective study, various bone turnover markers were measured after surgery for femoral neck and trochanteric fractures, with the purpose of determining whether there were characteristic short-term changes in these markers during the healing of proximal femoral fractures.
Design and Methods
The patients in this prospective study were consecutive patients with femoral neck or trochanteric fracture who underwent surgical intervention and achieved bone union at Suwa Red Cross Hospital between January and December 2003. We performed universal hip replacement (UHR) in patients with femoral neck fractures who were in a poor physical and/or mental state. We also performed UHR in patients with subcapital fractures and excluded them from this study. Patients who were bedridden both before and after surgery due to physical and/or mental problems were also excluded.
A total of 39 patients (33 women and 6 men) were followed up after surgery and achieved bone union. They formed the participants in this study. Their age range was 56-96 years, with an average age of 78.3 years. There were 18 patients (16 women and 2 men) with femoral neck fractures and 21 patients (17 women and 4 men) with trochanteric fractures. The average age of the former group was 75.8 years and that of the latter group was 80.8 years ( Table 1). All of the patients with femoral neck fractures were treated using multiple pins, while the patients with trochanteric fractures were treated using compression hip screws (CHS).
Patients with both types of fracture were permitted movement in a wheelchair as soon as possible after surgery. Patients with trochanteric fracture were usually allowed to start weight bearing from two weeks after surgery, while patients with femoral neck fractures usually commenced weight bearing from four weeks and were encouraged to gradually increase this. Bisphosphonate therapy was not initiated until eight weeks after surgery. At six months after surgery, all patients were assessed for clinical and radiological evidence of fracture healing. When radiological evidence of bridging callus, sclerosis, and/or remodeling at the fracture site was confirmed by two independent orthopedic surgeons and the patient could walk without pain we assumed that bone union was complete.
Serum BAP was measured as a bone formation marker. Urine and serum levels of N-terminal telopeptide of type I collagen (NTX), urine levels of C-terminal telopeptide of type I collagen (CTX), and urine levels of deoxypyridinoline (DPD) were measured as markers of bone resorption. The levels of NTX (Osteomark, Osteox International, Seattle, WA), DPD (Metrar DPD EIA Kit, Quidel Corporation, San Diego, CA, USA), and CTX (Frelisa CrossLaps, Nordic Bioscience Diagnostics A/S, Herlev, Denmark) were measured using the enzyme-linked immunosorbent assay (ELISA). At the first examination, intact parathyroid hormone (i-PTH) was measured in an immunoradiometric assay and 25(OH) vitamin D (VitD) was measured in a competitive radioimmunoassay (except in one case of trochanteric fracture).
Article
In principle, samples of venous blood and spot urine were collected on the following six occasions: within 24 hours after injury, and one, two, three, five, and eight weeks after surgery. Because pre-operative data were limited, they were excluded from the main analysis and are shown only as reference data. The spot urine samples were collected from the second morning urine, avoiding the first morning urine. Samples were stored at -20°C until analysis. Immunoassays were performed by SRL Inc. (Tokyo, Japan).
All patients gave informed consent to undergo examination and medical treatment. This study was carried out prospectively and in accordance with the Helsinki Declaration, and approved by the ethics committees of Suwa Red Cross Hospital. Differences in the age and the bone turnover markers at each examination between patients with femoral neck and trochanteric fractures were assessed using Student's unpaired t-test. Statistical significance was set at a probability value of less than 0.05.
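The between-group comparison described above (Student's unpaired t-test, significance at P<0.05) can be sketched as follows; the pooled-variance t statistic is computed from scratch with the standard library, and the BAP values are invented for illustration only, not data from this study.

```python
import math
from statistics import mean, variance

def students_t_unpaired(x, y):
    """Unpaired Student's t-test with pooled variance; returns (t, df)."""
    nx, ny = len(x), len(y)
    pooled = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    t = (mean(x) - mean(y)) / math.sqrt(pooled * (1 / nx + 1 / ny))
    return t, nx + ny - 2

# Hypothetical BAP levels (U/L) at three weeks in the two fracture groups
neck = [40.0, 44.0, 38.0, 45.0]
trochanteric = [58.0, 62.0, 55.0, 65.0]

t_stat, df = students_t_unpaired(trochanteric, neck)
print(f"t = {t_stat:.2f} with {df} degrees of freedom")
```

A t statistic this large against the t distribution with nx+ny-2 degrees of freedom corresponds to P well below the 0.05 threshold used in the study.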
Results
Patients' background data are shown in Table 1. The values of i-PTH and VitD showed no significant differences between the two types of fracture. A correlation between i-PTH and VitD was observed, but it was not significant (p=0.053). All of the bone turnover markers were increased after surgery in patients with either type of fracture, but there were significantly higher values of both bone formation and resorption markers in the patients with trochanteric fractures than in those with femoral neck fractures (Table 2). All of the bone turnover markers showed a similar pattern of changes in both fractures, but the actual values of the bone turnover markers were smaller in patients with femoral neck fracture than in those with trochanteric fracture. BAP (Figure 1) showed similar levels in both fractures at one week after surgery and then increased to reach 41.9±17.2 U/L in femoral neck fracture patients and 60.0±28.4 U/L in trochanteric fracture patients at three weeks after surgery. BAP levels were significantly higher in trochanteric fracture patients than in neck fracture patients from two weeks after surgery. In femoral neck fracture patients, BAP decreased to 29.5 U/L after eight weeks. In trochanteric fracture patients, however, BAP was 45.7±21.5 U/L at eight weeks and was still high compared with that in patients with femoral neck fractures.
Urinary NTX (Figure 2) was increased after surgery and showed a similar pattern in both fractures, but urinary NTX levels were significantly higher in trochanteric than in neck fracture patients from two weeks after surgery, except at five weeks. At three weeks after surgery, urinary NTX increased to 146.8±78.9 nmolBCE/nmol·CRE in femoral neck fracture patients and from 129.4±76.5 to 215.3±93.5 nmolBCE/nmol·CRE in trochanteric fracture patients. Urinary NTX reached its peak at three weeks after surgery and remained high compared with the reference values until eight weeks in both types of fracture. Serum NTX (Figure 3) showed similar levels in trochanteric and femoral neck fracture patients at one week after surgery, unlike urinary NTX and the other bone resorption markers. Also unlike urinary NTX, serum NTX increased later and reached a peak at five weeks after surgery, remaining high at eight weeks. Serum NTX reached 30.6±9.9 nmolBCE/l in femoral neck fracture patients and 38.9±10.5 nmolBCE/l in trochanteric fracture patients. Serum NTX was significantly higher in trochanteric fracture patients than in femoral neck fracture patients only at eight weeks after surgery. Urinary and serum NTX revealed different patterns of change over time.
Urinary DPD (Figure 4) increased after surgery and showed the same pattern in both types of fracture, but DPD levels were significantly greater in trochanteric than in femoral neck fracture patients from one to eight weeks after surgery. DPD increased to reach 12.3±5.2 in femoral neck fracture patients and 21.7±10.9 nmol/mmol·CRE in trochanteric fracture patients at five weeks after surgery.
Urinary CTX ( Figure 5) increased after surgery and showed the same pattern in both types of fracture, but CTX levels were significantly greater in trochanteric fracture patients than in femoral neck fracture patients from two weeks after surgery. Urinary CTX reached 933±473 at two weeks in trochanteric fracture patients and 640±317 at three weeks in femoral neck fracture patients, after which it decreased.
At six months after surgery, all of the patients had achieved bone union at the fracture site. Despite an initial diagnosis of bone union, re-evaluation of one patient with femoral neck fracture led to a subsequent diagnosis of pseudarthrosis of the femoral neck 13 months after surgery, and UHR was subsequently performed. Another patient with femoral neck fracture developed osteonecrosis of the femoral head despite having achieved bone union of the femoral neck. No patient presented with a surgical site infection during the observation period.
Discussion
Bone turnover markers may reflect the fracture healing process. The changes in bone turnover markers during fracture healing are believed to be greater than those that occur during the physiological remodeling cycle. Many authors have reported that bone turnover markers are increased after fracture. [8][9][10][11][12][13][14][15][16][17] Femoral neck and trochanteric fractures occur in adjacent parts of the proximal femur in elderly people, but have quite different clinical features. Therefore, we selected these two hip fractures to investigate the changes in bone turnover markers after bone fragility fracture in this prospective study. We found that each bone turnover marker had the same pattern of changes in the two different types of hip fracture. On the other hand, both bone formation and resorption markers showed significantly higher values in patients with trochanteric fracture than in those with femoral neck fracture.
Generally, it is considered that fracture might directly influence bone formation and that immobilization following fracture may induce an increase in bone resorption. Bone resorption markers are strongly related to physical activity. [20][21][22][23] Theiler et al. found that bone resorption markers were significantly higher in institutionalized and physically inactive patients compared with those who were ambulatory and physically active. 22 Other authors have reported that bone resorption markers are increased by bed rest, 20-23 while bone formation markers decrease with bed rest. 20 Decreased physical activity usually leads to an increase in bone resorption and the inhibition of bone formation.
The patients with trochanteric fracture in this study were older than those with femoral neck fracture, as in previous reports. 16 It was reported that in elderly women with osteoporosis, urinary NTX levels increased with aging but BAP levels did not change. 3 However, the effect of aging on bone resorption markers is slight. 3 In this study, patients with both types of fracture were permitted movement in a wheelchair as soon as possible after surgery, and patients with femoral neck fracture delayed weight bearing for longer than those with trochanteric fracture. Physical activity is usually decreased in elderly patients, and many elderly patients have difficulty walking without weight bearing. The differences in physical activity, age, and post-operative therapy had various conflicting influences on the bone resorption markers in this study.
With respect to post-operative therapy, physical activity might be slightly greater in trochanteric fracture patients than in those with femoral neck fracture. Bone resorption markers were significantly higher in the trochanteric fracture group than in the femoral neck fracture group despite the difference in physical activity and post-operative management. Thus, the decrease in physical activity due to post-operative therapy did not influence the changes in bone resorption markers after proximal hip fracture. Accordingly, the changes in bone resorption markers due to fracture healing might exceed those related to physical activity in elderly patients with hip fracture.
In general, it is believed that appropriate weight bearing helps bone union and it may increase bone formation. Therefore, it was hypothesized that the increase in bone formation markers might occur earlier with early weight bearing. In this study, all bone turnover markers showed the same pattern of change in both types of fracture although the time when weight bearing was initiated differed between the two patient groups. The difference in initiation of weight bearing did not affect the pattern of change in bone formation marker levels. However, bone formation marker levels in trochanteric fracture patients were significantly greater than those in femoral neck fracture patients.
In this study, multiple pinning methods were used for femoral neck fractures, and CHS was performed for trochanteric fractures.
Multiple pinning surgery was performed as open surgery, like CHS. However, the surgical invasiveness of CHS is greater than that of multiple pinning methods. Evaluation of the effects of surgery based on C-reactive protein and cytokine levels was limited to a short period. 24,25 Therefore, if the degree of surgical invasiveness affects bone turnover markers, the changes in bone turnover due to different surgical procedures might be limited to a short period.
Previously, Hosking reported that the increase in total alkaline phosphatase was similar during the healing of femoral neck and trochanteric fractures. 8 However, Nakagawa has recently reported that post-operative total alkaline phosphatase levels were significantly higher in trochanteric fracture patients than in those with femoral neck fracture during the healing process. 16 It has been reported that fracture of small bones (such as the wrist and ankle) does not cause marked changes in bone turnover markers. 10,11 The relative extent of bone formation and remodeling during the fracture healing process would determine the changes in bone turnover markers. In patients with trochanteric fracture, radiographs show that callus formation and/or remodeling during the healing process is more extensive and dynamic than in those with femoral neck fracture (Figure 6) and these differences would be reflected in the levels of bone formation markers. Furthermore, the difference in the area of the fracture might affect bone turnover markers. In the present study, bone turnover marker levels (including BAP) were significantly higher in patients with trochanteric fracture than in those with femoral neck fracture. The changes in bone turnover markers corresponded to the radiographic findings in femoral neck fracture, and the results of this study supported Nakagawa's data but not those of Hosking. In this study, we also found that urinary and serum NTX levels had different patterns of change. Recently, Akesson et al. used a fracture model to assess changes in bone turnover markers, and they reported that serum and urinary osteocalcin (Oc) levels showed different changes after fracture,15 i.e., urinary Oc increased at 6-9 weeks but serum Oc increased at 4-7 months after surgery. From the results reported by Akesson et al. and the findings of our study, some bone turnover markers might show different changes in the urine and serum during fracture healing.
Based on the results of the current study, bone fragility fractures might directly affect both bone formation and resorption marker levels. The changes in bone turnover markers after fragility fracture at different sites might have a similar pattern but differ with respect to the extent of change.
In this study, pre-operative data were not obtained in some patients. Therefore, the value of pre-operative data analysis would be limited. However, post-operative data were obtained from almost all patients and thus can be considered reliable.

Figure 6. The difference in callus formation between trochanteric fracture and femoral neck fracture during each healing process in radiographs. In trochanteric fracture, increased radiodensity along the fracture line, which is regarded as callus formation, is observed at three months after surgery (shown by arrows). By contrast, the finding is inconspicuous in femoral neck fracture (shown by an arrow).
In conclusion, an increase in bone turnover markers after surgery for hip fracture and the subsequent decrease may reflect bone formation and remodeling during the fracture healing process. We identified significant differences in bone formation and resorption marker levels between patients with trochanteric and femoral neck fractures, even though these both occur at adjacent sites in the proximal femur. Our findings may reflect differences in the amount of callus formation and/or remodeling during the healing of these fractures. After fragility fracture at different sites, the changes in bone turnover markers may show a similar pattern, but differ in extent.
|
2014-10-01T00:00:00.000Z
|
2009-10-10T00:00:00.000
|
{
"year": 2009,
"sha1": "ad7a68a4fe5c9d2067b4cc37124aa46c9ddf9594",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4081/or.2009.e21",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ad7a68a4fe5c9d2067b4cc37124aa46c9ddf9594",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
235753915
|
pes2o/s2orc
|
v3-fos-license
|
Is the Implementation of Good Corporate Governance Able to Improve Earnings Quality
The purpose of this research is to analyze the effect of implementing good corporate governance on the quality of earnings information. The population of this study is all non-financial firms included among the fast-growing companies. We found that board structure and process contribute positively to the quality of earnings information, while ownership control and characteristics affect it negatively. Firm size has a positive influence on earnings quality and affects the relationship between good corporate governance and earnings quality.
INTRODUCTION
The quality of earnings information largely determines the accuracy of decision making. Earnings management is a form of presenting financial information that does not accord with the firm's actual performance. Schipper, as cited in Subramanyam and Wild [1], describes earnings management as deliberate interference by management in the process of determining earnings in order to meet personal goals. Earnings management can cause a company's reported earnings to be of low quality; it is undertaken to give a positive signal to the public and thereby increase the value of the firm.
The reliability and integrity of financial information can be maximized by monitoring mechanisms within the firm through good corporate governance [1]. Several studies have demonstrated a negative relationship between ownership concentration and earnings quality [2], [3]. On the other hand, Irawati & Sudirman [4] and Morck, as cited in Niu [5], state that the more concentrated the ownership, the better the firm's earnings quality, while Natalia & Laksono [6] find that management ownership has no effect on the quality of earnings information.
Abbadi et al. [7] state that implementing good board characteristics will improve the quality of corporate earnings information. Taktak and Mbarki [8] similarly hold that board characteristics can minimize managerial cheating through earnings management practices. In contrast to these studies, Chiang et al. [9] and Oktaviani et al. [3] state that the board of directors has no effect on the quality of earnings information.
METHODS
The population of this study was 40 non-financial, non-state-owned companies nominated for the "100 fastest growing companies" list for 2016-2018, as selected by the Infobank Research Bureau. The data used in this study were collected from the official site of the Indonesia Stock Exchange. Data were also obtained from each firm's official website, to retrieve the minutes of the General Meeting of Shareholders (GMS) and annual reports, and from Infobank's official website (www.infobank).
RESULTS AND DISCUSSION
The results of the statistical analysis for model 1 are shown in Table 2 below. The hypothesis test results for the regression equation of research model 1 show that BOARD significantly influences discretionary accruals and thus directly influences the quality of the company's earnings, and that OWN likewise has a significant influence on discretionary accruals and directly affects earnings quality. Table 3 shows the results of the multiple regression analysis for research model 2. Based on this regression equation and partial hypothesis testing, neither the BOARD variable nor the OWN variable has a significant effect on the earnings quality variable represented by DA, whereas SIZE significantly influences discretionary accruals and directly affects earnings quality. In this case, the size of a company significantly influences the quality of its earnings.
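The paper does not give its estimation code; a minimal sketch of a model-2-style OLS regression of discretionary accruals (DA) on BOARD, OWN, and SIZE, using NumPy's least-squares solver, might look like the following. All data here are simulated assumptions, generated so that only SIZE has a true (negative) effect, mirroring the model 2 finding.

```python
import numpy as np

# Hypothetical panel: one row per firm-year; regressors follow model 2:
# BOARD (board structure/process score), OWN (ownership score), SIZE (ln total assets)
rng = np.random.default_rng(0)
n = 40
board = rng.uniform(0, 1, n)
own = rng.uniform(0, 1, n)
size = rng.normal(28, 2, n)                    # ln(total assets)
# Simulated discretionary accruals where only SIZE has a (negative) effect
da = 0.9 - 0.03 * size + rng.normal(0, 0.01, n)

# OLS via least squares: DA = b0 + b1*BOARD + b2*OWN + b3*SIZE + e
X = np.column_stack([np.ones(n), board, own, size])
beta, *_ = np.linalg.lstsq(X, da, rcond=None)
intercept, b_board, b_own, b_size = beta
print(f"DA = {intercept:.3f} + {b_board:.3f}*BOARD + {b_own:.3f}*OWN + {b_size:.3f}*SIZE")
```

With data generated this way, the fitted coefficient on SIZE comes out negative and close to the true -0.03, while BOARD and OWN pick up only noise, which is the qualitative pattern model 2 reports.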
The Influence of Board Structure and Process (BOARD) on the Earnings Quality
The results of research model 1 show that BOARD has a negative effect on discretionary accruals, which indicate whether or not the earnings reported by the company are of high quality. Research model 2, after the size variable is entered, shows that BOARD has no significant effect on discretionary accruals.

Advances in Economics, Business and Management Research, volume 179

The results of this study are consistent with the research of Abbadi et al. [7].
The BOARD indicator includes board meetings, which in this study comprise meetings of the board of commissioners. In addition, there are indicators of the structure and implementation of the audit committee (audit committee reputation), consisting of audit committee existence (disclosure of the profile and a permanent audit committee), the frequency of audit committee meetings, the ability of the audit committee, and the reputation of the audit committee. The results are consistent with the research findings of Nazir & Afza [10] and Grassa [11].
The results of model 2 show that BOARD has no relationship with earnings quality once the size variable is entered, which indicates that the size of a company influences the relationship between BOARD and the quality of earnings information. In this study, small companies tend to have high BOARD scores, so when the size variable is included there is no longer a significant effect on the discretionary accruals that indicate the company's earnings quality. This is consistent with research conducted by Natalia & Laksono [6], who studied corporate governance mechanisms, in this case board structure, against earnings management practices calculated using the Modified Jones Model (1991) in the banking sector for the 2008-2011 period, with the result that board structure (board size, independent commissioners) has no influence on discretionary accruals. According to Natalia & Laksono [6], the presence of independent commissioners was less effective in company operations because, on average, the banking-sector entities listed on the Indonesia Stock Exchange (IDX) in 2008-2011 may have appointed independent commissioners only to fulfill regulations. The results of this research are also consistent with Hoang et al. [12], who find that BOARD has no effect on the quality of earnings information. According to Hoang et al. [12], one reason is that independent commissioners are part-time bodies that meet only occasionally and whose members do not know each other well, so the board of commissioners may lack the time to thoroughly comprehend the business and the company's issues, allowing management to obscure problems.
In addition, audit quality, which is included in the BOARD indicator, is ignored by companies: they wish only to improve performance so as to look good in the eyes of investors and disregard the presence of Big Four audit firms (KAP), so good BOARD implementation has no significant effect on the quality of the earnings reported in the company's financial statements.
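Discretionary accruals of the kind discussed above are commonly estimated with the Modified Jones (1991) model: total accruals scaled by lagged assets are regressed on 1/assets, the change in revenue net of receivables, and gross PPE (all scaled by lagged assets), and the residual is taken as the discretionary component. The sketch below assumes this standard specification, since the paper does not spell out its estimation details, and uses invented numbers.

```python
import numpy as np

def discretionary_accruals(ta, assets_lag, d_rev, d_rec, ppe):
    """Modified Jones model: regress scaled total accruals on the scaled
    regressors; discretionary accruals are the regression residuals."""
    y = ta / assets_lag
    X = np.column_stack([1.0 / assets_lag,
                         (d_rev - d_rec) / assets_lag,
                         ppe / assets_lag])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef            # residual = discretionary accruals

# Hypothetical firm-year data (monetary units arbitrary)
ta = np.array([12.0, -5.0, 8.0, 3.0, -2.0, 6.0])          # total accruals
assets_lag = np.array([100.0, 120.0, 90.0, 150.0, 110.0, 130.0])
d_rev = np.array([10.0, -4.0, 6.0, 8.0, -1.0, 5.0])       # change in revenue
d_rec = np.array([2.0, -1.0, 1.0, 3.0, 0.0, 2.0])         # change in receivables
ppe = np.array([40.0, 55.0, 35.0, 70.0, 50.0, 60.0])      # gross PPE

da = discretionary_accruals(ta, assets_lag, d_rev, d_rec, ppe)
print(np.round(da, 4))
```

In practice the model is usually estimated cross-sectionally within an industry-year before taking residuals; the toy data here only illustrate the mechanics.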
The Effect of Ownership and Control Characteristics (OWN) on the Earnings Quality.
Based on the results of the regression analyses of models 1 and 2, the effect of OWN on the company's earnings quality is inconsistent. Research model 1 shows that the OWN variable has a significant positive effect on discretionary accruals, while research model 2 shows that OWN has no significant effect on discretionary accruals. The indicators in this study are the magnitude of controlling ownership, managerial ownership, and institutional ownership. The model 1 result means that the greater the OWN score, the greater the value of discretionary accruals: when a company's shares are widely held by management and institutions, management has a greater opportunity to intervene in the determination of earnings, which results in higher discretionary accruals and lower earnings quality. These results are consistent with Irawati & Sudirman [4].
The concentration of ownership, which according to Asward and Lina [13] can serve as an internal mechanism for disciplining management and as effective oversight, failed to do so in this study. Likewise, greater management and institutional ownership, which is expected to reduce agency conflict and moral hazard, failed here. Mellado & Saona [2] also argue that controlling owners will use financial statement information for personal purposes, and that concentrated ownership signals low-quality financial information. Sulistiawan et al. [14] note that, with reference to corporate fraud, the fraud triangle consists of opportunity, pressure, and rationalization. Given that this analysis finds OWN to have a negative effect on earnings quality, the element of the triangle most applicable here is opportunity [2], [14]. The results of research model 2 show that the OWN variable has no effect on earnings quality; that is, the size of the shareholding score does not affect the motivation of company managers to intervene in determining company profits. This result is supported by research conducted by Oktaviani et al. [3].
Effect of Firm Size on the Earnings Quality
Based on the regression analysis of research model 2, the results show that size has a negative and significant effect on discretionary accruals. This means that the larger the company, the better the quality of its reported earnings. The results of this study are in line with research conducted by Dira & Astika [15].
In addition, other research states that large-scale companies are more likely to carry out political cost transfers within political processes than small-scale companies [16]. One impact of this political process is the choice of better accounting procedures by large companies. Another study, by Lys et al. [17], notes that signaling theory predicts a positive relationship between company size and the integrity of financial reporting. Large-scale companies are more careful when reporting financial information in order to receive a positive signal in the public eye; they attract more public attention, so when it comes to fraud they are likely to consider the broad impact on public confidence in the company. Besides being in line with the studies above, this result is also in line with the research conducted by Abbadi et al. [7].
Effect of Firm Size in the Relationship between Board Structure and Process (BOARD) and Ownership and Control Characteristics (OWN) on Earnings Quality
The results of the analyses of models 1 and 2 are inconsistent. In research model 2, after adding the size variable, neither the BOARD variable nor the OWN variable has a significant effect on earnings quality. This inconsistency shows that, once size is accounted for, the relationships between BOARD and OWN and earnings quality become insignificant.
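The comparison of models 1 and 2 described above can be illustrated with a minimal sketch: a main-effects OLS regression (model 1) versus a specification that adds size and its interaction terms (model 2, with size as moderator). This is only an illustrative reconstruction on simulated data; the paper does not disclose its estimator or dataset, and the variable names (board, own, size, dacc) are hypothetical.

```python
import numpy as np

def ols_beta(X, y):
    """Ordinary least squares via numpy's least-squares solver (illustrative only)."""
    X = np.column_stack([np.ones(len(y)), X])  # prepend an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 200
board = rng.normal(size=n)   # board structure & process score (hypothetical)
own = rng.normal(size=n)     # ownership & control score (hypothetical)
size = rng.normal(size=n)    # firm size, e.g. ln(total assets) (hypothetical)

# Simulated discretionary accruals in which size moderates the BOARD effect.
dacc = (0.5 * board - 0.3 * own - 0.4 * size
        - 0.2 * board * size + rng.normal(scale=0.1, size=n))

# Model 1: main effects only.
b1 = ols_beta(np.column_stack([board, own]), dacc)

# Model 2: add size and the interaction terms (size as moderator).
X2 = np.column_stack([board, own, size, board * size, own * size])
b2 = ols_beta(X2, dacc)
```

On simulated data like this, the interaction coefficients in model 2 recover the moderating effect of size, which is the kind of specification the paper's suggestion 3 (size as a moderating variable) implies.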
That firm size affects the relationships between board structure and process, and between ownership control and characteristics, and earnings quality suggests that, as a company grows, the implementation of good corporate governance should approach the ideal, in this case for board structure and process and for ownership control and characteristics. In this study, however, the implementation of good corporate governance tended to stagnate for board structure and process and to decline for ownership control and characteristics as company size (measured by total assets) increased. The majority of fast-growing companies (2016-2018) experienced an increase in GCG values in 2013-2015. The board score tended to remain unchanged in most companies because companies tend to apply the same rules every year, adjusting them only to new government regulation. The most influential regulation concerns the composition of the board of commissioners: the majority of fast-growing companies (2016-2018) meet only the minimum proportion of independent commissioners set by the government, which is 30%. Ideally, the larger a company, the more it should exceed this minimum; because the accountability of independent commissioners is thus neglected, the effect of the board on earnings quality found in research model 1 becomes less meaningful once size is included. In addition, another factor makes the board effect less meaningful: an increase in other sub-indicators, namely improved implementation of the sub-committees under the independent commissioners, the audit committee and the remuneration committee.
The larger the company, the better the implementation of these supporting committees of the board of commissioners tends to be. This keeps the board score stagnant (no significant change) from year to year and renders the relationship between the board and earnings quality insignificant.
CONCLUSION
Based on the results above, it can be concluded that board structure and process has a significant positive effect on the earnings quality of fast-growing companies. Ownership control and characteristics have a significant negative effect on earnings quality. Size has a significant positive effect on earnings quality, and size affects the relationship between good corporate governance and earnings quality. Based on these conclusions, the researchers offer several suggestions: 1) companies that fall into the category of fastest-growing companies for three consecutive years (2016-2018) should improve their implementation of good corporate governance (GCG), especially the board structure and process variables; 2) investors should select companies carefully, because companies with many awards do not necessarily have the earnings quality needed to predict future company performance; 3) the variables in this research still cannot explain all variation in earnings quality, so future researchers can develop the research by treating size as a moderating variable in the relationship between good corporate governance and earnings quality. Future researchers should also focus on specific sectors, since this research spans many corporate sectors without a sufficient number of companies in each sector.
A systematic approach to context-mapping to prepare for health interventions: development and validation of the SETTING-tool in four countries
Effectiveness of health interventions can be substantially impaired by implementation failure. Context-driven implementation strategies are critical for successful implementation. However, there is no practical, evidence-based guidance on how to map the context in order to design context-driven strategies. Therefore, this practice paper describes the development and validation of a systematic context-mapping tool. The tool was cocreated with local end-users through a multistage approach. As proof of concept, the tool was used to map beliefs and behaviour related to chronic respiratory disease within the FRESH AIR project in Uganda, Kyrgyzstan, Vietnam and Greece. Feasibility and acceptability were evaluated using the modified Conceptual Framework for Implementation Fidelity. Effectiveness was assessed by the degree to which context-driven adjustments were made to implementation strategies of FRESH AIR health interventions. The resulting Setting-Exploration-Treasure-Trail-to-Inform-implementatioN-strateGies (SETTING-tool) consisted of six steps: (1) Coset study priorities with local stakeholders, (2) Combine a qualitative rapid assessment with a quantitative survey (a mixed-method design), (3) Use context-sensitive materials, (4) Collect data involving community researchers, (5) Analyse pragmatically and/or in-depth to ensure timely communication of findings and (6) Continuously disseminate findings to relevant stakeholders. Use of the tool proved highly feasible, acceptable and effective in each setting. To conclude, the SETTING-tool is validated to systematically map local contexts for (lung) health interventions in diverse low-resource settings. It can support policy-makers, non-governmental organisations and health workers in the design of context-driven implementation strategies. This can reduce the risk of implementation failure and the waste of resource potential. Ultimately, this could improve health outcomes.
Effectiveness was assessed by the degree to which application of the tool resulted in context-driven adjustments in the implementation strategies of subsequent FRESH AIR lung health interventions. We discussed outcomes until consensus was reached.
Reflexivity
The content expert panel was multidisciplinary (see above) and included members from the Netherlands, Uganda, Kyrgyzstan, Vietnam, and Greece. Members represented both sexes, diverse ages, and ranged from students to professors. The diversity in the team stimulated collection of rich data through the different perspectives. As hierarchies within this diversity could be at play, we repetitively emphasised that every person's input was equally valuable.
BMJ Publishing Group Limited (BMJ) disclaims all liability and responsibility arising from any reliance placed on this supplemental material, which has been supplied by the author(s).
(b) Describe any methods used to examine subgroups and interactions: n.a.
(c) Explain how missing data were addressed: n.a.
(d) If applicable, describe analytical methods taking account of sampling strategy: 7,8
(e) Describe any sensitivity analyses: n.a.
Results
Participants 13* (a) Report numbers of individuals at each stage of study, eg numbers potentially eligible, examined for eligibility, confirmed eligible, included in the study, completing follow-up, and analyzed
Other analyses 17 Report other analyses done, eg analyses of subgroups and interactions, and sensitivity analyses: n.a.
Discussion
Uganda
Swahili. The prevalence of daily smoking is 11.5% for males and 1.8% for females. The tuberculosis prevalence is 159:100,000. Data on chronic respiratory disease are limited, but one study in a rural Ugandan district found a COPD prevalence of 16.2%, strongly related to biomass fuel use. Health expenditures are 7.2% of GDP; physician density is 12 per 100,000, but in Jinja it is much lower. Public health service delivery in Jinja is financed by the government and donor funds.
Daily challenges are inadequate funding, poor infrastructure, limited essential equipment, limited staff, and absenteeism. Health care has a grassroots structure, with village health teams (non-medical background) in the communities; health centre (HC) II run by nurses and midwives; HC III led by a senior clinical officer, which should have a lab; and HC IV as a referral centre with emergency surgery and inpatient facilities. Above these are regional referral centres with specialists and national specialist/teaching hospitals.
Vietnam
Vietnam is a communist lower-middle-income country. It is densely populated and the population is growing rapidly. Around 34% of all people live in urban areas. Since Vietnam shifted from a centrally planned to a market economy in 1986, the share of people living below the poverty line has declined from 70% to below 6%. Life expectancy is 76 years, and 70% of the population is <35 years old. Most (85.7%) of the people are Kinh (Viet) and speak Vietnamese. Over 70% of the people engage in farming or farm-related work. Literacy is around 94.5%.
Polite behaviour is highly valued in Vietnam, especially showing respect towards elders. In general, women are expected to avoid tobacco and alcohol. Among youth, 1.2% (female) to 3.6% (male) smoke tobacco; among adults, 1.2% (female) to 38.7% (male). Vietnam is ranked 12th among tuberculosis high-burden countries, with a prevalence of 89:100,000. COPD is ranked the third cause of death, with a prevalence of 6.7%, of whom 25-45% never smoked. Around 4-5.7% of the people are estimated to have asthma. Vietnam adopted the target of universal health coverage in 2010 and has a grassroots system of village health committees (non-medical background), health stations for primary care, then larger health centres, district hospitals, and larger referral hospitals in larger cities.
Health expenditures are 7.1% of GDP, and physician density 1.19:1000 people. Biomedicine and traditional medicine are both popular.
The Ben Luc and Can Giuoc district, in the Long An province west of Ho Chi Minh City, were selected as study settings. Long An is located in the Mekong River Delta, and has a tropical climate with a rainy and a dry season.
Greece
Although officially a high-income country, Greece has suffered severely from the austerity that followed 2008. In 2013, around 27.5% of the people were unemployed. In 2016 the poverty rate was 14.4%.
Greek life expectancy is 81 years. Literacy is 97.7%. Around 77% of the people live in rural areas.
The vast majority of Greek are Greek Orthodox.
Daily tobacco smoking prevalence among youth is 19.3% (male) and 13.3% (female), and among adults 49.7% (male) and 23.9% (female), the highest in Europe. The prevalence of chronic respiratory disease is high, and there is evidence it has deteriorated since the austerity. The Greek healthcare system is strongly hospital-centred; there is no referral system to specialists.
Of all physicians, 2.2% are pulmonologists vs. 3.6% general practitioners, the latter mostly in rural areas. Rural and semi-urban areas have ambulatory centres, which can be a mix of public and private services. 70% of hospital beds are in the public sector. Although care was formerly free of charge, in 2011 admission fees for state hospitals were introduced and co-payments for medication were increased. Meanwhile, reduced household income and employment rates led to reductions in insurance healthcare coverage. For the 'traditional' Greek setting, the Heraklion district in Crete was selected. Transportation in rural areas is not well organized and many people do not possess a car, limiting access to healthcare.
Roma
The other selected setting is a Roma camp in Heraklion municipality; with 600 inhabitants, it is the largest concentration of Roma in Crete. Almost half of these are children. The primary communication language is Greek, with 65% of the Roma population using only Greek, while 35% communicate in both Romani and Greek. The majority is estimated to live below the poverty line. Unemployment rates are high, and only two inhabitants work as official employees. The majority of those working do so in the peddler trade, often illegally. Twenty of the 140 houses are brick-built; the rest are improvised constructions. There is a water supply, but no sewage or garbage system in the camp, and electricity is only available when the costly generator is in use. Cooking occurs on gas and heating by wood stoves.
Traditionally, women are housewives. Among Roma aged 13-19, the marriage or cohabitation rate is 70%.
Of the 283 Roma minors, 145 attended school. Most older Roma are illiterate. On average, Roma start smoking at age 13. The prevalence among adults is 83%.
The Roma population is served by the Support Center for Roma and Minority Groups. Other primary care or hospital facilities are not within walking distance. Care by medical personnel is not continuous, and various voluntary organizations help out to provide services.
Kyrgyzstan
Smoking prevalence among Kyrgyz men is around 40% (very low for women), and almost the entire rural population uses solid fuels for cooking and heating, especially in the highlands. Kyrgyzstan has the highest respiratory mortality in the European Respiratory Society 'White Book'. We selected its lowest-lying region, Chui (∼750 m above sea level), as the lowland setting. A neighbouring region, one of the country's highest, was selected as the highland setting (∼2050 m above sea level): Naryn. Since we conducted this study, data from another of our studies have demonstrated that COPD prevalence in rural Chui and Naryn was 10.4% and 36.7%, respectively. Nationally, TB prevalence is 196:100,000.
In Kyrgyzstan, 6.5% of GDP is spent on health. Physician density is 1.97:1000, but lower in both Naryn and Chui. Healthcare is organized from Village Health Committees in communities, to family group practices and family medicine centres, and a limited number of general practice centres in primary care. In small villages and remote areas, primary health services are provided by trained nurses in feldsher-midwifery posts (FAPs). Furthermore, there are referral hospitals. Health costs are formally covered, but salaries for healthcare staff remain low, and informal co-payments contribute substantially to healthcare spending. Private expenditures account for around 50% of total health expenditure, followed by state funding (around 30%). Since the significant decline in health workers in the 1990s, there has been a shortage of personnel, particularly in remote areas. Low salaries impact motivation and quality of care, and many health workers migrate to other countries.
Definition of 'rural'
We employed the following definition: 'An area with a population density of <250 inhabitants/km2 or a total population of <2500, in the absence of particular reasons to classify the area differently, such as a highly sophisticated infrastructure.' In Vietnam, the population density was much higher overall, and villages of <10,000 inhabitants were considered rural. To our knowledge, no established definition of 'rural' exists, possibly because it is context-specific.
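The working definition of 'rural' used in the study can be expressed as a small classification rule. The function below is a minimal sketch: the function name, the infrastructure-override flag, and the country parameter are illustrative assumptions, not part of the study protocol.

```python
def is_rural(density_per_km2, population, country="default",
             sophisticated_infrastructure=False):
    """Classify a study area as rural under the paper's working definition.

    Default rule: density < 250 inhabitants/km2 OR total population < 2500,
    unless there is a particular reason to classify the area differently
    (e.g. a highly sophisticated infrastructure). For densely populated
    Vietnam, villages with fewer than 10,000 inhabitants counted as rural.
    """
    if sophisticated_infrastructure:
        # A particular reason to classify the area differently.
        return False
    if country == "Vietnam":
        return population < 10_000
    return density_per_km2 < 250 or population < 2500
```

For example, under this rule a village of 2,000 people is rural regardless of density, while a dense area of 5,000 people is not, and a Vietnamese village of 8,000 people is rural.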
Therefore, we considered definitions used in scientific papers and policy reports of several institutes internationally (United Kingdom, Canada, United States, and the Netherlands) and combined those to make a suitable distinction between rural and urban in our countries. [1][2][3][4][5]
• Information on the data collection for the RAP
Planning and structure
We broadly followed the guidelines for RAP described by Beebe. 1 The research team preferably stayed near the study settings to avoid long travel time. Before the fieldwork started, all research tasks were divided and the work schedule (See Work Schedule) was discussed. We split into several smaller groups, each involved in different field activities with different informants throughout the day.
At the end of each day, we held a systematic preliminary evaluation with the entire team to bring all findings together. An intensive short meeting at the end of the afternoon, after data collection and before dinner, appeared most effective and allowed for timely adaptations.
Our RAP lasted around five days per setting; previous experience with the technique prescribed a minimum of four days. More than five consecutive days with more than five hours of interviewing per day was found to be ineffective.
A data matrix (see data analysis) helped structure the input of the debriefing sessions and helped decide in which areas data saturation had occurred and which data were still lacking. Unexpected emerging themes or informants were also identified. All results were triangulated, and discrepancies were discussed. The research materials (e.g. topic lists) and Work Schedule were then adjusted accordingly. Alongside a logbook was kept listing all decisions made during the team meetings, as well as all steps agreed upon in the research process, serving as a memory guide and helpful during data analysis.
Description of each field method
Throughout the field activities, we considered appropriate locations; during interviews and focus groups we wanted informants to feel they could speak freely, so generally these were held in a private place: at home for community members, or in a private consultation room with HPs. We ensured that preferably no other person was present in the room, but in some cases, this was not feasible considering cultural politeness. We noted this down in our field notes so we would be aware of it during the analyses.
After some small talk (participants were unknown to all but the community researchers), we introduced the study aim, explaining that we would like to learn from the participants' expertise on the topic of breathing and breathing problems in the community. Where possible, we introduced ourselves as 'researchers' and avoided mentioning positions often perceived as high in hierarchy, such as professor or medical doctor. The non-local origin of the Dutch researchers was mostly obvious; therefore, we conducted some activities without their presence and compared answers.
After consent was given, interviews and focus groups were audio-recorded, and anonymity in the recordings was ensured. They were held in the local language, unless the informant was fluent in English.
• Interviews
Semi-structured interviews with the healthcare professionals enabled in-depth exploration of the topics; interviewees often speak more freely in smaller settings, for example because they experience fewer limitations due to hierarchy. Interviews also provide an opportunity to acquire insights from key stakeholders who have an in-depth view or an overview of the situation due to their position (e.g. a church leader or community leader). We did not carry out repeat interviews.
• Focus groups
Through focus groups, the perspectives of multiple informants were explored, and discussion among the informants provided valuable information. Informants were generally of the same level in the hierarchy to enable them to speak freely (e.g. only community members, excluding community leaders). We held focus groups with only men, only women, and mixed groups, as well as younger, middle-aged, older, and mixed groups. This also helped to explore potential gender- or age-related differences in perceptions. Key questions provided a lead for the discussion, and these were tailored to the local reality and the flow of the dialogues. When the discussion showed that more in-depth exploration of a certain topic with an informant would be desirable, a subsequent in-depth interview was scheduled.
• Observations
The observations were direct, non-participatory, and structured. Observations revealed insights that could be hard to detect otherwise because of self-serving bias. (Potential differences between observed behaviour and verbally stated behaviour of healthcare professionals during consultations could be detected.)
• Document analysis
Relevant available documents were collected and used to triangulate other data sources. In this way, e.g. a guideline regarding chronic respiratory symptoms could be compared to the stated behaviour of a healthcare professional, which could in turn be compared to observed behaviour during a consultation. Selection of documents depended on availability and included materials like teaching curricula, policy documents, local guidelines, and relevant advertisements. Translators translated relevant paragraphs of documents verbatim. A paragraph was considered relevant when it mentioned anything related to the definition, cause, prevention, diagnostics, treatment, follow-up, or prognosis of lung disease (when feasible; e.g. no books were translated).
• Questionnaires
The questionnaires developed for the survey were also pilot-tested during the rapid assessment. The input did not only serve to improve the design of the questionnaire itself, but the answers served as input for the qualitative analysis.
• Information on sampling for the survey
CM sampling followed a three-stage design, based on the Expanded Program on Immunization (EPI) method. 2,3 Please note, this design was originally intended for a vaccination programme and is based on the assumption that one in every seven households has a child of 1-2 years old. However, we did not encounter a more suitable strategy in the literature and decided to adopt this methodology anyway.
During the first stage of sampling, 30 standard geographic units were randomly selected in each setting, proportionally to their population size. We excluded villages where previous FRESH AIR activities had been performed, to prevent bias. In Uganda, the district governors stated that information on villages in the municipality and their inhabitants was strictly confidential, so we enrolled in the paid randomization service performed by the Ugandan Bureau of Statistics. Second, using a simple random approach, seven households were chosen from each geographic unit. We used a random number generator (by atmospheric noise) 4 to select ten (Greece) or seven (Kyrgyzstan, Vietnam) numbers. In case of a double number (this occurred once in the Kyrgyz lowland setting and twice in the Greek setting), the process was repeated. Third, if more than one person in a household was found eligible for inclusion, lots were drawn to determine who was invited to participate. If none of the residents of the household were present, or if they did not want to participate, the neighbouring house was approached. In this way, we enhanced an equal distribution between informants from more remote areas and more densely populated areas.
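The three sampling stages described above can be sketched in code. This is only an illustrative reconstruction under stated assumptions: the unit names and household counts are hypothetical, stage 1 is modelled as probability-proportional-to-size selection with replacement (as in classic EPI cluster sampling), and the drawing of lots in stage 3 is deferred to the field visit.

```python
import random

def three_stage_sample(units, rng, clusters=30, households_per_cluster=7):
    """Sketch of the three-stage EPI-based sampling design.

    Stage 1: select `clusters` geographic units with probability
             proportional to population size.
    Stage 2: within each selected unit, draw `households_per_cluster`
             household numbers with a random number generator,
             redrawing any duplicate numbers.
    Stage 3: one eligible person per household is later chosen by
             drawing lots during the field visit (not modelled here).
    `units` maps unit name -> (population, n_households).
    """
    names = list(units)
    weights = [units[u][0] for u in names]
    selected = rng.choices(names, weights=weights, k=clusters)  # stage 1

    plan = []
    for unit in selected:
        n_households = units[unit][1]
        chosen = set()
        while len(chosen) < households_per_cluster:             # stage 2
            chosen.add(rng.randrange(1, n_households + 1))      # redraw duplicates
        plan.append((unit, sorted(chosen)))
    return plan

rng = random.Random(42)
units = {"village_A": (1200, 260), "village_B": (4800, 900), "village_C": (600, 130)}
plan = three_stage_sample(units, rng, clusters=5)
```

Weighting stage 1 by population size gives larger villages proportionally more chances of selection, which is what makes the final sample approximately self-weighting.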
After using the three-stage sampling approach to invite CMs to participate, we invited all eligible HPs from the nearest health facilities in Uganda and Kyrgyzstan. In Vietnam, HPs were recruited pragmatically during a district health meeting. In Greece, with many HPs, we were able to randomize general practitioners (GPs) at the individual level.
Appendix 4. Theoretical framework
A theoretical framework should guide the specific content to be explored to fulfil the study's aim, as well as the methodological orientation. 1,2 The theoretical framework we used in this study was composed from the Health Belief Model (HBM), 3,4 the Theory of Explanatory Models of Illness (EM) 5 and the Theory of Planned Behaviour (TPB). 6 The Health Belief Model by Hochbaum intends to explain and predict health behaviour by focusing on the beliefs of individuals. 3 The model consists of several key concepts: the individual's sociodemographic characteristics, the individual's perceived susceptibility to disease, the perceived illness severity, and the perceived benefits of and barriers to performing certain behaviour. Rosenstock 4 added the aspect of self-efficacy to the model: the perceived capability of performing the behaviour. The HBM implies that these factors, combined with certain internal and external cues to action (e.g. 'pain' or 'the illness of a friend'), lead to certain health behaviour. Kleinman's Theory of Explanatory Models of Illness (EM) provides a useful addition to this research framework, as it addresses individuals' emotions. 5 It focusses on the beliefs one holds about one's symptoms (illness), the personal and social meaning one attaches to these symptoms, one's expectations about what will happen to him/her and what the care providers will do, and one's own therapeutic goals. This theory therefore helps to elucidate how perspectives can differ across cultures and backgrounds, e.g. between patients and doctors.
Reflection on the framework
As we consolidated three validated frameworks, the overall resulting framework had not been validated before we used it in our study. We experienced the framework in the six different settings in four different countries as effective and comprehensive. However, a more pragmatic framework may be more user-friendly.
Appendix 5 Research Materials
Please note, the full versions of the research materials are provided in a separate file: Supplementary. The versions provided are in dual language (English-Russian). However, each country had its own translated version: English (Uganda), Vietnamese, Russian and Kyrgyz (Kyrgyzstan), and Greek. Topic lists for key informants varied as they were tailored to the specific informant.
Topic lists and observation forms were not pilot tested in the field, but iteratively improved after application. This way, we ensured all data from the field were used in the analyses. Please note, topic lists for key informants were tailored to the specific informant, and therefore they were all different.
The vignette in the topic lists was tailored to each setting (e.g. a neutral and common name for the person in the vignette was chosen). Also, we alternated age and sex to see if this would influence the perceptions of the participants.
For the vignette in the survey, we could not alternate age or sex. Therefore, we described a person in the population group in which COPD was most prevalent in rural Uganda: a woman aged 35 (due to its relation with biomass fuel use). The age in the vignette in the other countries was adjusted according to the country's life expectancy.
The Framework Method guided the analysis of our qualitative data. In this method, data are structured by a matrix consisting of rows (cases), columns (codes) and 'cells' (summarised data). This structure enables data to be systematically reduced by case and by code. It allows for data comparison both across cases and within individual cases. The broad and systematic structure is particularly suitable in this FRESH AIR research, where multiple data sources were used (interviews, observations, etc.). The format also suits large datasets with a holistic approach, because both the overall picture and its details are shown. We conducted the stages of the Framework Method as detailed below.
Transcription:
Field researchers, hence familiar with the theoretical perspectives of the study, transcribed verbatim. Where English translations were used, only the original (local) language was transcribed and then translated into English.
Familiarisation with the interview:
Audio-recordings in English or containing English translations were listened to, and all translated transcripts and contextual/reflective notes were read. Analytical notes or thoughts were noted in the margins.
Coding:
Coding was mainly deductive, based on our pre-composed Beliefs and Behaviour theoretical framework, although we also coded inductively (open to generating new codes) to explore the unexpected. Two researchers independently coded the first few transcripts; after discussion, one researcher coded the remaining transcripts and another researcher checked them.
We used Atlas.ti version 7.5.15.
Developing a working analytical framework:
after the first few transcripts were coded, the labels were compared, codes were grouped into clearly defined categories (by a tree diagram).
5. Applying the analytical framework: subsequent transcripts were indexed using the categories and codes.
6. Charting data into the framework matrix: two researchers charted the data, and consistency was ensured by comparing the summary styles. References to interesting or illustrative quotations were added.
7. Interpreting the data: emerging themes were discussed with other members of the research team. We used the interactive data platform SharePoint (2016, Microsoft Office) for securely sharing audio files and text documents. Gradually, ideas about characteristics of and differences between the data were developed. Relations, connections, and causality were further explored and interpreted, and conclusions were drawn.
8. Member checks: member checks with 2-3 participants per informant group (e.g. community members, healthcare professionals) were planned throughout the rapid assessment process to verify preliminary results of the study. Due to the rurality of the settings, we did not do this after all.
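The case-by-code matrix at the heart of the Framework Method (rows as cases, columns as codes, cells holding summarised data) can be sketched as a small data structure. This is only an illustrative sketch; the class name, informant IDs, and codes are hypothetical, not taken from the study's codebook.

```python
from collections import defaultdict

class FrameworkMatrix:
    """Minimal sketch of a Framework Method matrix: rows are cases
    (informants), columns are codes, cells hold summarised data."""

    def __init__(self):
        self.cells = defaultdict(list)  # (case, code) -> list of summaries
        self.cases = []
        self.codes = []

    def chart(self, case, code, summary, quote=None):
        """Chart a summarised finding (optionally with an illustrative quote)."""
        if case not in self.cases:
            self.cases.append(case)
        if code not in self.codes:
            self.codes.append(code)
        entry = summary if quote is None else f"{summary} [quote: {quote}]"
        self.cells[(case, code)].append(entry)

    def by_code(self, code):
        """Compare data across cases for one code (one matrix column)."""
        return {c: self.cells[(c, code)] for c in self.cases if (c, code) in self.cells}

m = FrameworkMatrix()
m.chart("HP-01", "perceived_cause", "attributes cough to dust from farming")
m.chart("CM-03", "perceived_cause", "believes illness is hereditary",
        quote="my father had the same breathing")
m.chart("CM-03", "care_seeking", "first visits traditional healer")
```

Reading one column (`by_code`) supports comparison across cases, while filtering one row's cells supports analysis within an individual case, mirroring the two directions of reduction the method describes.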
We kept an analysis logbook which detailed coding definitions, decisions made, and how the researchers perceived and examined the data. Logbooks served to improve transparency and reproducibility. The verbatim transcription process in the in-depth analysis was time-consuming; it took about 4-6 months to complete.
Moderators (factors which have influenced the degree of fidelity)
Methodology complexity: Simplicity and understandability of the tool were enhanced by co-designing it with end-users and local stakeholders.
Facilitation strategy: An intensive one-day training was delivered before application of the methodology, and each researcher had a research manual with key instructions. Researchers with ample experience led the team, enabling adequate application of the methodology. Feedback on field-activity techniques was regularly given during the daily RAP debriefings and throughout data collection for the survey. Two team members missed the initial training and received a half-day catch-up training instead.
For logistical reasons, one member from the lowlands was replaced. The new member received a half-day catch-up training instead.
The local members intended to add two research interns to the team halfway through the RAP. As continuity in the RAP is vital for its iterative nature, the interns received only observatory or logistical tasks.
For logistical reasons, the researcher from the Roma community did not participate in the training and received a rapid prebriefing instead.
Co-creation of the study design and materials with local stakeholders helped make the methodology context-sensitive.
Quality of the delivery: The quality of the delivery was enhanced by numerous measures, e.g. a careful translation process for research materials (translation, back-translation, comparison and adjustments), piloting questionnaires to improve their understandability, continuous reflection on the methods (for the RAP at least daily, during the debriefings), continuous feedback from stakeholders, a structured analysis process, and use of the secure online data-sharing platforms SharePoint (qualitative data) and REDCap (survey). A strong sense of teamwork made field researchers highly committed and dedicated to strive for excellence. All steps were completed within budget.
were strong and dedication to learn was very high. A strong learning curve was visible.
The centre enabled us to gain trust from the Roma and to access the camp. However, as we relied on their collaboration, we could not access the camp long enough to collect all the data.
Participant responsiveness
Team members: highly motivated in each setting; very dedicated to learn from the training and to perform the intensive RAP. Stakeholder engagement group members: stakeholders informally reported, and demonstrated, a sense of ownership, and provided continuous feedback in our co-creation process. Study participants: emotional and cognitive responses to the study activities and materials were positive; also see Recruitment.
Recruitment
Tailored recruitment methods enabled high recruitment rates and very few refusals. (One group of four elderly women in the Kyrgyz lowlands declined the invitation to participate, giving as their reason that they did not want to wait for us, younger researchers, to complete preparations for the focus group. Also, we stopped one interview with a Vietnamese woman who had verbally given consent but appeared to feel very uncomfortable with us carrying a voice recorder.) Due to the high participation rate, we refreshed the informed-consent procedure to ensure the voluntary aspect of participation was sufficiently emphasized; participation rates remained equally high afterwards. This could be explained by the non-invasive nature of the research, the friendly, rural cultures, the involvement of community researchers which facilitated building trust, the rapport built by the researchers, and possibly the small compensations for participation (a bar of soap in Uganda, a small reimbursement in Vietnam, and Dutch biscuits and travel reimbursement in Greece and Kyrgyzstan).

➔ Insights from local team members were important for the identification of knowledgeable and influential representatives for the stakeholder engagement groups.
2 ➔ The Rapid Assessment Process proved a time-efficient and effective method.
➔ The use of multiple methods resulted in richer data with higher validity. As an illustration: a rural nurse shared during an interview that she used a spirometer [a relatively sophisticated device] during respiratory consultations. However, observations of the consultations revealed that the device she meant to describe was a [basic] peak flow meter instead.
➔ Finding an evidence-based randomisation method for low-resource, rural areas was challenging (Appendix 3).
3 ➔ Teaming up researchers with local stakeholders for the development of the research materials increased the materials' relevance and validity: the researchers promoted the use of theoretical frameworks and validated questionnaires, whereas the local stakeholders promoted a fit with the context.
➔ Piloting the questionnaires resulted in substantial improvements in the content validity and understandability.
4 ➔ The RAP debriefing sessions were highly valued; discussing the daily findings from multiple perspectives substantially deepened our understanding.
➔ Team members in low-resource settings tend to have multiple concomitant commitments, which is likely to result in high member turnover. However, to benefit from trainings and to allow for rapid, iterative data collection, a stable team is required.
Emphasising the importance of a stable team to our local colleagues paid off. It resulted in a well-trained team and a strong team spirit.
5 ➔ Pragmatic analyses, mostly based upon our RAP debriefings, were essential for informing the implementation strategies of the FRESH AIR interventions in a timely manner.
The time-consuming transcription and translation of qualitative data served the more in-depth analyses, which informed stakeholders and a scientific audience at a later stage.
6 ➔ Although we were aware of the importance of continuous communication with our stakeholders, we cannot emphasise enough how important frequent contact is for stakeholder engagement. In particular, contact with our foreseen end-users facilitated use of the study's findings. In hindsight, we wish we had done so even more.
A New Genus, Ectemnoides, for Seven Species of Australian Gondwanan Simuliidae (Diptera) With Description of a Novel Form of Larval Attachment
Abstract A segregate of Australian simuliids of Gondwanan provenance with unusual attributes is assigned to a new genus—Ectemnoides. Only one species was originally known, that from females of the eastern Australian (Victoria) Paracnephia umbratorum (Tonnoir) and its presumed larvae, which we fully confirm here. Three previously unknown related species, Ectemnoides acanthocranius n. sp., Ect. faecofilus n. sp. and Ect. uvulatus n. sp. from Western Australia, are described. These four species share a number of unusual synapomorphies. Of note is the habit of larvae of Ect. umbratorum and Ect. faecofilus of attaching to the ends of constructed threads, apparently composed of larval fecal pellets and other material enclosed within a salivary silk matrix. The four species all have short spine-like setae on the larval head capsule and their body physiognomy is unusual, with the head large in relation to a tubular body, the latter semitransparent when alive. Three other Western Australian simuliids, Ect. absitus n. sp., Ect. princeae n. sp. and Ect. sp. A, possess more typical attributes. Known larvae of the first two of these simuliids lack the marked head setae; however, in common with Ect. acanthocranius of the previous segregate, larvae of the three taxa lack the anal sclerite and have markedly low numbers of hooks comprising the posterior circlet. Details are given for distribution and, where known, bionomics. Trichomycetes are recorded for the second time from simuliids in Western Australia, from Ect. sp. A. Brief character analysis is provided, as are comments regarding historical biogeography.
Only one species dealt with in this paper had originally been formally recognized. Tonnoir (1925) described Simulium umbratorum based on female adults from Mt. Dandenong, Victoria (immediately east of Melbourne). The species was distinctive from all then known Australian Simuliidae, possessing an almost straight vein CuA and with the pretarsal claw tooth directed distinctly laterally. As for many Australian simuliids at the time, immature stages were unknown. Edwards (1931) discussed similarities between the Australian S. aurantiacum Tonnoir and similar South American forms, and placed them all in his subgenus Cnephia. Smart (1945) included certain Australian simuliids in Cnephia (as a genus) and Mackerras and Mackerras (1949) accepted that, assigning S. umbratorum to Cnephia, placing the species in their terebrans-group. They noted, however, problems with fitting Australian simuliids into Cnephia as defined by Smart (1945). Later, they (Mackerras and Mackerras 1950) transferred umbratorum to the aurantiacum group and in 1952 they described larvae of unusual gestalt, putatively of umbratorum.
As the history suggests, taxonomic placement of umbratorum has been moot. Crosskey (1987, 1989) considered the species to be of undetermined genus along with the other Australian 'Cnephia' and, at that time, to be Prosimuliini. Confirmation of the unusual larvae proposed for umbratorum was still not forthcoming. However, Zwick (1997) obtained and described larval material similar to that described by Mackerras and Mackerras (1950), and collected a few pupae, one of which contained a pharate male adult and allowed a preparation of the genitalia. Portions of that material were examined for this work. Although the male of umbratorum had not previously been described, wing and leg characteristics from the pharate male, plus a poorly developed female pupa, confirmed association of those immatures with known females of umbratorum. Zwick (1997) also discussed taxonomic placement, noting that umbratorum fitted neither Cnephia (as redefined by Crosskey 1969) nor Gigantodax Enderlein. Crosskey and Howard (1997), on the basis of similarity of the larval labral fans of the Australian 'Cnephia' strenua Mackerras and Mackerras to those of the South African Paracnephia thornei (De Meillon), transferred umbratorum along with the other Australian 'Cnephia' to the latter genus (at the time still in Prosimuliini) and unnecessarily amended the species name to umbratora to agree with the gender of the genus name. Later, Paracnephia was transferred to Simuliini (Adler and Crosskey 2008).
The objectives of this paper are to 1) reassign umbratorum and related undescribed species within Australian Gondwanan Simuliidae to a new genus, 2) confirm and expand on the behavior of those larvae that attach to a thread, and 3) offer comments about historical biogeography and the relationship of Ectemnoides to other austral simuliid genera.
Materials and Methods
Apart from older pinned specimens of Ectemnoides umbratorum (Tonnoir) in the Australian National Insect Collection (ANIC), CSIRO, Canberra, we examined material of this species collected by P. and H. Zwick. Morphological terms follow Craig et al. (2012) and are based on Adler et al. (2004), with some exceptions, such as use of claw 'tooth', not 'thumb'. For designation of wing veins (e.g., Fig. 9) we follow Cumming and Wood (2017) and de Moor (2017). We report on the a:b ratio, where a = base of Sc to rm and b = rm to wing tip.
The term applied to the anterolateral arms of the male ventral plate is 'basal arm'. These arms attach to their respective parameres (e.g., Figs. 55 and 102) via what we term a 'paramere connector'. Among Australian simuliids, such marked development of that structure is seen elsewhere only in Bunyipellum gladiator (Moulton and Adler) (Craig et al. 2018a). Similar expression, however, occurs in other simuliids such as Prosimulium, Greniera, and Tlalocomyia (e.g., Adler et al. 2004).
The pupal gills of Ect. acanthocranius, Ect. faecofilus, Ect. umbratorum, and Ect. uvulatus have a unique elongated delicate tubular structure arising from the basal fenestral region. We refer to this as the 'fenestral diverticulum' (e.g., Figs. 24 and 25). The probable homolog is discussed later.
For hypostomal teeth, we use the numbering system employed in Craig et al. (2018a) where the median tooth is deemed '0' and those lateral on either side are numbered in sequence '1, 2, 3, etc.'. Thence the 'lateral or corner tooth' is '4' and the so-called 'paralateral teeth' are designated '5-8' (Fig. 38).
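The numbering convention above can be made explicit as a small lookup. This is only an illustrative sketch of the scheme as stated in the text; the function and its label strings are our own and are not part of Craig et al. (2018a):

```python
def hypostomal_tooth_label(n: int) -> str:
    """Label a hypostomal tooth by its position, counted outward from
    the midline, following the scheme described in the text:
    0 = median tooth, 1-3 = numbered in sequence, 4 = lateral (corner)
    tooth, 5-8 = paralateral teeth. (Illustrative names only.)"""
    if n == 0:
        return "median tooth"
    if 1 <= n <= 3:
        return f"tooth {n}"               # lateral series, numbered in sequence
    if n == 4:
        return "lateral (corner) tooth"
    if 5 <= n <= 8:
        return f"paralateral tooth {n}"
    raise ValueError(f"position {n} is outside the 0-8 scheme")
```

The same label applies mirror-symmetrically on either side of the median tooth.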
Distributions (Fig. 229) are based on literature localities (e.g., Horne and Pettigrove 1989), databases (e.g., Environmental Protection Agency, Victoria, Australia), label data on collection material (e.g., ANIC) and localities given by various authors herein. Where possible, coordinates are given for localities in degree decimal format (e.g., S32.8708° E116.4524°), with significant decimals indicating accuracy; at best 30 m. If localities for species are widespread, figure captions mention the locality for the material illustrated.
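The ground distance implied by a given number of decimal places can be estimated with the standard approximation that one degree of latitude spans about 111.32 km, with east-west distance scaled by the cosine of latitude. This is a generic sketch, not a method from this paper; at the four decimal places used above, the last digit corresponds to roughly 11 m north-south:

```python
import math

def decimal_precision_m(n_decimals: int, latitude_deg: float):
    """Meters spanned by one unit in the last decimal place of a
    coordinate written with n_decimals places, at the given latitude.
    Returns (north_south_m, east_west_m)."""
    step_deg = 10.0 ** -n_decimals       # size of the last decimal place, in degrees
    m_per_deg_lat = 111_320.0            # approx. mean meters per degree of latitude
    ns = step_deg * m_per_deg_lat
    ew = ns * math.cos(math.radians(latitude_deg))
    return ns, ew

# Example: S32.8708° E116.4524° is written with four decimal places.
ns, ew = decimal_precision_m(4, -32.8708)   # ns ≈ 11.1 m, ew ≈ 9.3 m
```

GPS receiver error is added on top of this rounding precision, which is consistent with the stated overall accuracy of, at best, about 30 m.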
Type Material
Holotype

Tonnoir (1925) stated that the holotype was in the Cawthron Institute (Nelson, New Zealand) and that three paratypes were in his possession. Mackerras and Mackerras (1949: 385) recorded that the holotype was in Canberra, as did Bugledich (1999), with that and other types in ANIC. The holotype was not available for our examination, so the exact label data are unknown, but they are no doubt as for the paratype. Tonnoir (1925) gave the type locality as Fern Tree Gully, Mt. Dandenong (Melbourne), Victoria, 25.x.1921 (ca. S37.8800° E145.3200°, elev. 255 m). This site is in the southwest of the Dandenong Ranges National Park (east of Melbourne and adjacent to the Upper Ferntree Gully suburb) and probably was Fern Tree Gully Creek. As with many of the earlier localities for Australian simuliids, the creek, albeit in a National Park, is now seriously impacted by human activities and there are no simuliids in the foul trickle of water exiting the Park. Tonnoir would weep!!!
Etymology
Not given by Tonnoir (1925), but the name refers to 'from the shade', possibly in reference to the color of the head and thorax, described as 'testaceous' (brick colored) in contrast to the slightly yellower abdomen and legs, or, equally likely, to where the specimens were originally collected. The emendation by Crosskey and Howard (1997) of umbratorum to umbratora was unnecessary.

Bionomics

Tonnoir (1925) reported that the type material had been collected while sweeping plants. There are no published reports that the females feed on vertebrates, but two specimens from Narbethong (ANIC) are labeled 'biting'. The females collected by J.K.M. were netted while flying around his head. Zwick (1979) gave an account of the life cycle, with which we fully concur. In short, larvae develop during the austral winter (July, August), with pupation and eclosion of adults in early spring (September and October). Larvae inhabit streams with sandy substrate, or with stones mixed in with sandy sections, but with steady gentle flow that allows growth of macrophytes (e.g., Fig. 43), as noted by Mackerras and Mackerras (1952) and Horne and Pettigrove (1989). Pupae (with widely spread and long thin filaments, Fig. 22) recovered from the Acheron River were on submerged macrophytes covered by green algae. Zwick also suggested that larvae might feed by browsing the 'Aufwuchs' (biofilm), a suggestion similar to that of Horne and Pettigrove (1989). The latter account is puzzling given that they appear to have actually reared larvae through pupae to adults. Additionally, they commented that the head and mouthparts of larvae of Ect. umbratorum were similar to those of non-filter-feeding larvae of Twinnia Stone and Jamnback, which lack labral fans (Craig 1974). This is curious, as larvae of Ect. umbratorum (Fig. 31) clearly have quite large delicate labral fans.
It was J.K.M. who drew attention to the unusual behavior of larvae attaching themselves to the substrate via a long thin thread (e.g., Fig. 44), and Moulton and Adler (1997) commented on this (as 'Cnephia' terebrans; larvae of that species are still unknown) in relation to the independently evolved stalks employed by larvae of the North American Ectemnia. Adler et al. (2004) similarly commented, again noting that larvae abandon the thread prior to pupation. Examination (J.K.M., D.A.C., D.C.C.) of the thread using scanning electron microscopy shows it to consist of apparent extraneous material and fecal pellets covered by a silk matrix. The larvae attach to the extreme apex of the thread, which can be upwards of 10 cm long, typically attached to the apex of an elongated leaf of Water Ribbons (Cycnogeton procerum (R. Br.) Buchenau, previously known as Triglochin procera; see von Mering and Kadereit 2010), and undulates in the current (Figs. 43 and 44) (see Supp Material for a brief video). The thickness of the thread increases with length and shows distinct changes in diameter; these might indicate molts to the next larger instar. We will report in detail on the attachment threads elsewhere.
Of note is that, when alive, the larvae are semitransparent, with the alimentary canal contents visible (Fig. 44). The body is held straight with the labral fans widespread; larvae do not twist the body as is typical for simuliid larvae attached to hard substrates (e.g., Chance and Craig 1986). When preserved, however, larvae assume a markedly more curved posture (e.g., Fig. 30) than is typical of dead simuliid larvae. The behavior of attaching to a thread is no doubt integral to the unusual physiognomy of the larvae.
The three main localities in the Grampians, namely Wannon River, Glenelg River, and Fyans Creek, all had Water Ribbon leaves trailing in the flow. While the Wannon site had a more rocky substrate (Fig. 43), there were stretches of sand. None of the sites had markedly deep water (ankle to knee deep); temperatures ranged from 9.8 to 15.0°C and velocities from 44 to 80 cm/s. Of interest is that attempts in 2014 to recollect material from previously known sites failed. Similarly, although the timing was appropriate, nothing was obtained from near Narbethong, neither from the Acheron River nor farther south at The Otways.
The rarity of pupal material is of note. Moulton and Adler (1997) and Adler et al. (2004) comment that larvae abandon their thread prior to pupation. Apart from the pupal material obtained by the Zwicks, the only other specimen obtained was a partial exuviae from a kick sample of the sandy substrate of the Glenelg River (Syphon Road) site, where there were numerous larvae. In agreement with those statements, we are of the opinion that, since pupae are not found on the attachment thread, pharate pupae detach and drift, either into vegetation (e.g., as for the Zwick pupae previously) or down to the substrate. Crosskey (1990) briefly mentions such pupation behavior in other simuliids. This is one aspect of the bionomics of Ect. umbratorum that needs closer examination.
Remarks
The original adults of Ect. umbratorum netted by Tonnoir (1925) were all female, as were those of Mackerras and Mackerras (1950), who, however, in 1952 illustrated possible larvae of Ect. umbratorum (their Figs. 1-5). The rearing of larvae through to adults reported by Horne and Pettigrove (1989) is inconclusive, since no details are given. Firm confirmation of the association did not come until Zwick (1997) and this study.
Although only minimal pupal material was available, there is no question that the pupae are those of Ect. umbratorum. This is confirmed by pharate pupal gill structure from last instar larvae that fully agrees with the pupal stage, including the previously unnoticed unusual basal fenestral diverticulum (Figs. 24 and 25), as far as known unique in Simuliidae and synapomorphic for Ect. umbratorum, Ect. faecofilus, Ect. uvulatus, and Ect. acanthocranius. Also unique for Australian Simuliidae are the double setae on the pupal frontal plate (Figs. 26, 27, and 60) of Ect. umbratorum and Ect. faecofilus, synapomorphic for those two species. These setae are absent in other Ectemnoides species. Further, the pharate male recovered by Zwick (1997), while still the only male known, has wing characteristics that agree with the known female wing (Fig. 9), namely only moderately expressed spinous hairs on the costa, plus vein CuA barely sinuous. Other character states gleaned from the pharate female pupae, such as head proportions, calcipala, pedisulcus, claw tooth, genital fork, and spermatheca with pigmentation partially down the seminal duct and lack of a clear area at the junction, confirm the association with known Ect. umbratorum female adults and thence, via the pupal gill, fully confirm the association with the larvae described by Mackerras and Mackerras (1952).
The anal circlet of merely some 390 hooks is of interest and probably relates to the larval habit of attaching to a thread in moderate-velocity water. Palmer and Craig (2000) give the lowest hook number for simuliid larvae known at the time as ca. 700. Moulton et al. (2004) reported some 400 hooks for last instar larvae of B. gladiator, similar to that of Ect. umbratorum larvae. As shown later, larvae of other Ectemnoides species have even fewer hooks. When not dissected, as shown in Fig. 42, the circlet is more angulate, not circular as is more typical for simuliid larvae; no doubt this accommodates attachment to a tubular thread. Campaniform sensilla associated with the anal sclerite and circlet of hooks have a common arrangement in Ect. umbratorum and other larvae of the genus. The ventral arms of the anal sclerite have two sensilla dorsolaterally, with two more between those and the circlet of hooks. There is then one further sensillum around the circlet and another laterally, halfway around. Where the anal sclerite is absent in other members of the genus, the sensilla, while retained, are more evenly arrayed (e.g., Fig. 226).
The mouthparts of the female adult are well expressed, with the clypeus, which houses the muscles that work the blood-pumping cibarium, large and domed (Fig. 3). The mandible possesses teeth on each edge; however, these are markedly small (Fig. 6). The cibarium is well sclerotized. These are character states that, overall, indicate blood feeding. The sensory vesicle on the maxillary palp is, however, small; this structure is generally assumed to be a CO2 receptor (McIver 1987) involved in host detection. The expression of the abdominal tergites of the female is equivocal regarding blood feeding (Craig et al. 2012); i.e., while they are medium sized, indicating a tendency to non-blood feeding, they are not strongly sclerotized.
Pupa (based on two immature female pupae, one fully developed male pupa and one exuviae). Poorly known. Body: length: female 2.2 mm (Fig. 57), cuticle essentially colorless. Head: male frontal plate covered with barely visible minute clear tubercles; facial setae elongated and curled, frontal setae doubled (Fig. 60), ratio of basal width to length 1.0:2.1, basal width to maximum width 1.0:1.5. Thorax: anterior dorsal shield with tubercles as for frontal plate, dorsocentral setae long, curled apically (Fig. 61). Gill (Figs. 58, 59, and 61): total length 2.4-2.6 mm; as long as pupal body, with two thin light-brown trunks arising close to base, various; fenestral diverticulum thin-walled, transparent (Fig. 59); trunks at approximately half-length further divided into 6 and 8 long thin filaments, respectively. Surface pseudoannulated to annulated proximally, finely annulated distally. Abdomen (Fig. 62): chaetotaxy and armature similar to Ect. umbratorum (Fig. 29); tergites and sternites with light tuberculation; tergite IX with two short blunt terminal spines and well-expressed spine comb; sternite IX with straight and bifurcated setae, with slightly curled tips, but not as grapnel hooks. Cocoon (based on a single pupa from Rosa Brook plus one from Northridge Creek): sparse thin silk filaments over posterior abdomen; extraneous material from substrate incorporated.
Type Material
Holotype

Dried (Peldri II) from alcohol. Micro-pinned teneral male adult (Fig. 45). Label (Fig. 46).

Behavior of the larvae is similar to that of Ect. umbratorum: they tend to attach to submerged plant material, such as leaves and twigs, at the end of a thread (e.g., Fig. 44). Other simuliids known to co-occur with Ect. faecofilus are Austrosimulium spp. and Paracnephia tonnoiri (Drummond). Given the dates of collection of Ect. faecofilus, plus that the streams inhabited are definitely ephemeral, the species is likely a late-winter, univoltine species.
Carey Brook (Fig. 78) is a typical stream in southwestern Western Australia, with brown low-velocity water, sandy substrate, and trailing macrophytes, namely C. procerum (Water Ribbon).
Remarks
There are differences in larval head proportions and coloration (Figs. 64 and 65), antennal proportions (Figs. 66 and 67), and hypostoma between the Rosa Brook and Carey Brook material, all indicating that more than a single species is present. The types of Ect. faecofilus from Northcliffe Road correspond to the Rosa Brook form. Since we did not have a full suite of stages from Carey Brook, we refrain from erecting separate entities.
The low number of hooks (ca. 240) in the posterior circlet, plus the ca. 220 of Ectemnoides sp. A (see description), are probably record lows for Simuliidae. All species dealt with in this work inhabit slow- to moderate-velocity water and possess extremely low numbers of hooks in the circlet, synapomorphic for them all.
Of interest is that the apparently delicate fenestral diverticulum at the base of the pupal gill does not necessarily become detached or damaged during the pupal stage; it can still be observed, whole, on pupal exuviae (Fig. 59). This is in contrast to Ect. acanthocranius (Figs. 106 and 120), albeit there the exuviae were obtained via rearing.
In Western Australia, the known distribution of Ect. faecofilus is restricted to, and well within, the Warren (Karri Forest) bioregion. This region comprises mainly coastal sand plains between Cape Naturaliste and Albany, and for most of its extent it is within 10 kilometers of the coast. North of Point D'Entrecasteaux it extends farther inland. To the north and east is the Jarrah Forest bioregion. Trayler et al. (1996), in discussing conservation of the aquatic fauna of the Warren bioregion, note that rivers of the southwestern corner of Australia arise on an ancient flat semi-arid plateau and flow sluggishly toward the coast before reaching steeper topography and increased rainfall, then coastal lowlands and lagoons. For simuliids, they list only Aust. furiosum (Skuse) and P. tonnoiri as occurring in the region.
Etymology
In reference to the numerous short substantial setae on the larval cranium.
Bionomics
This species occurs in a low-lying, small, first-order stream with sand and rock substrate, slow to moderate velocity, and trailing vegetation. Although the body form and ventrally directed posterior abdominal proleg and circlet of hooks are markedly similar to those of Ect. umbratorum and Ect. faecofilus, larvae of Ect. acanthocranius attach directly to trailing plant material in typical simuliid fashion. However, one fecal thread is included in the collections, perhaps from Ect. faecofilus? Pupation does not occur on the plant; presumably it occurs in the substrate, as incompletely known for Ect. umbratorum. Larvae from Quinine Creek reared (J.K.M.) to the adult stage in Petri dishes spun only a few criss-crossed silken strands that served to anchor the pupa.
Remarks
The wing shows considerable differences from those of Ect. umbratorum and Ect. faecofilus, in which vein CuA is essentially straight and the a:b ratio differs (cf. Figs. 8, 9, 85, and 86). The Ect. acanthocranius wing also lacks the spiniform setae on the costa that are present in Ect. umbratorum and Ect. faecofilus. The expression of the fenestral diverticulum on the pupal gill is of some interest. While present, albeit poorly so, in the pharate state (Fig. 120), it appears absent in the fully exposed gill (Fig. 106); an assumption is that, being so delicate, it is easily broken off. Molecular data (J.K.M., unpublished) show that Ect. acanthocranius is the sister species to Ect. princeae + Ect. absitus, in agreement with the distribution of some character states, discussed later.
Etymology
In reference to the 'uvula'-shaped structure (derived from Latin 'uva' for 'grape') on the postgenal cleft apex.
Pupa (based on six mature pupae and exuviae). Body (Fig. 162): length, male 3.6-3.7 mm, female 3.4-3.6 mm; dark brown. Head: frontal plate of female quadratic (Fig. 166), basal width ratio of frons to vertex 1.0:1.3, male more ovoid, ratio 1.0:1.5 (Fig. 167); male apparently smooth, but covered with barely visible minute tubercles; muscle scars positive and distinct; female with pitting and corrugations; frontal setae absent, facial setae present, stiff and curled apically. Thorax: anterior dorsal shield with crinkled cuticle, dorsocentral setae neither markedly developed nor with curled tips (Fig. 165). Gill (Fig. 163): markedly longer than pupa, total length 4.3-5.3 mm; fenestral diverticulum absent, three fine trunks arising directly from base, ca. 10 short fine filaments arising from basal third of main trunks. Surface reticulated on main trunks, finely pseudoannulated on filaments (Fig. 164). Abdomen (Fig. 168): overall yellowish brown, cuticle substantial, covered with minute tubercles and corrugations; armature poorly developed; terminal spines on tergite IX short, substantial, sharply tapered, curved anteriorly, grapnel hooks present; spine combs essentially absent, present only as sporadic spines well lateral on posterior tergites; pleurites essentially absent, small one present on segment VI, that on segment V closely attached to extended tergite; concertinaed pleural cuticle markedly brown. Cocoon (Fig. 162): silk fibers fine; close-fitting shapeless bag covering the abdomen, occasionally reaching anteriorly to mid thorax, anterior edge not well formed; two layers, inner one coarsely woven, outer one finer, with extraneous material from substrate variously incorporated.
Etymology
Named in honor of Jane Prince, who first discovered the species while conducting ecological studies (Prince 1980) in the Jarrah Forest.
Bionomics
Ect. princeae is known only from small first-order streams in the Jarrah Forest SE of Perth. The type locality of Spice Brook was a shallow seepage stream strewn with fallen leaves on which the larvae and pupae were found in great abundance. The other localities, with smaller populations, were more stream-like, with trailing grasses, leaf packs and snags.
There were no other simuliids present at the type locality, but at the other two sites Ect. princeae was associated with Ect. absitus and Nothogreniera occidentalis (Mackerras and Mackerras) (Craig et al. 2018b).
Remarks
The female has substantial mouthparts with mandibles toothed on both edges. This, in conjunction with the poorly expressed abdominal tergites, suggests that Ect. princeae females probably blood feed. There is, however, no information on that. The lack of a tooth on the claw indicates that, if a blood feeder, Ect. princeae is probably not ornithophilic. The male is unusual in that it shows a slight branching of the Rs vein (Fig. 158). Further, the paramere connector is markedly curved and expressed, and parameral spines are absent. The pupal gill, while of similar overall expression to those of Ect. acanthocranius, Ect. faecofilus, Ect. umbratorum, and Ect. uvulatus, has three long trunks and lacks the fenestral diverticulum. As noted elsewhere, this suggests that the diverticulum, when present, is a poorly expressed third trunk. While the larval body is somewhat similar to those of the other four species in being slightly elongated, the posterior proleg is directed posteriorly, not ventrally. As in Ect. absitus, described next, the larval mandibular serration and sensillum are cone-like and distinct on a raised base. The hypostomas of these species are also similar to that of Ect. absitus, with the anterolateral edges of the 'hypostoma' not formed by teeth.

Adult female (based on eight reared specimens). Body (Fig. 180): dried; overall black to dark brown; total length 2.4-2.9 mm. Head (Fig. 182): black; width 0.87-0.89 mm; depth 0.61-0.63 mm; frons broad, narrowest just above antennae; slightly pollinose; postocciput black, vestiture of sparse, short black hairs; frons:head ratio 1.0:5.5. Eyes: interocular distance 0.13-0.16 mm; ommatidia black, diameter 0.016 mm; ca. 34 rows across and down at mideye. Clypeus: width 0.19 mm; dark brown; vestiture of sparse dark hairs. Antenna (Fig. 183): overall blackish brown; total length 0.72 mm; scape and pedicel segments bead-like, flagellomere I broad, remainder slightly tapering, apical flagellomere IV small.
Mouthparts: moderately substantial, ca. 0.3× length of head depth, labrum pale; maxillary palp (Fig. 184) total length 0.76 mm, palpomeres I & II small, palpomere III darker brown than remainder, not extended beyond articulation with palpomere IV; proportional lengths of palpomeres III-V 1.0:0.7:1.5; sensory organ ovoid, 0.3× palpomere III length, opening 0.3× vesicle width; lacinia with ca. 16 teeth on each side; mandible expanded apically, with ca. 16 outer and 34 inner teeth, fine and subequal in size (Fig. 185); cibarium cornuae broadly flared and substantially sclerotized, median depression with central projection (Fig. 186). Thorax: length 0.9-1.4 mm; width 0.9-1.1 mm; markedly blackish brown; postpronotal lobe with sparse fine hair longer than vestiture on scutum, scutum overall with even sparse fine small hairs; scutellar depression black; scutellum with vestiture of sparse very fine hairs; postnotum concolorous with scutellum, vestiture absent; postpronotal and antepronotal lobes and proepisternum haired; pleuron and anepisternal membrane dark brown, latter without hairs; katepisternal sulcus well expressed and deep. Wing (Fig. 188): hyaline, lacking color; length 3.2-4.0 mm; width 1.4-1.6 mm; anterior veins dark; a:b ratio 1.0:2.8; costa lacking spiniform setae; Rs not branched; M1 double, but not markedly so; CuA markedly sinuous. Haltere: stem pale with dark base, knob tan. Legs: evenly dark brown, not markedly hirsute, but hairs long on the fore coxa in particular; hind basitarsus with marked row of ventral stout spines; calcipala present as small projection with clump of stout spines (Fig. 187); intersegmental plate ventrally between basitarsus and tarsomere II small; pedisulcus absent, tarsomere II elongated, variable, ratio of apical width to length 1.0:2.8-3.7; claws small, basal tooth merely a small knob, if present at all, rounded heel substantial (Fig. 189). Abdomen (Fig. 190): overall dark brown, vestiture sparse; basal scale (tergite I) dark gray; remaining segments generally dark gray; tergites dark brown, tergite II 3× wider than long, tergite III twice as wide as long, tergites IV-VI quadratic, tergite VII twice as wide as long, curved anteriorly. Genitalia: sternite VIII evenly pigmented; hypogynial valves triangular, inner margins slightly sinuous and divergent, with markedly strengthened edges (Fig. 191); genital fork with narrowed straight anterior arm, knee-bend on lateral arm well expressed, lateral apodeme as elongated ridge, posterolateral expansions rounded laterally, angular medially (Fig. 192); cerci as for Ect. princeae (e.g., Fig. 152); spermatheca ovoid, dark brown, externally smooth, internal acanthae (length ca. 4.6-9.0 µm, width ca. 0.11 µm), region surrounding junction with spermathecal duct not markedly enlarged or sculpted (Fig. 193).
Etymology
In reference to absence of spine-like setae on the larval cranium, plus absence of the anal sclerite.
Bionomics
Immatures are found in small streams with intermittent riffle areas where they use trailing grasses, sticks, and fallen leaves as substrate. Pupation occurs on that substrate and the cocoon is well developed. Associated with Ect. absitus were the following simuliids: Austrosimulium sp., B. gladiator, Ect. acanthocranius, Ect. faecofilus, Ect. princeae, Ect. sp. A, N. occidentalis and an undescribed Paracnephia sp.
Remarks
Differences in expression of the larval mandible (Figs. 215 and 216) and hypostoma (Figs. 219 and 220) between populations from Quinine Creek and the Hotham tributary indicate that the latter is distinct. Without more material, however, we decline to erect a new species.
Ectemnoides sp. A (Figs. 223-226)
Not given full species rank because of lack of material. Based on a single partial exuvia of a female pupa and attached larval cuticle, including the hindgut intima.
Bionomics
Of all larvae examined in total for this study, only this specimen (larval hindgut still attached to the pupal exuviae) was noted to contain fungal trichospores. There appear to be two forms: a longer one (Fig. 227) that is perhaps Furculomyces westraliensis (Harpellales, Trichomycetes), described from chironomid larvae from Western Australia (Lichtwardt and Williams 1992), and a broader, shorter, ovoid one (Fig. 228) that is perhaps Zancudomyces culisetae (previously known as Smittium; Wang et al. 2013). A number of trichomycetes from eastern Australian Austrosimulium have been described by Lichtwardt and Williams (1990), and such are also known for New Zealand species.
Adult
Wings of Ect. princeae (Fig. 145) and Ect. absitus (Fig. 188) are similar, with poorly expressed to absent spiniform setae on the costa, CuA markedly sinuous and, in the female, Rs unbranched. One male of Ect. princeae (Fig. 158) showed rudimentary Rs branching.
Ect. umbratorum adults have a small, well-formed calcipala and wrinkled cuticle at the pedisulcus position (Fig. 11); Ect. faecofilus has a smaller calcipala and little sign of a pedisulcus (Fig. 53); and Ect. acanthocranius has virtually no calcipala and no evidence of a pedisulcus (Fig. 87). Ect. princeae shows little evidence of a calcipala (Fig. 146), similar to Ect. absitus (Fig. 187). For the last three species with the reduced calcipala, there is an aggregation of spines (a continuation of the ventral row of stout spines along the basitarsus) on the calcipala, similar to that known for N. fergusoni, previously the only Australian simuliid known to exhibit such a condition (see Tonnoir 1925: 221, his Fig. 2, F, G, H; Craig et al. 2018b). The two species of Ectemnoides that possess a calcipala, albeit small, lack the spine aggregation, perhaps indicating that, whatever the function of the calcipala, when the calcipala is too small that function is assumed by the spines.
The claws of the Ect. umbratorum female have a small but distinct tooth and minor heel (Fig. 12); that of Ect. faecofilus is unknown, whereas the Ect. acanthocranius claw has a barely evident tooth with a heel similar to Ect. umbratorum (Fig. 88). The basal tooth in Ect. princeae is variable, at times absent (Fig. 147), and Ect. absitus has no sign of the tooth and a very smooth heel (Fig. 189).
Details of male genitalia of Ect. umbratorum are not fully known, but it may lack paramere connectors, parameres, and spines (Fig. 21). In Ect. faecofilus, while present, these structures are poorly expressed (Fig. 55), but in Ect. acanthocranius (Fig. 102), Ect. princeae (Fig. 161), and Ect. absitus (Fig. 203) they are markedly expressed and, together with those of B. gladiator, unique within Australian Simuliidae. Such distinct paramere connectors are similar to those seen in some Prosimulium, Greniera, Stegopterna, and Tlalocomyia (e.g., Adler et al. 2004). We are of the opinion that, when fully known, the connectors of Ect. umbratorum will prove similar to those of Ect. faecofilus.
Pupa
The pupal gills of Ect. umbratorum, Ect. faecofilus, Ect. uvulatus, and Ect. acanthocranius have a basal fenestral diverticulum (e.g., Figs. 25 and 59) unique in Simuliidae. In Ect. acanthocranius the diverticulum is poorly expressed (Fig. 120) and often missing from the deployed pupal gill (Fig. 106). The diverticulum is absent from gills of Ect. princeae (Fig. 163), even though the gills are of similar expression to those of the previous four species, namely long thin trunks with fine filaments, albeit Ect. princeae has three trunks in comparison to the others with two. The gill of Ect. absitus lacks a fenestral diverticulum and is of more typical expression (Fig. 206). The homology of that diverticulum is not entirely clear, but as we have suggested previously, it is highly likely a poorly expressed third gill trunk.
Unique to Ect. umbratorum and Ect. faecofilus pupae are the double frontal setae on the ovoid frontal plate (Figs. 26, 27, and 60); in Ect. acanthocranius they are absent (Figs. 103 and 104) and the frontal plate is angulate in the female. For pupae of Ect. princeae and Ect. absitus, the setae are also absent (Figs. 166, 167, 207, and 208) and, again, the frontal plate is angulate in the female.
Abdominal chaetotaxy and armature of all species are poorly expressed. The terminal spines (tergite IX) are small, albeit sharp (e.g., Figs. 29 and 62), and the sternite IX setae are not expressed as grapnel hooks, although they are sometimes apically curved. All pupae have a spine comb on tergite IX, variously expressed.
Larvae
Heads of Ect. acanthocranius, Ect. faecofilus, Ect. umbratorum, and Ect. uvulatus all have short substantial spine-like setae (e.g., Figs. 32 and 37). These are elongated and substantial 'secondary sensilla' (Craig 2005). All, too, have the head of similar or larger diameter than the elongated body (Figs. 30, 63, and 109) and labral fans with long stems. The posterior circlet of hooks is markedly directed ventrally; these character states are unique in Australian Simuliidae. In Ect. faecofilus and Ect. umbratorum this physiognomy appears to be associated with the habit of attaching to the end of a thread (e.g., Fig. 44). This behavior is not yet known for Ect. acanthocranius or Ect. uvulatus. The other larvae vary: those of Ect. princeae, while possessing a ventrally directed circlet of hooks, have a more substantial body (Fig. 169); that of Ect. absitus is of more typical body shape for simuliids and has the circlet of hooks directed posteriorly (Fig. 210), which also is more typical.
All larvae have unusual spinous teeth on the mandibles. Rather than being more typically elongated and spine-like, they are sawtoothed in expression (Figs. 36, 70, 114, 174, 215, and 216), markedly so, with multiple tips, in Ect. acanthocranius (Fig. 114). The mandibular serrations and sensillum are unique in Ect. acanthocranius, Ect. faecofilus, Ect. umbratorum, and Ect. uvulatus larvae, being hair-like and proximal to a blade-like region (Figs. 36, 70, 114, and 131), rather than distal as is typical in other simuliids. Larvae with a small inverted Y-shaped anal sclerite have some 580 hooks in the posterior circlet, whereas B. gladiator possesses only ca. 440 hooks (Craig et al. 2018a).
Ectemnoides adults, with exceptions, match moderately well the diagnosis of Paracnephia, at the time a subgenus of Prosimulium, by Crosskey (1969). In the recent key by de Moor (2017), Ectemnoides adults key out to Paracnephia; however, pupae and larvae do not. Wing veins and setae agree well; e.g., the shape of CuA is variously sinuous in both Paracnephia and Ectemnoides. However, the mesepisternal sulcus in Ectemnoides, while of similar proportions to that of a typical Prosimuliini (Crosskey 1969, his Fig. 18), is directed more posteriorly as in Simuliini (Crosskey 1969, his Fig. 19); similarly, the katepisternal sulcus is similar to that of the latter tribe.
The calcipala and tarsal claws of Ectemnoides are at some variance with Paracnephia: calcipala are small to well developed in Paracnephia, small to essentially absent in Ectemnoides; the tooth on female claws is well expressed in Paracnephia, but poorly developed, variable, and/or absent in Ectemnoides. The female hypogynial valve shapes of the two genera are markedly dissimilar (Crosskey 1969, his Fig. 34): in Procnephia and Paracnephia the lobes are widely separated, divergent, and broadly rounded, whereas for Ectemnoides they can be closely aligned, or not, but are not so shaped (e.g., Figs. 14 and 90). The male gonostyli of Paracnephia and Ectemnoides not only differ in shape, but in Ectemnoides there are only two terminal spines, as opposed to many in the former genus. Presence of parameral plates is not mentioned by Crosskey (1969), and while possibly absent from Ect. umbratorum, they are variously expressed, even markedly so (e.g., Fig. 203), in other species of Ectemnoides.
Pupae of Paracnephia and Ectemnoides both lack pleurites; Crosskey (1969) illustrates two hooks only on sternites VI and VII of Ethiopian Prosimulium with a substantial hook and base in the pleural region. While Ectemnoides pupae are similar in hook number on those two tergites, there are only fine setae on the pleural region (Fig. 29). Grapnel hooks (= biramous anchor-like hooks) are not expressed as such in Ectemnoides. The small non-sinuous terminal spines of segment IX in Ectemnoides would fit the Paracnephia brincki-group (Crosskey 1969, his Fig. 44), but Ectemnoides pupae lack the spine comb on tergite V.
The pupal gills are markedly different. While Ectemnoides has a similar number of terminal filaments, the branching is distinctly different from any Paracnephia s.s. species, even where some possess long main trunks; further, there appears to be nothing in Paracnephia like the fenestral diverticulum unique to Ectemnoides umbratorum, Ect. faecofilus, Ect. acanthocranius, and Ect. uvulatus (e.g., Fig. 25). The gill branching pattern in Ectemnoides is reminiscent of some seen in Prosimulium and Stegopterna.
With the marked morphological variance between Paracnephia and Ectemnoides we have had no hesitation in assigning P. umbratorum and allied species to Ectemnoides.
While we named Ectemnoides because larvae of some species use a thread reminiscent of the stalk produced by Ectemnia larvae, there is much to indicate that the structures and associated behaviors are autapomorphic and the two genera not closely related. Formation of the stalk used by Ectemnia larvae is well studied (Wolfe and Peterson 1959, Stuart and Hunter 1998). Briefly, the larvae construct the stalk from salivary silk and may include extraneous material. The stalk is maintained and extended, with the larva attached near the apex. Pupation occurs on the stalk. Larval body form is adapted to this way of life; a concave hypostoma is probably used to manipulate the salivary silk, and the abdomen has lateral flanges that produce a ventral groove, no doubt to allow close proximity to the stalk. Likewise, lacking the anal sclerite perhaps allows more flexibility of the anal circlet to contact the stalk.
There are no observations on formation of the threads of Ectemnoides larvae. However, larvae attach directly to the end of the thread, are fully extended in the direction of flow, and do not twist the body (Fig. 44). The thread contains fecal pellets plus other material and is covered in salivary silk. The diameter of the thread shows distinct changes along its length, suggestive of a larval molt with a concomitant increase in size of fecal pellets. Larvae detach to pupate, apparently in the substrate. All of these attributes differ markedly from Ectemnia larvae. Ectemnoides larvae that produce a thread have a hypostoma that is overall concave, with the medial teeth depressed in relation to the lateral ones; the teeth, however, are well expressed, unlike those of Ectemnia.
Mackerras and Mackerras (1949) erected a terebrans group that consisted then of C. umbratorum, C. terebrans, C. sp. A., C. fergusoni and C. fergusoni var. They noted that C. umbratorum could well be assigned to the aurantiacum group and indeed that was done (Mackerras and Mackerras 1950). Female adults of the original terebrans group (Mackerras and Mackerras 1949) are superficially similar to those of Ectemnoides, namely in dark coloration, and similarly to N. fergusoni and N. occidentalis (Craig et al. 2018b). So, a question arises as to the taxonomic validity of the two remaining species of the original terebrans group, viz. P. terebrans and Paracnephia sp. A. We are of the opinion that they do not belong in Ectemnoides. Examined in detail by D.A.C. (personal observation, 2014), both of those species have, e.g., distinctly different genital forks from species in Ectemnoides, and the pedisulcus is markedly more accentuated. Indeed, unpublished molecular data (J.K.M.) indicate that P. terebrans (Tonnoir 1925) is related to P. pilfreyi (Davies and Györkös 1988) plus a now-known sister species from Western Australia. Those latter two species, plus P. terebrans and Paracnephia sp. 
A, will be grouped together in a new genus presently under consideration by the current authors.
Given that many simuliid species have been shown to be complexes of cryptic species (e.g., Adler et al. 2004, plus many others) and that those in other Australian simuliid genera show morphological variation, as do Ectemnoides species, it is expected that some of the entities dealt with in this work will also be complexes. Indeed, this is one aspect of the Australian Gondwana simuliids that needs investigation; such, however, will require extensive collecting over considerable periods of time. To mitigate this issue, specimens examined from localities other than the type locality have not been included in the primary type series.
Atomic order of rare earth ions in a complex oxide: a path to magnetotaxial anisotropy
Complex oxides offer rich magnetic and electronic behavior intimately tied to the composition and arrangement of cations within the structure. Rare earth iron garnet films exhibit an anisotropy along the growth direction which has long been theorized to originate from the ordering of different cations on the same crystallographic site. Here, we directly demonstrate the three-dimensional ordering of rare earth ions in pulsed laser deposited (EuxTm1-x)3Fe5O12 garnet thin films using both atomically-resolved elemental mapping to visualize cation ordering and X-ray diffraction to detect the resulting order superlattice reflection. We quantify the resulting ordering-induced ‘magnetotaxial’ anisotropy as a function of Eu:Tm ratio using transport measurements, showing an overwhelmingly dominant contribution from magnetotaxial anisotropy that reaches 30 kJ m−3 for garnets with x = 0.5. Control of cation ordering on inequivalent sites provides a strategy to control matter on the atomic level and to engineer the magnetic properties of complex oxides.
Supplementary Information
This description of the site preference model has been summarized from the work of Herbert [3][4].
(1) GIA occurs in complex oxides in which spin-orbit coupling (SOC) makes the magnetic properties sensitive to the type and arrangement of electrons in neighboring ions. Strong SOC also explains the existence of magnetocrystalline anisotropy.
(2) GIA originates from the rare-earth ion sites (c sites) due to the different orientations of the coordination dodecahedra within the unit cell with respect to neighboring magnetic cations (Fe3+ and RE3+). At the growth surface, differently oriented c sites present different coordination constellations to incoming cations. Each unit cell has 24 c sites comprising 12 different orientations. The 12 types of sites can be obtained by geometrical operations on one site, which will be referred to as X1.
o X2 are obtained by inversion, X3 obtained by reflection in the x-y or x-z plane, and X4 by a combination of the two reflections.
o Y1, Y2, Y3, Y4, Z1, Z2, Z3, Z4 are obtained by cyclic permutations of the axes on the operations used to generate the X family (e.g., switch the x for the z axis and repeat the above procedure).
Table S1 lists the relative coordinates of each of the dodecahedral sites in the cubic garnet unit cell. A .cif file of the unit cell with labeled sites is also available upon request.
(3) The contribution to magnetic anisotropy from each site is found by summing, over each of its nearest magnetic neighbors, a function of cos θ, where θ is the angle between the unit vector joining the two ions and the magnetization of the neighboring ion. Summing over all RE sites yields an expression for the magnetic energy of a garnet with mixed A and B RE ions, relative to the unmixed A garnet, in which C and C′ are constants, the α are direction cosines, and NX, NY, and NZ are the numbers of B ions in the X, Y, and Z sites; N is the number of formula units of (A1-xBx)3Fe5O12 in the crystal (x is the fraction of B ions in the crystal).
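The per-site summation can be sketched numerically. The printed formula lost its symbols in extraction, so the squared-cosine pairwise form used below is an assumption, and the neighbor geometry is a toy example rather than the real garnet coordination:

```python
import math


def site_anisotropy_energy(neighbor_unit_vectors, magnetization):
    """Toy per-site anisotropy sum: sum of cos^2(theta) over neighbors.

    theta is the angle between the bond unit vector to each magnetic
    neighbor and the (normalized) magnetization direction. The cos^2
    form is an assumption; the paper's exact expression was lost.
    """
    mx, my, mz = magnetization
    m_norm = math.sqrt(mx * mx + my * my + mz * mz)
    mx, my, mz = mx / m_norm, my / m_norm, mz / m_norm
    total = 0.0
    for rx, ry, rz in neighbor_unit_vectors:
        r_norm = math.sqrt(rx * rx + ry * ry + rz * rz)
        cos_theta = (rx * mx + ry * my + rz * mz) / r_norm
        total += cos_theta ** 2
    return total


# Toy neighbor shell: bonds along +/-x and +/-y only.
neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]
e_x = site_anisotropy_energy(neighbors, (1, 0, 0))  # magnetization in-plane
e_z = site_anisotropy_energy(neighbors, (0, 0, 1))  # magnetization out-of-plane
```

The anisotropy of a site then shows up as the difference in this sum between magnetization directions (here `e_x` vs `e_z`), mirroring how differently oriented dodecahedra contribute differently for a given magnetization.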
o (001): sites fall into two groups, with 2/3 of them in the α group and 1/3 in the β group. It is interesting to note that for the (111) and (001) cases, the magnetic anisotropy is uniaxial with the unique axis perpendicular to the growth face.
(5) Concentration (x) dependence. In the (110) ordering scheme (as an example), A and B ions would have different "sticking coefficients" in each of the α, β, and γ sites.
Then, each site would contain some fraction of B ions, with one expression for the α sites and analogous expressions for the other sites.
At low concentrations, the energy equation can be simplified and approximated for weak site preference to depend on concentration quadratically, scaling as x(1-x).
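A minimal sketch of this weak-site-preference scaling; the prefactor is arbitrary, not the paper's constant:

```python
def ordering_anisotropy(x, prefactor=1.0):
    """Weak-site-preference approximation: K_order scales as x(1 - x).

    x is the fraction of B ions; prefactor is an arbitrary stand-in
    for the material-dependent constant (not a value from the paper).
    """
    return prefactor * x * (1.0 - x)


values = [ordering_anisotropy(x) for x in (0.0, 0.25, 0.5, 0.75, 1.0)]
# Zero for both end members, maximal at the equimolar composition x = 0.5.
```

This symmetric quadratic is consistent with the largest ordering-induced anisotropy being observed at x = 0.5 in the main text.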
The symmetry reduction of the unit cell for ordering on different growth surfaces is visualized in Fig. S1. As the symmetry of the growth surface is reduced from (111) → (110) → (112), the dodecahedral sites are split into 2, 3, and 4 degenerate site groups, as described above. The unit cells of the resulting crystalline materials grown on these surfaces have lower symmetry than the unordered cubic cell (space group Ia-3d). Considering a garnet film which includes all the symmetry variants of the atomic ordering, the space groups of these ordered cells are summarized in Table S2.
These site-ordered, symmetry-reduced structures are not conventional superlattices but represent a 3D ordering based on the geometrical orientation of the 24 dodecahedral sites in the unit cell. The ordered structures are formed during the growth of these thin films by the preference of arriving RE ions to order into inequivalent dodecahedral sites due to steric factors. It has previously been shown that the growth-induced anisotropy is robust up to high temperatures 4, which indicates that the growth-induced order is retained to high temperatures. The order and the resulting anisotropy can be lost by a sufficiently high temperature anneal (e.g. >600˚C) to allow for RE diffusion within the structure.
Supplementary Note 2: Strain calculation and additional high-resolution x-ray diffractograms
Reciprocal space mapping of the (642)+ reflection shows that the thickest film with the largest lattice mismatch (EuIG, 42 nm) is fully strained within the plane of the film, since the film and substrate peaks have the same qx value, as shown in Fig. S2(a). Therefore, it is assumed in all calculations that thinner films or those with less lattice mismatch will also be fully strained. The out-of-plane spacing, related to 2θ of the (444) reflection (Fig. S2(b)), can be used to calculate the lattice parameter and strain of the film using expressions for a rhombohedral distortion.
The presence of Laue oscillations on the film peak, as well as low spread in the rocking curve, as shown in Fig. S2(c), indicates high crystalline quality, a low degree of mosaic spread, and planar top and bottom interfaces. Mixed garnet films of all compositions show similar rocking curve widths, indicating similar quality (Table S3). With the Globalfit software, the film peak is fitted assuming the film is fully relaxed. The GGG lattice parameter used in fitting is as = 1.2376 nm. We calculate the film lattice parameter and strain geometrically (without literature values for stiffness) since we can assume that the films are fully strained to match the substrate in-plane lattice parameter. The lattice parameter can be used to determine the composition by the lever rule based on the strained lattice parameters of the end members, EuIG and TmIG. From the fitted thickness, the growth rates of EuIG and TmIG are found to be 239 shots per nm and 614 shots per nm, respectively. Table S3 outlines the shot ratios, compositions, thicknesses, and strains of the REIG films, and the rocking curve widths. For the determination of Ms for films grown on GGG by magnetometry, it is well known that the paramagnetic signal from GGG must be subtracted, or the substrate must be thinned down to reduce this signal.6 Fig. S4 shows the VSM hysteresis loop for TmIG before and after linear background subtraction of each of the four tails (for a symmetric full loop).
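The lever-rule composition determination described in this note can be sketched as follows; the lattice parameters below are placeholders, not the strained end-member values from the paper:

```python
def composition_from_lattice(a_film, a_euig, a_tmig):
    """Lever rule: Eu fraction x from linear interpolation of the film
    lattice parameter between the two end-member values.

    a_film = x * a_euig + (1 - x) * a_tmig  =>  solve for x.
    """
    return (a_film - a_tmig) / (a_euig - a_tmig)


# Placeholder strained lattice parameters in nm (illustrative only).
x = composition_from_lattice(a_film=1.244, a_euig=1.250, a_tmig=1.238)
```

A film lattice parameter exactly midway between the end members yields x = 0.5, i.e. equimolar (Eu0.5Tm0.5)3Fe5O12.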
The error associated with measuring the Ms of a given film can be estimated from the following contributions. The sample mounting error (units of emu/V) is estimated from the sample standard deviation of measured VSM calibration factors. The fitting error (units of V) is the standard error for the linear fit of the saturated branches of the hysteresis loops. The total error (units of emu) is thus the combination of these two errors. This analysis is only applicable for easy axis loops (in the field range of 0.02 T). Beyond ~0.2 T, GGG has a non-linear background, so it is difficult to determine the hard axis saturation, which according to SMR measurements occurs for these garnets at most around 0.34 T. This is why only easy axis loops are reported. Table S4 reports the coercivities of the unpatterned films from the easy axis loops. The saturation field is extracted from SMR by numerically fitting the signal to a macrospin model, which determines the equilibrium magnetization direction for any applied field magnitude and direction 8. The error is estimated to be the field step. The data and fit of a representative curve are shown in Fig. S5(c).
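The combination of the two error contributions can be sketched as below. The exact combination formula did not survive extraction, so combining the two relative errors in quadrature is an assumption here, and the numbers are purely illustrative:

```python
import math


def moment_and_error(cal, cal_err, signal, signal_err):
    """Moment m = cal * signal (emu = emu/V * V), with the relative
    mounting (calibration) and fitting (signal) errors combined in
    quadrature. Quadrature is an assumption, not the paper's stated form.
    """
    moment = cal * signal
    rel_err = math.hypot(cal_err / cal, signal_err / signal)
    return moment, moment * rel_err


# Illustrative values: 1% calibration error, 2% fitting error.
m, dm = moment_and_error(cal=2.0e-3, cal_err=2.0e-5, signal=1.5, signal_err=0.03)
```

For independent error sources this quadrature sum is the standard propagation for a product, giving a total relative error of sqrt(0.01^2 + 0.02^2) ≈ 2.2% in this example.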
The magnetoelastic constants were determined from SMR measurements of anisotropy on series of EuIG and TmIG films grown on (111) substrates with different lattice parameters, including GGG (as = 1.2376 nm), YSGG (as = 1.2426 nm), NGG (as = 1.2505 nm), and GSGG (as = 1.2554 nm) 4,9. Fig. S6(a),(b) shows the strain series of EuIG films. From the linear variation of anisotropy as a function of strain, the magnetoelastic constant is found by linear regression. From this analysis, the values for EuIG and TmIG are found to be (2.45 ± 0.8)×10^-7 and (-1.1 ± 0.1)×10^-6 if the stiffness is taken to be that of YIG (766 GPa) 10. The calculated magnetostriction coefficients are about a factor of five lower than the reported values for these materials. This could indicate a non-ideal stoichiometry, or the stiffness could deviate from the bulk value for YIG.
Now, assuming there are n such axes, all with the same polar angle, with azimuthal angles evenly distributed, i.e. φk = φ0 + 2πk/n, the total energy is the sum over the n axes. The second term cancels out, as proven below using complex numbers.
But we can write the sum as a finite geometric series, and the numerator cancels out since e^(i2π) = 1; thus the second sum in the energy equation is zero.
By the same logic, the geometric sequence term cancels out, leaving only the uniaxial term, and the final energy expression for n axes follows. The system has three important angles: 1) the vertical axis, which is the symmetry axis of the cone drawn out by the anisotropy directions (set to be the z-axis), 2) the magnetization angle, and 3) the anisotropy cone angle. Thus, both azimuthal angles are important and appear in the expression.
Considering the anisotropy cone angle fixed, we can write the energy per axis; the average energy of the system is given by ε/n and is uniaxial in nature.
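The geometric-series cancellation invoked above can be written out explicitly; the notation (φ0 for the arbitrary starting azimuth) is ours:

```latex
% For n >= 2 axes with evenly spaced azimuthal angles
% \phi_k = \phi_0 + 2\pi k/n, the cross term vanishes:
\sum_{k=0}^{n-1} e^{i\phi_k}
  = e^{i\phi_0} \sum_{k=0}^{n-1} \left( e^{i 2\pi/n} \right)^{k}
  = e^{i\phi_0} \, \frac{e^{i 2\pi} - 1}{e^{i 2\pi/n} - 1}
  = 0 ,
```

since the numerator vanishes (e^{i2π} = 1) while the denominator is nonzero for n ≥ 2. Taking real and imaginary parts gives Σ cos φk = Σ sin φk = 0; the same argument applied to e^{i2φk} handles the remaining cross terms for n ≥ 3.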
Descriptions of the planes of order are shown in Fig. S7.
Supplementary Note 6: Scanning transmission electron microscopy (STEM)
It is essential to consider the role of the symmetry-allowed variants of the cation ordering in order to analyse the structure and properties of the film. As described in Supplementary Note 5, for a (111) film an individual variant of the site order would yield a tilted anisotropy, whereas the combination of the three variants produces PMA. Variants also affect the interpretation of XRD and STEM measurements. The (111) EuTmIG films follow the scheme of Fig. S8a,c. STEM along the [1-10] in-plane zone axis, Fig. S9, shows no elementally resolved site ordering because the sample consists of all three variants. The intensity in a STEM image results from averaging all the atoms in the column along the beam direction. Collecting information through all three variants averages out any compositional ordering through the thickness of the TEM lamella. Indeed, we can conclude that the correlation length of these variants is on the scale of a few unit cells or less. The lamella of the (111)-oriented film was ~10 nm thick, but we do not observe cation order, indicating that the size of the variants is smaller than 10 nm and may be on the scale of one or a few unit cells (the unit cell size is ~1.2 nm).
This precludes visualizing the c site order in (111) garnet films unless we had a lamella thinner than the correlation distance of the site order. For the (110)-oriented EuTmIG film, compositional analysis was carried out by EDS, analyzing the peaks of Eu, Tm, and Fe chosen to minimize overlap (Fig. S12). Then, non-linear principal component analysis was applied to reduce Poisson noise 11. Lastly, the filtered image was slightly blurred with a Gaussian filter. Evidence of ordering can be seen even in the raw images (Fig. S12), but image processing makes the order immediately evident to the reader. Quantitative analysis of HAADF images of the film was also performed. The HAADF signal is sensitive to atomic number Z, such that the averaged column intensities scale approximately with Z^2 and with the number of atoms present in the column 12. Tm peaks should be higher intensity than Eu, since Z_Tm = 69 and Z_Eu = 63.
First, a linear background subtraction was applied to reduce the intensity change due to the thickness gradient of the TEM lamella created during ion beam preparation. Then, atom columns were identified and fit using the open-source code "Pycroscopy" 13. Lastly, atom columns were masked with circles, and the intensity of each atom column was summed to minimize error in the intensity measurement 14. These atom columns were then binned into A, B, C, and D types, and averaged. Peak identification and an intensity histogram are shown in Fig. S13.
To examine the site occupancy of columns A, B, C, and D, 1131 columns were identified. A and C columns both contain sites from the α group but have different densities of atoms; B and D columns contain β and γ group sites, with C columns of higher intensity than the B columns. Recalling (Note 2) that there are twice as many α sites as β + γ sites, we expect for a site-ordered EuTmIG with x = 0.5 that Tm should occupy all the β + γ sites and ¼ of the α sites, and Eu should occupy ¾ of the α sites. The summed intensities were averaged for the two groups of atoms to give intensities that should be greater for the Tm-containing sites, as shown in Table S6. The analysis does not yield a statistically higher intensity for all the Tm-containing sites, which is inconsistent with the clear site ordering observed in EDS. The discrepancy may be explained if there are cation vacancies (VRE) or Fe antisite defects (FeRE) in the RE columns: even a small amount of these point defects would give a larger effect on column intensity than the Z-contrast between Tm and Eu. Indeed, in the Fe EDS we see a slight indication that excess Fe could be preferring β and γ sites over α sites.
We also compared the Eu and Tm distributions locally in the hexagonal rings of A and B columns. The columns contain limited numbers of atoms, and there would be a statistical fluctuation in the number of each atom in the columns, but the overall trend is clear and was also found in other sections of the image.
Concerning the role of defects, we did not observe any dislocations across the entire visible lamellae of (111)- or (110)-oriented films. This is consistent with many other TEM investigations of epitaxial RE, Bi, or Y garnets in our prior work 15,16. Furthermore, the strain in the films does not relax even for thicknesses of 10s of nm, according to the RSM data, suggesting that dislocations do not form. Hence we do not believe dislocations play a major role in the site occupation or growth-induced anisotropy. Fe2+ is another possible point defect, arising from oxygen deficiency. We performed X-ray absorption spectroscopy on TbIG films in a prior work 16, which showed that the amount of Fe2+ was less than about 3%. Further, we would expect all the films to show similar amounts of Fe2+ or other point defects such as vacancies because they were grown under the same conditions.
Structural relaxation for the three garnets (the mixed garnet and two end members) was performed, accounting for the energy, per-atom forces, and stress computed by spin-polarized, collinear calculations at each update of the atomic positions. Then, a final spin-polarized, non-collinear calculation was performed on this static relaxed structure to evaluate the energy with a specific magnetization orientation. Including energy changes due to magnetostriction would have required accounting during structural relaxation for the energy, per-atom forces, and stress that arise from the orientation of the magnetization. Such a method for structural relaxation was deemed prohibitively expensive, since it would entail minimizing the energy and ensuring per-atom forces approach zero with spin-polarized, non-collinear electronic structure calculations performed at each update of the atomic positions, rather than merely at the final step at the end of the structural relaxation. As a result, the calculation method presented does not include these magnetostriction and elastic energies. However, the mixed garnet has a smaller magnetostriction coefficient than the end 
members, and magnetostriction would therefore be unlikely to account for the larger anisotropy energy observed for the mixed garnet by DFT. Fig. S15 shows that for site-ordered unit cells with fixed RE composition (Eu1.5Tm1.5Fe5O12), as the degree of ordering of the RE on distinguishable sites increases, the intensity of the (110) order peak also increases, with a quadratic dependence. The peak intensity depends on the structure factor, which varies with the difference in atomic number of the ordered RE cations. For the completely mixed case, there is no difference in the average atomic number on each site, so no peak is seen. Lastly, to explain the observation of weak (110) peaks in the asymmetric scans of end-member EuIG and TmIG films grown on (111) GGG (Fig. 3), we consider the effects of point defects.
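The quadratic dependence of the order-peak intensity on the degree of ordering can be illustrated with a toy two-sublattice structure factor; this is a deliberate simplification of the real garnet structure factor, using atomic numbers as stand-ins for the scattering factors:

```python
def order_peak_intensity(eta, f_eu=63.0, f_tm=69.0):
    """Toy superstructure-peak intensity vs order parameter eta.

    eta = 0: both site groups carry the average scattering factor,
    so the superstructure amplitude (their difference) vanishes.
    eta = 1: full Eu/Tm segregation onto the two site groups.
    Intensity ~ |F|^2, hence quadratic in eta.
    """
    f_avg = 0.5 * (f_eu + f_tm)
    delta = 0.5 * eta * (f_tm - f_eu)
    amplitude = (f_avg + delta) - (f_avg - delta)  # difference of the two sites
    return amplitude ** 2


intensities = [order_peak_intensity(e) for e in (0.0, 0.5, 1.0)]
# Half ordering gives one quarter of the fully ordered peak intensity.
```

The fully mixed case (eta = 0) gives zero peak intensity, consistent with the statement that no order peak appears when the average atomic number is the same on every site.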
Generally, removing atoms from a unit cell reduces the symmetry of the cell, resulting in the appearance of structurally forbidden peaks. Fig. S16 shows the consequence of removing a single oxygen atom from the unit cell of GGG, for example. Even the removal of one atom in 160 produces sufficient change in the structure factors to produce peaks that were absent in the perfect crystal.
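The mechanism can be illustrated with a toy lattice rather than the 160-atom garnet cell: for a bcc arrangement of identical atoms, the (100)-type reflection is systematically absent, but removing a single atom from a 16-atom supercell makes it reappear in the kinematic structure factor:

```python
import numpy as np

def structure_factor(positions, hkl, f=1.0):
    """Kinematic structure factor F(hkl) = sum_j f * exp(2*pi*i h.r_j)."""
    phases = 2j * np.pi * (np.asarray(positions) @ np.asarray(hkl))
    return f * np.exp(phases).sum()

# 2x2x2 supercell of a bcc lattice of identical atoms (16 atoms),
# fractional coordinates expressed in the supercell frame.
cells = [(i, j, k) for i in range(2) for j in range(2) for k in range(2)]
perfect = []
for c in cells:
    for basis in [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)]:
        perfect.append([(c[d] + basis[d]) / 2.0 for d in range(3)])

# (200) of the supercell == (100) of the bcc cell: systematically absent,
# because corner and body-centre atoms scatter exactly in antiphase.
assert abs(structure_factor(perfect, (2, 0, 0))) < 1e-10

# Remove a single body-centre atom (1 in 16): the "forbidden" peak appears.
defective = perfect[:-1]
assert abs(structure_factor(defective, (2, 0, 0))) > 0.9
```

In the defective cell the antiphase cancellation is no longer exact, and |F| jumps from zero to the scattering factor of one atom, mirroring the vacancy-induced peaks of Fig. S16.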
We do not expect many defects in the Czochralski-grown GGG substrates, and the (110) peak of the GGG in Fig. 3 is absent. However, YIG or REIG films grown by pulsed laser deposition often exhibit non-ideal cation stoichiometry or oxygen deficiency.16 The presence of a weak (110) peak for the end members EuIG and TmIG is readily explained by such point defects. Point defects are also expected to contribute to the (110) peaks in the EuTmIG film, but the much greater intensity of the peak is attributed to the Eu/Tm site ordering. For the GGG substrate and the films, (220) and (440) reflections are present as expected, since these even reflections are not systematically absent. The (110) peak (at 2θ ≈ 10°) is normally forbidden for the Ia-3d cubic garnet crystal structure, but it is present for all samples. This is due to the well-known phenomenon of umweganregung, which causes the appearance of normally forbidden peaks in symmetric scans for reasons other than symmetry reduction in the crystal (Fig. S17b).19 The presence of the umweganregung peak prevents us from using the (110) reflection to verify cation order in these films, since it is also present in the substrate. [It is important to note, however, that the (110) peak can still be used to diagnose order in the (111) films as shown in Fig. 2, because it is collected in a skew geometry, which suppresses the umweganregung peak.] In contrast, the higher-order reflection, (330), is much less prone to umweganregung. This peak is therefore a good diagnostic of cation order. We note that (330) peaks exist for the EuIG and EuTmIG films on GGG, but not for the uncoated GGG substrate (Fig. S17c). Moreover, the d-spacings of these (330) peaks are consistent with the larger lattice parameter of EuIG compared to EuTmIG, unlike the (110) peaks, giving us confidence that the (330) peaks are not umweganregung peaks from the substrate. The (330) peak of the EuTmIG is significantly stronger than that of the EuIG. We believe that the EuIG shows some (330) intensity due to point defects (Supplementary Note 8), while the higher intensity of the EuTmIG peak is a result of RE site ordering. Thus, we can verify cation site order in the (110)-oriented EuTmIG from both XRD and STEM.
Supplementary Note 1: Callen's theory of growth-induced anisotropy (GIA)
Supplementary Note 2: Strain calculation and additional high-resolution x-ray diffractograms
Supplementary Note 3: Vibrating sample magnetometry background subtraction and error propagation
Supplementary Note 4: Spin Hall magnetoresistance and anisotropy calculations
Supplementary Note 5: Derivation of uniaxial anisotropy from three tilted anisotropies
Supplementary Note 6: Scanning transmission electron microscopy (STEM)
Supplementary Note 7: Density Functional Theory (DFT) calculations
Supplementary Note 8: Simulations of X-ray diffraction (XRD) from ordered garnets
Supplementary Note 9: Relationship between XRD and STEM data for verification of cation order
References

Supplementary Note 1: Callen's theory of growth-induced anisotropy (GIA)
For different growth faces, the c sites fall into symmetrically inequivalent categories according to how the site symmetries are reduced at the surface. The anisotropy energy can be simplified for each case in which a B ion is substituted in an A site. The c sites are reduced into groups α, β, γ, δ of degenerate sites according to the symmetry of the neighbors around each site at the growth surface. The sets of equivalent sites and the simplified expressions for the energy for several growth surfaces are:

(110): Sites fall into three groups, with 2/3 of them in the α group and 1/6 in each of the β and γ groups.
α: NX1 = NX3 = NY2 = NY4, NX2 = NX4 = NY1 = NY3 (5)
Fig. S1. Order schemes and inequivalent dodecahedral site groups for the [111], [110], and [112] growth orientations.

Fringes also allow us to fit the thickness of the film, analogously to Kiessig fringes in thin-film reflectivity. Fig. S2(d) shows that only garnet peaks corresponding to the out-of-plane lattice direction are present over a wide range of angles, confirming the phase purity of the films.
Fig. S3(a) sketches the fully relaxed and fully strained cases; reciprocal space mapping shows that the films are fully strained. This fit gives us the relaxed lattice parameter, and hence d, the lattice spacing in the out-of-plane direction of the film (Fig. S3(b)):
Fig. S3. Geometric representation of a strained (111) unit cell on a substrate. (a) Schematic of fully relaxed and fully strained films with the same out-of-plane lattice spacing. (b) Geometric relation between out-of-plane spacing and relaxed lattice parameter. (c) Geometric relation for the in-plane spacing, d. (d) Rhombohedral relation for the strained film.
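A minimal sketch of this kind of strain accounting, assuming a coherently strained cubic film and an isotropic elastic response (the Poisson ratio of 0.29 and the bulk lattice parameters below are illustrative assumptions, not the paper's fitted values or its exact rhombohedral relations):

```python
import math

def d111_strained(a_film_bulk, a_substrate, poisson=0.29):
    """Out-of-plane (111) spacing of a coherently strained cubic film,
    in the isotropic-elasticity approximation.

    In plane the film adopts the substrate lattice; the out-of-plane
    spacing responds via eps_perp = -2*nu/(1 - nu) * eps_par."""
    eps_par = (a_substrate - a_film_bulk) / a_film_bulk   # in-plane strain
    eps_perp = -2.0 * poisson / (1.0 - poisson) * eps_par
    return a_film_bulk / math.sqrt(3.0) * (1.0 + eps_perp)

# Illustrative values: GGG substrate (~12.383 A) and a film whose bulk
# lattice parameter is slightly larger, as for EuIG (~12.50 A).
a_ggg, a_film = 12.383, 12.50
d = d111_strained(a_film, a_ggg)
# Compressive in-plane strain -> out-of-plane spacing expands past bulk d111.
assert d > a_film / math.sqrt(3.0)
```

Inverting this relation, given a measured out-of-plane spacing and the substrate-imposed in-plane lattice, recovers the relaxed lattice parameter in the same way the Fig. S3 geometry is used in the text.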
Fig. S7. Views of the individual tilted anisotropy axes for the variants of the ordered (111) REIG system (top and side views).
Fig. S8 illustrates variants for the (111) and (110) films. In (a), an ordered unit cell of one variant of the (111) case is shown with a reference plane in red; red and blue balls represent the inequivalent c-sites. In Fig. S8(c), multiple unit cells are shown for the (111) ordering, each corresponding to one of the three variants. Because of the superposition of the three variants, there is no global order in the c-sites that can be measured by EDS unless the spatial extent of the variants is large compared to the sample. In contrast, for the case of (110) order, Fig. S8(b) shows one variant in which red, blue, and purple balls represent the three sets of inequivalent c-sites. Fig. S8(d) shows a film consisting of the two variants. If we image along one specific zone axis it is possible to identify the order by EDS: we can distinguish red from blue/purple sites, since long-range ordering of the red site motif prevails, but blue sites cannot be distinguished from purple.
Fig. S10. [11 1] zone axis of (110)-grown EuTmIG on GGG. (a) View of the interface with 10 nm scale bar. (b) View of the interface with 5 nm scale bar. (c) View of the interface with 2 nm scale bar. In each panel, the interface appears darker.
Fig. S12. Unprocessed EDS of Eu, Tm, and Fe in the ordered EuTmIG film, and total EDS spectra.
Fig. S13. Atom-column intensity comparison. (a) Selected atoms from a single image. (b) Histogram of atom-column intensities for A, B, C, and D sites. (c) EDS of the (110) EuTmIG, showing site preference for Eu on A sites and Tm on B sites. (d) Extracted intensity line scan for Eu and Tm along the path of the hexagon in (c).
Fig. S13(c,d) shows the analysis of one region of the sample with a hexagon of dodecahedral sites. Columns 1, 2, 4, and 5 contain sites from one inequivalent group and columns 3 and 6 contain sites from the other. The line plot clearly shows that columns 3 and 6 contain the least Eu and the most Tm, while the other columns have more Eu and less Tm. The peaks are not exactly the same for columns 3 and 6 (or for columns 1, 2, 4, and 5) because each column contains ~20
Fig. S16. Simulated powder diffraction of gadolinium gallium garnet in the perfect state (upper panel) and with one oxygen atom removed from the unit cell, forming a vacancy (lower panel).
Table S1. Fractional coordinates of labelled dodecahedral sites in the garnet unit cell.
Table S2. Crystallographic space groups of cation-ordered garnets.
Table S3. Summary of structural properties for REIG films.
Table S6. Averaged atom-column intensity and relevant statistics.
CFBDS J005910.90-011401.3: reaching the T-Y Brown Dwarf transition?
We report the discovery of CFBDS J005910.90-011401.3 (hereafter CFBDS0059), the coolest brown dwarf identified to date. We found CFBDS0059 using i' and z' images from the Canada-France-Hawaii Telescope (CFHT), and present optical and near-infrared photometry, Keck laser guide star adaptive optics imaging, and a complete near-infrared spectrum, from 1.0 to 2.2 µm. A side-by-side comparison of the near-infrared spectra of CFBDS0059 and ULAS J003402.77-005206.7 (hereafter ULAS0034), previously the coolest known brown dwarf, indicates that CFBDS0059 is ~50±15 K cooler. We estimate a temperature of Teff ~ 620 K and a gravity of log g ~ 4.75. Evolutionary models translate these parameters into an age of 1-5 Gyr and a mass of 15-30 M_Jup. We estimate a photometric distance of ~13 pc, which puts CFBDS0059 within easy reach of accurate parallax measurements. Its large proper motion suggests membership in the older population of the thin disk. The spectra of both CFBDS0059 and ULAS0034 show probable absorption by a wide ammonia band on the blue side of the H-band flux peak. If, as we expect, that feature deepens further at still lower effective temperatures, its appearance will become a natural breakpoint for the transition between the T spectral class and the new Y spectral type. CFBDS0059 and ULAS0034 would then be the first Y0 dwarfs.
Introduction
Observed stellar and substellar atmospheres cover a continuum of physical conditions from the hottest stars (~100 000 K) to the coolest known brown dwarfs (>T8; Warren et al. 2007). There remains, however, a sizeable temperature gap between the 600-700 K ULAS0034 and the ~100 K giant planets of the Solar System. Many of the currently known extrasolar planets populate this temperature interval, characterized by complex atmospheric physics: matter and radiation in these cold, dense, and turbulent atmospheres couple into a very dynamical mix, where molecules and dust form and dissipate. Current atmosphere models are rather uncertain in this unexplored temperature range, and they will benefit significantly from observational constraints. Two major physical transitions are expected to occur between ~700 K and ~400 K and to strongly alter the emergent near-infrared spectra (Burrows et al. 2003): NH3 becomes an abundant atmospheric constituent and its near-infrared bands become major spectral features, and water clouds form and deplete H2O from the gas phase. The corresponding near-infrared spectral changes are likely to be sufficiently drastic that the creation of a new spectral type will be warranted (Kirkpatrick 2000). Kirkpatrick et al. (1999) and Kirkpatrick (2000) reserved the letter "Y" for the name of that putative new spectral type.
To help fill this temperature gap, we conduct the Canada France Brown Dwarf Survey (CFBDS; Delorme et al. 2008), which uses MegaCam (Boulade et al. 2003) i′ and z′ images to select very cool brown dwarfs (and high redshift quasars) on their very red i′ − z′ colour. We present here our coolest brown dwarf discovery to date, CFBDS J005910.90-011401.3 (hereafter CFBDS0059), a >T8 dwarf with evidence for near-infrared NH3 absorption. Section 2 describes its discovery, and presents our follow-up observations: i′, z′, Y, J, H and Ks photometry and astrometry of CFBDS0059 and (as a reference) ULAS0034, laser guide star adaptive optics imaging, and a near-infrared spectrum of the new brown dwarf. Section 3 discusses the kinematics and the dynamical population membership of CFBDS0059. Section 4 compares the spectrum of CFBDS0059 with those of Gl 570D (T7.5), 2MASS J0415-09 (T8) and ULAS0034, and in the light of synthetic spectra uses that comparison to determine its effective temperature, gravity and metallicity. We also examine in that section the new spectral features which appear below 700 K, in particular an NH3 band, and discuss new spectral indices for spectral classification beyond T8. Finally, Section 5 summarizes our findings and sketches our near-future plans.
CFBDS
Field ultracool brown dwarfs are intrinsically very faint, and as a result they can only be identified in sensitive wide-field imaging surveys. They are most easily detected in the near infrared, and one could thus naively expect them to be most easily identified in that wavelength range. Brown dwarf spectra, however, differ markedly from a black body, and their considerable structure from deep absorption lines and bands produces broadband pure near-infrared colours that loop back to the blue. At modest signal-to-noise ratios, those colours are not very distinctive. Brown dwarfs are therefore more easily recognized by including at least one photometric band under 1 µm. At those shorter wavelengths their spectra have extremely steep spectral slopes, and the resulting very red i′ − z′ and z′ − J colours easily stand out.
As discussed in detail in Delorme et al. (2008), the CFBDS survey brown dwarf identification is a two-step process:
- we first select brown dwarf candidates on their red i′ − z′ colour in MegaCam images, which cover several hundred square degrees;
- J-band pointed observations of these candidates then discriminate actual brown dwarfs from artefacts and astrophysical contaminants.
The i′ − z′ selection takes advantage of the wide field of the MegaCam camera on the 3.6 m CFHT telescope, and of the trove of deep observational material obtained with that instrument. We rely on existing i′ images from the Very Wide component of the Canada France Hawaii Telescope Legacy Survey (CFHTLS-VW) and, for different fields, on existing z′ images from the Red sequence Cluster Survey 2 (RCS2). We then match those with either new z′ or new i′ exposures to obtain i′ and z′ pairs. The two parent surveys also obtain g′ and r′ images, which for the RCS2 survey are contemporaneous with the z′ exposure. We don't use those as primary selection tools, but the contemporaneous exposures from the RCS2 survey provide a welcome check that an apparently red source was not, instead, a variable that was brighter at all wavelengths at the z′ epoch. Since all our fields have low galactic extinction, the only other astrophysical point sources with a similarly red i′ − z′ are quasars at redshifts of z ≈ 6, which represent the other motivation of our program (e.g. Willott et al. 2007). We discriminate between quasars and brown dwarfs with J-band photometry obtained on several 2 to 4 m-class telescopes (Delorme et al. 2008, for details). These targeted follow-up observations also reject a number of remaining unflagged artefacts, and they provide a refined spectral type estimate, thanks to the much higher signal-to-noise ratio which we typically achieve on z′ − J than on i′ − z′.
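The two-step selection described above can be sketched on a hypothetical catalogue (the magnitudes, depths, and the colour cut below are illustrative placeholders, not the survey's actual values):

```python
import numpy as np

# Hypothetical point-source catalogue: AB magnitudes, with non-detections
# in i' encoded as np.nan.  Undetected sources are assigned the survey's
# 5-sigma limiting magnitude, giving a lower bound on i' - z'.
i_lim = 25.2                       # illustrative 5-sigma i' limit
i_mag = np.array([23.1, np.nan, 24.8, np.nan])
z_mag = np.array([22.5, 21.9, 20.1, 24.9])
z_detected = z_mag < 23.0          # illustrative z' depth

# Step 1: red i' - z' cut; undetected-in-i' sources get the limit.
i_eff = np.where(np.isnan(i_mag), i_lim, i_mag)
colour = i_eff - z_mag
candidates = z_detected & (colour > 3.0)   # illustrative colour cut

# Step 2 (not shown): pointed J-band follow-up then separates brown dwarfs
# (very red z' - J) from z ~ 6 quasars and remaining artefacts.
assert candidates.tolist() == [False, True, True, False]
```

Note that an i'-band non-detection paired with a solid z' detection, as for CFBDS0059 itself, yields only a lower limit on the colour, which is why the limiting magnitude enters the cut.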
Observations
We first identified CFBDS0059 as a brown dwarf candidate when comparing a 360 s RCS2 z′ exposure from 2005 December 27 with a 500 s CFBDS i′ exposure from 2006 August 31. CFBDS0059 is undetected in the i′ image to i′_AB = 25.2 (5σ), in spite of a strong z′ detection (z′_AB = 21.93 ± 0.05). The RCS2 survey obtains contemporaneous g′, r′ and z′ images, and we checked the g′ and r′ exposures for a counterpart. These images, which were obtained within 50 minutes of the z′ observation, show no object at the position of CFBDS0059. This essentially excludes the possibility that the z′ detection was due to a variable or slowly moving object with neutral colours. The i′_AB − z′_AB > 3.2 (5σ) lower limit was thus secure, and made CFBDS0059 a very strong candidate for follow-up.
J-band photometry and near-infrared spectroscopy
Our initial J-band imaging consists of five 20-second dithered exposures with the SOFI near-infrared camera on the ESO NTT telescope at La Silla on 2006 November 12. As discussed below, the photometric system of that instrument is non-standard. We used a modified version of the jitter utility within the ESO Eclipse package (Devillard 1997) to subtract the background and coadd the five exposures. We extracted photometry from the resulting image using PSF fitting within Source Extractor (Bertin & Arnouts 1996; Bertin & Delorme, in preparation) and obtain J_Vega = 18.11 ± 0.06. The resulting z_AB − J_AB = 3.0 colour confirmed CFBDS0059 as a strong very late-T dwarf candidate, and we triggered H-band spectroscopic observations with NIRI (Hodapp et al. 2003) at Gemini-North. Those were obtained in queue mode on 2007 July 30 and immediately confirmed the very cool nature of CFBDS0059. We then requested J and K-band spectroscopy with the same instrument, which was obtained on 2007 September 1. All spectra were obtained through a ~0.7"-wide slit, which produces a resolving power of λ/∆λ ~ 500. The object is dithered along the slit. The H-band spectrum is the sum of sixteen 300-second integrations, while the Y+J and K-band spectra each are the sum of nine 300-second integrations. Consecutive image pairs are pair-subtracted, flat-fielded using a median-combined spectral flat, and corrected for both spectral and spatial distortions. Spectra are extracted using a positive and a negative extraction box matched to the trace profile. A first wavelength calibration was obtained with argon lamp arc spectra taken at the end of the sequence, and the wavelength scale was then fine-tuned to match the atmospheric OH lines. Individual spectra extracted from image pairs were then normalized and median-combined into final spectra. Per-pixel S/N of 25, 40 and 5 were achieved on the J, H and K-band peaks respectively.
For all 3 wavelength settings the A0 star HIP10512 was observed immediately after the science observations to calibrate the instrumental spectral response and the telluric transmission.
Additional near-infrared and optical Photometry
The J filter of the SOFI camera on the NTT has a quite non-standard bandpass, for which the large colour corrections that result from the strong structure in T dwarf spectra (e.g. Stephens & Leggett 2004) have not been fully characterized. To compare the spectral energy distributions (SEDs) of CFBDS0059 and ULAS0034 (Warren et al. 2007) we therefore preferred to obtain additional near-infrared wide-band photometry with WIRCam (Puget et al. 2004) on CFHT, which uses standard Mauna Kea Observatory infrared filters (Tokunaga et al. 2002; Tokunaga & Vacca 2005; MKO system). The observations (2007 August 1st and 5th) used dithering patterns of ~60 arcsec amplitude for total (respectively individual) exposure times of 300 (60), 150 (30), 300 (15) and 720 (20) seconds for the Y, J, H and Ks bands. The skies were photometric and the seeing varied between 0.8 and 1.0". Table 1 summarizes the magnitudes of CFBDS0059 and ULAS0034 in all available bands. The WIRCam photometry of ULAS0034 agrees with the Warren et al. (2007) measurements to within better than 1σ for the H band and within 1.5σ for J. The Warren et al. (2007) K-band measurement used a K filter, while our WIRCam measurement uses the narrower and bluer Ks filter. The 0.15 mag difference between these two observations is approximately consistent with the Stephens & Leggett (2004) prediction for the effect of these different filter bandpasses at late-T spectral types. Similarly, the better short-wavelength quantum efficiency of the WIRCam detector can qualitatively explain the 0.2 magnitude (2σ) discrepancy between our Y photometry and the Warren et al. (2007) WFCam measurement. The near-IR colours of the two brown dwarfs are similar, except H − Ks, which is ~0.5 magnitude bluer for CFBDS0059 than for ULAS0034. We will interpret the implications of this low Ks flux when we examine the near-infrared spectrum.
Astrometry
CFBDS0059 and ULAS0034 are serendipitously just 6.3 degrees apart on the sky, and at similar photometric distances from Earth since they have similar spectral types and apparent magnitudes, and we initially entertained the idea that they might, perhaps, be part of a common moving group. The proper motion of CFBDS0059, however, measured between our 2005 MegaCam z′ and 2007 WIRCam Ks images, and uncorrected for its parallactic motion, is µα = +0.94 ± 0.06"/yr, µδ = +0.18 ± 0.06"/yr (Table 2). ULAS0034 on the other hand moves by µα = −0.12 ± 0.05"/yr and µδ = 0.35 ± 0.005"/yr (Warren et al. 2007). The two proper motions are thus sufficiently different that the two brown dwarfs are clearly unrelated. We checked for main sequence common proper motion companions to CFBDS0059, which would have provided welcome age and metallicity constraints (e.g. Scholz et al. 2003), but did not find any match within a 10 arcminute radius.
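A quick check of the "clearly unrelated" conclusion, using the proper motions quoted above:

```python
import math

# Measured proper motions (arcsec/yr) from the text.
mu_cfbds = (+0.94, +0.18)   # CFBDS0059
mu_ulas  = (-0.12, +0.35)   # ULAS0034

# Magnitude of the vector difference of the two proper motions.
dmu = math.hypot(mu_cfbds[0] - mu_ulas[0], mu_cfbds[1] - mu_ulas[1])

# ~1.07 "/yr, which dwarfs the quoted ~0.06 "/yr measurement errors,
# so common motion is excluded at very high significance.
assert 1.0 < dmu < 1.2
```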
Keck Laser Guide Star Adaptive Optics Imaging
To search for binarity, we imaged CFBDS0059 on 16 January 2008 UT using the laser guide star adaptive optics (LGS AO) system (van Dam et al. 2006; Wizinowich et al. 2006) of the 10-meter Keck II Telescope on Mauna Kea, Hawaii. Conditions were photometric with better than average seeing. We used the facility IR camera NIRC2 with its narrow field-of-view camera, which produces an image scale of 9.963 ± 0.011 mas/pixel (Pravdo et al. 2006) and a 10.2″ × 10.2″ field of view. The LGS provided the wavefront reference source for AO correction, with the exception of tip-tilt motion. The LGS brightness was equivalent to a V ≈ 9.8 mag star, as measured by the flux incident on the AO wavefront sensor. Tip-tilt aberrations and quasi-static changes in the image of the LGS as seen by the wavefront sensor were measured contemporaneously with a second, lower-bandwidth wavefront sensor monitoring the R = 14.6 mag field star USNO-B1.0 0887-0010532 (Monet et al. 2003), located 32″ away from CFBDS0059. The sodium laser beam was pointed at the center of the NIRC2 field-of-view for all observations.
We obtained a series of dithered images, offsetting the telescope by a few arcseconds, with a total integration time of 1080s. We used the CH4s filter, which has a central wavelength of 1.592 µm and a width of 0.126 µm. This filter is positioned near the H-band flux peak emitted by late-T dwarfs. The images were reduced in a standard fashion. We constructed flat fields from the differences of images of the telescope dome interior with and without continuum lamp illumination. Then we created a master sky frame from the median average of the bias-subtracted, flat-fielded images and subtracted it from the individual images. Images were registered and stacked to form a final mosaic, with a full-width at half-maximum of 0.09 ′′ and a Strehl ratio of 0.05. No companions were clearly detected in a 6 ′′ × 6 ′′ region centered on CFBDS0059.
We determined upper limits from the direct imaging by first smoothing the final mosaic with an analytical representation of the PSF's radial profile, modeled as the sum of multiple gaussians. We then measured the standard deviation in concentric annuli centered on the science target, normalized by the peak flux of the targets, and adopted 10σ as the flux ratio limits for any companions. These limits were verified with implantation of fake companions into the image using translated and scaled versions of the science target. Figure 2 presents the final upper limits on any companions. We employed the "COND" models of Baraffe et al. (2003) to convert the limits into companion masses, for assumed ages of 1 and 5 Gyr and a photometric distance estimate of 13 pc. We assumed any cooler companions would have similar (CH4s − H) colors to CFBDS0059.
Fig. 3. Thin disk probability membership contours in proper motion space from the Besançon stellar population model. The contours are generated for synthetic stars with distances between 10 and 20 pc, belonging to the thin disk (small dots) and the thick disk (small stars, with the density of the latter increased by a factor of 10 for display purposes). Less than one halo star would appear on the plot. Based on their measured proper motions, the likelihoods that CFBDS0059 (large open square) and ULAS0034 (large open triangle) belong to the thin disk are 95% and >99%.
Kinematics
We estimate a spectrophotometric distance for CFBDS0059 by adopting M_J = 17.5 ± 0.5, based on an approximate T9/Y0 spectral type (discussed below) and on an extrapolation of the M_J versus spectral type relation of Knapp et al. (2004) beyond T8 (2MASS J0415−0935, hereafter 2M0415). This extrapolation is consistent with the Chabrier et al. (2000) models, which predict ∆J ~ 1.0 between brown dwarfs of Teff ~ 750 K and 625 K (like 2M0415 and CFBDS0059). The resulting d = 13 ± 5 pc has significant systematic uncertainties, because spectral typing beyond T8 is only now being defined, and especially because the linear 1 magnitude/subtype decline seen at earlier subtypes may not continue beyond T8. The adaptive optics observations exclude any companion of similar luminosity beyond 1.2 AU, but CFBDS0059 could of course still be a tighter binary. Its small distance fortunately puts CFBDS0059 within easy reach of modern parallax measurements.
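The distance and kinematics quoted above follow from the standard distance modulus and tangential velocity relations; a sketch using the text's numbers (the SOFI J magnitude is used directly here for illustration, ignoring its non-standard bandpass):

```python
import math

# Photometric distance from the distance modulus, with the adopted
# M_J = 17.5 +/- 0.5 and the measured J = 18.11 (Vega).
m_J, M_J = 18.11, 17.5
d_pc = 10.0 ** ((m_J - M_J + 5.0) / 5.0)
assert 12.0 < d_pc < 14.0          # ~13 pc, as quoted in the text

# Tangential velocity from proper motion: v_t [km/s] = 4.74 * mu["/yr] * d[pc]
mu = math.hypot(0.94, 0.18)        # total proper motion, "/yr
v_t = 4.74 * mu * d_pc
assert 50.0 < v_t < 70.0           # ~60 km/s, plausible for an older thin-disk object
```

The ±0.5 mag uncertainty on M_J alone maps to roughly a ±25% distance uncertainty, consistent with the quoted d = 13 ± 5 pc once the possible binarity is folded in.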
We use the Besançon stellar population model (Robin et al. 2003) to generate synthetic stars between 10 pc and 20 pc for the thin (dots) and thick (star symbols) disk populations at the galactic position of CFBDS0059. Fig. 3 shows their proper motions together with those of CFBDS0059 and ULAS0034. The contour lines show the probability that an object with a given proper motion belongs to the thin disk rather than the thick disk (halo membership probabilities are negligible). CFBDS0059, at its probable distance, is well within the 95% probability thin disk membership region, and ULAS0034 is within the 99% probability region. In spite of its somewhat high proper motion for an object beyond 10 pc, CFBDS0059 therefore most likely belongs to the thin disk. The mean age of the simulated stars in the region of the proper motion diagram occupied by CFBDS0059 is 4 Gyr, suggesting that it is an older member of the thin disk. That age is consistent with the 1 to 5 Gyr range derived below from comparison to COND evolutionary models (Baraffe et al. 2003). As with any kinematic age for an individual star, this determination has large error bars, but it suggests that CFBDS0059 might be older than ULAS0034.

Fig. 4 compares the 0.9-2.3 µm spectrum of CFBDS0059 with those of ULAS0034, 2M0415 (Burgasser et al. 2003, T8) and Gl 570D (Burgasser et al. 2000, T7.5), which successively were the coolest known brown dwarfs. Thanks to their earlier discovery, 2M0415 and Gl 570 have the best characterized atmospheric parameters (Saumon et al. 2006, 2007), and they provide the most solid baseline for a differential study. The Warren et al. (2007) spectrum of ULAS0034, kindly communicated by S. Leggett, was obtained with GNIRS on Gemini South, and its λ/∆λ = 500 resolution matches that of our NIRI spectrum of CFBDS0059. We downloaded the Burgasser et al. (2002, 2003) OSIRIS spectra of the two other brown dwarfs from the Ultracool Dwarf Catalog, and degraded their original spectral resolution of λ/∆λ ~ 1200 to match that of the GNIRS and NIRI spectra.
Stronger telluric absorption at the lower altitude telescopes explains the wider blanked regions in the corresponding spectra, but does not measurably affect any comparison: as illustrated by CFBDS0059, late-T dwarfs have essentially negligible flux wherever telluric H2O absorption matters. Because the OSIRIS spectra do not cover the Y band, we complement them with lower resolution spectra from Geballe et al. (2001) and Knapp et al. (2004) for λ < 1.18 µm.
Atmospheric parameters
Atmospheric parameters of ultracool dwarfs are ideally determined from a combination of near- and mid-IR information (e.g. Saumon et al. 2006, 2007), but low resolution near-infrared spectra alone provide a useful proxy when mid-IR photometry and spectra are not (yet) available (e.g. Burgasser et al. 2006a; Leggett et al. 2007). Burgasser et al. (2006a) used a grid of solar metallicity cool brown dwarfs to calibrate two spectral ratios, H2O-J and K/H, which respectively measure the strength of H2O absorption at ~1.15 µm and the flux ratio between the K and H peaks, to Teff and log g. Warren et al. (2007) however found that H2O-J essentially saturates below Teff = 750 K, and therefore chose not to use this spectral index for spectral types later than T8. They demonstrate on the other hand that the combination of the K/J index with the width of the J-band peak, parametrised by their W_J index, becomes a good Teff and log g diagnostic at Teff ≃ 900 K, and remains useful significantly below 750 K. We adopt their method.

Fig. 4. 0.9 µm - 2.3 µm spectra of CFBDS0059 and the three other coolest brown dwarfs. The spectra are normalized to unit flux density at their 1.27 µm peak, and vertically offset for clarity. The main T-dwarf spectral features are labeled. The temperatures of Gl 570D and 2M0415 are from the careful spectroscopic analyses of Saumon et al. (2006) and Saumon et al. (2007). Those of CFBDS0059 and ULAS0034 are from our W_J versus J/K index (Fig. 6).
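As an illustration of how such band-ratio indices are computed, here is a generic sketch on a toy late-T spectrum; the window limits below are placeholders, not the published W_J or K/J definitions of Warren et al. (2007) or Burgasser et al. (2006a):

```python
import numpy as np

def band_ratio(wl, flux, num_window, den_window):
    """Ratio of mean flux densities in two wavelength windows (microns).
    The window limits are illustrative, not the published definitions."""
    num = flux[(wl >= num_window[0]) & (wl <= num_window[1])].mean()
    den = flux[(wl >= den_window[0]) & (wl <= den_window[1])].mean()
    return num / den

# Toy late-T spectrum: Gaussian flux peaks at J (1.27 um) and K (2.08 um),
# with the K peak suppressed to 12% of the J peak.
wl = np.linspace(1.0, 2.3, 2000)
flux = (np.exp(-0.5 * ((wl - 1.27) / 0.04) ** 2)
        + 0.12 * np.exp(-0.5 * ((wl - 2.08) / 0.05) ** 2))

k_over_j = band_ratio(wl, flux, (2.03, 2.13), (1.22, 1.32))
# A suppressed K-band peak yields a small K/J-type index, as for CFBDS0059.
assert 0.0 < k_over_j < 0.2
```

The same function applied with a window pair bracketing the J-band peak would give a width-sensitive index in the spirit of W_J; in practice one must of course use the exact published window definitions for the indices to be comparable between objects.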
Fig. 6 shows the W_J and K/J measurements for Gl 570D, HD3651B, 2M0415 and ULAS0034. To derive Teff and log g from the indices we use model indices from solar-abundance BT-Settl atmospheric models (Warren et al. 2007; Allard et al. 2003, 2008). The model with NH3 at its chemical equilibrium abundance clearly produces too much absorption on the blue side of the H-band, confirming the finding of Saumon et al. (2006, 2007) that non-equilibrium processes keep the NH3 partial pressure well below its equilibrium value. We then use models that keep the abundances of NH3 and N2 at a fixed value in all parts of the atmosphere where the reaction timescale exceeds the mixing timescale, which typically occurs at the 600-800 K temperature level. These "quenched" models agree much better with the observed band shape. As a first-order correction for the remaining imperfections of the theoretical spectra, the model indices are shifted into agreement with the measurements of 2M0415 at the [Teff = 750 K, log g = 5.00, [M/H] = 0] parameters determined for that brown dwarf by Saumon et al. (2007). The Teff = 800 K and log g = 5.35 resulting from this calibration for Gl 570D are consistent with the Teff = 800-820 K and log g = 5.1-5.25 derived by Saumon et al. (2006) from a complete spectral analysis. For HD3651B, the Teff = 820-890 K and log g = 5.1-5.3 resulting from this calibration are roughly consistent with the Teff = 780-840 K and log g = 5.1-5.5 derived by Liu et al. (2007).
CFBDS0059 and ULAS0034 have very similar W_J indices, but the new brown dwarf has a significantly smaller K/J index. Visual comparison of the two spectra (Fig. 4) confirms that CFBDS0059 does have a weaker K-band peak than any of the three comparison cool brown dwarfs. As widely discussed in the recent literature (e.g. Liu et al. 2007; Fig. 3 in Burgasser et al. 2006a; or Fig. 3 in Leggett et al. 2007), for a fixed metallicity a weaker K-band peak is evidence for either a lower temperature or a higher gravity. The W_J index lifts this degeneracy: it indicates, again assuming identical chemical compositions for the two brown dwarfs, that CFBDS0059 is cooler by ~50 ± 15 K and has a ~0.15 ± 0.1 higher log g than ULAS0034.
As also discussed by Warren et al. (2007), the above uncertainties only reflect the random errors in the spectral indices. They are appropriate when comparing two very similar objects, like CFBDS0059 and ULAS0034, since systematic errors then cancel out. They must on the other hand be increased to compute absolute effective temperatures and gravities: one then needs to account for the uncertainties on the 2M0415 parameters which anchor the Fig. 6 grid (∆T ~ ±25 K and ∆log g ~ ±0.2; Saumon et al. 2007), and for the uncertainties in the atmospheric models, which may distort the grid between its anchor point [Teff = 750 K, log g = 5.00, [M/H] = 0] and the ~600 K region of interest here. We conservatively adopt Teff = 620 ± 50 K and log g = 4.75 ± 0.3.
This 2-parameter analysis obviously cannot determine all three main atmospheric parameters (T eff , log g and metallicity). As discussed by Warren et al. (2007), it actually determines the temperature with no ambiguity but leaves a combination of [M/H] and log g undetermined: they demonstrated that in the W J versus J/K plot metallicity is degenerate with surface gravity, with ∆(log g) ≡ −2∆[M/H]. CFBDS0059 is thus definitely cooler than ULAS0034, but from the W J versus J/K diagram alone it could have either a higher surface gravity or a lower metallicity. This degeneracy affects the full JHK-band spectrum, where any metallicity vs gravity difference is at most very subtle. It is however lifted by the shape of the Y-band peak (Figs. 3 of Burgasser et al. (2006a) and Leggett et al. (2007)), since lower metallicity shifts the Y-band flux density peak of metal-poor brown dwarfs significantly blueward. Fig 5 shows no such shift, and the two objects therefore have similar metallicities. Fig. 7 overlays the observed CFBDS0059 spectrum with the synthetic spectrum for the closest point of the solar-metallicity atmospheric model grid. Except on the red side of the H-band, model and observations agree well, boosting our confidence in the derived atmospheric parameters. The main remaining predictive shortcoming of the models is their overestimated absorption on the red side of the H-band peak. The principal opacity source in this region is the methane band centred at 1.67 µm, for which comprehensive theoretical predictions are available, but only for transitions from the vibrational ground state (as will be discussed in detail in Homeier et al., in preparation). To make up for the missing absorption from higher bands, which constitutes a significant fraction of the opacity at brown dwarf temperatures, a constant empirical correction factor was used. This correction must in turn lead to some overestimate of the CH 4 absorption as we reach the lower end of the T dwarf temperature range.
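The gravity-metallicity degeneracy quoted above, ∆(log g) ≡ −2∆[M/H], means a single W J versus J/K measurement pins down only the combination log g + 2[M/H]; any (log g, [M/H]) pair on that line reproduces the same indices. A tiny sketch of the family of equivalent solutions:

```python
# In the W_J vs J/K plane only log g + 2*[M/H] is constrained
# (degeneracy direction quoted from Warren et al. 2007), so each
# reference solution defines a one-parameter family of equivalents.

def equivalent_log_g(log_g_ref, m_h_ref, m_h):
    """log g that mimics the (log_g_ref, m_h_ref) solution at metallicity
    m_h, under delta(log g) = -2 * delta([M/H])."""
    return log_g_ref - 2.0 * (m_h - m_h_ref)

# e.g. starting from the solar-metallicity solution [log g = 4.75, [M/H] = 0],
# a metal-poor [M/H] = -0.3 atmosphere would mimic it at log g = 5.35,
# i.e. lower metallicity trades against higher gravity, as stated in the text.
```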
Another possible source of error is uncertainty in the models' temperature profile. The BT-Settl models self-consistently describe gravitational settling, convective turbulence, advection, condensation, coalescence and coagulation of condensates to predict the formation and vertical extent of cloud layers (Allard et al. 2003;Helling et al. 2007). In late T dwarfs these clouds are predicted to reside deep in the optically thick part of the atmosphere. Their opacity is thus not directly visible in the spectrum, but it may still affect the thermal structure, and hence the relative abundances, especially of temperature-sensitive species like CH 4 . Another (less serious) disagreement between the models and the observed spectra occurs in the Y band. The models overestimate the flux on the blue side of the Y-band peak, and they imperfectly reproduce the general shape of the peak. As discussed below, the opacities in that band are dominated by the pressure-broadened wing of the 0.77 µm K I line on the blue side and by CH 4 on the red side.
For the Baraffe et al. (2003) evolutionary models, the T eff ≃570-670K and log g ≃ 4.45-5.05 determined above translate into an age of 1-5 Gyr and a mass of 15M Jup (for 1 Gyr) to 30M Jup (for 5 Gyr). The kinematics of CFBDS0059 suggest that it belongs to an older population, and therefore slightly favour a higher mass and older age. Fig. 5 zooms in on the Y, J and H-band peaks of the four cool brown dwarf spectra. The published OSIRIS spectra of Gl 570D and 2M0415 do not cover the Y band, which instead is plotted from the lower resolution spectra of Geballe et al. (2001) and Knapp et al. (2004). For easier comparison, the CFBDS0059 and ULAS0034 Y-band spectra are smoothed to that resolution.
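To make the age-mass trade-off quoted above concrete, one can interpolate between the two evolutionary-model anchor points given in the text (15M Jup at 1 Gyr, 30M Jup at 5 Gyr). Linear interpolation in log(age) is purely an illustrative assumption here, not a substitute for the actual Baraffe et al. (2003) tracks:

```python
import math

# Sketch: mass estimate between the two anchor points quoted in the text
# (15 M_Jup at 1 Gyr, 30 M_Jup at 5 Gyr). Interpolating linearly in
# log10(age) is an illustrative assumption, not the real model grid.

def mass_mjup(age_gyr, anchors=((1.0, 15.0), (5.0, 30.0))):
    """Interpolated mass (M_Jup) at age_gyr between the two anchors."""
    (a0, m0), (a1, m1) = anchors
    t = (math.log10(age_gyr) - math.log10(a0)) / (math.log10(a1) - math.log10(a0))
    return m0 + t * (m1 - m0)
```

This reproduces the quoted endpoints exactly and gives an intermediate mass for the kinematically favoured ∼4 Gyr age.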
Individual spectral features
Direct comparison of the four spectra can be used to shed light on incipient new features and atmospheric chemistry. Features which are seen in both CFBDS0059 and ULAS0034 are likely to be real even when their significance is modest in each object, and those which are absent or weaker in the two hotter brown dwarfs can reasonably be assigned to low-temperature molecules. Conversely, features which disappear in the two cooler objects trace higher-temperature species.
As discussed above, the Y-band spectra of CFBDS0059 and ULAS0034 do not differ much. Given the strong sensitivity of that band to [M/H], this implies that the two objects have similar chemical compositions. The shapes of the Y-band peaks of these two coolest brown dwarfs on the other hand differ from those of Gl 570D and 2M0415, with the CFBDS0059 and ULAS0034 peaks extending further into the blue. The dominant absorber in the blue wing of the Y-band peak is the pressure-broadened wing of the 0.77 µm K I line (e.g. Burgasser et al. 2006a), which must weaken as K I depletes from the gas phase below T eff ∼700K. As anticipated by Leggett et al. (2007), the slope of the blue side of the Y-band peak therefore shows good potential as an effective temperature diagnostic beyond spectral type T8.
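Since the slope of the blue side of the Y-band peak is put forward as a T eff diagnostic, the natural measurement is a least-squares slope over that wing. A minimal sketch; the fitting window used below is an illustrative assumption, not a calibrated definition:

```python
# Sketch: least-squares slope of a spectrum over a fixed wavelength window,
# as one might measure the blue-side slope of the Y-band peak. The default
# window (0.98-1.05 microns) is an assumed, illustrative choice.

def blue_slope(wl, fl, lo=0.98, hi=1.05):
    """Least-squares slope of fl vs wl over [lo, hi] (needs >= 2 points)."""
    pts = [(w, f) for w, f in zip(wl, fl) if lo <= w <= hi]
    n = len(pts)
    if n < 2:
        raise ValueError("not enough samples in the fitting window")
    sx = sum(w for w, _ in pts)
    sy = sum(f for _, f in pts)
    sxx = sum(w * w for w, _ in pts)
    sxy = sum(w * f for w, f in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)
```

On a synthetic spectrum that is exactly linear in the window, the fitted slope recovers the input slope, which is the basic consistency check.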
The strength of the J-band K I doublet is a good gravity estimator in ultracool dwarfs (e.g. Knapp et al. 2004), because an increased pressure at a fixed temperature favors KCl over K (Lodders 1999) and consequently weakens atomic potassium features. At T eff ∼ 750-800K the J-band K I doublet remains weakly visible and useful as a gravity proxy (Fig. 7 of Knapp et al. (2004)). At T eff < 700K on the other hand, the K I doublet has completely vanished at the resolution of the current spectra (Fig. 5), even at the probably lower gravity of ULAS0034. Potassium is thus mostly converted to KCl (or perhaps other compounds) in the relevant photospheric layers.
The strongest new feature is a wide absorption on the blue side of the H-band, at λ < 1.565µm. It is conspicuous in CFBDS0059 and well detected in ULAS0034, and with hindsight is weakly visible in the 2M0415 spectrum (Fig. 5). It is however clearly stronger at T eff < 700K. To visually emphasize this broad feature, we bin the spectra to R∼100 and overlay the four H-band spectra (Fig. 8, left panel). Absorption sets in at ∼ 1.585µm and becomes deeper for λ < 1.565µm. These wavelengths overlap with strong H 2 O and NH 3 bands, and either molecule could a priori be responsible for the absorption.
Near infrared ammonia signatures
Those molecules unfortunately have imperfect opacity data, and the NH 3 laboratory line lists in particular are incomplete below 1.7µm. Computed ammonia opacities are therefore strictly lower limits. Leggett et al. (2007) compare synthetic spectra computed with and without NH 3 opacity, using the Irwin et al. (1999) line list for λ < 1.9µm, and find that ammonia absorption in cold brown dwarfs strongly depletes the blue wing of the H band (their Fig. 10). Similarly, Fig. 9 of Saumon et al. (2000) plots synthetic H-band spectra with and without NH 3 opacity, showing differences in two wavelength ranges: the NH 3 -rich model is significantly more absorbed for λ < 1.565µm and it has weaker but significant extra absorption in the [1.5725 − 1.585µm] range. The right panel of Fig 8 plots two BT-Settl models for [T eff = 600K; log g = 4.75], one without any near-infrared NH 3 opacity and one with NH 3 opacity at its chemical equilibrium abundance. As discussed above the BT-Settl models do not reproduce the observed H-peak shape very well, and a quantitative comparison is thus difficult. The comparison of the two models nonetheless confirms the Saumon et al. (2000) conclusion that ammonia produces strong absorption below ∼ 1.57µm and weaker residual absorption out to 1.595µm. These model predictions qualitatively match the behaviour seen in Fig 8, left panel.

Fig. 7. Overlay of the CFBDS0059 spectrum with the solar metallicity [T eff =600-650K; log g=4.75] BT-Settl synthetic spectrum. The two spectra are scaled to agree at their 1.27µm flux peak. The "quenched NH3" models are chemical equilibrium models which enforce a constant abundance of ammonia in the cooler regions of the atmosphere.
To emphasize the changes in brown dwarfs spectra when their effective temperature decreases from ∼800 to ∼600 K, we plot in Fig 9 the ratio of the spectra of CFBDS0059 and Gl 570D. The signal to noise ratio of the resulting K-band spectrum is too low for detailed analysis, and we therefore focus on the Y, J and H flux peaks. To avoid confusion from changes in the temperature-sensitive methane bands, we also mostly ignore the parts of the spectrum affected by CH 4 absorption bands, hatched in dark and light grey for respectively stronger and weaker bands. Fig. 10 shows the equivalent plot for ULAS0034, which is very similar.
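Building the flux-ratio spectra of Figs. 9 and 10 requires resampling the two observed spectra onto a common wavelength grid before dividing them. A minimal linear-interpolation sketch:

```python
# Sketch: ratio of two sampled spectra on a common wavelength grid,
# via linear interpolation of each spectrum at the grid wavelengths.

def interp(wl, fl, x):
    """Linear interpolation of fl at wavelength x (wl sorted, x in range)."""
    for i in range(len(wl) - 1):
        if wl[i] <= x <= wl[i + 1]:
            t = (x - wl[i]) / (wl[i + 1] - wl[i])
            return fl[i] + t * (fl[i + 1] - fl[i])
    raise ValueError("x outside the wavelength range")

def spectrum_ratio(wl_a, fl_a, wl_b, fl_b, grid):
    """Flux ratio A/B resampled onto the common wavelength grid."""
    return [interp(wl_a, fl_a, x) / interp(wl_b, fl_b, x) for x in grid]
```

In practice one would also propagate the noise of both spectra into the ratio, which is why the K-band part of the observed ratio is too noisy to analyse.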
The H-band spectrum ratio prominently shows the new absorption band, which outside the CH 4 band closely matches the 300 K NH 3 transmission spectrum of Irwin et al. (1999). Both spectra are strongly absorbed between 1.49 and 1.52 µm and rebound from 1.52 to 1.57µm. Water absorption, by contrast, is a poor match to the features of the spectrum ratio. The strongest water absorption (as computed from the HITRAN molecular database for a 600 K temperature) occurs below 1.49µm, at significantly bluer wavelengths than the CFBDS0059 absorption feature.
Some weaker but still significant bands of the Irwin et al. (1999) laboratory ammonia spectrum occur in the J band. Those again match features of the CFBDS0059/Gl 570D flux ratio, but that agreement is much less conclusive: water and ammonia absorptions overlap on the red side of the J-band peak, and CH 4 absorption affects the blue side of the peak. A 1.25-1.27 µm feature is seen in both flux ratios and in the ammonia transmission, and could be due to ammonia since it is clear of any strong H 2 O absorption band. The slight wavelength shift between the laboratory and observed features however leaves that association uncertain. Detailed synthetic spectra based on fully reliable opacities will be needed to decide whether NH 3 absorption matters in the J band at the effective temperature of CFBDS0059. The main pattern in the Y-band is a blue slope, which reflects the weaker pressure-broadened K I wing in the cooler brown dwarf. The weak 1.03µm NH 3 band is not seen.

Fig. 8. Left: H-band spectrum of the four cool brown dwarfs binned to R∼100. The spectra are normalized at λ = 1.59µm. The integration intervals for the numerator and denominator of our proposed NH 3 -H index are marked. Right: BT-Settl synthetic spectra for [T eff = 600K; log g = 4.75] with and without near-infrared NH 3 opacity; the NH 3 abundance is at its chemical equilibrium value.
Ammonia is easily detected in mid-infrared SPITZER spectra for all spectral types cooler than ∼T2 (Roellig et al. 2004;Cushing et al. 2006), though it is significantly weaker than initially expected because mixing from lower atmospheric levels reduces its abundance in the high atmosphere below the local equilibrium value (Saumon et al. 2006). Weak near-infrared absorption by ammonia has been tentatively detected by Saumon et al. (2000) in the T7p dwarf Gl 229B, but CFBDS0059 and ULAS0034 provide the first incontrovertible evidence for a strong near-infrared NH 3 band in brown dwarf spectra.
This conclusion contrasts with the Warren et al. (2007) finding of possible but inconclusive evidence for ammonia in ULAS0034. The main difference between the two analyses is that Warren et al. (2007) focused on a higher resolution search, at a necessarily lower signal to noise ratio, for individual NH 3 lines between 1.5 and 1.58µm. We instead looked for the global signature of the absorption band, which only becomes obvious when looking at the full H-band spectrum.

Fig. 9. Flux ratio between CFBDS0059 and Gl 570D (black), together with the laboratory room temperature transmission spectrum of NH 3 (Irwin et al. 1999) (red, top panel) and the 600 K H 2 O transmission spectrum computed from the HITRAN molecular database (red, bottom panel). The grey bands mark the parts of the spectrum affected by strong (dark grey) or moderate (light grey) CH 4 absorption.

Spectral type

Table 3 lists for CFBDS0059 the spectral indices used by the spectral classification scheme of Burgasser et al. (2006b), which refines the earlier schemes of Geballe et al. (2002) and Burgasser et al. (2002). These indices would imply a T8 classification, identical to that of 2M0415. As discussed above, however, the near-infrared spectrum of CFBDS0059 demonstrates that it is over 100 K cooler than 2M0415 and shows clearly different spectral features. Based on the new indices we present below, CFBDS0059 should be assigned a later spectral type. The almost identical Burgasser et al. (2006b) indices of the two brown dwarfs instead reflect the fact that those indices measure H 2 O and CH 4 absorption bands, which saturate and lose their effective temperature sensitivity at the T8 spectral type of 2M0415. Beyond T8 the Burgasser et al. (2006b) classification scheme therefore needs to be extended, with new spectral indicators that do not saturate until significantly later spectral types.
Fully defining this extension is beyond the scope of the present paper, since two known objects beyond T8 are not enough to explore spectral variability, but one can nonetheless start exploring. Since the main new feature is NH 3 absorption in the blue wing of the H-band peak, we define a new NH 3 -H index as

NH 3 -H = ∫ f(λ) dλ [1.53-1.56µm] / ∫ f(λ) dλ [1.57-1.60µm]   (1)

Its numerator and denominator ranges are plotted in Fig 8. The numerator integrates the flux within the main NH 3 band and the denominator measures the bulk of the H-band peak (we note that in cooler objects the denominator could be affected by some NH 3 absorption; its integration boundaries might thus need to be refined after such objects have been discovered). We compute this index for Gl 570D, HD3651B, 2M0415, ULAS0034 and CFBDS0059 (Table 4), and find that it strongly decreases from Gl570D to ULAS0034 and CFBDS0059 (which have very similar NH 3 -H values).
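The NH 3 -H index is a band ratio of the kind sketched earlier: mean flux density in the NH 3 absorption band over mean flux density in the bulk of the H-band peak. A minimal sketch of computing it on a sampled spectrum; the band boundaries used below (1.53-1.56 µm over 1.57-1.60 µm, cf. Fig. 8) are our reading of the definition and should be checked against the published one:

```python
# Sketch of the NH3-H band-ratio index. The band boundaries are assumed
# values (to be checked against the published definition); with equal-width
# bands the ratio of integrated fluxes equals the ratio of mean fluxes.
NUM_BAND = (1.53, 1.56)  # microns, main NH3 absorption band (assumed)
DEN_BAND = (1.57, 1.60)  # microns, bulk of the H-band peak (assumed)

def band_mean(wl, fl, band):
    """Mean flux density over the samples falling inside band (microns)."""
    lo, hi = band
    vals = [f for w, f in zip(wl, fl) if lo <= w <= hi]
    if not vals:
        raise ValueError("no samples in band")
    return sum(vals) / len(vals)

def nh3_h(wl, fl):
    """NH3-H index: deeper NH3 absorption gives a smaller value."""
    return band_mean(wl, fl, NUM_BAND) / band_mean(wl, fl, DEN_BAND)
```

A spectrum with half the flux in the absorption band yields NH 3 -H = 0.5, illustrating why the index decreases toward cooler, more NH 3 -rich objects.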
Over the limited effective temperature range spanned by Gl570D, HD3651B, 2M0415, ULAS0034 and CFBDS0059, and as far as one can infer from just 5 examples, the NH 3 -H and W J indices correlate strongly (Fig. 11). The numerator of W J is centered at wavelengths where both ammonia (Fig. 10 of Leggett et al. (2007)) and CH 4 have significant opacity, and future modeling work should be able to establish whether the two indices probe the same molecule or not. Since the near-infrared spectra of ULAS0034 and CFBDS0059 differ significantly more from that of the T8 2M0415 than the latter differs from the T7.5 Gl570D (as quantitatively demonstrated by Fig. 11), it is natural to assign a full spectral subtype to the interval between the two coolest brown dwarfs and 2M0415. By that reasoning, and if ULAS0034 and CFBDS0059 are considered as T dwarfs, their spectral type should be T9, or perhaps slightly later. The T spectral class is however quite unlikely to remain the last spectral type, since for sufficiently low effective temperatures atmospheric models predict major changes in visible and near-infrared brown dwarf spectra: NH 3 bands are predicted to appear in, and eventually to dominate, the near-infrared spectrum; the strong pressure-broadened optical lines of Na I and K I are predicted to disappear as those atomic species are incorporated into molecules and solids; and water clouds are predicted to form and to largely deplete water from the gas phase (Burrows et al. 2003;Kirkpatrick 2005).

Table 3. Measured spectral classification indices for CFBDS0059 and ULAS 0034. The first six indices form the base of the Burgasser et al. (2006b) classification scheme, and the table includes the corresponding spectral classification on that scale. The W J index is a recent addition proposed by Warren et al. (2007).

Fig. 11. NH 3 -H index versus W J index. The error bars represent the 1σ uncertainties of the measured spectral indices.
Since spectral classification, for mostly practical reasons, is traditionally based on optical and near-infrared spectra, such a major transition will justify the introduction of a new spectral type, for which the Y letter has long been reserved (Kirkpatrick et al. 1999;Kirkpatrick 2000). If the λ ∼ 1.55 µm NH 3 band keeps deepening as the effective temperature decreases further, and eventually becomes a major spectral feature, its appearance at T eff ≃650 K will become a natural transition between the T and Y spectral classes. ULAS0034 and CFBDS0059 would then be the first Y dwarfs, and the prototypes for Y0 brown dwarfs, rather than T9. That decision will to some extent remain a matter of convention, but it must in any case wait until larger numbers of similarly cool brown dwarfs can document spectral trends in finer detail, and preferably over a wider effective temperature range.
Summary and conclusions
We have reported the discovery of CFBDS0059, a very cool brown dwarf found in the CFBDS survey (Delorme et al. 2008). Its effective temperature is ∼50±15 K cooler than that of ULAS0034, most likely making it the coolest brown dwarf known at the present time. High spatial resolution imaging establishes that CFBDS0059 has no similarly bright companion beyond 0.09", and no companion with a contrast under 3.5 magnitudes beyond 0.3" (respectively 1.2 and 3.9 AU at the 13 pc photometric distance). Its kinematics suggest, with significant error bars, a ∼4 Gyr age, at which CFBDS0059 would be a ∼30M Jup brown dwarf. The atmospheric parameters of CFBDS0059 are however compatible with any age from 5 Gyr down to 1 Gyr, at which its mass would be ∼15M Jup . A trigonometric parallax measurement, together with mid-infrared photometry and spectroscopy with SPITZER, would significantly refine its physical parameters, as demonstrated by Saumon et al. (2007) for slightly warmer brown dwarfs.
We assign absorption in the blue wing of the H-band peaks of both ULAS 0034 and CFBDS 0059 to an NH 3 band. If that assignment is confirmed, and if, as we expect, the band deepens at still lower effective temperatures, its development would naturally define the scale of the proposed Y spectral class. ULAS 0034 and CFBDS 0059 would then become the prototypes of the Y0 sub-class.
The CFBDS survey has to date identified two brown dwarfs later than T8, CFBDS0059 and ULAS0034 (which we identified independently of Warren et al. (2007); Delorme et al. 2008), in the analysis of approximately 40% of its final 1000 square degree coverage. We therefore expect to find another few similarly cool objects, and hopefully one significantly cooler one.
CFBDS0059 and ULAS0034 provide a peek into atmospheric physics under conditions that start approaching those in giant planets, and the future discoveries that can be expected from CFBDS, ULAS, and Pan-STARRS will further close the remaining gap. They also bring into sharper light the remaining imperfections of the atmospheric models, and emphasize in particular the importance of more complete opacity data. Our analysis relies on a room-temperature NH 3 absorption spectrum, but higher excitation bands than can be excited at 300 K must matter in T eff = 600 K brown dwarfs. The eventual identification of ammonia absorption in the J band will also need complete opacity information for H 2 O and CH 4 and full spectral synthesis, since the bands of the three molecules overlap in that spectral range.
The spectral indices that define the T dwarf spectral class saturate below 700 K, and new ones will be needed at lower effective temperatures. We introduce one here, NH 3 -H, which measures the likely NH 3 absorption in the H band. Together with the W J index of Warren et al. (2007) and the slope of the blue side of the Y-band peak (Leggett et al. 2007), it will hopefully define a good effective temperature sequence. Metallicity and gravity diagnostics are less immediately apparent, but will need to be identified as well.